It's something you've been able to do for many years. I did it 10 years ago, when I got my first motherboard with UEFI. But is it useful? It saves minimal time in the boot sequence, but at what cost?
The bootloader (be it GRUB, or something simpler such as systemd-boot) is useful to me for a couple of reasons:
- it allows dual-booting with Windows easily: the motherboard boot menu is often not easy to access (you need to hit a key combination in a short window), and modern bootloaders save the last boot option, so that if Windows reboots for an update, Linux does not start instead
- it allows editing the kernel cmdline to recover a system that does not boot, e.g. to start in single-user mode (see the sketch after this list). That can really save your day if you don't have a USB stick and another PC to flash it on hand
- it allows you to choose between multiple kernels and initrd images easily, again for recovery purposes
- it has a voice for entering the UEFI setup menu: on most modern systems, again, entering the UEFI with a keyboard combination is unnecessarily difficult and the timeout is too short
- it allows you to boot any other EFI application, such as memtest or the EFI shell. Most UEFI firmware doesn't have a menu to do so.
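For the cmdline-recovery case, a hypothetical example of the kind of edit I mean (kernel version, UUID, and rescue target are made up):

  # At the GRUB menu, press `e` on the highlighted entry, then append a
  # rescue target to the line starting with `linux` and press Ctrl-x to boot:
  linux /vmlinuz-6.9.0 root=UUID=abcd-1234 ro systemd.unit=rescue.target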
If I'm understanding correctly, it might help to point out that in spite of the title they are proposing a bootloader, which can still let you modify the cmdline, boot to other OSs, etc. It's just that the bootloader is itself using the Linux kernel so it can do things like read all Linux filesystems for "free" without having to rewrite filesystem drivers.
It could kexec other kernels but probably won't be able to jump to other OS bootloaders after it already called ExitBootServices.
The sibling comments that think you need to jump back to EFI to solve this are thinking in layer-ossified terms. This is Red Hat proposing this, and they're perfectly confident in upstreaming kernel patches to make this happen.
I would assume that in their proposed solution, the kernel would have logic to check for a CMDLINE flag (or rather, the lack of any CMDLINE flags!) to indicate that it's operating in bootloader mode; and if it decides that it is, then it never calls ExitBootServices. All the EFI stuff stays mapped for the whole lifetime of the kernel.
(Also, given that they call this a "unified kernel image", I presume that in the case where the kernel decides to boot the same kernel image that's already loaded in memory as the bootloader, then nothing like a kexec needs to occur — rather, that's the point at which the kernel calls ExitBootServices (basically to say "I'm done with caring about being able to potentially boot into something else now"), and transitions from "phase 1 initramfs for running bootload-time logic" into "phase 2 initramfs to bootstrap a multi-user userland.")
That's unlikely; I think that would mean you cannot use native drivers, at which point you're just writing another bootloader. I suspect they're only planning to kexec into the target kernel, not chainload other EFI bootloaders.
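If that's right, the flow from the bootloader-kernel would presumably be the ordinary kexec one; a minimal sketch, with made-up paths and cmdline:

  # Stage the target kernel, its initramfs and cmdline, then jump into it:
  kexec -l /boot/vmlinuz-target --initrd=/boot/initramfs-target.img \
      --command-line="root=UUID=abcd-1234 ro quiet"
  kexec -e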
Something that hasn't been addressed by comments here yet is that you could implement EFI boot services in the Linux kernel and essentially turn Linux into a firmware interface. Though note that I generally shy away from any attempts to make the kernel into a really fat bootloader.
I mean, you can and you can't.
AFAIK, the UEFI spec imposes no requirement that (non-hotplug) devices be re-initializable after you've already initialized them once. Devices are free to take the "ExitBootServices has been called" signal from EFI and use it to latch a mask over their ACPI initialization endpoints, and then depend on the device's physical reset line going low to unmask these (as the device would start off in this unmasked state on first power-on.)
Devices are also free to have an "EFI-app support mode" they enter on power-on, and which they can't enter again once they are told to leave that mode (except by being physically reset.) For example, a USB controller's PS2 legacy keyboard emulation, or a modern GPU's VGA emulation, could both be one-way transitions like this, as only EFI apps (like BIOS setup programs) use these modes any more.
Of course, presuming we're talking about a device that exists on a bus that was designed to support hotplug, the ability to "logically" power the device off and on — essentially, a software-controlled reset line — is part of the abstraction, something the OS kernel necessarily has access to. So devices on such busses can be put back in whatever their power-on state is quite easily.
But for non-hotplug busses (e.g. the bus between the CPU and DRAM), bringing the bus's reset line low is something that the board itself can do; and something that the CPU can do in "System Management Mode", using special board-specific knowledge burned into the board's EFI firmware (which is how EFI bring-up and EFI ResetSystem manage to do it); but which the OS kernel has no access to.
So while a Linux kernel could in theory call ExitBootServices and then virtualize the API of EFI boot services, the kernel wouldn't be guaranteed to be able to actually do what EFI boot services does, in terms of getting the hardware back into its on-boot EFI-support state.
The kernel could emulate these states, by having its native drivers for these devices configure the hardware into states approximating their on-boot EFI-support states; but it would just be an emulation at best. And some devices wouldn't have any kind of runtime state approximating their on-boot state (e.g. the CPU in protected mode doesn't have any state it can enter that approximates real mode.)
You're right (as I saw another comment cite the primary-source for); but I'm still curious now, whether there'd be a way to pull this off.
Yes, that's right.
But that's not necessarily true.
Even if you could only use EFI boot+runtime services until you call ExitBootServices, in theory, an OS kernel could have a HAL for which many different pieces of hardware have an "EFI boot services driver" as well as a native driver; and where the active driver for a given piece of discovered hardware could be hotswapped "under" the HAL abstraction, atomically, without live HAL-intermediated kernel handles going bad — as long as the kernel includes a driver-to-driver state-translation function for the two implementations.
So you could "bring up" a kernel and userland while riding on EFI boot services; and then the kernel would snap its fingers at some critical point, and it'd suddenly all be native drivers.
Of course, Linux is not architected in a way that even comes close to allowing something like this. (Windows might be, maybe?)
---
I think a more interesting idea, though, would come from slightly extending the UEFI spec. Imagine two calls: PauseBootServices and ResumeBootServices.
PauseBootServices would stop all management of devices by the EFI (so, as with ExitBootServices, you'd have to be ready to take over such management) — but crucially, it would leave all the stuff that EFI had discovered+computed+mapped into memory during early boot, mapped into memory (and these pages would be read-only and would be locked at ring-negative-3 or something, so the kernel wouldn't have permission to unmap them.)
If this existed, then at any time (even in the middle of running a multi-user OS!), the running kernel that had previously called PauseBootServices, could call ResumeBootServices — basically "relinquishing back" control over the hardware to EFI.
EFI would then go about reinitializing all hardware other than the CPU and memory, taking over the CPU for a while the same way peripheral bring-up logic does at early boot. But when it's done with getting all the peripherals into known-good states, it would then return control to the caller[1] of ResumeBootServices, with the kernel now having transitioned into being an EFI app again.
[1] ...through a vector of the caller's choice. To get those drivers back into being EFI boot services drivers before the kernel tries using them again, naturally.
It's a dumb idea, mostly useless, thoroughly impractical to implement given how ossified EFI already is — but it'd "work" ;)
Giving "the control of hardware back" is going to be extremely difficult. Just look at the mess that ACPI is: there are lots of notebooks that Linux can not put into/back from hibernation, and here we're talking simply about pausing/resuming devices themselves. What you are proposing means that an OS would have to revert the hardware back to the state that would be compatible with its state at the moment of booting, so that UEFI could manage it correctly. I don't think that's gonna happen.
Theoretically, couldn't it just write to a "boot this image next time" field (is the legacy MBR area available?) and trigger a reboot?
The target image would need to reset that field so that a second reboot puts you back into the bootloader because otherwise you'll be stuck booting that image forever.
The image doesn't need to do it, that's how UEFI bootnext works: the firmware resets the flag before it loads the image.
The boot disk isn't guaranteed to be writable.
Even after you’ve already installed a custom boot loader to it? I mean, I agree with you in principle, but we already have the chicken - can't existence of the egg be assumed?
Well, you could change the default boot entry in efivars, but if you're relying on the firmware for that, why not use the firmware-provided boot menu anyway?
This is being discussed more extensively in other comment threads but it sounds like maybe there's a way for it to just reboot but set a flag so the firmware boots into a different .efi next time (once).
You can set the BootNext variable to the number of the BootXXXX variable you want to use once, for the next boot.
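Under Linux, efibootmgr makes that a one-liner (entry numbers here are illustrative):

  efibootmgr                  # lists Boot0000, Boot0001, ... and the BootOrder
  efibootmgr --bootnext 0001  # arm Boot0001 for the next boot only
  reboot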
you seem to be saying that they are using two separate kernels, one for the bootloader and one for the final boot target
the title text says 'Loaded by the EFI stub on UEFI, and packed into a unified kernel image (UKI), the kernel, initramfs, and kernel command line, contain everything they need to reach the final boot target' which sounds like they're not talking about using two separate kernels, one for the bootloader and one for the final boot target, but rather only one single kernel. possibly that is not the case because the actual information is hidden in a video i haven't watched
https://news.ycombinator.com/item?id=40909165 seems to confirm that they are indeed not saying what you thought
edit: they're proposing both configurations
This doesn't make sense. There's nothing in the post you responded to which could realistically be interpreted as making that point. And there haven't been any edits, which might have explained your confusion.
the comment says 'they are proposing a bootloader, which can still let you modify the cmdline, (...) the bootloader is itself using the Linux kernel'
possibly you don't know this, but in order to run a kernel with a modified command line, the bootloader-kernel would need to run a second kernel, for example using kexec; linux doesn't have a useful way to modify the command line of the running kernel. that's why i interpreted the comment as saying that they are proposing using two separate kernels. in https://news.ycombinator.com/item?id=40910796 comex clarifies that they are in fact proposing using two separate kernels; the reason i was confused is that that's not the only configuration they're proposing
What I know or don't know is irrelevant, because what matters is that your statement rests on bringing in external knowledge/assumptions, so it's clearly not what the commenter is saying (alone).
Using external knowledge to interpret the meaning of sentences is how every communication works.
I watched the video. They have two different configurations, one where there’s only one kernel, one where there are indeed two separate kernels with one kexec’ing to the other.
thank you for your sacrifice and for the resulting correction to my error
To be clear: the win here is that there's no longer duplicated (or worse - less capable and outdated) code to do the same things in both the bootloader and the kernel, however the two versions of that code might be deployed.
This sentence does not say "the bootloader is itself another, separate, Linux kernel", so I'm not seeing him saying what you're saying he seems to be saying.
You can have command line parameters baked into the EFISTUB. I also have two kernels, so there are two UKIs on /efi, and I have both added as separate boot options in the BIOS.
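For reference, this is roughly how a UKI with a baked-in cmdline gets assembled by hand with systemd's EFI stub; the section offsets are the commonly used ones and may need adjusting, and newer systemd ships ukify(1) to do this for you:

  objcopy \
      --add-section .cmdline=cmdline.txt        --change-section-vma .cmdline=0x30000 \
      --add-section .linux=/boot/vmlinuz        --change-section-vma .linux=0x2000000 \
      --add-section .initrd=/boot/initramfs.img --change-section-vma .initrd=0x3000000 \
      /usr/lib/systemd/boot/efi/linuxx64.efi.stub /efi/EFI/Linux/my-uki.efi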
Do people really dual boot a lot in 2024? It was a good use case when virtualization was slow, but decades after CPUs started shipping with virtualization extensions there is virtually zero overhead in using a VM nowadays, and it is much more convenient than rebooting and losing all your open applications just to start one on another OS.
How many times in a decade are you running memtest?
Getting to the UEFI firmware or booting another OS/drive is just a matter of holding one key on my ThinkPad. I would simply not buy bad hardware that doesn't allow me to do that. Vote with your wallet, dammit.
I would also argue that you can perfectly well have GRUB sitting alongside a direct boot to the kernel in a UEFI setup. There are many other bootloaders than GRUB, and users are still free to use them instead of what the distro is shipping. UEFI basically allows you to have as many bootloaders as you have space for on that small FAT partition.
Yes, there are still use cases for it.
The state of GPU virtualisation, for example, is a spectrum from doesn't exist/sucks to only affordable for enterprise customers.
So unless you have a second graphics card to do passthrough with, if you want to use your GPU under both OSes then you almost always have to dual boot (yes, there are other options like running Linux headless, but they're not even remotely easier to set up than dual boot).
Most mainboards come with an integrated GPU though? If you use that one for the host OS, it is easy to pass the discrete one through, no?
Consumer motherboards haven't had GPUs for a while now (IPMI usually comes with one, so servers do); they're built into the CPU, if at all (not all CPUs have them). These can't usually be easily allocated to a VM.
I clicked randomly on a number of motherboards sold by the 2 brands that came to mind, ASRock and Gigabyte, and all of them advertised HDMI and USB-C graphics output, so I am surprised by your claim that consumer motherboards don't have GPUs. If I am not mistaken, on the AMD Ryzen architecture it comes down to choosing a CPU with a G or 3D suffix, which indicates it has an integrated GPU.
It really still is the case that most if not all consumer motherboards don't have built-in graphics. For the most part, especially on the Intel side, they've relied on the iGPU in the CPU for output for probably 10 years now.
Well, my case still stands that you still have integrated graphics, if not on the motherboard then in the CPU, that you can use on the host while you dedicate a discrete card to VM passthrough.
Desktop Ryzens from Zen 4 onward have a very small iGPU that's just enough to put up a desktop (and presumably a framebuffer fast enough to feed a discrete card's output into).
He's saying the opposite: Host has integrated graphics, VM has dedicated GPU.
How can the host have integrated graphics, if integrated graphics don't exist?
Per Korhojoa and my personal experience, plenty of desktop CPUs simply don't have integrated GPUs. Consumer mainboards simply don't come with them at all. Consider my previous workstation CPU, top of the line a few years ago and no iGPU: https://www.amd.com/en/products/processors/desktops/ryzen/50...
Integrated GPUs are a feature of server mainboards, so that there is something to display with for troubleshooting, but not of any retail mainboards I am aware of. They are a feature of some consumer-grade CPUs designed for either budget or low-power gaming. They simply don't exist on all CPUs: consider the AMD 5600, 5600X, and 5600G, last-gen mid-range CPUs adequate for gaming; the X had a little more clock speed, and the G had an iGPU.
Yes, people dual boot. Particularly people who are contemplating a move from Windows. I'd hate to see Linux take the "my way or the highway" attitude of Windows.
My experience when I dual-booted in the late 90s was that rebooting is such an interruption that you never become fully comfortable on either OS. You just stick to the OS you are used to and never really make the switch.
Whereas if you don't dual boot, you can switch completely to another OS and use only a VM or remote desktop for the handful of use cases you aren't ready to move yet (and then end up abandoning those completely as well).
Keep in mind that booting takes a tiny fraction of the time today that it did in the 90s.
Regardless of whether it takes 20 seconds or 2 minutes, it is still an interruption.
I don't think you got the point.
The experience of using a VM is not good, that's exactly why people are doing dual boot. They know what they are doing.
Yes. I work on Linux and play most games on Windows. Playing games on a VM is... pretty terrible.
One of the big problems is with graphics cards, because the vendors block a driver feature (SR-IOV) on consumer GPUs that would allow single-GPU passthrough for VMs.
The alternative is to leave the system headless (a reboot is needed, and the VM needs to run as root), or to use two graphics cards (wasting power, hardware resources, etc.), for which you also need an extra latency-adding layer inside the VM to send the graphics back to the screen, or to connect two cables to the monitor.
Yes.
Not for real-time audio production. The state of audio plugins having Linux support from vendors like EastWest, Spitfire, Native Instruments, iZotope is abysmal and even Wine does not run them nowadays.
Even with a virtual machine that has pinned cores and USB pass-through of a dedicated audio interface, it practically locks you to one sample rate, any change causes crackles, try to load more than one plugin and you hear crackles. There is plenty of overhead.
What kind of machines are people using where entering the UEFI boot menu is difficult? On all three of mine I just press F10 during the first 5 or so seconds the vendor logo shows, and I end up in a nice menu where I can select Windows, other kernels, memtest, or the EFI shell or setup.
One easy way to meet Microsoft's boot time requirements is to skip input device enumeration, so there's a lot of machines meeting the Windows sticker requirements where entering the firmware either requires a bunch of failed boots or getting far enough into the boot process that you can be offered an opportunity to reboot into the setup menu.
Huh, today I learned. I'll consider myself lucky I didn't come across one of these machines yet.
I've encountered way too many of these and I hate them with all my being.
How many of these don't have a setting to turn quick boot off?
I have a system where you need to hold down power when turning on the PC to get out of "Quick Boot" mode, and get the ability to get to the bios screen. It's a Sandy-Bridge-era Intel motherboard.
I was working on my Dad's Dell laptop this weekend, and no matter how quickly I spammed the correct key (F12 in this case) it would miss it and continue to a full boot about 3 out of 4 times. I never figured out if it is just picky about timing, or if it has different types of reboots where for some of them entering the BIOS wasn't even an option.
I start tapping as soon as the screen blanks, probably twice a second. I find this to be best for all BIOS/UEFI interfaces.
Mine has a large delay between when the keypress is registered and the menu actually shows up. But, the window for pressing the key itself is quite short. Also, if you spam the key too quickly, it will hang indefinitely instead of entering the menu necessitating a hard-reboot. Good times.
Newer Dell laptops have a BIOS option to artificially delay the boot process by a configurable number of seconds to give you more time to enter the menu. Which should be proof enough that the default time window is an issue.
Grub is the same everywhere. Motherboard bios/uefi is not. It isn't F10 for me.
How many computers are you operating though? Maybe you'll have to reboot a couple times until you figure out the proper key but then you'll know it. And if you forget it, you clearly aren't doing this often enough for it to be a problem either
It really depends on users. Personally... ~100? Servers, clients, dual-boot configurations, lost machines with PXE boot, various brands and BIOS versions, some even still boot in legacy mode because their UEFI support is bad (like PXE boot doesn't work as well as it should, and as well as it does in "BIOS" mode). So having GRUB on basically all these machines, I'm very happy.
If I could do the same with something that is as small in terms of footprint, and is as flexible as GRUB is (we also PXE-boot into GRUB loaded from the network, both in BIOS and UEFI mode), then I'm interested.
On my last two uefi boards, if I press F12 or F8 too soon after power on it either stalls the boot, or it makes it restart. When the latter happens, I’m always too careful in pressing it causing me to miss the window of opportunity and booting right to the OS. Entering the bios or choosing the boot drive regularly takes me 3 tries. (Gigabyte with Intel and Asus with AMD.)
Just because the boot loader is using Linux, it doesn’t prevent an alternative OS from being booted into, so there is nothing fundamentally stopping all of grub’s features from working in this new scheme.
It is a bit more complex, though. Quoting "nmbl: we don’t need a bootloader" from last month[1]:
It could be seen as an advantage to do chainloading by setting BootNext and resetting. I think Windows even does this now. However, it certainly is a different approach with more moving parts (e.g. the firmware has to not interfere or do anything stupid, harder than you'd hope) and it's definitely slower. It'd be ideal if both options were on the table (being able to `kexec` arbitrary UEFI PE binaries) but I can't imagine kexec'ing random UEFI binaries will ever be ideal. It took long enough to really feel like kexec'ing other Linux kernels was somewhat reliable.
[1]: https://fizuxchyk.wordpress.com/2024/06/13/nmbl-we-dont-need...
Let's say I have a dual-boot system with two totally independent OSes, Systems A and B. It is powered down. I want to boot into System B but the EFI is configured to boot into System A by default.
Am I correct in understanding that the offered solution here is to first boot into System A, find some well-hidden EFI configuration utility (which varies from OS to OS, if it even exists), and then tell EFI to boot into System B on the next reboot?
If so, that's a pretty terrible experience.
Sort of, except it's automated.
Basically, System A's kernel boots. But, instead of immediately loading the System A userland, it loads a boot menu of systems that it reads from UEFI NVRAM and presents it to the user. So you select System B from the list, the menu sets BootNext in NVRAM and issues a reboot.
In practice, the main UX difference is that it takes a bit longer and you'll see the UEFI vendor splash screen again after selecting the boot option.
I'm not a user of Windows anymore but I seem to recall Windows doing something quite similar, where it had a boot menu that felt suspiciously like it was inside of Windows, and to actually change the boot target, it had to reboot.
I mean, it kind of is loading the System A userland. At least the initramfs of it. AFAICT in the proposal the bootloader would now be a regular userland program living in the initramfs.
I get the impression that the eventual goal would be to make this bootloader program into the "init(8) but for the initramfs phase of boot" — i.e. rather than there being a tool like update-grub that calls mkinitramfs, feeding it a shell-script GRUB generated (which then becomes the /init of the initramfs); instead, there'd be a tooling package you'd install that's related to the kernel itself, where you call e.g. kernel-update(8) and that would call mkinitramfs — and the /init shoved inside it would be this bootloader. This bootloader would then be running for the whole initramfs phase of boot, "owning" the whole bootstrap process.
What the architecture is at that point, I'm less clear on. I think either way, this initramfs userland, through this bootloader program, will now handle both the cases of "acting like a bootloader" and "acting like the rest of initramfs-based boot up to pivot-root." That could mean one monolithic binary, or an init daemon and a hierarchy of services (systemd: now in your bootloader), or just a pile of shell scripts like GRUB gives you, just now written by Redhat.
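As a rough sketch of what such a phase-1 /init might look like (everything here is hypothetical; show_boot_menu stands in for whatever menu UI they'd ship):

  #!/bin/sh
  # Hypothetical initramfs /init that doubles as the boot menu.
  mount -t proc proc /proc
  mount -t sysfs sys /sys
  choice=$(show_boot_menu)    # hypothetical menu program
  if [ "$choice" != "default" ]; then
      # A different kernel was selected: stage it and jump via kexec.
      kexec -l "/boot/vmlinuz-$choice" \
          --initrd="/boot/initramfs-$choice.img" --reuse-cmdline
      kexec -e
  fi
  # Otherwise continue as a normal phase-2 boot: mount root, pivot, run init.
  exec /sbin/phase2-init      # hypothetical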
Yes of course. I really mean to say, before/instead of pivoting to the OS root. It sounds like this will synergize well with the UKI effort too, at least from a Secure Boot perspective.
I wonder if I have ever had a laptop where the UEFI worked correctly and without bugs. It always required some workaround somewhere to get stuff working.
Presumably nmbl would show you a menu to select which OS to start if you're dual booting. You wouldn't have to manually set some UEFI variable.
Does Windows not ensure that the UEFI boots back into Windows when it does an auto-reboot for updates? There's a UEFI variable called BootNext which Windows already knows how to use since the advanced startup options must be setting it to allow rebooting directly to the UEFI settings.
Given that Windows tries to restore open windows to make it look like it didn't even reboot, I'm surprised they wouldn't make sure that the reboot actually goes back into Windows.
Not in my experience. For my typical dual boot situation where GRUB is installed as the bootloader, I have to update the GRUB settings like so to allow Windows updates to go smoothly:
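  # /etc/default/grub: the standard saved-default settings, so GRUB
  # remembers the last-booted entry (run update-grub afterwards)
  GRUB_DEFAULT=saved
  GRUB_SAVEDEFAULT=true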
I am not certain about this, but I think that these options no longer work on UEFI machines. GRUB does not have control over what options are presented if GRUB isn't the selected bootloader. This stuff is BIOS-only.
I have this working on a UEFI system. You select your Linux drive in the UEFI configuration (so the computer always boots into GRUB) and then GRUB will boot into Linux or Windows depending on the last saved option.
Sure, but whether that GRUB entry is remembered as the default is up to the UEFI, not GRUB. If you pick another entry, GRUB is powerless to affect it.
No, it doesn't. Even a sysprepped image of Windows (which thus runs Setup to install drivers and finalize the installation) doesn't change the boot order on UEFI machines. I think just the installer does this when you first install Windows.
Hardly a problem in my experience - just hold down the key while booting.
And dual booting is rarely needed anyway and generally just a pita. Just always boot into your preferred OS and virtualize the other one when you really need it.
You can change the EFI boot entries, including their priority, from the OS, e.g. via efibootmgr under Linux. It should be easy to set up each OS to make itself the default on boot, if that's really what you want.
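For example (entry numbers illustrative):

  efibootmgr -o 0001,0000   # set BootOrder: try Boot0001 first, then Boot0000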
All motherboards I have used had an EFI shell that you can use to run EFI programs, such as an EFISTUB Linux kernel, with whatever command-line options you want.
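Something along these lines (paths and filenames are examples; initrd= is handled by the EFI stub):

  Shell> fs0:
  FS0:\> vmlinuz.efi root=/dev/sda2 ro initrd=\initramfs-linux.img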
EFI can have many boot entries too.
What does "a voice" here mean? Or you meant "a choice"? Either way, same as with the boot menu you can just hold down the key while booting IME.
In my experience the EFI shell has always been accessible without a bootloader.
I've been dual-booting Linux since the kernel 2.2.x era, and being able to do so was a major driver in migrating away from Windows. It is super important for the onboarding of new users who can't yet get rid of Windows fully - mostly because of gaming (yes, Proton is nice, but anything competitive that uses anti-cheat won't work, and that is the majority share of gaming). And that is the reason I still boot into Windows on my dual-boot machine: gaming. For me that Windows is just a glorified bootloader into GOG or Steam, yet desperately needed, and virtualization won't solve anything here.
Ideally, rather than dual booting, I would welcome something like running both OSes in a sort of virtual machine while being able to switch between them as easily as with a physical KVM.
Having to actually restart a PC is a pain in the ass which is why I don't dual boot.
grubonce "osname" && reboot
is a pain in the ass? All the virtualization solutions are moot for gaming due to anti-cheat (plus 3D graphics virtualization not really working for Windows guests).
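For stock GRUB2, the equivalent one-shot selection is grub-reboot (requires GRUB_DEFAULT=saved; named grub2-reboot on some distros; the entry name is an example):

  grub-reboot "Windows Boot Manager" && reboot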
My experience with different laptops: 1. Dell enterprise laptops generally have a robust EFI system which allows all kinds of `.efi` files to boot from `vfat` partitions. Dell laptops also have a good firmware setup for tools like mokutil, so people can use measured boot with their own version of Linux. They also work extremely well with self-encrypting NVMe drives. 2. HP consumer laptops, which are the worst of the lot and essentially prevent you from doing anything apart from stock configurations, almost as if on purpose. 3. All other laptops, which have various levels of incompetence but seem pretty harmless.
For all laptops apart from Dell, Grub is the bootloader that EFI could never be.
Maybe it is time to re-think the entire hardware boot process and ditch the BIOS altogether.
It probably was, but UEFI was not a good answer.
I'd have preferred CoreBoot or OpenFirmware, but the PC industry was too slow to move and let Intel -- still smarting from Microsoft forcing it to adopt AMD's 64-bit x86 extensions -- take control of the firmware.
The problem with all of the alternatives is that they aren't friendly to alternative OSes. They mostly operate on a fork model, so upstreaming support for an OS doesn't mean everyone using that bootloader will support your OS. You either need to pretend to be Linux with a sort of boot shim, or build and flash a custom bootloader with support, which might be non-trivial if you cannot get access to the forked bootloader's code.
UEFI is just a standard interface, not an implementation of a bootloader. This enables multiple UEFI-compliant implementations, as well as an easy way for an OS to support all UEFI-based bootloaders without needing to coordinate with the owner of each bootloader. While I'm sure most would agree the UEFI interface is not ideal, it has a lot of industry momentum and is therefore probably the best option to get behind. There are a lot of players in this space (mostly hardware vendors), and coordinating anything is very difficult and takes a very long time.
Both the suggestions I gave were designed and built to be FOSS and work with any OS.
UEFI is more restrictive -- and tightly controlled by large industry vendors, not the community -- than either of them.
So, no, I totally disagree on all points.
As much as I generally detest indirection, for me a bootloader is a necessity; I need the flexibility to boot different OS kernels. AFAIK, UEFI offers no such flexibility. NetBSD's bootloader is best for me. UEFI seems like an OS unto itself. A command line, some utilities, and network connectivity (a UNIX-like text-mode environment) is, with few exceptions, 100% of what I need from a computer. To me, UEFI seems potentially quite useful. But not as a replacement for a bootloader.
Yes, it does; I use it with two kernels, with a different UEFI entry for each stub. Whenever I want to boot the non-default kernel I just hit F11 (for the BIOS boot menu, on my motherboard) and choose the boot option. You just need to add the boot options in UEFI, pointing to the corresponding EFI files. They also have the kernel command line parameters baked into them, and you can set your desired ones (silent boot, whatever).
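Adding such an entry from Linux looks roughly like this (disk, partition, label, and paths are examples; with a UKI the --unicode args can be dropped, since the cmdline is baked in):

  efibootmgr --create --disk /dev/nvme0n1 --part 1 \
      --label "Linux 6.9" --loader '\EFI\Linux\vmlinuz-6.9.efi' \
      --unicode 'root=UUID=abcd-1234 ro quiet'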
Thank you.
Isn’t this how Apple’s Bootcamp works (at least on Intel based Macs)?
If you embed an x86 system somewhere then you might find yourself not wanting to use GRUB because you don't want to display any boot options anywhere other than the Linux kernel. The EFI stub is really handy for this use case. And on platforms where UBoot is common UBoot supports EFI which makes GRUB superfluous in those cases.
Many of the Linux systems I support don't have displays and EFI is supported through UBoot. In those cases you're using a character-based console of some sort like RS232.
A lot of those GRUB options could also be solved by embedding a simple pre-boot system in an initial ramdisk to display options, which maintains all of the advantages of not using GRUB and also gives you the ability to make your boot selection. The only thing GRUB is doing here is allowing you to select which kernel to chain-load, and you can probably do the same thing in initramfs too through some kind of kernel API that is disabled after pivot root.
I just have two kernels with two boot options in the BIOS. I just hit F11 at boot time and choose the BIOS boot option for either kernel. Of course, you need to add the entries in UEFI, either from the UEFI shell or with some tool (efibootmgr). This scheme also supports Secure Boot and silent booting. The stubs are signed after being generated.
I must admit that on U-Boot platforms, I use U-Boot EFI to load grub-efi, so that I can have a non-terrible bootloader…
rEFInd is the magic tool here.
Personally I still use GRUB for all of the reasons you stated above. But rEFInd + kernel gets you pretty close.
rEFInd is great! I wish they just updated the default theme to something nicer.
You can use the UEFI shell for this. It's kind of a replacement for the old MS-DOG command line.
I dual boot Win/Arch easily with EFISTUB setup. It's super quick to boot to a usb stick of arch if I need to edit anything with the configuration in an "emergency" situation as well. https://wiki.archlinux.org/title/EFISTUB
It is bold of Red Hat to claim this is 'their solution'. UEFI has already been used for years to boot without GRUB. Some examples: macOS, HP-UX, or systemd-boot via UEFI.
It allows you to enter your passphrase to unlock your Linux LUKS partition before you even get a menu to chainload Windows.
At least this is what an Arch Linux derivative (Artix) system of mine does, amusingly. It sort of gives an observer the impression that it's an encrypted Windows system on boot.
You left out the most important reason I went back to using GRUB: some motherboards have dodgy UEFI support, and having an extra layer of indirection seems to be more robust sometimes, for some reason.
Except they've made it increasingly harder to do this over the years. Nowadays you have to guess when it is on the magic 1 second of "GRUB time" before it starts loading and then smack all the F keys and ESC key and DEL key at the same time with both hands and both feet because there is nothing on the screen that tells you which key it actually is.
All while your monitor blanks out for 3 seconds trying to figure out what HDMI mode it is using, hoping that after those 3 seconds are over that you smacked the right key at the right time.
And then you accidentally get into the BIOS instead of GRUB.
It used to be a nice long 10 seconds with a selection menu and clearly indicated keyboard shortcuts at the bottom, and you could press ENTER to skip the 10 second delay. That was a much better experience. If you're in front of the computer and care about boot time, you hit enter. If you're not in front of the computer, the 10 seconds don't matter.
I know you can add the delay back, I just wish the defaults were better.
This is an indication of bad admin choices. The kernel defaults should not corrupt the boot process, and if you add further experimental flags for testing, you ought to have a recovery mechanism in place beforehand.
Windows Boot Manager can chainload into any arbitrary bit of code if you point it where it needs to hand off.
It's a feature that goes back to Windows NT (NTLDR) supporting dual boot for Windows 9x, but it can be repurposed to boot anything you would like so long as it can execute on its own merit.
e.g.: Boot into Windows Boot Manager and, instead of booting Windows, it can hand off control to GRUB or systemd-boot to boot Linux.
I need a bootloader that automatically deletes Windows partitions upon detection.
And is also themed like XBill.