
Writing a BIOS bootloader for 64-bit mode from scratch

ThinkBeat
20 replies
2d5h

The number of (to me) entirely unnecessary steps required to get the CPU into the correct mode is astounding.

They all seem to be steps needed for backwards compatibility.

Couldn't Intel just provide a flag or command to start in the right mode from the beginning?

Or just remove all the backwards compatibility.

I think I remember doing some research and ARM64 has some of the same issues.

Are there any CPUs designed from scratch as 64-bit, with no need for backwards compatibility, that would enter the required state by default?

I guess that was the goal / design of Itanium?

LiamPowell
15 replies
2d5h

UEFI exists. You just put a Windows-like binary in a folder on a partition and it runs in a hosted environment in 64-bit mode. And of course there are countless bootloaders that can take care of all this for you, too.

rep_lodsb
9 replies
2d3h

And then you're free from dealing with the somewhat convoluted processor init stuff, but instead depend on the Windows PE format, FAT filesystem, and an overcomplicated API.

Seems like a bad tradeoff, and part of a slippery slope towards a completely locked down system, where writing your own code and getting it to run on the 'bare metal' is flat out impossible.

immibis
4 replies
2d3h

What's wrong with depending on the Windows PE format, FAT filesystem and UEFI? You're always going to have some dependencies. FAT32 is better than having the first sector load some magic reserved sectors. Windows PE is better than a fixed memory address.

rep_lodsb
3 replies
2d3h

It's adding pointless complexity, and baking assumptions about how an OS should work into the firmware. Loading a sector at a fixed memory address and jumping to it (with some function provided so that your code can go on to load other sectors) is both easier to understand, and doesn't require you to use some multi-megabyte toolchain.

immibis
1 replies
2d

Wouldn't it be better to use a header that specifies loading multiple sectors at multiple memory addresses?
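
A hypothetical on-disk header for that idea could be as small as the sketch below (every name and field size here is invented for illustration, not any existing format): a table of (LBA, sector count, load address) records that a tiny first-stage loader walks.

    /* Hypothetical boot header; all names and sizes are made up. */
    #include <stdint.h>

    struct load_entry {
        uint64_t lba;        /* starting sector on disk */
        uint16_t count;      /* number of 512-byte sectors to read */
        uint16_t reserved;
        uint32_t load_addr;  /* physical address to place the data at */
    } __attribute__((packed));

    struct load_header {
        uint32_t magic;       /* arbitrary signature the loader checks */
        uint32_t entry_point; /* physical address to jump to when done */
        uint32_t num_entries;
        struct load_entry entries[];  /* num_entries records follow */
    } __attribute__((packed));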

ForOldHack
0 replies
4h13m

Yes, and considering that your boot sector is actually a boot cluster... You need not use compression. You could even steal the entire first cylinder. (Now that is old school)...

p_l
0 replies
20h17m

Unless you need to load anything else other than what's in that sector, because you can't fit enough I/O drivers into one sector to load the rest of the OS.

Unless you're targeting S/360, but even there XA and newer made it a bit more complex.

Joker_vD
3 replies
1d8h

> instead depend on the Windows PE format, FAT filesystem, and an overcomplicated API.

Sometimes I wonder: if UEFI instead used ELF format, ext2 filesystem, and somewhat less complicated API (let's be honest: UEFI API is pretty straightforward if tedious), would people still complain about such dependencies? Or would it be deemed to be "fine" since it's not Microsoft technology even though it still would require one to use a multi-megabyte toolchain?

blueflow
1 replies
1d6h

It's not the fact that it's from Microsoft, it's the fact that both FAT and PE are full of legacy cruft that is no less painful than going through real mode in the first place.

FAT originally could only do 8.3 filenames, but take a look at this to see what they hacked on top: https://en.wikipedia.org/wiki/Design_of_the_FAT_file_system#...

This is all the legacy shit you now have to implement when you want a so-called "legacy-free" boot. Infuriating.

p_l
0 replies
20h19m

Meanwhile the ext family, even in ext4, requires that you handle optimization decisions made for the Fujitsu Eagle and its contemporaries.

I could easily read NTFS with a hex editor, a calculator, and pen & paper. Trivial, even (the most annoying part was the run-length packing in 4-bit nibbles).

I gave up trying to read ext's cylinder-oriented mappings.

And FWIW, everyone has a read/write driver for FAT with LFN (and UEFI generally avoids names outside 8.3 just in case); using a different filesystem would lead to issues with OS support.

PE itself isn't really that much legacy cruft, especially as far as it is used in UEFI (sure, you have the "program not for MS-DOS" header, but that's useful for the rare case someone tries to open it under FreeDOS).

p_l
0 replies
1d6h

Yes, also probably if you used FDT instead of ACPI despite all the problems that are fixed by ACPI but present with FDT, because "ACPI is evil" :V

leeter
4 replies
2d1h

This is fine if you're only running on a single core; however, if you're a multiprocessor OS you still need to deal with legacy when bringing up the other cores. Intel and AMD should consider a mode that disables that and brings up the other cores using a 64-bit SIPI. While I applaud Intel on the X86S idea... I think there is room for bits of that without throwing out all the backwards compat: an "X86SC" that drops real mode and only supports 16-bit 'real mode' under virtualization.

Yes, I see the argument that if you go that far you might as well just use emulation. However, running mixed 32-bit/16-bit code (Windows 98/95) under emulation becomes problematic purely for performance reasons. DOSBox does well, but good luck supporting games from the Pentium 3 era that still used 16-bit libraries, because of course they did. (16-bit InstallShield was so common going into 64-bit that MS just has code to replace it, so 32-bit applications can still be installed despite having a 16-bit installer.)

LiamPowell
3 replies
1d20h

I vaguely recall there being some multicore stuff in UEFI, but it's been years since I looked at it.

leeter
2 replies
1d19h

Intel did a prototype of a multiprocessor UEFI application that would start up cores, and UEFI itself does support synchronization on the assumption that applications/bootloaders will start other cores before calling ExitBootServices. However, there are no protocols as of 2.10 (the current spec [1]) that I can find that would bring up another processor. That said, searching it can be a bit arcane.

[1] https://uefi.org/specs/UEFI/2.10/index.html

leeter
0 replies
1d5h

Interesting, very interesting. Curious that the PI spec doesn't include update dates on the versions. The MP spec was introduced in 1.5 and I have no idea when that was released.

userbinator
1 replies
1d14h

Intel tried that with the 80376 and it did not go well: https://en.wikipedia.org/wiki/Intel_80376

Neither did the Itanium (Itanic).

Backwards compatibility is the whole reason for choosing x86 over ARM, MIPS, RISC-V, etc. Sadly it seems some people at Intel and AMD don't realise this.

bigstrat2003
0 replies
1d12h

Backwards compatibility is good and necessary. But I don't think backwards compatibility going all the way back to the 8086 is. If someone has software written for the 8086 at this point, they would be far better served by running it in dosbox or something than on bare metal.

saagarjha
0 replies
1d13h

What’s wrong with arm64?

nullindividual
0 replies
2d5h

This is what Intel's proposed X86S [0] is designed for.

> X86S is a legacy-reduced-OS ISA that removes outdated execution modes and operating system ISA.

> The presence of the X86S ISA is enumerated by a single, main CPUID feature LEGACY_REDUCED_ISA in CPUID 7.1.ECX[2] which implies all the ISA removals described in this document. A new, 64-bit “start-up” interprocessor interrupt (SIPI) has a separate CPUID feature flag.

[0] https://cdrdv2.intel.com/v1/dl/getContent/776648 [pdf warning]
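
Assuming the bit position quoted above (CPUID leaf 7, sub-leaf 1, ECX bit 2) survives into the final spec, a userspace check might look like this sketch using the GCC/Clang cpuid.h helper:

    #include <stdio.h>
    #include <cpuid.h>  /* GCC/Clang wrapper around the CPUID instruction */

    int main(void) {
        unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
        /* Leaf 7, sub-leaf 1; LEGACY_REDUCED_ISA is quoted as ECX bit 2. */
        if (__get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
            puts("X86S: legacy-reduced ISA, 64-bit SIPI available");
        else
            puts("classic x86-64: real mode, protected mode, the works");
        return 0;
    }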

AstralStorm
18 replies
2d8h

How old is UEFI now? Pity nobody deprecated BIOS alongside long mode.

deaddodo
17 replies
2d7h

BIOS is deprecated. All of its functionality on new motherboards is basically emulated via UEFI, and it's certainly not being extended.

Deprecated doesn't mean deleted, it just means "no longer updated/developed with a goal towards removal".

livrem
16 replies
2d6h

This killed FreeDOS (and presumably all the other *DOS as well) on modern hardware unfortunately. It was fun as long as it lasted. I do not know what the next-best single-user, single-process, non-bloated OS would be to run on modern hardware that still has some reasonably modern software and can be used for distraction-free (hobby) development the way FreeDOS could.

trueismywork
7 replies
2d6h

Linux in single user mode

mschuster91
6 replies
2d5h

That's still multi-process, though: there are an awful lot of background tasks running in pretty much every non-fossil kernel version, not to mention userspace daemons (udev, dbus, dhcp) without which most normal userspace stuff doesn't even work.

Brian_K_White
5 replies
2d4h

None of that exists in single-user mode. When you say init=/bin/foo, that's it: the only process is /bin/foo.

ryandrake
4 replies
2d4h

/bin/foo is the initial process. It can fork and/or exec other processes, right?

p_l
1 replies
1d6h

As much as DOS allowed multiple processes, even if only one executed at a time and there was no multitasking outside of the ill-fated MS-DOS 4.0 and the various Concurrent DOS products.

ForOldHack
0 replies
3h39m

You were not there. Multitasking? Xenix. Multitasking DOS? DOS Merge. Pre-emptive multitasking? AmigaOS. DOS 4.0M was task switching. OS/2 2.0? I'm being extremely facetious.

Give me a date, and I will tell you what existed.

Nothing mainstream existed until Windows 3.1, at least on x86/32.

There was also OLEC on Windows 2, which did run in real mode, but the only things that took advantage of it were the demos and samples: no commercial product used it.

Brian_K_White
1 replies
2d1h

Sure the facility to fork still exists. So what? Observing that the kernel still provides fork() is like observing that the cpu still provides JMP.

It won't fork random processes you don't explicitly tell it to. I thought it was obvious that if you don't want unsolicited processes, then don't specify /bin/init as /bin/foo. The practical example is /bin/sh, but it could be any other executable.

Up to you to specify a binary that does what you want, and doesn't require a bunch of other processes like gdbus to function itself.

init=/bin/sh is more or less like MS-DOS loading COMMAND.COM.
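
To make the comparison concrete, a minimal init along these lines (just a sketch; the path and behaviour are whatever you choose) spawns nothing on its own:

    /* init.c - boot with init=/init; build: gcc -static -o init init.c */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* We are PID 1. No udev, no dbus, no daemons: nothing else
           runs unless this process starts it. */
        printf("pid %d: handing the console to a shell\n", (int)getpid());
        execl("/bin/sh", "sh", (char *)NULL);
        /* If the exec fails, idle instead of exiting: the kernel
           panics if PID 1 exits. */
        for (;;)
            pause();
    }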

gtirloni
0 replies
1d23h

It's obvious, but many people here seem to be confusing the Linux kernel with kernel+systemd and complaining that Linux has many processes as if it weren't customizable.

rwmj
4 replies
2d4h

What's the reason why FreeDOS can't use the CSM (the BIOS compatibility mode of UEFI)?

EvanAnderson
3 replies
2d4h

AFAIK it can. I believe some UEFI implementations don't have CSM.

p_l
2 replies
1d6h

A Type 3 UEFI implementation has no CSM, Type 2 has CSM available, and Type 1 enforces booting into CSM (which is what many "BIOS"es actually were in later days).

deaddodo
0 replies
1d2h

Just for clarification's sake, the proper terminology is "UEFI class" not "type".

Otherwise, this is accurate.

EvanAnderson
0 replies
1d2h

Thanks for that. That sent me down an enjoyable rabbit hole. I got started with PCs back in the 80s and became fairly familiar with how boot worked back then. UEFI happened while I was paying attention to other things and I've never become as familiar with it as I should be. This was a good excuse to do some reading.

jasaldivara
1 replies
2d4h

> I do not know what the next-best single-user, single-process, non-bloated OS would be to run on modern hardware that still has some reasonably modern software and can be used for distraction-free (hobby) development the way FreeDOS could.

Not sure why you would want a single-process OS on modern hardware, but there are some alternatives that run far fewer things in the background than regular Linux: Haiku, FreeBSD, NetBSD, OpenBSD, or some lightweight non-glibc, non-systemd Linux-based distro like Adelie or Alpine.

nineteen999
0 replies
1d2h

Or, you know, just booting the Linux kernel with init=/bin/sh with /bin/sh being a statically linked binary. You're overthinking things.

cl91
0 replies
15h26m

> the next-best single-user, single-process, non-bloated OS

is UEFI.

5-
14 replies
2d7h

note that you can switch to long mode directly, without going into protected mode first, with way less code:

https://wiki.osdev.org/Entering_Long_Mode_Directly

i've had a bootloader for a small 64-bit kernel based on this that fit comfortably into the bootsector, including loading the kernel from disk and setting up vesa modes, no stage2 required.

dataflow
6 replies
2d7h

> i've had a bootloader for a small 64-bit kernel based on this that fit comfortably into the bootsector, including loading the kernel from disk and setting up vesa modes, no stage2 required.

How in the world do you fit all that in 512 bytes? I'm guessing you don't have a real-world filesystem (that allows the kernel to be anywhere on the disk just as a normal file)? Because just dealing with file fragmentation should bump you way over 512 bytes I would imagine.

rep_lodsb
2 replies
2d2h

I've been thinking about how to structure a filesystem such that code complexity - in the boot loader and elsewhere - can be minimized. You'd want something similar to NTFS, except that it should be possible to refer to a file (MFT entry) directly by starting sector number. So they would either need to always remain at a fixed position, or have a link pointing back to any reference so it can be updated.

Somewhere in the first sector (aligned to a 64 bit boundary), there would be a small structure that just contains a unique signature (not ASCII but some random value; 64 bits seems like more than enough as opposed to a GUID), and a pointer to the "FS info block". Leaving all remaining space in the sector available for boot code or possibly another "overlayed" filesystem.

That info block in turn would point to the MFT entry for the stage 2 boot code. An MFT entry would contain at a minimum an easy to locate list of (start sector,count) extents. Maybe there could be a flag that tells the OS that a certain file should never be fragmented?

File names would be entirely optional and for human consumption; for direct links within a filesystem, sector numbers would be used, and some kind of unique numeric identifier for anything "higher-level".

I'm genuinely wondering if some expert here sees any problem with this scheme, other than it not conforming to how mainstream OSes do things?
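
Purely as a reading aid, the scheme above might translate into on-disk structures like this sketch (every name, width, and flag is invented here, not an existing format): a one-sector loader reads the anchor, follows it to the info block, then walks the stage-2 entry's extent list.

    #include <stdint.h>

    /* Lives somewhere in sector 0, 64-bit aligned; the rest of the
       sector stays free for boot code or an overlaid filesystem. */
    struct fs_anchor {
        uint64_t signature;       /* random 64-bit value, not ASCII */
        uint64_t info_block_lba;  /* pointer to the "FS info block" */
    } __attribute__((packed));

    struct fs_info_block {
        uint64_t stage2_entry_lba;  /* MFT-style entry for stage-2 boot code */
        /* ... other volume-wide metadata ... */
    } __attribute__((packed));

    struct extent {
        uint64_t start_lba;
        uint64_t sector_count;
    } __attribute__((packed));

    #define ENTRY_FLAG_NO_FRAGMENT 0x1  /* "never fragment this file" hint */

    struct file_entry {            /* the MFT-style entry */
        uint64_t flags;
        uint64_t extent_count;
        struct extent extents[];   /* easy-to-locate (start, count) list */
    } __attribute__((packed));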

sim7c00
0 replies
1d7h

The "somewhere in the first sector" part makes me think of the superblock concept: you scan the start of the disk for it (ext2 or 4? or both?). Not the first sector, because that's for MBR purposes, but after that...

Not fragmenting files... well, you don't fragment files until you do. If you write an FS, it will only fragment what you tell it to, so if you want all-contiguous files you can simply do it that way: store them as (file_id,sector_count,start), or even simpler, (fname_len,fname,sector_count,start_sector), so you don't need to keep a name block around for filenames (FAT?).

If you look at ext2/4, FAT16/32 and others, you will see that it all started quite basically, keeping a list of files and offsets. But due to things like disk reliability, user errors, system stability issues, etc., you end up needing a lot of extra stuff.

Also, questions like: what is the maximum filename length? This kind of stuff can really impact what kind of features you need to implement.

How large is a file allowed to be? What is the maximum number of files for the FS?

This might sound silly, but there are already datacenters out there (a lot, actually) that cannot use most filesystems because files are too huge, partitions are too huge, too many files are present for the index structures, etc.

If you want dead simple for a starting OS: (sector_count,start_sector,fname_len,fname,0)

If you want more, try looking at ext2 or FAT32, or, if your system specifications require it, look even beyond (ext4, ZFS, NTFS, etc.). A lot of these are subtly different, trying to solve different problems, or similar problems in different ways.

RiverCrochet
0 replies
1d23h

> File names would be entirely optional and for human consumption; for direct links within a filesystem, sector numbers would be used, and some kind of unique numeric identifier for anything "higher-level".

Each file has an index number ("Inode"?) in the MFT. The first 24 (0-23) are reserved, and 11 of them contain metadata about the volume: https://flatcap.github.io/linux-ntfs/ntfs/files/ - somewhere in the Windows API is something that allows opening a file by "Inode" number. This link and info may be really old, so it could be that more of the reserved inodes are used now.

So, if 23 isn't being used yet, you could use that to put your 2BL - create a file there and name it $2BL or something. Would be funny to see what future Windows update that does use it does to it, if that ever happens (and of course maybe it is used).

> Maybe there could be a flag that tells the OS that a certain file should never be fragmented?

Haven't looked but I recall from an old book I read that small files are stored right in the MFT and I think existing data + the cluster size is the limit there.

skissane
0 replies
1d18h

> Because just dealing with file fragmentation should bump you way over 512 bytes I would imagine.

Historically, many filesystems have had a special file type or attribute for contiguous files - a file guaranteed to not be fragmented on disk. Most commonly used for boot loaders, OS kernels, or other essential system files - although historically some people used them for database files for a performance boost (which is likely rather marginal with contemporary systems, but decades ago could be much more significant).

Some systems required certain system files to be contiguous without having any special file metadata to mark them as such - for example, MS-DOS required IO.SYS and MSDOS.SYS to be contiguous on disk, but didn’t have any file attribute to mark them as contiguous. Unlike an operating system with proper support for contiguous files, DOS won’t do anything to stop you fragmenting IO.SYS or MSDOS.SYS, it is just the system will fail to start if you do. (Some might interpret the System attribute as implying Contiguous, but officially speaking it doesn’t.)

sim7c00
0 replies
1d7h

You can just stick the kernel at sector 2 and read it from there using the extended BIOS disk read: specify your DAP to read the kernel's number of sectors starting from sector 2, and load it to something like 0x7e00 or some other reachable place. There will still be limits on how much it can read, per read and in total.
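
For reference, the DAP handed to the extended read service (INT 13h, AH=42h) is just a small packed structure; here it is in C notation as a sketch (the layout is the standard one, the example values in the comments are assumptions):

    #include <stdint.h>

    /* Disk Address Packet for INT 13h, AH=42h (extended read). */
    struct dap {
        uint8_t  size;         /* 0x10, the size of this packet */
        uint8_t  reserved;     /* must be 0 */
        uint16_t num_sectors;  /* sectors to transfer */
        uint16_t buf_offset;   /* destination offset,  e.g. 0x7E00 */
        uint16_t buf_segment;  /* destination segment, e.g. 0x0000 */
        uint64_t start_lba;    /* e.g. 1 = the sector right after the MBR */
    } __attribute__((packed));

    /* Real-mode usage: DS:SI -> struct dap, DL = drive, AH = 0x42, INT 0x13. */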

If you do this all within one sector, you equally do not do any error checking: just ram stuff into memory and yolo-jump into it.

The basics would be: load the kernel from disk using a BIOS interrupt, get the memory map using a BIOS interrupt, then parse the kernel's ELF header / program headers and relocate it - the ELF header to find ph_off, ph_num and the entry point, and the program headers to find all the PT_LOAD segments and rep movsb them into their target physical addresses.

Also, with 510 bytes you generally will not make nice 'page tables', though this is actually possible with only a few bytes - I did not manage it yet in 510 :D but I am sure there's someone who can do it... it can be done really efficiently.

The disk would be formed by assembling the MBR, then doing something like

cat mbr.bin kernel.bin > disk.bin (maybe use truncate here to pad disk.bin to a certain size, like 10+ MB or so - it helps some controllers recognize it better)

All that said, it's not useful to do this; you will find it an interesting exercise at best, like trying to make a 'tiny ELF' file. Fun to learn, useless in practice.

5-
0 replies
2d6h

yes, the kernel was in a known location on disk (directly after the bootsector).

the whole boot disk image was generated during build, which is common for small systems.

sim7c00
2 replies
1d9h

You are right. Though with the partition table in there, so you can support a 'modern' AHCI controller and SATA, it will shrink your bootloader further and require some optimizations... you don't have 510 bytes for the loader in this case but a bunch less, and if you want to populate the table with valid entries it becomes even more tricky (you can't use any bytes inside the table...).

If you want to use an actual modern hard disk, you might want to look at GPT rather than MBR, as it won't overflow the partition table and allows for very large disks (2 TB+?). (UEFI gets rid of all of that and allows you to use a proper disk layout without any additional difficulty!)

There is no need for protected mode if you want to launch into 64-bit mode. I would say, though: DO NOT USE BIOS. It's a piece of rubbish that will just make things more tedious.

Using UEFI via EDK2 or GnuEFI is the way to go, and both methods are really easy and a blessing to implement. It's a bit hard to get your head around the initial idea of UEFI, but if you look at some other people's example projects on GitHub you can easily find out how it works. EDK is a bit shitty with .dec and .inf files etc., and GnuEFI is just reading header files to discover what is there, but it's infinitely better than the unspecified BIOS interface. You literally cannot even assume that int 0x10, int 0x15, etc. are there properly once you start running on real hardware. On UEFI systems, you can assume a stable minimal basis, and easily enumerate other things (hardware/platform capabilities, etc.) in a sane way.

Also, UEFI sets up the platform a long way already, so you don't need to do any initialization for your OS-loader component (stage 2 or 3). You can simply start loading your OS right away - or drivers, or whatever kind of design your kernel has: get the memory map, get access to the EFI file system, and start pulling in and loading stuff.
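
As an example of how little is left to do, getting the memory map from a GnuEFI application is just a couple of Boot Services calls; a rough sketch (error handling trimmed, and the extra headroom before the second call is a common convention, not anything from this thread):

    #include <efi.h>
    #include <efilib.h>

    EFI_STATUS efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
    {
        EFI_MEMORY_DESCRIPTOR *Map = NULL;
        UINTN MapSize = 0, MapKey, DescSize;
        UINT32 DescVer;
        EFI_STATUS Status;

        InitializeLib(ImageHandle, SystemTable);  /* sets up the ST/BS/RT globals */

        /* First call with a zero-sized buffer just reports the required size. */
        Status = uefi_call_wrapper(BS->GetMemoryMap, 5,
                                   &MapSize, Map, &MapKey, &DescSize, &DescVer);
        if (Status == EFI_BUFFER_TOO_SMALL) {
            MapSize += 2 * DescSize;  /* headroom: the allocation below changes the map */
            uefi_call_wrapper(BS->AllocatePool, 3, EfiLoaderData, MapSize, (void **)&Map);
            Status = uefi_call_wrapper(BS->GetMemoryMap, 5,
                                       &MapSize, Map, &MapKey, &DescSize, &DescVer);
        }
        Print(L"%d memory descriptors\n", (INTN)(MapSize / DescSize));
        return Status;
    }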

pitust2
1 replies
1d2h

AHCI (which is how SATA is exposed) doesn't change any of this. The only thing that is affected is how the loaded OS has to talk to the disk controller. The real thing that loses space is the BPB (60 bytes or so, IIRC), because some BIOSes are broken and require it, and the MBR (but that's only a couple of bytes). At least [bootelf] manages to fit (without a BPB or an MBR) with 128 bytes to spare, enough for a dummy BPB that makes all BIOSes happy.

Additionally, UEFI's reliability is... sketchy, as far as I know (using the classic logic of "if Windows doesn't use it, does it really matter?"). And GNU-EFI suffers from build portability troubles, AFAIK.

[bootelf]: https://github.com/n00byEdge/bootelf

sim7c00
0 replies
9h6m

If you choose AHCI on QEMU, it requires a partition table to be present on the disk, in addition to the magic value, before it recognizes it as a bootable disk.

Thanks for this comment. It makes me realize this is likely not an AHCI thing, but how SeaBIOS (QEMU's flavor?) handles enumerating disks via AHCI rather than IDE.

If it uses the IDE controller, it will recognize the boot disk; if I pick AHCI, I need to add the partition table.

UEFI reliability is sketchy, but BIOS is really incredibly crap, so much more so than UEFI.

Windows DOES use EFI/UEFI - how else would it boot on a system that has EFI/UEFI firmware in it? It lets you do Secure Boot, edit EFI variables... Where do you get this classic logic from? (Maybe I am totally missing something, but they interface with it, and should thus use the spec? Even though they might not use GnuEFI or EDK2, of course :P (EDK2 is still likely...))

ChickeNES
1 replies
1d13h

Or just use https://limine-bootloader.org/, which greatly simplifies everything. No messing around in real mode (even when doing SMP), automatically loads your kernel using a higher-half mapping, and also works on aarch64 and riscv64.

bigstrat2003
0 replies
1d12h

To be fair, writing a bootloader is an interesting and educational project in its own right. But yes, for most people interested in osdev they should just use an existing bootloader. It gets you to the part that interests most people (writing the kernel) faster, and without having to worry if you are going to run into gnarly bugs because the bootloader is buggy and you never realized.

xelxebar
0 replies
1d12h

Oh, cool. I never knew that was possible. Showing my ignorance here, but assuming we're just trying to get to long mode, why would we tour through protected mode at all?

D4ckard
0 replies
2d7h

Yes, you can do that too

amelius
6 replies
2d6h

Is this any simpler on ARM?

gtirloni
3 replies
1d23h

Not sure, I wouldn't count on it. Currently deep in RISC-V and it seems there's hope.

zokier
2 replies
1d7h

I don't see how riscv can be anything but worse than Arm in this regard. With Arm, at least Arm Holdings has some nominal power to steer towards sanity (devicetrees, SystemReady, etc.); with riscv it's again full freedom for vendors to make their own bespoke crap.

gtirloni
0 replies
1d5h

> With Arm at least Arm Holdings has some nominal power to steer towards sanity

How's that working?

RISC-V at least can have a formal spec that companies can choose to follow. The profile mentioned in another comment is one way. Companies can say they're compliant with this or that profile and software can target it.

surajrmal
0 replies
2d4h

Yes. Bootloaders are still complex, but there is less legacy setup required. That said, if you're targeting UEFI instead of BIOS, it's a great deal simpler on x86 as well.

rwmj
0 replies
2d4h

Only in the sense that every board vendor does their own random thing, which makes it simpler for the board vendors and horribly complicated for everyone else.

hyperman1
4 replies
2d5h

The 80286 has the Machine Status Word (MSW), a 16-bit register. The 80386 expands this to CR0, a 32-bit register. Then 64-bit long mode adds the EFER MSR and expands CR0 to 64 bits. But even today only 11 bits of CR0 are in use and EFER has 8 active bits. I wonder why Intel/AMD did not simply use the free bits of the existing register, and why they made that decision twice?

https://wiki.osdev.org/CPU_Registers_x86-64#CR0.
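
For reference, a sketch of the defined bits (positions as given in the architecture manuals; the point is how sparse both registers still are):

    /* CR0: the 11 bits in use */
    #define CR0_PE (1u << 0)   /* Protected-mode Enable */
    #define CR0_MP (1u << 1)   /* Monitor coprocessor */
    #define CR0_EM (1u << 2)   /* x87 Emulation */
    #define CR0_TS (1u << 3)   /* Task Switched */
    #define CR0_ET (1u << 4)   /* Extension Type (hardwired on modern CPUs) */
    #define CR0_NE (1u << 5)   /* Numeric Error */
    #define CR0_WP (1u << 16)  /* Write Protect */
    #define CR0_AM (1u << 18)  /* Alignment Mask */
    #define CR0_NW (1u << 29)  /* Not Write-through */
    #define CR0_CD (1u << 30)  /* Cache Disable */
    #define CR0_PG (1u << 31)  /* Paging */

    /* EFER (MSR 0xC0000080): the 8 active bits */
    #define EFER_SCE   (1u << 0)   /* SYSCALL Enable */
    #define EFER_LME   (1u << 8)   /* Long Mode Enable */
    #define EFER_LMA   (1u << 10)  /* Long Mode Active (read-only status) */
    #define EFER_NXE   (1u << 11)  /* No-Execute Enable */
    #define EFER_SVME  (1u << 12)  /* Secure Virtual Machine Enable (AMD) */
    #define EFER_LMSLE (1u << 13)  /* Long Mode Segment Limit Enable (AMD) */
    #define EFER_FFXSR (1u << 14)  /* Fast FXSAVE/FXRSTOR */
    #define EFER_TCE   (1u << 15)  /* Translation Cache Extension (AMD) */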

rcxdude
2 replies
2d4h

Probably for more robust backwards compatibility with software that might assume a given value for, or write to, the reserved bits. The assignment of bits to registers like this in the hardware is pretty arbitrary; there's not really any cost to using the higher bits.

rep_lodsb
0 replies
2d3h

The flag register layout is another case of extreme backwards compatibility - its lower bits have the same definitions they had on the 8-bit 8080, even the same fixed values:

Sign : Zero : always '0' : AuxCarry : always '0' : Parity : always '1' : Carry

(the parity flag came all the way from the 8008 / Datapoint 2200[1], and is the inverted XOR of the result's lower 8 bits; aux carry is the carry out of bit 3, used for BCD arithmetic)
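
The same layout written out as bit positions (the low byte of FLAGS, unchanged since the 8080):

    #define FLAG_CF (1 << 0)  /* Carry */
                              /* bit 1: always reads as 1 */
    #define FLAG_PF (1 << 2)  /* Parity of the result's low 8 bits */
                              /* bit 3: always reads as 0 */
    #define FLAG_AF (1 << 4)  /* Auxiliary carry out of bit 3 (BCD) */
                              /* bit 5: always reads as 0 */
    #define FLAG_ZF (1 << 6)  /* Zero */
    #define FLAG_SF (1 << 7)  /* Sign */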

Flag bit 15 has also stayed reserved, except at one time it was used by the NEC Vxx chips for their 8080 compatibility mode. That feature had to be first unlocked by executing a special instruction, because there is code out there that loads the entire (16 bit) flag register with 0000 or FFFF. With the mode bit unlocked, that would inadvertently switch the CPU to running a completely different set of opcodes!

[1] https://www.righto.com/2023/08/datapoint-to-8086.html

monocasa
0 replies
2d3h

In particular, AMD made the 64-bit extension without any real input from Intel and didn't want to use any bits that might later conflict with a bit Intel might use in CR0, so a brand-new register was in order.

userbinator
0 replies
1d14h

The one-word answer is probably "bureaucracy". Large groups of people just don't tend to make particularly good decisions overall, and a lot of nonsensical choices arise from that.

Ditto for why CR1 and 5-7 are still "reserved" and CR8 came into existence.

ruslan
3 replies
2d6h

Does this boot procedure work with EFI/UEFI? If so, does the UEFI supervisor emulate switching between real/protected/long modes, or does it happen on real hardware?

khaledh
2 replies
2d5h

No. UEFI firmware creates a completely different environment for a UEFI bootloader than the legacy BIOS environment (real-address mode). The UEFI firmware enters 64-bit long mode directly on modern systems, and sets up a flat memory model GDT, as well as identity-mapped paging.

I've written about creating a UEFI bootloader (for my hobby OS) here: https://0xc0ffee.netlify.app/osdev/05-bootloader-p1.html

surajrmal
1 replies
2d4h

I thought many UEFI implementations support legacy BIOS mode as well. Or, well, they used to.

the_panopticon
0 replies
2d4h

There is still support for CSM in the open source https://github.com/tianocore/tianocore.github.io/wiki/Compat... and even nice projects like https://github.com/coreboot/seabios to fabricate the CSM16 binary, but many vendors have stopped validating this path, including production of legacy BIOS option ROMs for adapters (net, gfx, etc.) https://www.phoronix.com/news/Intel-Legacy-BIOS-EOL-2020. I still believe CSPs maintain some of this support in their hypervisors' guest firmware for legacy OS binaries/ISO boot support?

Also, since Windows requires UEFI Secure Boot enabled by default and CSM has to be disabled for the UEFI Secure Boot path, this is another reason legacy BIOS boot isn't exercised so much these days. We could have added legacy option ROM hashes to UEFI Secure Boot implementations https://patents.google.com/patent/US8694761B2/en, too, but again folks pushed back in their zeal to remove legacy BIOS overall.

We didn't add the CSM spec https://www.intel.com/content/dam/www/public/us/en/documents... to the PI https://uefi.org/specs/PI/1.8A/ since folks were hoping UEFI would remove the need for CSM. I still remember being challenged in the early 2000s by a long-time BIOS manager at Intel: "Is removing legacy a good idea with EFI? You know, we're really at legacy."

rep_lodsb
3 replies
2d3h

The most unnecessarily complicated thing in this article to me is the Makefile and linker script. NASM supports generating flat binary output, but apparently using it would be too "hacky"?

darby_nine
1 replies
2d3h

I find linker scripts much easier to read and reason about than flat NASM, but that's just me. Especially with multiple source files.

sim7c00
0 replies
1d6h

From how I view linker scripts:

You can use a linker script to create a file layout. If you want a flat binary file... you don't want a file layout, so a linker script is really useless for a flat binary blob, even if you build it from multiple files. It'd be just the same as saying: cat blob1 blob2 blob3 > finalblob.

If you, say, have multiple blobs and use linker directives to align them, the position-dependent code within the assembled files will be wrong unless you specifically define it, for example with the ORG directive in NASM. And if you use the ORG directive in NASM, you will need to keep it synchronized with the linker script in order for all the labels etc. to keep the right calculated offsets.

So essentially... a linker script might even add complexity and issues when working with multiple binary blobs; you can't align them or use nice linker features...

If you use more structured files, which allow for example for relocation... then you're already using ELF or PE and can simply produce those files. They can be more masterfully linked with nice features, and linker scripts are then essential.

You can add your binary blob at the right location in the output file of a linking run using a linker script. This is useful for adding data into your files, but for an MBR it's not particularly useful. People do it, but it adds no benefit over just sticking it on the front of your disk using 'dd', for example.

Those being my views, I am wondering what benefit you see here? Are there some linker features I am unaware of that are particularly useful here? (I really only know the alignment stuff... and including blobs, putting stuff into /DISCARD/, and some basic section/segment definitions.) I am not familiar with perhaps more advanced linking features.

sim7c00
0 replies
1d7h

you are totally right.

Later on, makefiles and linker scripts are an important headache, but when generating a flat binary, just generate a flat binary! No need to bloat it.

My OS used to have a file called make.sh to tease at this :D Now I am using a 'file format' and other fancy things, and alas, -f bin and --oformat=binary are but fleeting memories. I tried for a long time to write separate data C files and code C files, dump them out to binary, and then build some monstrosity from there, but it gets really difficult to link & load etc. xD Better to just use ELFs or PEs! I suppose that is literally what they do for you ;P

cf100clunk
0 replies
2d3h

A laudable project. UEFI proponents here wondering why the person bothered to create a new bootloader approach might be missing the point of why people undertake such tasks as this. As the writer ends:

> Cool if you actually came along this far.

Cool indeed.

ForOldHack
0 replies
4h35m

This seems both cool and a good exercise, but is it useful? Does it have a UX like a Fisher-Price toy where you can verify/change your settings on the fly?

Booting is the process of going from mini-me mode/single user/recovery mode to flying.

I have been running Unix alongside a Microsoft product since Xenix/DOS (looks like 40 years...). How much have we advanced?

I also have been using Linux since the Swedish version came out (first release) and GNU 0.1.

My apologies for calling Xenix Unix. It is a has-been wanna-be me-too square excrement, from shortly after release until its languishing demise.

Microsoft does not release products; they empty their cat boxes onto customers. (The most recent examples are Copilot and 22H2.)

If you look at how F1 cars have evolved, and pencils as well as pocket calculators - how close are we to the usable ideal?

Why isn't the bootloader a static kernel mode? It used to be. Someone recently suggested it should be, and I agreed.