
Oasis – a small, statically-linked Linux system

kentonv
123 replies
1d2h

Doesn't linking everything statically imply that the base image -- and memory, at runtime -- will be bloated by many copies of libc and other common libraries? I do like the simplicity of static linking but it sort of seems to go against the idea of avoiding "bloat".

jezze
57 replies
1d2h

A linker typically only includes the parts of the library it needs for each binary, so some parts will definitely have many copies of the same code when you statically link, but it will not make complete copies.

But I wouldn't consider this bloat. To me it is just a better separation of concerns. Bloat, to me, would be a system that has to keep track of all library dependencies instead, both from a packaging perspective and at runtime. I think it depends where you are coming from. To me static linking is just cleaner. I don't care much for the extra memory it might use.

jvanderbot
49 replies
1d2h

Dynamic linking served us when OS upgrades came infrequently, user software was almost never upgraded short of mailing out new disks, and vendors had long lead times to incorporate security fixes.

In the days of fast networks, embedded OSs, ephemeral containers, and big hard drives, a portable static binary is way less complex and only somewhat less secure (unless you're regularly rebuilding your containers/execs, in which case it's break-even security-wise, or possibly more secure, simply because each exec may not include vulnerable code).

gnramires
18 replies
23h37m

As far as I can see, it would be unwise to roll back 30 years of (Linux) systems building with dynamic linking in favor of static linking. It mostly works very well, saves some memory and disk space, and has nice security properties. Both have significant pros and cons.

I've been thinking (not a Linux expert by any means) that the ideal solution would be better dependency management: say, if binaries themselves carried dependency information. That way you get the benefits of both dynamic and static linking by just distributing binaries with embedded library requirements. Also, I think there should be a change of culture in library development to clearly mark compatibility breaks (I think something like semantic versioning works like that?).

That way, your software could support any newer version up to a compatibility break -- which should be extremely rare. And if you must break compatibility there should be an effort to keep old versions available, secure and bug free (or at least the old versions should be flagged as insecure in some widely accessible database).

Moreover, executing old/historical software should become significantly easier if library information was kept in the executable itself (you'd just have to find the old libraries, which could be kept available in repositories).

I think something like that could finally enable portable Linux software? (Flatpak and AppImage notwithstanding)

kentonv
8 replies
23h3m

Everything you describe already exists. Executables do list their dependencies, and we have well-defined conventions for indicating ABI breaks. It is entirely normal to have multiple major versions of a library installed for ABI compatibility reasons, and it is also entirely normal to expect that you can upgrade the dependencies out from under a binary as long as the library hasn't had an ABI break.
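
For example, the dependency list and the SONAME convention are right there in the ELF headers (output abridged; the exact libraries vary by distro):

  $ readelf -d $(command -v ls) | grep NEEDED
   0x0000000000000001 (NEEDED)  Shared library: [libcap.so.2]
   0x0000000000000001 (NEEDED)  Shared library: [libc.so.6]
  $ readelf -d /usr/lib/libcap.so.2 | grep SONAME
   0x000000000000000e (SONAME)  Library soname: [libcap.so.2]

The ".so.2" in the SONAME is the ABI version; bumping it is how a break is signalled, and two majors can be installed side by side.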

The bigger dependency management problem is that every distro has their own package manager and package repository and it's tough for one application developer to build and test every kind of package. But if they just ship a binary, then it's up to the poor user to figure out what packages to install. Often the library you need may not even be available on some distros or the version may be too old.

rwmj
5 replies
22h42m

That's why distros ask you to provide just the sources and we'll do the packaging work for you. The upstream developers shouldn't need to provide packages for every distro. (Of course you can help us downstream packagers by not having insane build requirements, using semantic versioning, not breaking stuff randomly etc).

kentonv
4 replies
20h29m

This is only realistic for established applications with large userbases. For new or very niche apps, distros are understandably not going to be very interested in doing this work. In that case the developer needs to find a way to distribute the app that they can reasonably maintain directly, and that's where containers or statically-linked binaries are really convenient.

rwmj
1 replies
20h17m

This isn't really true: Fedora, Debian, and Arch have huge numbers of packages, many very niche. You might well need to make the distro aware that the new program exists, but there are established routes for doing that.

TimeBearingDown
0 replies
3h40m

Arch particularly has the user repository where anyone can submit a package and vote on the ones they use most often to be adopted into the community repository, yes.

It’s a great way to start contributing to the distribution at large while scratching an itch and providing a service to individual projects.

xorcist
0 replies
6h10m

This is not grounded in reality. Look at popcon or something like it. It is a nearly perfect "long tail" distribution. Most software is niche, and it's packaged anyway. It's helped by the fact that the vast majority of software follows a model where it is really easy to build. There are a lot more decisions to take with something like Chromium, which perhaps ironically is also the type of software which tends to package its own dependencies.

palata
0 replies
19h10m

I agree with everything you said up to this. We're talking about a software library, for which the user is a software developer. IMO a software developer should be able to package a library for their own distro (then they can share that package with their community and become this package's maintainer).

As the developer of an open source library, I don't think that you should distribute it for systems that you don't use; someone else who uses it should maintain the package. It doesn't have to be a "distro maintainer". Anyone can maintain a single package. I am not on a very mainstream distro, and I still haven't found a single package that I use and that is not already maintained by someone in the community (though I wish I had; I would like to maintain a package). My point is that it really works well :-).

I disagree with the idea that we should build a lot of tooling to "lower the bar" such that devs who don't know how to handle a library don't have to learn how to do it. They should learn; it's their job.

For proprietary software, it's admittedly a bit harder (I guess? I don't have much experience there).

charcircuit
1 replies
19h47m

Executables do list their dependencies

They list paths to libraries, but not the exact version that the executable depends on. It is a common occurrence for executables to load versions of libraries they were not designed to be used with.

YoshiRulz
0 replies
6h32m

If you're talking about ELF for desktop Linux, they for the most part don't contain file paths, and may specify the version but usually just have the major version (to allow for security updates). You can use ldd to read the list of deps and also do a dry run of fulfilling them from the search path, for example:

  $> ldd $(command -v ls)
    linux-vdso.so.1 (0x00007ffd5b3a0000)
    libcap.so.2 => /usr/lib/libcap.so.2 (0x00007f6bd398c000)
    libc.so.6 => /usr/lib/libc.so.6 (0x00007f6bd3780000)
    /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f6bd39e5000)

josephg
6 replies
23h16m

Yes, if someone actually did dependency management in Linux properly then I agree - dynamic linking would be fine. It works pretty well in NixOS as I understand it. But it's called dependency hell for a reason. And the reason is almost no operating systems handle C dependencies well. There are always weird, complex, distribution-specific systems involving 18 different versions of every library. Do you want llvm18 or llvm18-dev or llvm-18-full-dev or something else entirely? Oh, you're on Gentoo? Better enable some USE flags. Red Hat? It's different again.

If Linux dependency management worked well, there would be no need or appetite for Docker. But it works badly. So people just use Docker and Flatpak and whatnot instead, while my hard drive gently weeps. I don't know about you, but I'm happy to declare bankruptcy on this project. I'd take a 2 MB statically linked binary over a 300 MB Linux Docker image any day of the week.

palata
3 replies
19h3m

If Linux dependency management worked well, there would be no need or appetite for Docker.

I kindly disagree here. Linux dependency management does work well. The problem is the bad libraries that don't do semver properly, and the users who still decide to use bad libraries.

If people stopped using libraries that break ABI compatibility, then the authors of those libraries would have to do it properly, and it would work. The reason it doesn't work is really just malpractice.

zaphar
2 replies
18h51m

If Linux dependency management works well in theory but not in practice, then it doesn't work. It works in Nix because it can literally use multiple minor versions of a library when it needs to with no problem. Most distros can't or won't do that.

You can call it malpractice but it's not going to stop so in practice you need a way to deal with it.

palata
1 replies
18h36m

Well, by calling it "malpractice", I say that it works for "true professionals". Then we could say that "it doesn't work in practice if people who don't know what they are doing cannot use it", of course.

The question then is where we want to put the bar. I feel like it is too low, and most software is too bad. And I don't want to participate in making tooling that helps lowering the bar even more.

palata
0 replies
7h49m

And by the way it does work really well for good software. Actually most Linux distros use a system package manager and have been doing it for decades.

So I think it would be more accurate to say that "it doesn't work for lower quality software". And I agree with that.

StillBored
1 replies
19h35m

This isn't really an "operating system" problem. Particularly in the open-source world, there are a number of fairly core libraries that refuse to provide any kind of API compatibility.

Then, when there are a couple dozen applications/etc. that depend on that library, it's an almost impossible problem, because each of those applications then needs to be updated in lockstep with the library version. There is nothing "clean" about how to handle this situation short of having loads of distro maintainers showing up in the upstream packages to fix them to support newer versions of the library. Of course, then all the distros need to agree on what those versions are going to be...

Hence containers, which don't fix the problem at all. Instead they just move the responsibility away from the distro, which should never really have been packaging applications to begin with.

palata
0 replies
19h2m

away from the distro, which should never really have been packaging applications to begin with.

I disagree here: the whole point of a "software distribution" is to "distribute" software. And it does so by packaging it. There is a ton of benefit in having distro/package maintainers, and we tend to forget it.

arghwhat
1 replies
16h29m
gnramires
0 replies
8h30m

I should have been a bit more balanced or nuanced: I also don't think static linking should be forbidden or completely shunned. As Linus himself says, a combination of both may be ideal. For basic system libraries like GUI libraries the current approach works well. But you should be free to link statically if you want, or if there are serious issues when you don't. Maybe dynamic linking should be focused on a smaller number of well-curated libraries and the rest should be left to static. Library archeology seems like a potentially serious problem years from now.

I still think better listing of dependencies (perhaps with the option to pin an exact version?) would be helpful, as well as better usage of something like semver. Someone mentioned binaries include paths to dependencies, but as far as I know there is no standard tool or interface to automatically resolve those dependencies; maybe some more tooling in this area would help.

Another nice point about how it currently works is that I think it relieves work from programmers. The policy of "Don't worry about distribution (just tell us it exists)" from distros seems like one less headache for the creator (and you can provide statically linked binaries too if you want).

As most things in life, the ideal is somewhere in the middle...

bscphil
18 replies
23h18m

In the days of fast networks, embedded OSs, ephemeral containers, and big hard drives, a portable static binary is way less complex and only somewhat less secure

If what you're trying to do is run a single program on a server somewhere, then yes absolutely a static binary is the way to go. There are lots of cases, especially end user desktops, where this doesn't really apply though.

In my opinion the debate over static vs dynamic linking is resolved by understanding that they are different tools for different jobs.

moffkalast
9 replies
21h26m

It applies very much to end user desktops as well, with snap, flatpak, etc. working towards it. Lots of software requires dependencies that aren't compatible with each other and result in absolute dependency hell or even a broken install when you dare to have more than one version of something. Because who would ever need that, right? Especially not in a dev desktop environment...

Windows is basically all self-contained executables and the few times it isn't it's a complete mess with installing VC++ redistributables or the correct Java runtime or whatever that clueless users inevitably mess up.

We have the disk space, we have the memory, we have the broadband to download it all. Even more so on desktop than on some cheap VPS.

palata
6 replies
19h20m

when you dare to have more than one version of something. Because who would ever need that, right?

If done properly, you can have multiple major versions of something and that's fine. If one app depends on libA.so.1.0.3, the other on libA.so.1.1.4, and they can't both live with 1.1.4, it means that `libA` did something wrong.

One pretty clear solution to me is that the dev of libA should learn good practice.
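
For the record, the whole mechanism is tiny (libA here is hypothetical); the SONAME carries only the ABI/major version, so 1.0.3 -> 1.1.4 is a drop-in replacement:

  $ gcc -shared -fPIC -Wl,-soname,libA.so.1 a.c -o libA.so.1.1.4
  $ ln -sf libA.so.1.1.4 libA.so.1   # what executables resolve at run time
  $ ln -sf libA.so.1 libA.so         # what "-lA" resolves at link time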

zaphar
3 replies
18h55m

Yep, the dev(s) of libA should learn good practice. But they didn't and app1 and app2 still have the problem. Static linking solves it for them more reliably than trying to get the dev of libA to "git gud". Much of the desire to statically link binaries comes from this specific scenario playing out over and over and over.

Heck for a long time upgrading glibc by a minor version was almost guaranteed to break your app and that was often intentional.

palata
2 replies
18h50m

Yep, the dev(s) of libA should learn good practice. But they didn't and app1 and app2 still have the problem.

Sure :-). I just find it sad that app1 and app2 then use the bad libA. Of course that is more productive, but I believe this is exactly the kind of philosophy that makes the software industry produce worse software every year :(.

zaphar
1 replies
18h27m

I used to think the same. But after nearly 30 years of doing this, I no longer think that people will meet the standard you propose. You can either work around it or you can abandon mainstream software entirely and make everything you use bespoke. There are basically no other choices.

palata
0 replies
17h18m

Yeah I try really hard to not use "bad" dependencies. When I really can't, well... I can't.

But still I like to make it clear that the software industry goes in that direction because of quality issues, and not because the modern ways are superior (on the contrary, quite often) :-).

moffkalast
1 replies
18h20m

Wishing that all people will be smart and always do the correct thing is setting yourself up for madness. The dependency system needs to be robust enough to endure a considerable amount of dumbfuckery. Because there will be a lot of it.

palata
0 replies
17h15m

Just because I have to live with "malpractice" doesn't mean I shouldn't call it that, IMHO.

I can accept that someone needs to make a hack, but I really want them to realize (and acknowledge) that it is a hack.

shusfuejdn
0 replies
15h51m

It should be noted though that flatpaks and related solutions are NOT equivalent to static linking. They do a lot more and serve a wildly different audience than something like Oasis. They are really much too extreme for non-GUI applications, and I would question the competence of anybody found running ordinary programs packaged in that manner.

I recognize that you probably weren't confused on this; I'm just clarifying for others, since the whole ecosystem can be a bit confusing.

marwis
0 replies
20h58m

Windows is basically all self-contained executables

With the caveat that the "standard library" they depend on is multiple GBs and provides more features than entire Gnome.

Also, MS has always worked on tech to avoid library duplication, such as WinSxS; now MSIX even de-duplicates automatically at download time.

StillBored
6 replies
19h47m

  understanding that they are different tools for different jobs

Right, but this goes against the dogma on both sides and the fact that much of Linux userspace is the wild west. Ideally, there should be a set of core system libraries (e.g. glibc, openssl, xlib, etc.) that have extremely stable API/ABI semantics and are rarely updated.

Then one dynamically links the core libraries and statically links everything else. This ensures that a bug/exploit found in something like OpenSSL doesn't require the entire system to be recompiled and updated, while allowing libraries that are not stable, used by few packages, etc., to be statically linked by their users. Then, when lib_coolnew_pos has a bug, it only requires rebuilding the two apps linked to it, and not necessarily even then if those applications don't expose the bug.
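
That split is already expressible with plain GNU ld; a rough sketch (lib_coolnew being the hypothetical unstable library from above):

  # keep glibc/OpenSSL dynamic, pull the unstable library in statically
  $ gcc main.o -Wl,-Bstatic -lcoolnew -Wl,-Bdynamic -lssl -lcrypto -o app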

palata
4 replies
19h23m

Then one dynamically links the core libraries and statically links everything else.

Agreed, and that is already totally possible.

- If you split your project in libraries (there are reasons to do that), then by all means link them statically.

- If you depend on a third party library that is so unstable that nobody maintains a package for it, then the first question should be: do you really want to depend on it? If yes, you have to understand that you are now the maintainer of that library. Link it dynamically or statically, whichever you want, but you are responsible for its updates in any case.

The fashion that goes towards statically linking everything shows, to me, that people generally don't know how to handle dependencies. "It's simpler" to copy-paste the library code in your project, build it as part of it, and call that "statically linking". And then probably never update it, or try to update it and give up after 10min the first time the update fails ("well, the old version works for now, I don't have time for an update").

I am fine with people who know how to do both and choose to statically link. I don't like the arguments coming from those who statically link because they don't know better, but still try to justify themselves.

superb_dev
1 replies
17h58m

Statically linking does not imply copying the code into the project

palata
0 replies
17h13m

Of course not. My point was that people are wrong to say "static linking is better" when the only thing they know how to do (copying the code into their project) merely results in something that looks like static linking.

jcelerier
1 replies
3h55m

Agreed, and that is already totally possible

How? Take for instance OpenSSL mentioned above. I have software to distribute for multiple Debian versions, starting from Bullseye, which uses OpenSSL 1.x and libicu67. Bookworm, the more recent, has icu72 and OpenSSL 3.x, which are binary-incompatible. My requirement is that I do only one build, not one per distro, as I do not have the manpower or CI availability for this. What's your recommendation?

palata
0 replies
2h31m

How?

Well you build OpenSSL as a static library, and you use that...

Take for instance OpenSSL mentioned above.

However for something like OpenSSL on a distro like Debian, I really don't get why one would want it: it is most definitely distributed by Debian in the core repo. But yeah, I do link OpenSSL statically for Android and iOS (where anyway the system does not provide it). That's fairly straightforward, I just need to build OpenSSL myself.
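
Roughly like this, from memory (prefix and target are illustrative):

  $ ./Configure linux-x86_64 no-shared --prefix=$HOME/sysroot
  $ make && make install_sw
  # then point the app's build at $HOME/sysroot and link libssl.a / libcrypto.a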

My requirement is that I do only one build

You want to make only one build that works with both OpenSSL 1 and OpenSSL 3? I am not sure I understand... the whole point of the major update is that they are not compatible. I think there is fundamentally no way (and that's by definition) to support two explicitly incompatible versions in the same build...

formerly_proven
0 replies
8h28m

Right, but this goes against the dogma on both sides and the fact that much of Linux userspace is the wild west. Ideally, there should be a set of core system libraries (e.g. glibc, openssl, xlib, etc.) that have extremely stable API/ABI semantics and are rarely updated.

This is largely true and how most proprietary software is deployed on Linux.

glibc is pretty good about backwards compatibility. It gets shit for not being forwards compatible (i.e. you can't take a binary linked against glibc 2.34 and run it on a glibc 2.17 system). It's not fully bug for bug compatible. Sometimes they'll patch it, sometimes not. On Windows a lot of applications still link and ship their own libc, for example.
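
That requirement is also easy to check before shipping; for a hypothetical ./app (output illustrative):

  $ objdump -T ./app | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -1
  GLIBC_2.34

Build against the oldest glibc you need to support and the binary runs on everything newer.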

xlib et al. don't break in practice. Programs bring their own GUI framework linking against them, and it'll work. Some are adventurous and link against system gtk2 or gtk3. Even that generally works.

OpenSSL does have a few popular SONAMEs around but they have had particularly nastily broken APIs in the past. Many distros offer two or more versions of OpenSSL for this reason. However, most applications ship their own.

If you only need to talk to some servers, you can link against system libcurl though (ABI compatible for like twenty years). This would IMHO be much better than what most applications do today (shipping their own crypto + protocol stack which invariably ends up with holes). While Microsoft ships curl.exe nowadays, they don't include libcurl with their OS. Otherwise that would be pretty close to a universally compatible protocol client API and ABI and you really wouldn't have any good reason any more to patch the same tired X.509 and HTTP parser vulnerabilities in each and every app.

wongarsu
0 replies
15h20m

Windows makes up the lion's share of desktop computing, and seems to be doing fine without actually sharing libraries. Lots of dynamic linking going on, but since about the XP days the entire Windows ecosystem has given up on different software linking the same library file, except for OS interfaces and C runtimes. Instead everyone just ships their own version of everything they use, and dynamic linking is mostly used to solve licensing, for developer convenience, or for plugin systems. The end result isn't that different from everything being statically linked.

nequo
3 replies
19h28m

Dynamic linking served us when OS upgrades came infrequently, user software was almost never upgraded

Even today, dynamic linking is not only a security feature but also serves convenience. A security fix in OpenSSL or libwebp can be applied to everything that uses them by just updating those libraries instead of having to rebuild userland, with Firefox, Emacs, and so on.

plopz
2 replies
14h19m

Then why does every steam game need to install a different version of visual c++ redistributable?

nequo
0 replies
14h3m

Because they are not packaged by the distros so they are not guaranteed to have the libraries present that they were linked against? I am just guessing, I haven’t used Steam.

YoshiRulz
0 replies
6h21m

Does this happen on Windows too? The reason it happens on Linux is because every game ran via Proton/WINE gets its own virtual C: drive.

chaxor
3 replies
19h27m

I'm not versed in this, so apologies for the stupid question, but wouldn't statically linking be more secure, if anything? Or at least have potentially better security?

I always thought the better security practice is statically linked Go binary in a docker container for namespace isolation.

tyingq
2 replies
19h20m

If there is a mechanism to monitor the dependency chain. Otherwise, you may be blissfully unaware that some vulnerability in libwhatever is in some binary you're using.

Golang tooling provides some reasonable mechanisms to keep dependencies up to date. Any given C program might or might not.

palata
1 replies
18h39m

If there is a mechanism to monitor the dependency chain.

So that would not be less secure, but it would also not make it more secure than dynamic linking with a good mechanism, right?

tyingq
0 replies
17h47m

Personally, I think any inherent security advantage (assuming it has great dependency management) would be very small. This "Oasis" project doesn't seem to call it out at all, even though they are making a fair amount of effort to track dependencies per binary.

They cite the main benefits being this: "Compared to dynamic linking, this is a simpler mechanism which eliminates problems with upgrading libraries, and results in completely self-contained binaries that can easily be copied to other systems".

Even that "easily be copied to other systems" sort of cites one of the security downsides. Is the system you're copying it to going to make any effort to keep the transient statically linked stuff in it up to date?

manmal
1 replies
10h28m

Apple has been pushing dynamic libraries for a while, but now realized that they really like static linking better. The result is they found a way to convert dynamic libraries into static ones for release builds, while keeping them dynamic for debug builds: https://developer.apple.com/documentation/xcode/configuring-...

TimeBearingDown
0 replies
3h38m

Very interesting, as of Xcode 15? I wonder if anyone has explored doing this on Linux, and hope this gets a little more attention.

teaearlgraycold
0 replies
1d2h

Yeah I’d prefer we just use another gigabyte of storage than add so much complexity. Even with what is a modest SSD capacity today I have a hard time imagining how I’d fill my storage. I’m reminded of my old workstation from 8 years ago. It had a 500GB hard drive and a 32GB SSD for caching. I immediately reconfigured to just use the SSD for everything by default. It ended up being plenty.

rwmj
1 replies
22h44m

You should be keeping track of those library dependencies anyway if you want to know what you have to recompile when, say, zlib or openssl has a security problem.

ithkuil
0 replies
22h23m

Well, you have to do that anyways

jhallenworld
1 replies
1d

A linker typically only includes the parts of the library it needs for each binary, so some parts will definitely have many copies of the same code when you statically link, but it will not make complete copies.

Just to add to what you said: in the old days the linker would include only the .o files in the .a library that were referenced. Really common libraries like libc should be made to have only a single function per .o for this reason.

But modern compilers have link time optimization, which changes everything. The compiler will automatically leave out any items not referenced without regard to .o file boundaries. But more importantly, it can perform more optimizations. Perhaps for a given program a libc function is always called with a constant for a certain argument. The compiler could use this fact to simplify the function.
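
A sketch of that, with a hypothetical libfoo (both the archive and the program need to carry the LTO bytecode):

  $ gcc -O2 -flto -c foo.c && gcc-ar rcs libfoo.a foo.o
  $ gcc -O2 -flto main.c libfoo.a -o app   # library code is re-optimized for this program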

I'm thinking that you might be giving up quite a lot of performance by using shared libraries, unless you are willing to run the compiler during actual loading.

Even without LTO, you can get the same results in C++ by having your library in the form of a template, so the library lives entirely in a header under /usr/include, with nothing in /usr/lib.

inkyoto
0 replies
15h32m

Just to add to what you said: in the old days the linker would include only the .o files in the .a library that were referenced.

It was not exactly like that. Yes, the .o file granularity was there but the unused code from that .o file would also get linked in.

The original UNIX linker had a very simple and unsophisticated design (compared to its contemporaries) and would not attempt to optimise the final product being linked. Consider a scenario where the binary being linked references A from an «abcde.o» file, and the «abcde.o» file has A, B, C, D and E defined in it, so the original «ld» would link the entire «abcde.o» into the final product. Advanced optimisations came along much later on.

inkyoto
1 replies
15h39m

A linker typically only includes the parts of the library it needs for each binary […]

It is exactly the same with dynamic linking, thanks to the demand paging available in all modern UNIX systems: the dynamic library is not loaded into memory in its entirety, it is mapped into the process's virtual address space.

Initially, no code from the dynamic library is loaded into memory; when the process first attempts to execute an instruction from the required code, a page fault occurs, and the virtual memory subsystem loads the required page(s) into the process's memory. A dynamic library can be 10 GB in size and appear as 10 GB in the process's memory map, yet only one page may be physically present in memory. Moreover, under heavy memory pressure the kernel can evict the page(s) (using LRU or a more advanced page tracking technique), and the process (especially a background or idling one) can end up with zero resident pages of the dynamic library's code.

Fundamentally, dynamic linking is deferred static linking where the linking work is delegated to the dynamic loader. Dynamic libraries incur a [relatively] small overhead of slower process startup (compared to statically linked binaries), due to the dynamic linker having to load the symbol table and the global offset table from the dynamic library and perform symbol fixups according to the process's own virtual memory layout. It is a one-off step, though. For large, very large and frequently used dynamic libraries, caching can be employed to reduce such overhead.

Mapping a dynamic library into the virtual address space != loading the dynamic library into memory; they are two disjoint things. It almost never happens that the entire dynamic library is loaded into memory, as 100% code coverage is exceedingly rare.
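
You can see the gap between "mapped" and "resident" for any process; e.g. for the current shell (output trimmed, numbers illustrative):

  $ pmap -x $$ | grep libc
  00007f2a1d800000    1972     704       0 r-x-- libc.so.6

Kbytes is the mapped size, RSS is what is actually sitting in physical memory.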

akira2501
0 replies
7h58m

It is a one-off step, though.

Yes, but it is often a one-off step that sets all your calls to go through a pointer, so each call site in a dynamic executable is slower due to an extra indirection.

For large, very large and frequently used dynamic libraries, caching can be employed to reduce such overhead.

The cache is not unlimited nor laid out obviously in userspace, and if you have a bunch of calls into a library that end up spread all over the mapped virtual memory space, sparse or not, you may evict cache lines more than you otherwise would if the functions were statically linked and sequential in memory.

as 100% code coverage is exceedingly rare.

So you suffer more page faults than you otherwise have to in order to load one function in a page and ignore the rest.

giljabeab
0 replies
19h37m

Can't file systems de-dupe this now?

Gazoche
27 replies
1d

I'll take bloat over dependency hell every day of the week. Feels like every single app is a bundled web browser these days anyways.

palata
16 replies
1d

dependency hell

Dependency hell comes from bad dependencies that don't do semver properly. Choose your deps carefully, and that's perfectly fine.

Feels like every single app is a bundled web browser these days anyways.

Yep, that's apparently the best way to use the bad libraries people want to use and not give a damn about semver.

IshKebab
8 replies
19h14m

There are various kinds of "dependency hell". To be honest I can't think of any that are due to not doing semver properly. Usually it's:

1. Software depending on versions of libraries that are newer than the latest version available on the distro you have to use (cough RHEL 8). E.g. this very day I ran into a bug where some Asciidoctor plugin craps out with an error because my version of Ruby isn't new enough. Ruby's advice for how to install Ruby is "use your package manager; you will get an old version btw fuck you".

90% of the time it's bloody glibc. Every Linux user has run into the dreaded glibc version error dozens of times in their career.

2. Software that can't install multiple versions of the same package, leading to diamond dependency issues. Python is very bad for this.

palata
7 replies
18h53m

Software depending on versions of libraries that are newer than the latest version available on the distro you have to use (cough RHEL 8).

That is a fair point, but it raises a question: if you absolutely need to use software that is not packaged by your distro of choice and that you cannot package yourself (are you sure you can't maintain a "community" package yourself with RHEL?), maybe you don't want that distro.

Different distros come with different goals. If you take a "super slow but secure" distro, it will be slow and secure. If you take a rolling distro, you get updates very quickly but it has drawbacks. It depends on the use-case, but going for a "slow and secure" distro and then building tooling to work around that choice ("nevermind, I'll ship new and less mature software anyway, statically linked") seems to defeat the purpose of the distro... right?

IshKebab
6 replies
10h11m

maybe you don't want that distro

Well I definitely don't want RHEL 8 but unfortunately I have to use it because some software I use requires it (RHEL 9 doesn't have old enough versions of some libraries) or is only certified on it (this is for work).

But even if I was using a more modern distro, none of them have all software packaged. And no, I obviously don't want to become a packager. Some of the software I use is closed source, so that's not even an option.

The only real option is Docker (or Apptainer/Distrobox etc), which sucks.

The fundamental model of "we'll just ship all software that exists; all software is open source" that most distros try to use is just fundamentally wrong.

Snap and Flatpak are trying to fix that but in my experience they aren't remotely ready yet.

palata
4 replies
7h52m

And no, I obviously don't want to become a packager.

That's where I disagree. It's not that hard, and if more people did it, more software would be packaged. Actually I have yet to find a library that I actually need and that is not already packaged and maintained by someone from the community. Then I could finally maintain one myself.

To me, you're basically saying: "I don't want to learn and commit to maintain a package for my distro, because reason, but I am fine spending time with all that tooling that I say "sucks" (Docker/Apptainer/Distrobox)". That's what I don't really get. There is a solution that works well (for me, at least): package the software that is not already available yourself.

Some of the software I use is closed source so that's not even an option.

I would not want to maintain a package with proprietary binaries that I don't own, that's for sure. But if you need to, you can. As long as the author distributes binaries for your platform, it's not much harder than making an open source package.

TimeBearingDown
1 replies
3h17m

I agree with essentially all of this, and I really think the barrier to entry for packaging should be lower. It was deeply helpful to me while learning Linux to be able to write a Bash PKGBUILD, maybe 20-40 lines, to have that clear structure and ease my own update process, while also making it available to others on the Arch User Repository and learning from comments others left. These days I can whip up a simple PKGBUILD for a simple project I discover in just a minute or three, and it led me to so much experience handling build issues and software dependency structure.
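
For anyone curious, a bare-bones sketch of one for a hypothetical CMake project "foo" (names and URL made up):

  pkgname=foo
  pkgver=1.2.3
  pkgrel=1
  pkgdesc="Hypothetical example package"
  arch=('x86_64')
  url="https://example.com/foo"
  license=('MIT')
  depends=('openssl')
  source=("$url/releases/foo-$pkgver.tar.gz")
  sha256sums=('SKIP')

  build() {
    cmake -B build -S "foo-$pkgver" -DCMAKE_INSTALL_PREFIX=/usr
    cmake --build build
  }

  package() {
    DESTDIR="$pkgdir" cmake --install build
  }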

I would leap for joy to see Red Hat or Debian or even Gentoo make inroads here, but I haven't looked closely or recently enough at Debian, and .ebuild files hurt my brain. I do believe I recall Gentoo requiring more work to get my packages available and listed anywhere.

palata
0 replies
2h27m

Yeah I do agree, I find Arch's PKGBUILDs and Alpine's APKBUILDs much easier to write than e.g. a debian package. Not that the debian package is impossible, but it's not as straightforward.

IshKebab
1 replies
4h46m

That's where I disagree.

Well we'll have to agree to disagree on that, but I think if you told most people that the normal way to install third party software for Linux was to become a package maintainer they would rightly laugh you straight to the asylum.

That's what I don't really get.

The reason is that Docker, Apptainer etc are much easier than creating packages for all the dependencies of the software I want to run. Multiplied by the number of distros I need to use. Pretty obvious no?

palata
0 replies
2h19m

It is pretty obvious indeed, coming from what I gather is your point of view. You seem to believe that you have to become a package maintainer for all the dependencies of the software you want to run. But I think you have this wrong.

Take it like this: in the current state, I am struggling to find a single interesting library for which I could become a package maintainer for my non-mainstream Linux distro, because there always exists one. Maybe not in the core repo, maybe only in the community repo. But still: I don't maintain a single package today, because I haven't found one that I use and that is not already maintained by somebody else.

Really, if you decide to create packages for all the dependencies of the software you want to run, congratulations: you have just created a new distro from scratch. But even most new distros don't do that :-).

In other words, there are way more developers than libraries that are worth being depended on. So even if we wanted to, not everybody can maintain a single package. There are just not enough packages out there for that, by very, very far.

xorcist
0 replies
5h56m

With traditional Linux distributions like Red Hat, you can sometimes take a package from a newer release (or something like Fedora) in source form and rebuild it for your release. When it works, it's literally just one command, which does everything to give you a binary package. If there is some problematic patch, you can often take it out, but also put in patches from the old version. It's usually documented enough to make it obvious.
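
Concretely, on an RPM-based distro it's along the lines of (package name illustrative):

  $ dnf builddep ruby-3.2.2-1.fc38.src.rpm       # pull in the build dependencies
  $ rpmbuild --rebuild ruby-3.2.2-1.fc38.src.rpm # drops binary RPMs under ~/rpmbuild/RPMS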

It's usually straightforward with end user applications, such as bash or git or ruby. Things more likely to be tied to the rest of the operating system, such as SELinux or PAM, are less likely to work. If there are dependencies to things that is release dependent, it's not worth the bother.

Maybe you can argue you don't want to "become a packager", but someone has already done the work for you and you don't need more than superficial knowledge about the system to do it. In most distributions, source packages aren't harder to install than binary packages.

hn_go_brrrrr
4 replies
1d

Semver is nearly impossible to do "properly" because of https://xkcd.com/1172. With a sufficient number of users, all bug fixes are breaking changes. If the behavior can possibly be observed in any way, some user will be depending on it, deliberately or otherwise.

Ar-Curunir
1 replies
23h46m

Semver defines what is breaking and not-breaking. E.g., Rust semver says that "code should continue compiling with a minor version bump, but not necessarily for a major version bump"

steveklabnik
0 replies
23h5m

Yes. The very first line of the spec:

Software using Semantic Versioning MUST declare a public API. This API could be declared in the code itself or exist strictly in documentation. However it is done, it SHOULD be precise and comprehensive.

If it's not in the API, it is not bound by the rules. Many ecosystems come up with various norms, like Rust has, to help guide people in this. But it's almost certainly not a semver violation to make the change described in the XKCD because "handle unknown unknowns" is not possible. That doesn't mean that we should throw out the entire idea of software assisted upgrades to dependencies.

palata
0 replies
21h42m

I would argue that https://xkcd.com/1172 is a case where the user "deserves" the breaking change, because they relied on a hack in the first place.

That's the thing: I feel like people tend to call "dependency hell" what I would consider downright malpractice. "Shared libraries don't work because they require good practice" is, IMO, not a good argument against shared libraries. If you need to design your tool with the constraints that "users will use it wrongly", then it's already lost.

dwattttt
0 replies
21h27m

Semver doesn't stop people from depending on unstable/implementation-specific behaviour; it needs to be coupled with a strong mechanism for defining what behaviour is defined by an API, and the result is that "the bug" is with all those users who depend on un-guaranteed behaviour.

The breaks happen regardless, but you have a principled way of defining whose fault/problem it is.

avgcorrection
1 replies
7h42m

It seems impossible to solve this by just everyone adopting a manifesto that one GitHub guy wrote many years ago and which has been adopted in some communities but not in many others. And besides there is plenty of (1) human judgement about what is breaking and not (which goes against machine-readability), and (2) worrying about the minutiae of what is a “patch” and a “feature”, and (3) weird implicit social taboos about doing major releases “too often” (?).[1][2]

Most things might be solved by everyone doing SemVer. And for all I know some communities might be running like greased pigs in a chute exactly because they use SemVer (I don’t tend to hear about the everyday everything-is-working stories on HN). But also doing static linking a bit more seems like it would help a lot with the same problem.

[1] All based on discussions I’ve seen. Not really personal experience.

[2] Again, making a spec/manifesto which is both about machine-readability and about shaming people for vague things is very muddled. Although I don’t know how much the latter is about the culture around it rather than the spec itself.

palata
0 replies
6h54m

Good points.

It seems impossible to solve this by just everyone adopting a manifesto that one GitHub guy wrote many years ago

Well by "semver" I mostly mean "change the major number to indicate a change of ABI", I don't mind so much about the other numbers in this case. But that's a good question: I don't know when it started being a thing. I would guess much, much earlier than GitHub, though.

human judgement about what is breaking and not

Hmmm... ABI compatibility for the public interface is not really subjective, or is it?

weird implicit social taboos about doing major releases “too often”

Yes I don't get that one and I fight hard against it.

But also doing static linking a bit more seems like it would help a lot with the same problem.

Well I am not fundamentally against static linking; to me it makes sense to do a mix, with the caveat that if you link something statically, then you are the maintainer of that code. Whereas if you link a system library dynamically, you merely depend on it.

My problem is about moving to "static linking only" (or "by default", but I don't even know if Rust allows dynamic linking at all?).

nerpderp82
9 replies
1d

Dynamic Library hell is why Docker exists. If operating systems had less global state and less ambient authority, our systems would be vastly more tractable. Instead we still create environments that look like replicas of whole hosts.

Might as well go all in and use something with pervasive virtualization like Qubes.

https://www.qubes-os.org/

palata
4 replies
19h39m

To be fair, QubesOS does not really solve the problem of bad libraries creating dependency hell. If you need to ship every app with its own rootfs because you can't handle dependencies, then you will have to do that on QubesOS as well (you don't want one VM per app).

Also the biggest problem I had with QubesOS is that it doesn't support GPU (for security reasons). It feels like that was a big cause for the reduced performance. I wish there was a solution for the GPU, and then I would love to daily-drive QubesOS.

soulofmischief
3 replies
19h17m

Same, I love Qubes' philosophy and UX, but GPU passthrough support was a dealbreaker in the end and I switched to a KVM system.

TimeBearingDown
2 replies
3h30m

I’m pretty sure GPU passthrough does work in Qubes HVMs, although I haven’t tried it myself. Here are three quick and recent tutorials I found including one with a newer VirtualGL approach that offloads work instead of passing the entire card.

https://neowutran.ovh/qubes/articles/gaming_windows_hvm.html

https://forum.qubes-os.org/t/nvidia-gpu-passthrough-into-lin...

https://forum.qubes-os.org/t/seamless-gpu-passthrough-on-qub...

Yes, the passthrough is probably a huge avenue for attacks. Possibly VirtualGL too, I know less about that.

TimeBearingDown
1 replies
3h7m
Qwertious
0 replies
1h27m

That's the same as the first link in your previous comment. Did you manage to edit it after all?

IshKebab
3 replies
19h28m

Exactly this. Windows apps aren't distributed as Docker images. Guess why...

palata
2 replies
19h0m

Well nothing prevents you from dynamically linking only glibc and statically linking everything else, without Docker at all.

The fact that people distribute their app with a full rootfs in a Docker container says more about the fact that they don't know how to link stuff properly, IMHO.

IshKebab
1 replies
10h8m

It's not about static vs dynamic linking at all. It's about bundling dependencies or not.

And yes, you totally can do it. Most Linux software just doesn't bother because, while you can do it, in a lot of languages (C, Python, etc.) it's quite a pain to do. Especially if you have lots of dependencies.

It's much easier to bundle dependencies in languages that statically link by default (Go, Rust) because of course statically linking implicitly bundles them.

palata
0 replies
7h58m

Dynamic Library hell is why Docker exists.

It's much easier to bundle dependencies in languages that statically link by default

It's not about static vs dynamic linking at all.

Sorry I'm confused :/. What did I say that you disagree with?

Shorel
13 replies
1d2h

In a world where Docker and Kubernetes exist, where whole copies of operating systems are added to each running service...

This seems a weird thing to complain about =)

lnxg33k1
6 replies
1d2h

Yeah, but there I can still update vulnerable libraries independently; a statically linked system just means that if there is a bug in libpng then I have to recompile everything?

Shorel
1 replies
1d1h

I was under the impression only Gentoo users recompile everything.

In a statically linked system, your dependency manager will update more packages.

And if your program is written in C/C++/Go/Rust, then yes, it will be recompiled.

lnxg33k1
0 replies
1d1h

I use Gentoo, so I am not against rebuilding everything, but AFAIK unless you have the static-libs USE flag for something, it's dynamically linked, so relinking when rebuilding the dependency is enough; with static-libs the dependent package is also rebuilt.

nordsieck
0 replies
1d

if there is a bug in libpng then I have to recompile everything?

You say that as if it's such a burden. But it's really not.

I'm somewhat sympathetic to the space argument, but a package manager/docker registry means that updating software is very easy. And it happens all the time for other reasons today anyhow.

greyw
0 replies
1d1h

In most cases relinking is enough.

colonwqbang
0 replies
1d

Not recompile I guess, but you need to relink everything.

Oasis seems to have a good way of doing that, with the whole system being built in a single tree by an efficient build tool (my recollection from last time it was posted).

A dynamic executable needs to relink every time it's run, which also takes time.

bzzzt
0 replies
1d2h

Yes, although it very much depends on how big 'everything' is if that's a problem.

kentonv
3 replies
1d2h

I mean, if you ran every single executable on your desktop in a separate container I think you'd see problems. There are a pretty large number of programs running on most desktops, plus all the programs that get called by shell scripts, etc.

Running a handful of containers representing major applications is more reasonable and the memory wastage may be worth it to avoid dependency conflicts.

drakenot
1 replies
1d1h

You've just described Qubes OS!

palata
0 replies
21h49m

Except that QubesOS uses VMs for their security benefits, which are greater than those of containers.

Containers make a lot of sense to me on servers ("deploy a controlled environment"), but often on Desktop I feel like they are used as a solution to "I don't know how to handle dependencies" or "My dependencies are so unstable that it is impossible to install them system-wide", both of which should be solved by making slightly better software.

Gabrys1
0 replies
1d1h

Each electron app is like that

palata
1 replies
1d

This seems a weird thing to complain about =)

On the contrary, I find it relevant: I think that the modern way is wasting way, way too much.

Shorel
0 replies
22h41m

On that respect, we agree.

arghwhat
7 replies
21h13m

Statically linked binaries are generally a lot smaller than a dynamically linked binary plus its dependencies, especially with link-time optimizations and inlining.

You wouldn't want to have 100 tools statically link the entirety of Chromium, but for normal C library sizes you don't get bloat. The preference for dynamic libraries in Linux distros is just so they can roll out patch updates in one place instead of rebuilding dependents.

marwis
6 replies
21h4m

But a dynamically linked library only needs to be loaded into RAM once, whereas with static linking you'd be loading the same code many times (unless you compile everything into a single binary like BusyBox). This also gets you better cache utilization.

Also I think inlining would typically increase the total size of output rather than decrease it.

arghwhat
3 replies
16h31m

Static linking gives you better instruction cache utilization as you are executing local code linearly rather than going through indirection with more boilerplate. This indirection costs a few cycles too.

Inlining external code reduces the size not only by saving the call, PLT, and stack dance, but also through specialization (removal of unused conditionals, pruning of no-longer-referenced symbols) as the code is locally optimized. This further reduction in size further improves cache behavior and performance.

Duplication can be an issue (not necessarily for performance, but for total binary size), but compilers have heuristics for that. Even just having the symbol local saves some space and call overhead though (no PLT).

The case for the shared library having better caching implies multiple processes that are distinct executables (otherwise they share program memory regardless of linkage) hammering it at once and sharing the cached code, but such a scenario is hurt by the call overhead and lower optimization opportunities, negating the result.

marwis
1 replies
15h33m

The case for the shared library having better caching implies multiple processes that are distinct executables

But this is the most common case for desktops/multipurpose systems.

On my desktop there are tens or hundreds distinct processes sharing most of their code.

arghwhat
0 replies
6h35m

No it is not.

Depending on your CPU, you might have, say, 32KB of 8-way associative instruction cache per core. Just being shared does not make it fit in the cache.

A shared library would only be there across processes of different executable images if its users primarily, continuously execute the same paths in shared libs rather than anything unique in their own executable image - e.g., they'd more or less need to be stuck in the same processing-intensive shared routine in the lib. There would also have to be no other processing done in between by other processes that would have trashed the cache.

On the other hand, the severe cache penalty of longer code paths for each executable and the larger PLT call overhead will universally lead to a loss in performance for all library usage.

The scenarios you may hit where different processes are actually executing the same shared code paths to the point of benefiting from shared cache utilization would be cases where they share executable image as well. E.g., browser processes, threads, compilers. Electron too if using system-packaged electron binaries.

inkyoto
0 replies
15h17m

Static linking gives you better instruction cache utilization as you are executing local code linearly rather than going through indirection with more boilerplate.

No, it does not, it worsens it.

For example, «strlen», if it comes from a dynamic library, will be loaded into the physical memory once and only once, and it will be mapped into each process's address space as many times as there are processes. Since «strlen» is a very frequently used function, there is a very high chance that the page will remain resident in memory for a very long time, and since the physical page is resident in memory, there is also a very good chance that the page will remain resident at least in the L2 cache, but – depending on circumstances – in the L1 cache, too. A TLB flush might not even be necessary in specific circumstances, which is a big performance win. It is a 1:N scenario.

With the static linking, on the other hand, if there are 10k processes in the system, there will be 10k distinct pages containing «strlen» loaded into memory at 10k random addresses. It is a M:N scenario. Since the physical memory pages are now distinct, the context switching will nearly always require the TLB to be flushed out which is costly or very costly, and more frequent L1/L2 cache invalidations due to «strlen» now residing at 10k distinct physical memory addresses.

P.S. I am aware that C compilers now inline «strlen» so there is no actual function call, but let's pretend that it is not inlined for the sake of the conversation.

jhallenworld
1 replies
20h55m

So duplicated code only needs to be loaded once per distinct statically linked program. If there are many processes running the same program, they will all share the same physical pages for the code. So, for example, having hundreds of "bash" instances running does not use that much memory.

You can see this by running "pmap <pid> -XX" (the output is very wide- probably load it into an editor). Look at the shared vs. private pages.

Also: There is another way to automatically share pages between different programs: de-duplication. This would require common libraries to be statically linked on page boundaries. The OS would quickly de-duplicate during loading by hashing the pages. VMs use this technique to increase effective memory when there are many guest OS running.

marwis
0 replies
20h46m

Yes but most processes are unique. The only bash process I have running is my interactive shell.

bzzzt
3 replies
1d2h

I know lots of compilers/linkers don't optimize for it but it should be possible to 'tree shake' libraries so only the parts that are used by an application are included. That would shake off a lot of the 'bloat'.

volemo
2 replies
1d1h

Wait, it's not being done?

dieortin
0 replies
1d

It is, major compilers do that by default

Fronzie
0 replies
21h16m

As far as I know, even with LTO, it requires -ffunction-sections -fdata-sections in order to strip out unused functions.
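
In practice the flags look something like this (libbar is a placeholder):

  $ gcc -Os -ffunction-sections -fdata-sections -c foo.c
  $ gcc foo.o libbar.a -Wl,--gc-sections -o foo   # linker drops unreferenced sections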

javierhonduco
2 replies
20h14m

Once it's loaded in memory, if Kernel Samepage Merging is enabled it might not be as bad, but I would love to hear if somebody has any thoughts: https://docs.kernel.org/admin-guide/mm/ksm.html

LegionMammal978
1 replies
20h1m

From the link:

KSM only merges anonymous (private) pages, never pagecache (file) pages.

So it wouldn't be able to help with static libraries loaded from different executables. (At any rate, they'd have to be at the same alignment within the page, which is unlikely without some special linker configuration.)

javierhonduco
0 replies
19h39m

Had completely missed that line — great point!

zshrc
1 replies
1d2h

musl is significantly smaller and "less bloat" than glibc, so even with a statically linked program, it still remains small in both system memory and storage.

skywal_l
0 replies
1d2h

And using LTO[0] can also help.

[0]https://gcc.gnu.org/wiki/LinkTimeOptimization
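
Roughly, that's just passing -flto at both the compile and the link step (a sketch; the file names are made up):

    /* main.c -- with LTO the compiler can inline add() from util.c and
     * drop anything in util.c that ends up unreferenced, even though it
     * lives in a different translation unit.
     *
     * util.c contains:  int add(int a, int b) { return a + b; }
     *
     * Assumed build:
     *   cc -O2 -flto -c main.c util.c
     *   cc -O2 -flto main.o util.o -o app
     */
    int add(int a, int b);

    int main(void)
    {
        return add(2, 3);
    }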

liampulles
1 replies
1d2h

It would be bloated, but how big of a problem is that these days? A TB of storage is pretty cheap.

cmovq
0 replies
1d2h

A TB of memory is not

thanatos519
0 replies
1d2h

KSM could help with that: https://docs.kernel.org/admin-guide/mm/ksm.html

... oh wait, the apps have to hint that it's possible. Nebbermind.

jacquesm
0 replies
1d

Not necessarily. Bloat is one reason dynamic linking was originally rolled out, but the bigger benefit (to manufacturers) was being able to update libraries without updating the applications. This has been the source of much trouble (dependency hell), and statically linked binaries suffer none of these issues. It's not like every application uses all of every library, and an efficient linker is able to see which parts of the library it needs to link and which parts it can safely leave out.

Gabrys1
0 replies
1d1h

I guess each copy of libc can be trimmed down so that only the functions the specific binary calls are left (and the compiler should be allowed to optimize past the library boundary), so maybe this balances the issues a bit.

Not that I really know anything about it, ask jart

1vuio0pswjnm7
0 replies
16h52m

I have seen this sort of statement on HN before. I am guessing that the persons who propagate this idea have never actually experimented with replacing dynamically-linked programs having numerous dependencies with statically-compiled ones. It's a theory that makes sense in the abstract, but they have not actually tested it.

Though it is not a goal of mine to save storage space by using static binaries, and I actually expect to lose space as a tradeoff, I have actually saved storage space in some cases by using static binaries. This comes from being able to remove libraries from /usr/lib. TBH, I am not exactly sure why this is the case. Perhaps in part because one might be storing large libraries containing significant numbers of functions that one's programs never use.

For me using static binaries works well. Even "common" libraries can be removed in some cases by using a multi-call/crunched binary like busybox. This might not work for everyone. I think much depends on what selection of programs the computer owner prefers. (Namely, the dependencies required by those programs.)

sluongng
34 replies
1d3h

What is the comparison between using musl and traditional glibc?

Are there performance differences between the two?

I have been seeing musl used more and more in both Rust and Zig ecosystems lately.

znpy
19 replies
1d3h

What is the comparison between using musl and traditional glibc?

you get weird bugs and failures that don't happen with glibc (like the incomplete DNS resolving routines that would fail under some conditions), but you can brag about saving 30-40 MB of disk space.

this project seems to be compromising on quality overall, in the name of having smaller size.

Even BearSSL, by their own website is beta-quality: "Current version is 0.6. It is now considered beta-quality software" (from https://bearssl.org/).

electroly
10 replies
1d2h

incomplete dns resolving routines

They eventually did fix this, as of musl 1.2.4.

o11c
9 replies
1d1h

While not an issue for musl-centric distros if they keep updated, note that e.g. Debian stable doesn't have that version yet, so good luck testing.

electroly
8 replies
1d1h

At least we have light at the end of the tunnel now. This is a tremendous improvement from the previous status quo of the musl maintainers not even agreeing that it's a problem.

znpy
7 replies
1d

This alone (“musl maintainers not even agreeing it’s a problem”) should be a good reason to avoid musl imho

electroly
6 replies
1d

What's a better option for static linking? glibc is religiously against it; they have far worse dogmatic beliefs than this musl DNS thing. I'd be happy to choose a better alternative if one exists, but if one does not, I have to live with the options at hand. From where I'm standing, musl seems like the only game in town. uClibc doesn't seem like it's appropriate for general purpose Linux applications on desktop computers (maybe I'm wrong?).

o11c
4 replies
23h49m

Static linking doesn't actually solve any problem. Just use dynamic linking with (probably relative) rpath, and compile against a sufficiently old libc.

There's some common FUD about rpath being insecure, but that only applies if the binary is setuid (or otherwise privileged) and the rpath is writable by someone other than the binary's owner (all relative rpaths are writable since you can use symlinks; absolute rpaths are writable if they point to /tmp/ or a similar directory, which used to be common on buildbots).

This is really not hard; working around all static linking's quirks is harder.
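
Concretely, the relative-rpath setup looks something like this (a sketch; the library name and directory layout are made up, and the single quotes matter so the shell doesn't expand $ORIGIN itself):

    /* app.c -- dynamically linked against a libfoo shipped next to it.
     *
     * Assumed tree, unpacked anywhere on the target machine:
     *   myapp/bin/app
     *   myapp/lib/libfoo.so.1
     *
     * Assumed link command:
     *   cc app.c -o app -L./lib -lfoo -Wl,-rpath,'$ORIGIN/../lib'
     *
     * At load time the dynamic linker expands $ORIGIN to the directory
     * containing the binary, so the bundled libfoo is found without any
     * LD_LIBRARY_PATH games.
     */
    int foo_hello(void);  /* provided by libfoo */

    int main(void)
    {
        return foo_hello();
    }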

schemescape
3 replies
22h45m

What are the quirks of static linking you need to work around (in general, not for glibc)?

o11c
2 replies
22h19m

You have to know the internals of your dependencies so you can link them explicitly, recursively. (admittedly, pkg-config helps a ton, but not all libraries ship (good) .pc files)

Global constructors no longer reliably fire unless you are extremely careful with your build system (see the sketch at the end of this comment), nor do they run in a predictable order (e.g. you can call a library before it is actually initialized, unlike dynamic linking, where only preinit - which nobody uses - is weird), nor can you defer them until dlopen time if you want (which is, admittedly, overdone).

It's possible to end up linking parts of multiple versions of a library (remember, you have to recurse into your dependencies), as opposed to dynamic libraries, where at least you're guaranteed all-or-nothing (which is much easier to detect).

Linking is slower since it always has to be redone from scratch.

Not resilient against system changes. For example, old versions of `bash-static` (grab them from e.g. Debian snapshot and extract them manually; don't install them) are no longer runnable on modern systems since certain system files have changed formats, whereas the dynamically-linked `bash` packages still run just fine.

It also encourages bad stability habits, leading to the equivalent of NPM hell, which is far worse than DLL hell ever was.

You can't use LD_PRELOAD or other dynamic interception tools.

There are probably more reasons to avoid static linking, but I'm trying to ignore the handful from the popular lists.
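
To make the constructor point concrete (a sketch, not from any particular project; the names are made up): if nothing in the program references a symbol from an archive member, a static link skips that object entirely and its constructor never runs, while the shared-library build would always run it at load time. The usual workaround is forcing the whole archive in:

    /* plugin.c -- compiled into libplugin.a; registers itself at startup.
     * main.c is just:  int main(void) { return 0; }
     *
     * Naive static link (plugin.o is never pulled in, so the
     * constructor never fires):
     *   cc main.c libplugin.a -o app
     *
     * Forcing every archive member in:
     *   cc main.c -Wl,--whole-archive libplugin.a -Wl,--no-whole-archive -o app
     */
    #include <stdio.h>

    __attribute__((constructor))
    static void register_plugin(void)
    {
        puts("plugin registered");
    }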

schemescape
1 replies
19h26m

Thanks! Most of those seem like a fair trade-off for portability… for an app.

I’m not sure it’s a great idea for an OS as in the OP, but I do like that they claim accurate incremental rebuilds, to ensure everything gets updated. Certainly an interesting experiment!

Edit: just to clarify, I meant "app" as in "something that isn't part of the OS/distribution".

o11c
0 replies
18h2m

The bash-static example alone is proof that the "usefulness" for apps isn't actually there.

znpy
0 replies
17h4m

glibc is religiously against it

from my understanding glibc is not "religiously" against it; they're against it for technical reasons. In that sense, this is not dogma, it's about internal details of their implementation.

See: https://stackoverflow.com/questions/57476533/why-is-statical...

raesene9
3 replies
1d2h

A small point on that last bit: the BearSSL authors are pretty conservative when it comes to development milestones, so I'd guess that their 0.6 is pretty solid :)

znpy
2 replies
1d2h

I'd guess that their 0.6 would be pretty solid :)

Would you accept that kind of reasoning for software running on your pacemaker, or on your insulin pump?

I think we should respect the developers here: they're not claiming production quality level (they're claiming beta-quality level) so it's not correct to use that library in any kind of product and claim any kind of production-level quality.

foul
0 replies
1d2h

Would you accept that kind of reasoning for software running on your pacemaker, or on your insulin pump?

God help me, I wouldn't implant anything so fundamental in my body with hard dependencies on encrypted communication to a remote agent elsewhere, no matter the advantage.

ComputerGuru
0 replies
1d2h

I would run away from using glibc on an insulin pump or pacemaker, so I’m not sure what point you’re trying to make.

ghotli
2 replies
1d2h

https://musl.libc.org/releases.html

I maintain a large codebase, widely deployed, cross-compiled to many CPU architectures, that's built atop musl. You're right that historically, in the context of people blindly using Alpine as their container base, that sort of thing might have been the case. The newest version of musl solves the thing you're describing, and in general most of the complaints about malloc perf or otherwise have been addressed. Avoiding musl seems to me like an outdated trope, but there was a time when that take was valid indeed.

NewJazz
1 replies
1d

malloc performance is still sub-par IMO. It is not nearly as terrible as it was, but scudo, memalloc, and glibc's malloc are better.

raverbashing
0 replies
7h10m

Yeah, and with glibc you can even LD_LIBRARY_PATH in an alternative glibc; not so much with musl (unless it changed recently)

fullspectrumdev
0 replies
1d1h

30-40 MB of disk space is absolutely huge in some environments even today though

ComputerGuru
4 replies
1d2h

Speaking from heavy experimentation and experience, [0] glibc has some more optimized routines but musl has significantly less bloat. If you are haphazardly calling libc functions left and right for everything and have a generally unoptimized code base, your code may fare better with glibc. But musl’s smaller codebase is a win for faster startup and micro-optimizations otherwise - and that’s without LTO, where it stands to gain more.

[0]: https://neosmart.net/blog/a-high-performance-cross-platform-...

Edit:

Sorry, the correct link is this one: https://neosmart.net/blog/using-simd-acceleration-in-rust-to...

jart
2 replies
1d2h

If you want an optimized Musl, try Cosmopolitan in `make toolchain MODE=tinylinux`, since it's based on Musl, and its string routines go 2x faster.

ComputerGuru
1 replies
1d2h

I don’t think that was around back then but I can add it to the backlog of things to try for next round. Does that play nice with rust? Presumably I’d have to at least build the standard library from scratch (which I’d want to do against musl as a separate benchmark anyway since it’s now a single environment variable away).

(Not that the codebase makes much string function usage.)

jart
0 replies
21h45m

It should if everything is static and you're only targeting Linux.

scns
0 replies
5h51m

in addition to its direct usage of AVX2 functions and types, it also made a call to the BZHI and LZCNT intrinsics/asm instructions – which rustc/llvm do not recognize as being supported via the avx2 feature! So although (to the best of this developer’s knowledge) there does not exist a processor on the face of this planet that supports AVX2 but doesn’t support BZHI and LZCNT

Looks like a "bug", or better put, a needed enhancement to LLVM.
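
For what it's worth, the C compilers draw the same line: -mavx2 does not turn on BMI2 or LZCNT, so the intrinsics have to be enabled with their own flags (a sketch, assuming GCC or Clang; the file name is made up):

    /* bits.c -- BZHI (BMI2) and LZCNT are separate CPU features from AVX2,
     * so they get their own flags even though real AVX2 parts have them.
     *
     * Assumed build:
     *   cc -O2 -mavx2 -mbmi2 -mlzcnt bits.c -o bits
     */
    #include <immintrin.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned x = 0x12345678u;
        printf("bzhi : %#x\n", _bzhi_u32(x, 16)); /* keep the low 16 bits */
        printf("lzcnt: %u\n",  _lzcnt_u32(x));    /* count leading zeros  */
        return 0;
    }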

o11c
2 replies
1d2h

The real comparison is: musl does not provide any preprocessor macro to tell you what libc you're using.

And it has so many weird quirks that you need to work around.

***

Static linking makes linking more painful, especially regarding global constructors (which are often needed for correctness or performance). This is not a musl-specific issue, but a lot of people are interested in both.

Just do your builds on the oldest supported system, and dynamic linking works just fine. You can relative-rpath your non-libc dependencies if they would be a pain to install, though think twice about libstdc++.

***

The major advantage of MUSL is that if you're writing a new OS, it's much easier to port.

yjftsjthsd-h
1 replies
10h46m

musl does not provide any preprocessor macro to tell you what libc you're using.

And it has so many weird quirks that you need to work around.

I was under the impression that musl stuck closely to the standard, and glibc frequently did its own thing, so 1. it's not musl that's quirky, 2. if you need to detect something, just detect glibc.

o11c
0 replies
1h23m

The standard is uselessly incomplete and vague.

There are places where MUSL implements a broad set of GLIBC extensions in order to actually be useful. However, it does not indicate that in any way, and sometimes violates the conditions that GLIBC documents. This requires workarounds.

There are places where MUSL implements a standard interface in a particular way. If you're lucky, this "just" means giving up on performance if you don't know you're using MUSL.

Sometimes MUSL implements its own ABI-incompatible extensions. The time64 transition, for example, is a huge mess regardless, but musl provides no blessed way to figure out what's going on. The only reason it's not an even bigger disaster is that almost nobody uses musl.

digikata
2 replies
1d3h

One of the reasons I've switched some builds over from glibc to musl is that I found glibc linking is brittle if you're going to run a binary over multiple distros in various container environments. Particularly if you want one binary to work on Linux across RH- and Debian/Ubuntu-derived distros, or even different ages of a distro.

skywal_l
1 replies
1d1h
raverbashing
0 replies
7h14m

The more I think about Linux compared to the competition on the desktop, the more I realize this is right.

"If it's a bug people rely on it's not a bug, it's a feature" Let me guess, he was thinking of the memcpy issue that broke flash. Or maybe something else. And I agree, nobody cares

The spec says that because it was the 70s and nobody had thought better of it, or of how things would work 30 years on; going along with it today does not make sense.

And I feel the pain of this hardheadedness whenever a library deprecates an API when it didn't need to. "Oh, but it's cleaner now." Again, nobody cares.

skywal_l
1 replies
1d2h

glibc is LGPL; statically linking it into your application implies some obligations on your part. musl, being MIT, is less restrictive.

actionfromafar
0 replies
1d

Not very tough obligations, but it can be a practical hassle. This answer describes it quite well I think:

https://opensource.stackexchange.com/questions/13588/how-sho...

joveian
0 replies
22h12m
nightowl_games
20 replies
1d3h

Can someone explain a couple use cases for something like this?

ghotli
8 replies
1d3h

I routinely get embedded linux devices at $dayjob that need my time and attention and they basically never have the tooling I need to get my job done. I'm a pro at looking at how Alpine builds a tool and then just making my own statically linked / minimal size tool to drop in place on the device. The allure of something like this is that I can just potentially grab a drop-in binary and get on with my day. I simply don't attempt to link to libraries already on the device since they're all built in wildly different ways, old tools, old compilers.

Hopefully that's helpful context. Overall, since I did Linux From Scratch half a lifetime ago, I've always wondered why something like Oasis hasn't gotten more traction. It's got some ambitious ideas in the README, so maybe others have other nice use-cases atop all that. I just see "small, statically linked" and think 'oh boy, if I never have to build my own tools again for some weird board'. If so, I'm here for it.

8organicbits
6 replies
1d2h

grab a drop-in binary

This is a cool approach on Docker as well.

    FROM some:thing AS bins
    FROM debian:latest
    COPY --from=bins /bin/foo /bin/

ghotli
5 replies
1d2h

Agreed, if the binary is statically linked. If you run `file` on the output from that and it shows 'dynamically linked' then you're playing games with porting over libraries, changing the library loading path, or just going full chroot like linux from scratch does with the bootstrapping part of the install. I find static binaries simplest to work with in that context but agreed I use that pattern too with docker and generally build ad-hoc tools within containers like that. If only these devices could run docker but I'm left to my own tooling to figure out per device.

lloeki
3 replies
1d2h

In a way, that's what Nix sets out to do, isolating even dynamically linked libraries: if two derivations depend on the same shared lib derivation then it's reused, if not then they don't conflict. Each leaf derivation can be handled completely independently of the others, and independently of the original system†.

And then when Nix† is not an option at runtime, dockerTools†† can build a Docker image to do the minimisation+isolation.

That said, Nix might also be completely overkill in some scenarios where static linking would be just fine and very practical. The practical simplicity of a single binary should not be overlooked.

† nixpkgs is sufficient, a full nixos is not needed

†† https://nixos.org/manual/nixpkgs/stable/#sec-pkgs-dockerTool...

vacuity
1 replies
1d

So Nix keeps track of different versions of shared libraries?

sporeray
0 replies
23h16m

Yeah, each package/lib is stored in a unified directory by its hash: https://zero-to-nix.com/concepts/nix-store. Different variation, different hash.

YoshiRulz
0 replies
6h0m

You've missed nix-bundle and pkgsStatic, which are much closer to the above idea re: copying to another machine.

8organicbits
0 replies
1d2h

Agreed, dynamically linked binaries don't drop in well.

nightowl_games
0 replies
13h31m

I still don't understand.

Is oasis the "drop in binary" you would use? Or do you use oasis to build the tool that you would use?

"The allure of something like this is I could potentially grab a drop in binary"

From where?

enriquto
5 replies
1d1h

Can someone explain a couple use cases for something like this?

At this point, it would be more useful if someone explained a couple of use cases for dynamic linking.

pjmlp
4 replies
1d

Plugins, unless you want to have one process per plugin.

Which in the days of running Kubernetes clusters on laptops maybe isn't a big deal.

enriquto
3 replies
23h21m

You can still call dlopen from your static binary, if you really want to.

mappu
1 replies
21h38m

I tried to do this recently at $DAYJOB, but when you statically link a binary with musl, the dlopen() you get is a no-op:

https://github.com/bpowers/musl/blob/master/src/ldso/dlopen....

I tried to hack in a copy of musl's dynamic loader (and also the one from old uclibc). But it took a few hours and my only result was segfaults.

Do you have any pointers on making this work?

enriquto
0 replies
19h5m

Have you tried it with glibc? It's harder to build static binaries with it, but it's still possible, and it may work.

To debug your problem, do you have a minimal example at your fingertips to try? Just a "hello world" dynlib that is called from a static program that doesn't do anything else.
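
Something like this is the sort of minimal case I mean (a sketch; the file and symbol names are made up):

    /* main.c -- static binary that loads one shared object at runtime.
     *
     * hello.c is just:  int hello(void) { return 42; }
     *
     * Assumed build (glibc will warn that static+dlopen still needs the
     * matching shared libc at runtime, which is part of the fun here):
     *   cc -shared -fPIC hello.c -o libhello.so
     *   cc -static main.c -o main -ldl
     */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *h = dlopen("./libhello.so", RTLD_NOW);
        if (!h) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        int (*hello)(void) = (int (*)(void))dlsym(h, "hello");
        if (!hello) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(h);
            return 1;
        }

        printf("hello() = %d\n", hello());
        dlclose(h);
        return 0;
    }

If even that segfaults under the hacked-in static loader, the problem is in the loader rather than in your library.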

pjmlp
0 replies
22h14m

Sure, let's go back to 1980s UNIX, it was such a great experience.

pjmlp
3 replies
1d2h

You miss the UNIX developer experience of the era up to the mid-1980s, before shared objects came to be.

mech422
2 replies
22h16m

Heh...rebuilding gcc on slackware to enable shared libs was an adventure - but that wasn't till the late 90s(??). I think I spent like a week bootstrapping the new gcc, rebuilding glibc and rebuilding all the stuff I used.

pjmlp
1 replies
22h2m

UNIX System V Release 4.0 was the one that kind of unified the existing parallel solutions from the UNIX variants, alongside ELF, in the late 1980s.

mech422
0 replies
13h46m

yeah - I never had access to a 'real' UNIX. Closest I came was Solaris and maybe Irix. Other than that, it's just been Linux. Keep meaning to give *BSD a try...

P.S. - oh! and I had friends that loved HP/UX - another one I never got to try

ekianjo
0 replies
1d3h

immutable images

dijit
20 replies
1d3h

I can't speak much about the system, it just works, but the community was really nice when I interacted with them over IRC.

I had the plan to build oasis with bazel for some immutable OS images that could run as kubernetes nodes. I succeeded with a little pointing.

public_void
10 replies
21h58m

Why did you need to use bazel?

dijit
9 replies
21h55m

I didn't need to use bazel, I like bazel and want to learn more about it.

I also have a small, but burning, passion for reproducible builds, distributed compilation and distributed caching.

Being able to build an entire OS and essentially anything I want on top in a reproducible and relatively organic way (with incremental compilation) is pretty dope.

i-use-nixos-btw
7 replies
19h42m

You sound like the perfect Nix cult memb… erm, user. It’s everything you describe and more (plus the language is incredibly powerful compared with starlark).

But you speak from sufficient experience that I presume Nix is a “been there, done that” thing for you. What gives?

chaxor
3 replies
19h33m

Nix has decentralized caching and memorizing?

i-use-nixos-btw
2 replies
18h53m

Decentralised caching, absolutely - unless I’m misunderstanding what you mean there. You can build across many machines, merge stores, host caches online with cachix (or your own approach), etc. I make fairly heavy use of that, otherwise my CI builds would be brutal.

Memorizing isn’t a term I’m familiar with in this context.

chaxor
1 replies
18h5m

Sorry - memoizing.

I am interested in making a system that can memoize large databases from ETL systems and then serve them over iroh or ipfs/torrent, such that a process that may take a supercomputer a week to run can have the same code run on a laptop, notice it's already been done by a university supercomputer, and grab that result automatically from the decentralized network of all people using the software (who downloaded the ETL database).

That way you save compute and time.

i-use-nixos-btw
0 replies
15h24m

Oh I see!

Yes, absolutely doable in Nix.

Derivations are just a set of instructions combined with a set of inputs, and a unique hash is made from that.

If you make a derivation whose result is the invocation of another, and you try and grab the outcome from that derivation, here’s what will happen:
- it will generate the hash
- it will look that hash up in your local /nix/store
- if not found it will look that hash up in any remote caches you have configured
- if not found it will create it using the inputs and instructions

This is transitive so any missing inputs will also be searched for and built if missing, etc.

So if the outcome from your process is something you want to keep and make accessible to other machines, you can do that.

If the machines differ in architecture, the “inputs” might differ between machines (e.g. clang on Mac silicon is not the same as clang on x86-64) and that would result in a different final hash, thus one computation per unique architecture.

This is ultimately the correct behaviour as guaranteeing identical output on different architectures is somewhat unrealistic.

IshKebab
2 replies
19h30m

Nix isn't as fine-grained as Bazel as I understand it? I don't think it's incremental within a package, which is presumably what dijit achieved.

i-use-nixos-btw
0 replies
18h59m

Weirdly enough I came across a blog post last week that talked about exactly this. https://j.phd/nix-needs-a-native-build-system/

Nix can be used as a build system in the same way that bazel can. It already has all of the tooling - a fundamental representation of a hermetic DAG, caching, access to any tool you need, and a vast selection of libraries.

The only catch is that no one has used it to write a build system for it in public yet. I’ve seen it done in a couple of companies, though, as using Nix to only partially manage builds can be awkward due to caching loss (if your unit of source is the entire source tree, a tiny change is an entirely new source).

gallexme
0 replies
4h34m

Nix can do it incrementally. You could split it into multiple derivations which get built into one package. For Rust there is the excellent https://crane.dev/index.html project.

Or you can also go to the extreme and do 1:1 source-to-derivation mapping. So for example, if your project has 100 source files, it could be built from 100 derivations; the language/CLI tools are flexible enough for that.

https://discourse.nixos.org/t/distributed-nix-build-split-la... https://discourse.nixos.org/t/per-file-derivations-with-c/19...

I don't know, though, if there are any well-working smart Nix tools which can make this efficient. In theory it's very possible, I'm just unsure about practicality/overheads.

yx827ha
0 replies
15h40m

You should check out the ChromeOS Bazelification project[1]. It has those exact same goals. Not all packages are reproducible though because they embed timestamps.

[1]: https://chromium.googlesource.com/chromiumos/bazel/+/HEAD/do...

MuffinFlavored
2 replies
21h24m

I can't speak much about the system, it just works,

What systems don't just work by this criteria?

As long as you are within "normal expected operating conditions", does statically vs. dynamically linked really make a "just works vs. doesn't work" quality difference?

Koshkin
1 replies
21h0m

Read after the comma:

it just works, but...
Qwertious
0 replies
1h35m

...but the community was really nice.

That still doesn't tell us how low the parent commenter's standards for "just works" are. It's irrelevant.

gravypod
1 replies
1d3h

Have you shared your BUILD files upstream?

dijit
0 replies
1d3h

No, they were quite happy with Samurai

eek2121
1 replies
14h49m

"it just works" so you are doing the tech support when it doesn't, right?

EDIT: that was meant to be a joke, I forgot HN doesn't support emojis.

xenophonf
0 replies
1h53m

As an aside, emoticons work just fine. ;)

malux85
0 replies
1d2h

That's a cool idea! Will you open source it or make it available somehow? I would like to play with it for running Atomic T

colatkinson
0 replies
17h6m

If you don't mind I'm super curious as to what approach you ended up taking. Did you use rules_foreign_cc to build the ninja files they generate? Or generating BUILD files directly? Or something completely different? Sounds like a really cool project!

notfed
6 replies
1d

Fast builds that are 100% reproducible.

It's unclear to me what "100%" refers to here, but surely it does not include the Linux kernel or drivers? (I've recently read conversations about how difficult this would be.)

azornathogron
2 replies
1d

I'm no expert, but as an interested amateur I thought the Linux kernel could already be built reproducibly?

There is some documentation at least... and I know several Linux distributions have been working on reproducible builds for a long time now - I'd be surprised if there hasn't been good progress on this.

https://www.kernel.org/doc/html/latest/kbuild/reproducible-b...

dayjaby
1 replies
21h12m

In the container context I've heard a definition of "100% reproducible" that means even file timestamps are 100% the same. Like your entire build is bit-by-bit precisely the same if you didn't modify any source.

Not sure if that's what they mean here.

YoshiRulz
0 replies
5h55m
hn_go_brrrrr
1 replies
1d

Got a link? Sounds interesting.

notfed
0 replies
16h10m

Here's a link to a recent HN discussion:

https://news.ycombinator.com/item?id=38852616

TLDR: Linux kernel doesn't have a stable binary kernel interface. And they don't want one.

Given this, "reproducible build" needs, well, a refined definition if it includes the Linux kernel.

[1] https://www.kernel.org/doc/Documentation/process/stable-api-...

YoshiRulz
0 replies
5h34m

I expect it refers to the proportion, weighted by filesize, of programs that are byte-for-byte reproducible across machines. (The assumption being that to take the same measurement in a dynamically-linked context would result in a number less than 100% due to machines having different copies of some libs.) In other contexts, it might simply be a proportion of whole packages, for example Arch Linux' core package set is 96.6% reproducible[1] (=256/(256+9)).

The Linux kernel's lack of a stable ABI (specifically [2]; many userspace APIs are stabilised) doesn't mean individual revisions can't be built reproducibly.

[1]: https://reproducible.archlinux.org

[2]: https://en.wikipedia.org/wiki/Linux_kernel#In-kernel_ABI

Rochus
6 replies
1d3h

Interesting, but what is the use case?

What is the advantage of using the cproc C compiler instead of e.g. TCC?

I wasn't aware of Netsurf (https://www.netsurf-browser.org/); this is really amazing. But it seems to use Duktape as the JS engine, so performance might be an issue.

mike_hock
1 replies
18h49m

https://www.netsurf-browser.org/documentation/

Every single link on that page is dead.

https://www.netsurf-browser.org/about/screenshots/

Judging by the screenshots, it can render BBC, its own website, and Wikipedia. Well, it might be able to render others, we just can't tell from the shots. But we can tell those three websites work with all sorts of different window decorations.

Rochus
0 replies
18h1m

Every single link on that page is dead

Unfortunately, as it seems. On the start page they say "Last updated 2 January 2007". But version 3.11 was released on 28 Dec 2023.

helloimhonk
1 replies
1d1h

cproc supports C11, tcc only goes up to C99. There is also something to be said for cproc using QBE, which is slowly growing backends like RISC-V etc., which tcc doesn't support afaik.

Rochus
0 replies
1d

Ok, thanks, that makes sense. QBE looks interesting, but I'm missing 32 bit support. So currently I'm trying to reuse the TCC backend, which is far from trivial.

willy_k
0 replies
20h16m

Tangential, but the trailing “/“ in the URL you gave seems to include the “);” in the hyperlink, giving a “Not Found” error.

Working link: https://www.netsurf-browser.org

cpach
0 replies
1d2h

AFAICT it could be useful for embedded devices.

sylware
3 replies
21h30m

For a real statically-linked Linux system, the main issue is GPU support: you must relink all apps _really using_ a GPU in order to include the required GPU drivers.

With sound (ALSA) it is fine, since there is IPC/shared-memory based mixing whatever the playback/capture devices [dmix/dsnoop]. Static linking is reasonable. (The pulseaudio[012] IPC interfaces are bloaty kludges, hardly stable over time, 0..1..2.., and not to be trusted compared to the hardcore stability of the ALSA one, which is able to do a beyond-good-enough job *and* give _real_ in-process low-latency hardware access at the same time.)

X11 and Wayland are IPC based, so no issue there either.

But for the GPU, we would need a Vulkan-inspired, Wayland-style set of IPC/shared-memory interfaces (with a 3D-enabled Wayland compositor). For compute, the interfaces would be decoupled from the Wayland compositor (shared dma-buffers).

The good part of this would be to free our system interfaces from the ultra-complex ELF (one could choose an excruciatingly simple executable file format, aka a modern executable file format, but it would need compiler/linker support to help with legacy support).

There is a middle ground though: everything statically linked, except for the apps requiring the GPU driver, with the driver still provided as a shared library (ELF is grotesquely overkill for that).

stefan_
1 replies
21h10m

To be fair ELF is complex mostly because of relocations, which are not purely to support shared libraries but also the nowadays ubiquitous PIE. But GPU drivers is a good point; I don't believe you can even statically link them today, you would only be statically linking a shim that tries to find the real driver at runtime.

sylware
0 replies
20h39m

I am exploring an executable file format of my own (excruciatingly simple, basically userland syscalls) which is PIE-only, and so far the main real "issue" (not really) is actually the lack of support from compilers for static relative global data init (handled by ELF... which is not there anymore).

About the shared libs: well, there are the utility shared libs and the system-interface shared libs. With a mostly statically linked ELF/Linux distro, all the utility libs would be statically linked, and the system-interface libs would be statically linked too if they have an IPC/shared-mem interface. In the end, only the GPU driver is an issue, namely it would stay a shared lib.

AshamedCaptain
0 replies
20h39m

I wonder what kind of malaise would push one to dismiss ELF as "ultra complex" and at the same time propose pervasive IPC through the entire system, including Vulkan calls over IPC.

schemescape
3 replies
1d1h

Does anyone know how big the base installation is? I couldn't find an answer anywhere, and the link to the QEMU image appears to be broken, currently.

I'm curious how it compares to, say, Alpine with a similar set of packages.

jackothy
2 replies
21h6m

I have an old (2020) .qcow2 lying around that's about 360MB

xiconfjs
1 replies
20h32m

could you please upload it?

elfstead
0 replies
19h43m
Koshkin
3 replies
1d3h

There’s also the “suckless” sta.li

ratrocket
2 replies
1d1h

A comment up-thread (currently) says/implies Oasis is a successor to sta.li by the same person.

https://news.ycombinator.com/item?id=39143029

I also thought of sta.li when I saw this was about a statically linked Linux system...

lubutu
1 replies
21h5m

It's not by the same person — sta.li was by Anselm R Garbe. It's more like a spiritual successor.

ratrocket
0 replies
19h19m

Ah, thank you for the clarification. I read the comment I linked to too quickly and/or without thinking enough! Cheers!

__s
2 replies
1d3h

michaelforney was also who did the wayland port of st: https://github.com/michaelforney/st

oasis's predecessor would be https://dl.suckless.org/htmlout/sta.li

sigsev_251
1 replies
1d3h

Michaelforney has also built cproc [1], a QBE-based C compiler. Really impressive!

[1]: https://github.com/michaelforney/cproc

Koshkin
0 replies
21h47m

Not as "impressive" as TCC, I'd say. Why? TCC has its own backend, and it has the preprocessor built in. (But QBE is indeed impressive.)

speedgoose
1 replies
23h51m

BearSSL development seems to have stopped and it’s lacking TLS 1.3. Are there promising alternatives?

asmvolatile
0 replies
23h28m

wolfSSL. Open source, widely used, flexible licensing model, TLS + DTLS 1.3 support, support for all modern ciphers and protocol extensions, extremely tuneable for performance/size, FIPS module, excellent customer support, the list goes on....

jollyllama
1 replies
21h47m

Anyway, here's wonder -Wall

notnmeyer
0 replies
21h21m

this is funnier than it has any right to be

transfire
0 replies
12h45m

Somehow this reminds me of Gentoo.

ratrocket
0 replies
19h16m

There's a (dead) comment lamenting that you can't access Github with javascript turned off. The Oasis repo seems to be mirrored on sourcehut, though, so if that's more acceptable:

https://git.sr.ht/~mcf/oasis

peter_d_sherman
0 replies
1d2h

I've added links to the below quote:

"oasis uses smaller and simpler implementations of libraries and tools whenever possible:

musl instead of glibc (https://www.musl-libc.org/)

sbase instead of coreutils (https://git.suckless.org/sbase/file/README.html)

ubase instead of util-linux (https://git.suckless.org/ubase/file/README.html)

pigz instead of gzip (https://zlib.net/pigz/)

mandoc instead of man-db (https://mandoc.bsd.lv/)

bearssl instead of openssl (https://bearssl.org/)

oksh instead of bash (https://github.com/ibara/oksh)

sdhcp instead of dhclient or dhcpcd (https://core.suckless.org/sdhcp/)

vis instead of vim or emacs (https://github.com/martanne/vis)

byacc instead of bison (https://invisible-island.net/byacc/)

perp and sinit instead of sysvinit or systemd (http://b0llix.net/perp/ https://github.com/wereHamster/perp https://troubleshooters.com/linux/diy/suckless_init_on_plop....)

netsurf instead of chromium or firefox (https://www.netsurf-browser.org/)

samurai instead of ninja (https://github.com/michaelforney/samurai)

velox instead of Xorg (https://github.com/michaelforney/velox)

netbsd-curses instead of ncurses (https://github.com/sabotage-linux/netbsd-curses)"

(Oh, and not to quote Dwayne "The Rock" Johnson's character "Maui" from Disney's Moana or anything -- but "You're welcome!" <g> :-) <g>)

malux85
0 replies
1d2h

Anyone have a link to the QEMU tarball in the README? It is hosted on a private server and it looks like it's been HN hugged

m463
0 replies
20h14m

How big is it?

I could imagine there were unexpected efficiencies. Although dynamic libraries should be able to share an address space, I think with static libraries the linker might strip out unused routines.

also, it might be faster

lproven
0 replies
18h46m

Previous discussion (Aug 2022):

https://news.ycombinator.com/item?id=32458744

lordwiz
0 replies
1d2h

Interesting. I like how it's focused on making things lean by having less bloated versions of the tools.

hkt
0 replies
1d3h

This is very very cool. I love the bloat free nature of the thing, especially velox (the WM). Samurai (build system) also looks pretty interesting. I've not managed to work out quite how samurai works, or truthfully, why it differs from ninja, but this project is exactly the kind of brain food I intend on learning a lot from.

Many, many props to Michael Forney.

eterps
0 replies
1d3h

Interesting choices, finally something that isn't just another Linux distribution.

alexnewman
0 replies
20h8m

Now I just need them to switch to GitHub actions for the ci/cd