Maestro: A Linux-compatible kernel in Rust

insanitybit
82 replies
7h39m

A memory safe linux kernel would be a fairly incredible thing. If you could snap your fingers and have it, the wins would be huge.

Consider that right now a docker container can't be relied upon to contain arbitrary malware, exactly because the Linux kernel has so many security issues and they're exposed to containers. The reason why a VM like Firecracker is so much safer is that it removes the kernel as the primary security boundary.

Imagine if containers were actually vm-level safe? The performance and operational simplicity of a container with the security of a VM.

I'm not saying this is practical; at this point the C version of Linux is here to stay for quite a while and I think, if anything, Fuchsia is the most likely successor (and is unlikely to give us the memory safety that a Rust kernel would). But damn, if Linux had been built with safety in mind, security would be a lot simpler. Being able to trust the kernel would be so nice.

edit: OK OK. Yeesh. I meant this to be a hypothetical, I got annoyed at so many of the replies, and this has spiraled. I'm signing off.

I apologize if I was rude! Not a fun start to the morning.

opportune
16 replies
6h20m

Memory safety isn’t why containers are considered insufficient as a security boundary. It’s exposing essentially the entire Linux feature surface, and the ability to easily interact with the host/other containers that makes them unsafe by themselves. What you’re saying about VMs vs containers makes no sense to me. VMs are used to sandbox containers. You still need to sandbox containers if your kernel is written in rust

Even just considering Linux security itself: there are so, so many ways OS security can break besides a slight (you’re going to have to use unsafe a whole lot) increase in memory safety

jvanderbot
15 replies
6h18m

The culture around memory safe languages is a positive improvement for programmer zeitgeist. Man though the overreach all the way to "always safe forever" needs to be checked.

fires10
5 replies
6h4m

Can you expound on this some? I am not fully grasping your point. Are you saying "building safe by default" is a bad thing, or that assuming "safe forever" is a bad thing? Or are you saying something entirely different?

jvanderbot
2 replies
6h1m

I said "the idea of using memory safe languages is great!" And "using memory safe languages does not eliminate attack surface". (It's pre coffee here so I appreciate your probe)

I meant that it's over-reach to say it's completely trustworthy just bc it's written in a GC/borrow checked language.

insanitybit
1 replies
5h55m

The premise of my post was "imagine a memory safe kernel". I repeatedly use the word "imagine".

yjftsjthsd-h
0 replies
4h9m

The disagreement is that you wrote "imagine a memory safe kernel" but appear to have meant "imagine a kernel with zero vulnerabilities of any kind", and those things are not equivalent.

RHSeeger
1 replies
5h54m

I expect it's likely more of "memory safety in a language doesn't make it _safe_, it makes it less vulnerable". It removes _some_ issues, in the same way that a language with static types removes some ways a program can be wrong, it doesn't make it correct.

chatmasta
0 replies
38m

The problem is the word "safe," which is inherently ambiguous. Safe from what? A better term would be "correct," because at least that implies there is some spec to which the developer expects the program to conform (assuming the spec itself is "correct" and devoid of design flaws).

meltyness
4 replies
6h10m

Just the other day they were full kum ba yah over a holiday-time-released feature that's likely going to greatly increase the likelihood of building race conditions and deadlocks.

https://news.ycombinator.com/item?id=38721039

zanellato19
0 replies
4h19m

This is such a bizarre take on the stabilization of async that I can't even tell if you are being serious or just hate Rust.

jvanderbot
0 replies
5h59m

I know it's the curmudgeon NIMBY in me, and I love Rust and use it daily, but it's starting to feel like the worst of the JS crowd scribbled in grandpa's ANSI C books just to make him mad.

I am super happy with most features, but demand is demand and people demand runtimes. As a chimera it perfectly represents the modern world.

jcgrillo
0 replies
2h55m

To be clear you're talking about async fn and impl trait in return position in traits? If so, how does that impact the likelihood one way or the other of race conditions or deadlocks?
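
For concreteness, a minimal sketch of the feature I mean, assuming the linked thread is about the Rust 1.75 stabilization (the trait and type names are just illustrative):

    // `async fn` and return-position `impl Trait` directly in traits.
    #[allow(dead_code)] // not driven here; doing so would need an async runtime
    trait Storage {
        async fn load(&self, key: &str) -> Option<String>;
        fn keys(&self) -> impl Iterator<Item = String>;
    }

    struct MemStore;

    impl Storage for MemStore {
        async fn load(&self, key: &str) -> Option<String> {
            (key == "greeting").then(|| "hello".to_string())
        }
        fn keys(&self) -> impl Iterator<Item = String> {
            std::iter::once("greeting".to_string())
        }
    }

    fn main() {
        // The signatures above now compile on stable; whether that makes
        // races or deadlocks more likely is the open question here.
        let _store = MemStore;
    }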

dartos
0 replies
3h59m

How do you figure?

Is it just because it makes async possible?

atq2119
1 replies
2h49m

JS and Rust are memory safe languages with a culture of pulling in hundreds if not thousands of dependencies. So unfortunately, in terms of culture, at least those languages are not Pareto improvements.

chatmasta
0 replies
40m

Also a lot of "in Rust" rewrites include a substantial amount of unsafe FFI calls to the original C libraries...

a-dub
1 replies
2h47m

serious question: how much additional safety do you get over best practices and tooling in modern c++?

jvanderbot
0 replies
2h36m

It's not possible for me to say.

Clearly you can only do worse in Rust than you'd have with perfect C. But what's that?

The question is: what is the expected loss (time, bugs, exploits that lead to crashes or injury or death or financial catastrophe) with Rust vs other languages.

Unfortunately that's not the conversation we have. We instead have absolutism re managed memory, which does account for about half of known security flaws that have been discovered and patched. Removing half of bugs in one fell swoop sounds amazing.

It's not that we can't remove those bugs other ways. Maybe modern c++ can cut bugs in half at compile time too. But Rust seems nicer to play with and doesn't require much more than what it has out of the box. Also it's shiny and new.

Given that Rust is making its way into platforms that Cpp struggled into, it's potentially moot. I sincerely doubt Linux will accept Cpp, but we're on the cusp of Rust in Linux.

snvzz
14 replies
4h26m

If you find this amazing, perhaps you should take a look at seL4, which has formal proofs of correctness, going all the way down to the generated assembly code still satisfying the requirements.

It also has a much better overall architecture, the best currently available: A third generation microkernel multiserver system.

It provides a protected (with proof of isolation) RTOS with hard realtime, proof of worst case timing as well as mixed criticality support. No other system can currently make such claims.

plagiarist
6 replies
4h10m

I wish L4 had taken off for general purpose computing. That and Plan9 are things I'd really like to try out but I don't have space to fit operating systems in amongst the other projects. They both strike me as having the Unix nature, either "everything is messages in userspace processes" or "everything is a file."

jodrellblank
5 replies
3h43m

I don't think I've ever seen an argument for why "everything is a file" is a desirable thing. It seems like a kitchen where "everything is a bowl". Fine when you want to eat cereal, mix a cake, do some handwashing up, store some fruit; tolerable when you want to bake a cake in the oven. Intolerable when you've got a carrot in a bowl-shaped chopping board and you've got to peel and cut it using a bowl.

Why in principle should everything we do on computers behave and be shaped like a file?

baq
2 replies
3h31m

the file is inconsequential. it could be any other universal abstraction, e.g. HTTP POST.

it's just something that every program running on a computer knows how to do, so why bother with special APIs you have to link against if you can just write to a file? (note you can still develop those layers if you wish, but you can also write a device driver in sh if you wish, because why not?)

thfuran
0 replies
2h37m

the file is inconsequential. it could be any other universal abstraction

Bad abstractions are notoriously problematic, and no abstraction is fit for every purpose.

kybernetikos
0 replies
2h4m

So everything-is-a-file is a bit like REST - restrict the verbs, and move the complexity that causes into the addressing scheme and the clients.

plagiarist
0 replies
3h32m

I don't think the metaphor applies, because kitchen utensils' forms are dictated by their purpose, but software interfaces are abstractions.

A fairer analogy would be if everything in the kitchen was bowl-shaped, but you could do bowl-like actions and get non-bowl behavior. Drop the carrot in the peelbowl and it is peeled, drop the carrot in the knifebowl and it is diced, drop the cubes in the stovebowl and they are cooked. Every manipulation is placing things in bowls. Every bowl is the same shape which means you can store them however is intuitive to you (instead of by shape).

MisterTea
0 replies
1h11m

I don't think I've ever seen an argument for why "everything is a file" is a desirable thing.

A file system is a tree of named objects. These objects are seamlessly part of the OS and served by a program or kernel driver called a file server which can then be shared over a network. Security is then handled by file permissions so authentication is native through the system and not bolted on. It fits together very well and removes so much pointless code and mechanisms.

A great example is an old uni demo where a system was built to control X10 outlets and switches (early home automation gear). Each device was a file in a directory tree that represents a building, with subdirectories for floors and rooms - e.g. 'cat /mnt/admin-building/2fl/rm201/lights' would return 'on' or 'off' (maybe it's a dimmer and it's 0-255, or an r,g,b value or w/e, sky's the limit, just put the logic in the fs). To change the state of the lights you just echo off >/mnt/admin-building/2fl/rm201/lights.

Now you can make a script that shuts all the lights off in the building by walking directories and looking for "lights" then writing off to those files. Maybe it's a stage, all your lights are on DMX and you like the current settings so then you 'tar -c /mnt/auditorium/stage/lighting|gzip >student_orientation_lighting_preset.tar.gz' and do the reverse over-writing all the archived settings back to their respective files. You could even serve those files over smb to a windows machine and turn lights on and off using notepad or whatever. And the file data doesn't have to be text, it could be binary too. It's just that for some things like the state of lights, temperature or w/e can easily be stored and retrieved as human readable text.
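
To make the "walk the directories and write off to every lights file" idea concrete, here is a rough sketch (in Rust, since that's the thread topic; the /mnt layout and the "lights" file name are the hypothetical ones from above):

    use std::ffi::OsStr;
    use std::fs;
    use std::io;
    use std::path::Path;

    // Recursively walk the building tree and turn off every "lights" file.
    fn lights_off(dir: &Path) -> io::Result<()> {
        for entry in fs::read_dir(dir)? {
            let path = entry?.path();
            if path.is_dir() {
                lights_off(&path)?; // descend into floors, rooms, ...
            } else if path.file_name() == Some(OsStr::new("lights")) {
                fs::write(&path, "off")?; // the write *is* the control operation
            }
        }
        Ok(())
    }

    fn main() -> io::Result<()> {
        lights_off(Path::new("/mnt/admin-building"))
    }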

That is the beauty and power of 9p - it removes protocol barriers and hands you named objects seamlessly integrated into your OS which you can read/write using regular everyday tools. It's a shame so many people can't grasp it.

px43
3 replies
3h26m

Eh, seL4 has a suite of tools that turn their pile of C and ASM into an obscure intermediate language that has some formally verifiable properties. IMO this is just shifting the compiler problem somewhere else, into a dark corner where no one is looking.

I highly doubt that it will ever have a practical use beyond teaching kids in the classroom that formal verification is fun, and maybe nerd-sniping some defense weirdos to win some obscene DOD contracts.

Some day I would love to read a report where some criminal got somewhere they shouldn't, and the fact that they landed on an seL4 system stopped them in their tracks. If something like that exists, let me know, but until then I'm putting my chips on technologies that are well known to be battle tested in the field. Maestro seems a lot more promising in that regard.

snvzz
0 replies
3h16m

(Eh, ) I highly doubt that it will ever have a practical use beyond teaching kids in the classroom that formal verification is fun, and maybe nerd-sniping some defense weirdos to win some obscene DOD contracts.

Uh, perhaps take a look at the seL4 foundation's members[0], who are using it in the wild in very serious scenarios.

You can learn more about them as well as ongoing development work in seL4 Summit[1].

0. https://sel4.systems/Foundation/Membership/home.pml

1. https://sel4.systems/Foundation/Summit/home.pml

lifthrasiir
0 replies
3h10m

seL4 actually has an end-to-end proof, which proves that the final compiled binary matches up with the formal specification. There are not many places that bugs can be shifted---probably the largest one at this point is the CPU itself.

justneedaname
0 replies
2h58m

See here[0] one of Gernot Heiser's comments (part of the seL4 foundation) talking about how "there are seL4-based devices in regular use in several defence forces. And it's being built in to various products, including civilian, eg critical infrastructure protection".

There is also an interesting case study[1][2] where seL4 was shown to prevent malicious access to a drone. Using seL4 doesn't necessarily make an entire system safe but for high security applications you have to build from the ground up and having a formally proven kernel is the first step in doing that.

I have been fortunate enough to play a small role in developing some stuff to be used with seL4 and it's obvious that the team are passionate about what they've got and I wish them the best of luck

0 - https://news.ycombinator.com/item?id=25552222 1 - https://www.youtube.com/watch?v=TH0tDGk19_c 2 - http://loonwerks.com/publications/pdf/Steal-This-Drone-READM...

packetlost
2 replies
3h55m

Ok, but can I run a desktop on it? Not knocking seL4, it's damn amazing, but it's not exactly a Linux killer.

justincormack
0 replies
2h51m

Genode runs on sel4 and has a desktop gui.

als0
0 replies
2h50m

I think it's possible to run Genode[1] as a desktop on top of seL4 (Genode supports different kernels). However, I'm struggling to find a tutorial to get that up and running.

[1] https://en.wikipedia.org/wiki/Genode

scoot
7 replies
7h34m

a docker container can't be relied upon to contain arbitrary malware

"to not contain"?

Edit to contain (ahem!) the downvotes: I was genuinely confused by the ambiguous use of "contain", but comments below cleared that up.

OscarCunningham
5 replies
7h24m

They're using 'contain' to mean 'keep isolated'. If you put some malware in a docker container, you can't rely on docker to keep the rest of your system safe.

FpUser
3 replies
6h56m

Does the fact that docker runs as root have something to do with it?

progval
1 replies
6h50m

Yes, but even rootless containers rely on user namespaces, which are a recurring source of privilege escalation vulnerabilities in Linux.

insanitybit
0 replies
6h41m

The issue of root vs rootless is unrelated to escaping the container. User namespaces lead to privescs because attackers who can enter a namespace and become the root within that namespace have access to kernel functionality that is far less hardened (because upstream has never considered root->kernel to be a privesc and, of course, most people focus on unprivileged user -> kernel privesc). The daemon running as root doesn't change anything there
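
A rough sketch of that mechanism (my illustration, Linux-only, assumes the libc crate and that unprivileged user namespaces are enabled): an unprivileged process creates a user namespace and maps itself to uid 0 inside it, at which point the root-only kernel surface becomes reachable within the namespace.

    use std::fs;

    fn main() {
        let outer_uid = unsafe { libc::getuid() };

        // Create a new user namespace; no privileges required.
        let ret = unsafe { libc::unshare(libc::CLONE_NEWUSER) };
        assert_eq!(ret, 0, "unshare(CLONE_NEWUSER) failed");

        // Map our real uid to 0 inside the new namespace.
        fs::write("/proc/self/uid_map", format!("0 {outer_uid} 1")).unwrap();

        // We are now "root" inside the namespace, which exposes the
        // less-hardened root->kernel surface described above.
        assert_eq!(unsafe { libc::getuid() }, 0);
        println!("uid 0 inside the namespace, {outer_uid} outside");
    }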

insanitybit
0 replies
6h41m

No, it's because the malware would have direct access to the (privileged) Linux Kernel via system calls.

scoot
0 replies
6h41m

Got it, thanks.

quickthrower2
0 replies
6h55m

a docker image can’t be relied on to not contain malware and a docker container can’t be relied on to contain malware.

K0nserv
7 replies
7h27m

I largely agree, but this seems quite unfair to Linux.

But damn, if Linux had been built with safety in mind security would be a lot simpler. Being able to trust the kernel would be so nice.

For its time, it was built with safety in mind; we can't hold it to a standard that wasn't prevalent until ~20 years later.

insanitybit
5 replies
6h34m

I don't think it's that unfair, but I don't want to get into a whole thing about it, people get really upset about criticisms of the Linux kernel in my experience and I'm not looking to start my morning off with that conversation.

We can agree that C was definitely the language to be doing these things in and I don't blame Linus for choosing it.

My point wasn't to shit on Linux for its decisions, it was to think about a hypothetical world where safety was built in from the start.

pjmlp
1 replies
6h24m

Apollo Computer is notable for having had a UNIX compatible OS, written in a Pascal dialect, as was Mac OS (migrating to C++ later on).

jll29
0 replies
5h11m

Clarification: The Apollo computer series by Apollo Domain is meant here, not the Apollo space mission, just to be sure.

The Pascal-based operating system is Aegis (later DomainOS), which - with UNIX - is a joint precursor of HP-UX: https://en.wikipedia.org/wiki/Domain/OS#AEGIS .

peoplefromibiza
1 replies
6h24m

where safety was built in from the start

Don't worry, in 30 years people will write the same thing about using Rust, assuming that Rust will still be in use 30 years from now.

thom
0 replies
6h4m

Yeah, how naive we were building operating systems without linear and dependent types. Savages.

bluGill
0 replies
4h5m

why not ada? Sure rust didn't exist when linux was first being built, but ada did and had a number of memory safety features. (not the same as rust's, but still better than C)

ladyanita22
0 replies
6h31m

*30 years...

Yes, we're that old.

badrabbit
6 replies
6h48m

Consider that right now a docker container can't be relied upon to contain arbitrary malware, exactly because the Linux kernel has so many security issues and they're exposed to containers

If you don't run docker as root, it's fairly ok for normal software. Kernel memory safety is not the main issue with container escapes. Even with memory safety, you can have logical bugs that result in privilege escalation scenarios. Is docker itself in Rust?

Memory safety is not a magic bullet, the Linux kernel isn't exactly trivial to exploit either these days, although still not as hardened as windows (if you don't consider stuff like win32k.sys font parsing kernel space since NT is hybrid after all) in my humble opinion.

Linux had been built with safety in mind security would be a lot simpler

I think it was, given the resources available in 1993. But if Torvalds had caved in and allowed a mini-kernel or NT-like hybrid design instead of a hard-core monolithic Unix, it would have been a game changer. In 1995, Ada was well-accepted mainstream; it was memory safe and even Rust devs learned a lot from it. It just wasn't fun to use for the devs (on purpose, so devs were forced to do tedious stuff to prevent even non-memory bugs). But since Linux is developed by volunteers, they used what attracts the most volunteers.

The main benefit of Rust is not its safety but its popularity. Ada has been running on missiles, missile defense, subways, aircraft, etc... for a long time and it even has a formally verified subset (SPARK).

In my opinion, even today Ada is technically a better fit for the kernel than Rust, because it is time-tested and version-stable and it would open up the possibility of easily formally verifying parts of the kernel.

Given how widely used Linux is, it would require a massive backing fund to pay devs to write something not so fun like Ada though.

Throw839
3 replies
5h29m

If I remember correctly, Ada was much slower compared to C. Stuff like boundary checks on arrays has a cost.

docandrew
2 replies
4h26m

Runtime checks can be disabled in Ada. They’re useful for debug builds though!

Throw839
1 replies
4h10m

But that eliminates the purpose of Ada. Rust has a better type system to deal with this.

badrabbit
0 replies
2h2m

I thought both Ada and Rust have good compile-time checks for memory safety that eliminate the need for runtime checks?
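
For what it's worth, a small sketch of the distinction (my example, not anyone's claim above): in Rust, plain indexing is still a runtime bounds check; the compile-time machinery rules out things like use-after-free and data races, and iterator-style code lets the compiler avoid per-element checks.

    fn main() {
        let xs = [10u32, 20, 30];
        // A value the compiler cannot reason about statically.
        let i = std::env::args().count();

        // `xs[i]` compiles fine but is a *runtime* bounds check: it panics
        // if `i` is out of range rather than being rejected at compile time.
        if i < xs.len() {
            println!("xs[{i}] = {}", xs[i]);
        }

        // Iteration stays in bounds by construction, so the compiler can
        // elide the per-element checks.
        let sum: u32 = xs.iter().sum();
        println!("sum = {sum}");
    }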

insanitybit
1 replies
6h43m

Kernel memory safety is not the main issue with container escapes.

I disagree, I think it is the primary issue. Logical bugs are far less common.

the Linux kernel isn't exactly trivial to exploit either these days

It's not that hard, though of course exploitation hasn't been trivial since the 90s. We did it at least a few times at my company: https://web.archive.org/web/20221130205026/graplsecurity.com...

Chompie certainly worked hard (and is one of if not the most talented exploit devs I've met), but we're talking about a single exploit developer developing highly reliable exploits in a matter of weeks.

badrabbit
0 replies
2h8m

A single talented developer taking weeks sounds about right; that's what I meant by difficult. But you also have vulns that never get a CVE issued or an exploit developed because of kernel-specific hardening.

As for container escapes, there are tools like deepce:

https://github.com/stealthcopter/deepce

I can't honestly say I've heard of real life container escapes by attackers or pentesters using kernel exploits. Although I am sure it happens and there are people who won't update the host's kernel to patch it.

t8sr
5 replies
6h5m

Container vulnerabilities are rarely related to memory bugs. Most vulnerabilities in container deployments are due to logical bugs, misconfiguration, etc. C-level memory stuff is absolutely NOT the reason why virtualization is safer, and not something Rust would greatly improve. On the opposite end of the spectrum, you have hardware vulnerabilities that Rust also wouldn't help you with.

Rust is a good language and I like using it, but there's a lot of magical thinking around the word "safe". Rust's definition of what "safe" means is fairly narrow, and while the things it fixes are big wins, the majority of CVEs I've seen in my career are not things that Rust would have prevented.

insanitybit
4 replies
6h2m

Container vulnerabilities are rarely related to memory bugs.

The easiest way to escape a container is through exploitation of the Linux kernel via a memory safety issue.

C-level memory stuff is absolutely NOT the reason why virtualization is safer

Yes it is. The point of a VM is that you can remove the kernel as a trust boundary because the kernel is not capable of enforcing that boundary because of memory safety issues.

but there's a lot of magical thinking around the word "safe"

There's no magical thinking on my part. I'm quite familiar with exploitation of the Linux kernel, container security, and VM security.

the majority of CVEs I've seen in my career are not things that Rust would have prevented.

I don't know what your point is here. Do you spend a lot of time in your career thinking about hardening your containers against kernel CVEs?

t8sr
2 replies
5h48m

I don't know what your point is here. Do you spend a lot of time in your career thinking about hardening your containers against kernel CVEs?

Yes, I literally led a team of people at a FAANG doing this.

You're saying the easiest way to escape a container is a vulnerability normally priced over 1 million USD. I'm saying the easiest way is through one of the million side channels.

insanitybit
1 replies
5h45m

OK, I apologize if I was coming off as glib or condescending. I will take your input into consideration.

I'm not looking to argue, I was just annoyed that I was getting so many of the same comments. It's too early for all of this negativity.

If you want to discuss this via an avenue that is not HN I would be open to it, I'm not looking to make enemies here, I'd rather have an earnest conversation with a colleague rather than jumping down their throats because they caught me in the middle of an annoying conversation.

t8sr
0 replies
5h10m

Same, re-reading my replies I realize I phrased things in a stand-offish way. Sorry about that.

Thanks for being willing to take a step back. I think possibly we are talking about two different things. IME most instances of exploitation are due to much more rudimentary vulnerabilities.

My bias is that, while I did work on mitigations for stuff like Meltdown and Rowhammer, most "code level" memory vulnerabilities were easier to just patch, than to involve my team, so I probably under-estimate their number.

Regardless, if I were building computation-as-a-service, 4 types of vulnerability would make me worry about letting multiple containers share a machine:

1. Configuration bugs. It's really easy to give them access to a capability, a mount or some other resource they can use to escape.

2. Kernel bugs in the filesystems, scheduling, virtual memory management (which is different from the C memory model). It's a big surface. As you said, better use a VM.

3. The kernel has straight up vulnerabilities, often related to memory management (use after free, copy too much memory, etc.)

4. The underlying platform has bugs. Some cloud providers don't properly erase physical RAM. x86 doesn't always zero registers. Etc.

Most of my experience is in 1, a bit of 2 and mitigation work on 4.

The reason I think we're talking past each other a bit is that you're generating CVEs, while I mostly worked on mitigating and detecting/investigating attacks. In my mind, the attacks that are dirt cheap and I see every week are the biggest problem, but if we fix all of those, and the underlying platform gets better, I see that it'll boil down to trusting the kernel doesn't have vulnerabilities.

nonameiguess
0 replies
3h31m

You two seem to have figured this out, but as far as I can tell, the disconnect here is that the vast majority of security issues related to the separation difference between VMs and containers isn't due to container "escapes" at all. It's due to the defaults of the application you're running assuming it's the only software on the system and it can run with any and all privileges. Lazy developers don't give you containers that work without running as privileged and demand from users to use that application after migrating from a primarily VM-based IT infrastructure to a primarily container-based one is too great to simply tell them no, and if it's free software, you have no ability to tell the developers to do anything differently.

Discussions on Hacker News understandably lean toward the concerns of application developers and especially greenfield projects run by startups who can take complete control of the full stack if they want to. But running applications using resources partially shared by other applications encompasses a hell of a lot of other scenarios. Think some bank or military department that has to self-host ADP, Rocket Chat, a Git server, M365, and whatever other hundreds of company-wide collaboration tooling the employees need. Do you do it on VMs or containers? If the application in question inherently assumes it is running on its own server as root, the answer to that question doesn't really depend on kernel CVEs potentially allowing for container escapes.

If we're just reasoning from first principles, applications in containers on the same host OS share more of a common attack surface than applications in VMs on the same physical host, and those share more than applications running on separate servers in the same rack, which in turn share more than servers in separate racks, which in turn share more than servers in separate data centers. The potential layers of separation can be nearly endless, but there is a natural hierarchy on which containers will always sit below VMs, regardless of the kernel providing the containers.

Even putting that aside, if we're going to frame a choice here, these are not exactly kernels on equal footing. A kernel written in C that has existed for nearly four decades and is used on probably trillions of devices by everything from hobbyists to militaries to Fortune 50 companies to hospitals to physics labs is very likely to be safer on any realistic scale compared to a kernel written in Rust by one college student in his spare time that is tested on Qemu. The developer himself tells you don't use this in production.

I think the annoyance here is it often feels when reading Hacker News that a lot of users treat static typing and borrow checking like it's magic and automatically guarantees a security advantage. Imagine we lived in the Marvel Multiverse and vibranium was real. It might provide a substrate with which it is possible to create stronger structures than real metals, but does that mean you'd rather fly in an aircraft constructed by Riri Williams when she is 17 that she built in her parents' garage or would you rather trust Boeing and whatever plain-ass alloy with all its physical flaws they put into a 747? Maybe it's a bad analogy because vibranium pretty much is magic but there is no magic in the real world.

cmrdporcupine
3 replies
6h52m

I like Rust and work in it fulltime, and like its memory-safety aspects but I think it's a bit of a stretch to be able to claim memory safety guarantees of any kind when we're talking about low-level code like a kernel.

Because in reality, the kernel will have to do all sorts of "unsafe" things even just to provide for basic memory management services for itself and applications, or for interacting with hardware.

You can confine these bits to verified and well-tested parts of the code, but they're still there. And because we're human beings, they will inevitably have bugs that get exploited.

TLDR being written in Rust is an improvement but no guarantee of lack of memory safety issues. It's all how you hold the tool.

ho_schi
1 replies
5h52m

Yep. And tooling to secure C has improved a lot in recent years. The Address Sanitizer is a big improvement. I'm looking forward to C++ improving as a language itself, because it has already improved (smart pointers, RAII, a lot of edge cases regarding sequencing) and they seem willing to modify the actual language. This opens a path for projects to migrate from C to C++. A language inherits a lot from its introduction (strengths/weaknesses) but also changes a lot.

Every interaction with hardware (disk, USB, TCP/IP, graphics…) needs to execute unsafe code. And we have firmware. Firmware has probably been an underestimated issue for a long time :(

Aside from errors caused by undetected undefined behavior, all kinds of errors remain possible. Especially logic errors, which are probably the biggest surface?

Example:

https://neilmadden.blog/2022/04/19/psychic-signatures-in-jav...

Honestly I struggle to see the point in rewriting C++ code in Java just for the sake of doing it. Improving test coverage for the C++ implementation would probably have been less work and wouldn't have created the security issue in the first place.

That being said, I want to see an #unsafe and #safe in C++. I want some hard check that the code executes only defined behavior. And modern compilers can do it for Rust. The same applies to machine-dependent/implementation-defined code, which isn't undefined but can also be dangerous.

cmrdporcupine
0 replies
2h30m

One of the inspirations for Rust, as I recall, was Cyclone: https://cyclone.thelanguage.org/

Which was/is a "safe" dialect of C; basically C extended with a bunch of the stuff that made it into Rust (algebraic datatypes, pattern matching, etc.) Though its model of safety is not the borrow checker model that Rust has.

Always felt to me like something like Cyclone would be the natural direction for OS development to head in, as it fits better with existing codebases and skillsets.

In any case, I'm happy to see this stuff happening in Rust.

insanitybit
0 replies
6h35m

I've responded to the central point of "there will still be 'unsafe'" here: https://news.ycombinator.com/item?id=38853040

GuB-42
3 replies
6h53m

More like memory safer. A kernel necessarily has a lot of unsafe parts. See: https://github.com/search?q=repo%3Allenotre%2Fmaestro+unsafe...

Rust is not a magic bullet, it just reduces the attack surface by isolating the unsafe parts. Another way to reduce the attack surface would be to use a microkernel architecture, it has a cost though.
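
A tiny sketch of that isolation pattern (mine, not Maestro's actual code): the unsafe block is small, carries a written justification, and is wrapped behind a safe function, so callers never touch `unsafe` themselves.

    // Safe wrapper: the precondition is established here, once.
    fn first_half(bytes: &[u8]) -> &[u8] {
        let mid = bytes.len() / 2;
        // SAFETY: `mid <= bytes.len()` by construction, so the range is in
        // bounds and the result borrows from the input slice.
        unsafe { bytes.get_unchecked(..mid) }
    }

    fn main() {
        let data = b"maestro";
        println!("{:?}", first_half(data));
    }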

viraptor
0 replies
6h42m

You're not really illustrating your point well with the link. If you look through the examples, they're mostly trivial and there's no clear way to eliminate them. Some reads/writes will interact with hardware and the software concepts of memory safety will never reach there because hardware does not operate at that level.

Check a few of the results. They range from single assembler line (interrupts or special registers), array buffer reads from hardware or special areas, and rare sections that have comments about the purpose of using unsafe in that place.

Those results really aren't "look how much unsafe code there is", but rather "look how few, well isolated sections there are that actually need to be marked unsafe". It's really not "a lot" - 86 cases across memory mapping, allocator, task switching, IO, filesystem and object loader is surprisingly few. (Actually even 86 is overestimated because for example inb is unsafe and blocks using it are unsafe so they're double-counted)

insanitybit
0 replies
6h37m

Practically speaking, even with `unsafe`, exploitation of Rust programs is extremely difficult. With modern mitigation techniques you need to be able to chain multiple vulnerabilities and primitives together in order to actually reliably exploit software.

Bug density from `unsafe` is so low in Rust programs that it's just radically more difficult.

My company (not me, Chompie did the work, all credit to her for it) took a known bug, which was super high potential (write arbitrary data to the host's memory), and found it extremely difficult to exploit (we were unable to): https://chompie.rip/Blog+Posts/Attacking+Firecracker+-+AWS'+...

Ultimately there were guard pages where we wanted to write and it would have taken other vulnerabilities to actually get a working POC.

Exploitation of Rust programs is just flat out really, really hard.

arghwhat
0 replies
6h47m

While I agree, do note that a significant portion of a kernel is internal logic that can be made much safer.

phh
1 replies
6h52m

Imagine if containers were actually vm-level safe? The performance and operational simplicity of a container with the security of a VM.

As far as I know, the order of magnitude of container security flaws from memory safety issues is the same as that of flaws coming from namespace logic issues, and you have to top that with hardware issues. I'm sorry, but Rust or not, there will never be a world where you can 100% trust running malware.

Fuchsia [...] is unlikely to give us the memory safety that a Rust kernel would

Well, being a microkernel makes it easier to migrate bit by bit, and not care about ABI.

insanitybit
0 replies
6h44m

the order of magnitude of container security flaws from memory safety issues is the same as that of flaws coming from namespace logic issues,

Memory safety issues are very common in the kernel, namespace logic issues are not.

peoplefromibiza
1 replies
5h30m

if Linux had been built with safety in mind security would be a lot simpler

I'm replying simply because you're getting defensive with your edits, but you're missing a few important points, IMO.

First of all, the comment I quoted falls straight into the category of if only we knew back then what we know now.

What does it even mean "built with safety in mind" for a project like Linux?

No one could predict that Linux (which was born as a kernel) would run on billions of devices that people keep in their pockets and constantly use for everything, from booking a table at the restaurant to checking the weather, from chatting with other people to accessing their bank accounts. And that said banks would use it too.

Literally no one.

Computers were barely connected back then, internet wasn't even a thing outside of research centers and universities.

So, what kind of safety should he have planned for?

And to safeguard what from what and who from who?

Secondly, Linux was born as a collaborative effort to write something already old: a monolithic Unix like kernel, nothing fancy, nothing new, nothing experimental, just plain old established stuff for Linus to learn how that kernel thing worked.

The most important thing about it was to be a collaborative effort so he used a language that he and many others already knew.

Had Linus used something more suited to stronger safety guarantees, such as Ada (someone else already mentioned it), Linux wouldn't be the huge success it is now and we would not be having this conversation.

Lastly, the strongest Linux safety guarantee is IMO the GPL license, which conveniently all these Rust rewrites are trading for more permissive licenses. That steers away from what Linux was, and still largely is: a community effort based on the work of thousands of volunteers.

bigstrat2003
0 replies
49m

Lastly, the strongest Linux safety guarantee is IMO the GPL license, which conveniently all these Rust rewrites are trading for more permissive licenses. That steers away from what Linux was, and still largely is: a community effort based on the work of thousands of volunteers.

There is nothing about permissive licenses which prevents the project from being such a community effort. In fact, most of the Rust ecosystem is a community effort just like you describe, while most projects have permissive licenses. There's no issue here.

maayank
1 replies
5h55m

I’m interested of reading more. Where can I find the blog posts?

insanitybit
0 replies
5h51m

https://web.archive.org/web/20221130205026/graplsecurity.com...

The company no longer exists so you can find at least some of them mirrored here:

https://chompie.rip/Blog+Posts/

The Firecracker, io_uring, and ebpf exploitation posts.

Chompie was my employee and was the one who did the exploitation, though I'd like to think I was at least a helpful rubber duck, and I did also decide on which kernel features we would be exploiting, if I may pat myself on the back ever so gently.

jmakov
1 replies
6h4m

Hasn't Kata Containers solved this problem: https://github.com/kata-containers/kata-containers ?

insanitybit
0 replies
5h57m

Kata is an attempt at solving this problem. There are problems:

1. If using firecracker then you can't do nested virtualization

2. You still have the "os in an os" problem, which can make it operationally more complex

But Kata is a great project.

mikepurvis
0 replies
3h48m

Isn't gVisor kind of this as well?

"gVisor is an application kernel for containers. It limits the host kernel surface accessible to the application while still giving the application access to all the features it expects. Unlike most kernels, gVisor does not assume or require a fixed set of physical resources; instead, it leverages existing host kernel functionality and runs as a normal process. In other words, gVisor implements Linux by way of Linux."

https://github.com/google/gvisor

kossTKR
0 replies
5h5m

Has there ever been any examples of malware/viruses jumping around through levels like this?

I'm honestly interested to know, because it sounds like a huge deal here, but in my laymans ears very cool and sci fi!

giancarlostoro
0 replies
3h34m

I didn't know Firecracker existed, that's really awesome. Looks to be in Rust as well. I'll have to look at how this differs from the approach that Docker uses, my understanding is that Docker uses cgroups and some other built-in Linux features.

Timber-6539
0 replies
5h10m

Containers became popular because it doesn't make much sense to be running full blown virtual machines just to run simple single process services.

You can lock down the allowed kernel syscalls with seccomp and go further with confining the processes with apparmor. Docker has good enough defaults for these 2 security approaches.
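
To make the seccomp part concrete, a hedged sketch of the raw mechanism (not Docker's actual JSON profile format; Linux-only, assumes the libc crate): strict mode is the bluntest policy, leaving only read, write, _exit and sigreturn available, while Docker's default profile is a much finer-grained filter.

    fn main() {
        // Enter seccomp strict mode; any syscall outside the tiny allowlist
        // now kills the process with SIGKILL.
        let ret = unsafe {
            libc::prctl(libc::PR_SET_SECCOMP, libc::SECCOMP_MODE_STRICT as libc::c_ulong)
        };
        assert_eq!(ret, 0, "prctl(PR_SET_SECCOMP) failed");

        // write(2) is still allowed...
        let msg = b"locked down\n";
        unsafe { libc::write(1, msg.as_ptr().cast(), msg.len()) };

        // ...but even returning from main would go through exit_group(2),
        // which strict mode forbids, so leave via the raw exit(2) syscall.
        unsafe { libc::syscall(libc::SYS_exit, 0) };
    }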

Full fat VMs are not immune to malware infection (the impact still applies to the permitted attack surface). Might not be able to easily escape to host but the risk is still there.

Alifatisk
0 replies
5h35m

Consider that right now a docker container can't be relied upon to contain arbitrary malware, exactly because the Linux kernel has so many security issues and they're exposed to containers.

No, Docker containers were never meant for that. Never use containers with untrusted binaries. There are Vagrant and others for that.

weinzierl
33 replies
7h50m

I love it and hope it will catch on.

It reminds me of what Linus Torvalds once said when asked about fearing competition, though.

From my memory his answer was something like: I really like writing device drivers. Few people like that and until someone young and hungry comes along who likes that I'm not afraid of competition.

sylware
32 replies
7h34m

But GPU drivers require a fairly big team. We must have a GPU hardware programming "standard" first. And I would favor a RISC-V kernel instead, to avoid depending on a compiler for a super complex syntax (Rust) and repeating the gcc-dependency-like mistake all over again. For this reason, biting the bullet and moving to a modern worldwide standard ISA would actually be the real move forward. We already have Linux and others tied to gcc extensions and very recent ISO tantrums (porting to non-inline assembly and back to C89/99 is carefully made unreasonable, not to mention assembly code paths tied to specific stack alignment features from the compiler). Namely, there is a serious imbalance between the compiler's complexity and what it actually brings to the table.

Snow_Falls
22 replies
6h48m

What do you mean by a RISC-V kernel? One written in RISC-V assembly? Because that would be terrible.

RISC-V is an instruction set architecture; Rust is a programming language. You can port languages to target ISAs. Linux can already run on RISC-V. The ISA of the hardware and the language of the software it runs are completely different issues.

sylware
21 replies
6h22m

Well, I think you are wrong, and that would actually be the real way forward: an assembly written kernel using a worldwide standard ISA, aka RISC-V.

Of course, it would have to not abuse any preprocessor, because moving the issue from the complexity of the compiler dependency to a preprocessor complexity dependency would nullify everything.

Doing that in Rust is just making the mistake of Linux all over again, actually even worse, since Rust syntax is much more complex than C with gcc extensions.

mkl
12 replies
4h29m

What you are proposing is so terrible in practice that people have invented hundreds of programming languages to escape it. Millions of person-hours have been spent on getting away from what you say is desirable.

sylware
11 replies
4h20m

You are right in a legacy context where the mess of ISAs required an abstraction of the assembly language.

But where you are wrong is this: moving forward in a world with a modern worldwide standard ISA (RISC-V) actually means writing a kernel in assembly (without abusing any preprocessing).

MobiusHorizons
4 replies
4h0m

RISC-V assembly is not substantially easier to write than any other assembly. RISC assemblies in general (e.g. MIPS, Arm, RV64, etc.) require more instructions to accomplish the same tasks. I would argue they are designed with compilers in mind more than human authors. Older assemblies expected the programmer to be programming in assembly directly, which is why they allow you to express your intent more directly than RISC. Humans of course can and do write assembly in all of these, but RISC-V has not somehow made assembly programming any safer or more portable than it ever was.

sylware
3 replies
3h33m

RISC-V is a modern load-store ISA. It is very clean, much more so than the mess of x86_64, for instance. It is actually better to write RISC-V than x86_64, even if the latter is CISC.

edgyquant
1 replies
2h35m

Even if this is true, which I agree it is, it's still 100x more painful than writing in a programming language and compiling to RISC-V. That also gives you the benefit of compiling to multiple archs.

sylware
0 replies
2h21m

This is where you are wrong: it will take more time, be a bit more painful and require different training... but your code will be shielded against compiler and language complexity and planned obsolescence, which has stellar value in the long run.

Short-termists won't understand, as this is aiming for the long run, and that requires perspective on what has happened in software over the last decades.

MobiusHorizons
0 replies
1h50m

I agree that rv64 is easier to write than x86_64. My point was more that earlier ISAs were designed with human authors as a target audience, and that results in differences that are arguably (subjective) easier to hand write. Modern load store architectures are cleaner, but also quite verbose. 6502 is probably more representative of my point than 32bit x86, and most things in the 64 bit era are more compiler focused.

mkl
3 replies
4h9m

People have had the ability to do that for decades, and they have almost always chosen not to when they had the option, because it's terrible. RISC-V is nothing revolutionary on that front, and doesn't magically make it better. Without preprocessing you won't have variable names or jump label names or strings, which is even more terrible.

Please give us some actual evidence, instead of just saying "you're wrong", because the position you're taking seems extreme and, frankly, bonkers.

sylware
2 replies
3h52m

I said without abusing the preprocessing, not without preprocessing at all.

My opinion is that this is wrong, and I did voice my disagreement and gave my own view on the matter. If there is something extreme here, it is the karma slash upon displeasing the pro-Rust people and AI bots on HN.

jodrellblank
1 replies
17m

"If there is something extreme here is the karma slash upon displeasing the pro rust people and AI bots on HN."

It's refusing to backup your position with anything more than saying "you're wrong" "you're wrong" "you're wrong". You've repeatedly been asked to explain why RISC_V would be better, why RISC_V changes anything, what you propose for how it would work (e.g. emulators), and you haven't. That's annoying, timewasting, and downvoteworthy.

sylware
0 replies
2m

Again, this is wrong, actually a lie in regard to what I have been saying in this thread. I have been giving my opinion and explaining my views all along, in all parts of this thread.

This is so bluntly disregarding everything that I think you may very probably be an AI bot with a very small context window.

aeonik
1 replies
3h29m

What makes RISC-V special here? You speak as if RISC-V has some fundamental differences that invalidates old limitations that keep us trapped. Why is this the case? I'm very curious, and don't understand.

Also, why does preprocessing matter so much?

sylware
0 replies
2h29m

RISC-V is a real, free, worldwide standard for a modern and good-enough ISA, and that changes everything; namely, "moving forward" is not going to be the same as in a legacy context with those locked, non-free ISAs.

Preprocessing does matter because it would be pointless to get rid of compiler complexity only to get preprocessing complexity instead.

For instance, the x86_64 assembler fasmg has the most powerful preprocessor out there... because the assembler is actually written in this preprocessor language! So it is very easy to "slip" and end up using this preprocessor so excessively that you are no longer really writing assembly code!

vkazanov
5 replies
5h6m

You can get your assembly-written kernel right now for every OS out there: just compile Linux using a RISC-V backend of your favourite compiler.

sylware
4 replies
4h53m

This is severely wrong: you cannot compare hand-written and properly commented assembly with compiler-generated assembly.

bluGill
3 replies
3h55m

Compare in what way? Handwritten and commented assembly will be much easier for a human to maintain. However modern optimizers are much more likely to apply the correct micro optimization tricks to get the best performance - and they can output different assembly for different variations of the same instruction set with ease, something that would make the hand written assembly much more complex.

sylware
2 replies
3h37m

"Handwritten and commented assembly will be much easier for a human to maintain"

It seems some people here have issue acknowledging that.

But here is where you are wrong: on modern micro-archs, everything mostly happens at runtime. Specific micro-arch optimizations are not done anymore; the Linux kernel does not bother anymore and is compiled for "generic" x86_64, for instance, as it is not worth it (and may cause more harm in the end). Usually, you only care about basic static optimizations, like cache lines, the code fetch window, alignment, which are more about writing "correct" assembly code than anything else.

And even with that, in the worst-case scenarios, one could write some specific micro-arch code paths which would be "installed"/"branched to" at runtime; not an issue when thinking long term about the life cycle of many software components. At least that knowledge would not be hidden deep in the absurd complexity of an optimizing compiler...

bluGill
1 replies
2h35m

Unless you are building your own custom kernel (i.e. Gentoo), CPU-specific optimizations are not worth it, as they are worse for any other CPU even if the code still runs. Most software never had those micro-optimizations applied, because most software wants one build that runs on many different CPUs, but if you want to make an exception it is still possible with compiled code.

While you can write those micro-optimizations for each CPU by hand, they are not worth the human cost except in very rare situations. In most cases, of course, you can't measure the difference, as only a couple of CPU cycles are saved.

sylware
0 replies
2h28m

This is what I just said. Then we agree on that matter.

BatmanAoD
1 replies
3h55m

Are you proposing a kernel that would only run on risc-v hardware, or expecting that people would run some kind of emulator?

....or do you think that because RISC-V is "standard", assembly for RISC-V would run on any hardware?

sylware
0 replies
2h14m

The "right way" would be CPU vendors to support that standard. But I have thought about running a 64bits RISC-V interpreter on x86_64 (Mr Bellard, ffmpeg, tinycc, etc, wrote a risc-v emulator which could be as a based for that), and that in the kernel. Basically, you would have RISC-V assembly for x86_64 arch: at least, RISC-V here would be stellar more robust and stable that all the features creeps we have in the linux kernel because of the never ending gcc extensions addition and latest ISO C tantrums...

vmfunction
4 replies
5h4m

We must have a GPU hardware programming "standard" first.

Isn't that what WebGPU has become?

flohofwoe
3 replies
4h35m

The WebGPU API is very far removed from the hardware. It's the common subset of Vulkan, D3D12 and Metal, each of those APIs also being fairly high level abstractions over different GPU architectures.

sylware
2 replies
4h31m

I don't know about WebGPU, but it seems you missed the word "hardware" in "hardware programming interface", like NVMe is for non-volatile memory devices.

flohofwoe
1 replies
4h19m

I was replying to the "Isn't that what WebGPU has become?".

The WebGPU programming model is already too high level for a "hardware programming interface". WebGPU is designed to sit on top of other 3D APIs, which in turn sit on GPU vendor drivers, and most of the complexity and 'hidden magic' is in those drivers.

sylware
0 replies
3h51m

oops! wrong reply button, my bad.

madushan1000
3 replies
5h50m

What do you mean? The most complicated parts of the GPU driver are in userland and handled by Mesa or equivalent. Kernel drivers expose a standard interface (DRM) which userspace drivers use to upload compiled GPU programs and manage GPU memory. Also, Linux is not tied to gcc extensions; it has been compilable with LLVM (clang) for a long time now.

sylware
2 replies
4h34m

What you said is mostly wrong.

The kernel part of the GPU driver is massive. It is well known; maybe you were misguided: for instance, the AMD GPU drivers are gigantic compared to the actual kernel.

clang (LLVM) is playing cat and mouse with the gcc extensions and recent ISO C tantrums which creep into the kernel: Linus T. does not resist those; he only resists the Linux userland ABI (syscall) breakers.

madushan1000
1 replies
1h59m

It's also well known the driver is massive because there are hundreds of thousands of lines of auto-generated register access code in there. Not because it's inherently very complex.

sylware
0 replies
1h7m

And again you are wrong. It was debunked not long ago, and I think it was here on HN. The AMD drivers, kernel side, are still gigantic even excluding the generated register descriptions. Is this what we call an AI lie?

But here is where you may be right: it seems the Nvidia hardware programming interface is much cleaner than AMD's and may require much, much less code.

dottedmag
16 replies
7h43m

Syscalls are easy. Drivers will be tough.

weinzierl
14 replies
7h22m

Drivers are the tough part and the lack of a stable interface in Linux makes them hard to reuse.

rhabarba
9 replies
6h59m

People who want stable interfaces should not touch anything Linux with a ten-foot pole.

ahmedfromtunis
7 replies
6h49m

Care to elaborate on this?

I clearly understand nothing of this, but I've always felt confused about it. Why won't Linux aim for ABI stability? Wouldn't that be a win for everyone involved?

peoplefromibiza
4 replies
6h32m

The Linux Kernel Driver Interface

(all of your questions answered and then some)

https://github.com/torvalds/linux/blob/master/Documentation/...

waych
2 replies
1h37m

Cyclic logic that says you're wrong for wanting a stable kernel interface, because the kernel keeps changing so the solution is to just get your code merged into mainline. As a tautology, it's true, but it's also a cover for "because we don't want to".

See Windows or android GKI for existence proof that it can be done if so motivated.

ahmedfromtunis
1 replies
1h25m

From what I understood, I think the big difference here is the human factor: Windows and Android are maintained by employees, who have no choice but to work on things even if they don't like doing it. Linux on the other hand is a collective effort of people doing what they want to do on their free time.

surajrmal
0 replies
42m

That's a myth. Most Linux contributions come from paid employees of various companies, not unpaid volunteers.

ahmedfromtunis
0 replies
4h4m

Great to see that Greg Kroah-Hartman dedicated a whole article to answering my questions. Thanks!

Karellen
0 replies
6h35m

IshKebab
0 replies
2h20m

TL;DR: maintaining a stable driver ABI is more work because you have to deal with backwards compatibility, and it mainly benefits vendors that don't make their drivers open source.

So the Linux devs are really against it both from a lack of resources point of view, and from an ideological "we hate closed source" point of view.

Unfortunately, most vendors with closed source drivers don't give a shit about ideology and simply provide binaries for very specific Linux releases. That means users end up getting screwed because they are stuck on those old Linux versions and can't upgrade.

The Linux devs have this strange idea that vendors will see this situation as bad and decide that the best option is to open source their code, but that never happens. Even Google couldn't get them to do that. This is one of the main reasons that Android OS updates are not easy and universal.

SpaghettiCthulu
0 replies
22m

That's only in terms of the driver interface, right? My understanding is that the userspace interface is extremely stable.

llenotre
2 replies
2h53m

There have been attempts to create kernel-agnostic interfaces for drivers such as: https://en.wikipedia.org/wiki/Uniform_Driver_Interface

For my case, I am planning to re-implement them. I like doing this.

I sure am not going to be able to re-implement everything myself though. I will concentrate on what I need, and I will consider implementing others if anyone else other than me is willing to use the OS (which would be incredible if it happened)

seastarer
1 replies
1h22m

You can implement a virtual machine monitor (e.g. KVM) and then launch a Linux virtual machine to run drivers you lack.

joveian
0 replies
1h1m

Or NetBSD drivers via rump:

https://github.com/rumpkernel/wiki

yjftsjthsd-h
0 replies
3h59m

Doesn't FreeBSD borrow graphics drivers from Linux? If I'm remembering that right, it can't be quite that bad.

_flux
0 replies
6h55m

A great number of Linux hosts run in virtual machines, reducing the number of different device drivers needed for that purpose.

For running on bare iron... I suppose there's no short-term solution for that.

dark-star
7 replies
6h35m

What a cool little project. It's astonishing how far this can boot with less than a third of the syscalls of Linux implemented.

However, my guess is that the ones that are missing are the more complicated ones. The TTY layer, for example, looks rather basic at the moment. Getting this right will probably be a lot of work.

So don't hold your breath for Maestro running your Linux applications in the next 3 years or so (even without taking into account all the thousands of drivers that Linux has)

berkes
4 replies
5h59m

Is there maybe a subset of Linux applications that it could run soon? A proxy, nfs, some database server, http server, firewall?

I think it doesn't need to run Steam, libreoffice and Firefox to be useful. Many parts in a common server or microservices architecture are relatively simple in what they do and would probably benefit a lot from a safe, simple kernel.

consp
2 replies
4h3m

> Is there maybe a subset of Linux applications that it could run soon? A proxy, nfs, some database server, http server, firewall?

You first need to port drivers for your -specific- network and I/O chipset. And if you want adoption and performance, you also need the manufacturer on board. My guess is: not quite soon.

eggnet
1 replies
1h55m

A good first target is a VM.

ClumsyPilot
0 replies
1h51m

That's actually a pretty huge market.

llenotre
0 replies
2h17m

Indeed, as I stated in the blog post, I am not very far from being able to run a text editor such as Vim or a compiler :)

Arainach
1 replies
1h57m

> It's astonishing how far this can boot with less than a third of the syscalls of Linux implemented.

It's a great project, but I don't find this ratio surprising at all. Any mature platform builds up logic to enable scenarios such that most things don't need most of the system. As the saying goes, no one uses more than 10% of Excel, but it's a different 10% for everyone.

You could implement 30% of Excel functions and probably have an engine which opens 99% of spreadsheets out there.....though if you wanted full doc compatibility you would still have a long journey ahead of you.

kelvie
0 replies
11m

> You could implement 30% of Excel functions and probably have an engine which opens 99% of spreadsheets out there.....though if you wanted full doc compatibility you would still have a long journey ahead of you.

Isn't this effectively what Google Docs did? For a ton of use cases Google Sheets is enough; I've heard of companies that were extra stringent about Excel licenses (as a cost-cutting measure, no doubt) and instead heavily pushed users toward Google Sheets.

snvzz
6 replies
4h28m

I applaud them for getting things done vs just talking about it.

Personally, I find yet another monolithic kernel unix clone is not what we need, but the point here is that it's made in Rust, which itself is an experiment; it is best not to do too many experiments at once, so I cannot complain.

diggan
3 replies
4h26m

> Personally, I find yet another monolithic kernel unix clone is not what we need

It seems highly irrelevant what you or I need. The author explicitly made the project as a learning experience, not for others. The "Why" is described in the opening paragraph, and makes the goal very clear.

And it seems like the author was highly successful, so congratulations author! Great to see people diving headfirst into very complicated parts of the stack.

snvzz
1 replies
4h21m

> It seems highly irrelevant what you or I need.

There is no need to twist my words into sounding negative.

As already explained in the parent, congratulations to them for getting it done, and I believe it does provide value through testing one thing (Rust) while sticking to the very mature and well understood UNIX design.

diggan
0 replies
3h2m

Sorry if it sounded rough, but I don't think I twisted anything. It's fairly common for people on here to post stuff they've done as a learning exercise, and also very common for others to then ask "But what is the value?" and "This doesn't seem useful to anyone", which I think could put first-time authors off HN. I guess I just got a bit tired of it at this point.

llenotre
0 replies
3h25m

Thank you very much!

yjftsjthsd-h
1 replies
3h56m

I mean, https://www.redox-os.org/ exists if you're into that

snvzz
0 replies
3h32m

Genode[0] is the project I'd suggest looking at, for state of the art OS architecture.

0. https://www.genode.org

phkahler
6 replies
3h30m

MIT license? If by chance this evolves into something big, it will be eaten alive by commercial interests. Look at the conflict between Linux devs and nVidia for example. Look at the IBM/RedHat stuff trying to circumvent the spirit of the GPL, if maybe not the text of it.

If it becomes a thing, the most active developers will be paid by corporations and they will not be sharing code with you when it suits them - which can be at the drop of a hat.

I'd recommend changing to GPLv3 while your number of contributors is low enough to do it. Otherwise you're just doing free work for your future masters.

winstonewert
3 replies
3h27m

It seems to me that your examples rather show the futility of trying to use a license to force good behaviour rather than a reason to change licenses.

yoyohello13
1 replies
2h52m

The only reason there is one Linux kernel everyone uses is because of the license. If it wasn't GPL2 there would be "Microsoft Linux", "Google Linux", "Oracle Linux" all with different features and potential incompatibilities. At least with the GPL2 license those flavors have to contribute changes back upstream so everyone gets the benefits.

wizzwizz4
0 replies
2h22m

They don't have to contribute them upstream: they just have to give their users the permission to do so.

phkahler
0 replies
2h20m

> It seems to me that your examples rather show the futility of trying to use a license to force good behaviour rather than a reason to change licenses.

If not for the license there would be NO good behavior. Notice that nVidia is relatively Linux friendly with some exceptions and RedHat seems to be under pressure to make more money but is otherwise very Linux friendly. Without the license, all sorts of others would be blatantly ripping it off.

I contend the difference in popularity and success between the BSDs and Linux is most likely due to the GPL license.

yoyohello13
0 replies
2h4m

The amount of hate for GPL on HN is disturbing.

wizzwizz4
0 replies
2h23m

> I'd recommend changing to GPLv3

I'd recommend AGPLv3, to avoid the Windows 365 loophole. (As I understand, you'd still be able to run a web server without sharing the source code of the kernel.)

gardaani
4 replies
7h35m

There's also Kerla [1] (a monolithic kernel in Rust, aiming for Linux ABI compatibility), but that seems to have gone dormant for a few years.

[1] https://news.ycombinator.com/item?id=28986229

jillesvangurp
3 replies
2h24m

Or Redox OS, which is still there: https://www.redox-os.org/. It has a microkernel design, but it is probably a bit more mature. It is also MIT licensed, so there is probably some opportunity for code sharing.

maxloh
1 replies
41m

Seems that the project is dead. The repository has not received any commits for two years.

bestouff
0 replies
1h49m

Last time I had a look at it Redox didn't (even want to) implement Linux ABI compatibility.

cies
4 replies
7h37m

Compatible means "syscall compatible" (I get that from the article). I wonder if it also means kernel module compatible (I don't think so, as the API touch point surface is much larger), but if it strives to be, that'd be great (use all hardware that works on Linux).

_flux
3 replies
6h53m

Not even Linux itself is kernel module compatible from version to version, so it would be exceedingly difficult to try to be compatible with it.

cies
2 replies
4h26m

Sure, but the benefits of being even partly compatible (same structs with same names, etc. -- or maybe some compatibility layer) are great, as Linux's device drivers can then be ported more easily.

On one hand, device drivers in Rust are now possible; OTOH, there is the Maestro kernel. I wonder if there will come a day in my life when I run a non-C kernel in prod / on my dev laptop.

_flux
1 replies
3h21m

I hope there are architectural improvements possible that would not be realistic for current Linux, and implementing those changes would also make the internals (and thus the kernel module interfaces) look quite a bit different.

cies
0 replies
2h42m

Dunno what I hope for more: better internal architecture or more HW compatibility. I think the latter drives adoption more than the former.

tutfbhuf
3 replies
5h37m

I wonder how far we are from having a GPT-X.Y operating in a loop, creating a fully Linux-compatible kernel with all 437 system calls in Rust within a day, which includes testing, debugging, and recompiling.

_nalply
2 replies
5h0m

I imagine someone could set up a complicated auto-feedback loop: describe the task to the LLM, give it a development system and the ability to run many tests, then let it rip.

I still think this is perhaps not coming too soon. The problem is that there are many things to optimize for. One of them is correctness, but a program running does not mean it is correct. Another is security. How do you test the system for security? Have another LLM play the adversary and try to hack the system?

That said, I wonder what the implications might be if someone manages to pull this off.

One of them: have this system re-run automatically, every time, for as long as the Linux kernel is maintained.

Then why should anybody invest the effort of continuing development of the Linux kernel?

Then how to advance the development? Just tell the LLM to add a feature?

ilc
1 replies
2h41m

Because there is much more to the Linux kernel than maintenance.

Look at things like eBPF and io_uring for examples of meeting real needs with new development in the kernel.

I doubt that an LLM will be able to come up with such ideas and implement these things without substantial prompting.

For the everyday stuff? Yeah, sure. Though you'd be amazed how many strange corner cases POSIX and Linux have, even around "simple" things like pipes.

... Understanding the whole context may be beyond where we are today, based on what I have seen from LLMs; there may come a day when they get closer. But as the Klingons say: "Not today."

_nalply
0 replies
51m

It's not necessary that the LLM comes up with the ideas by itself; only the person who instructs the LLM needs to.

«Gee what if the kernel could do clairvoyance? Implement a syscall! Specify the parameters and the structure of the data being returned!»

</s>

A more realistic scenario is just imitating Linux' syscalls.

EDIT: Just had an idea about what could happen. Some guy can't do kernel development but is annoyed that a feature request got declined. Someone else has set up an LLM-driven kernel development system. That guy now tells the system to implement said feature. The rest of the story: even more community fracture.

llenotre
3 replies
3h18m

So many thanks to all of you for your support! This project has represented a lot of effort for me and it means a lot!

Right now the website seems to be pretty slow/down. There is a lot of traffic, which was not expected. I also suspect there might be a DoS attack going on.

I will try to make it work better when I get home! (I am currently at work so I cannot give much attention to it right now)

Sorry for the inconvenience, but glad you appreciate the project!

TheHiddenSun
1 replies
1h58m

Please test your website on mobile.

The navbar takes like 33% of the screen real estate and can't be removed.

I never understand why people want to make them sticky and steal valuable reading screen space. You can, if you want, always scroll to the top in like 300 ms.

satvikpendem
0 replies
1h30m

I agree, I have the Kill Sticky extension installed on my mobile browser and it works great for these kinds of situations.

dash2
0 replies
2h48m

DoS from HN's very own Slashdot effect...

agentultra
3 replies
4h32m

Sounds like a fun project. Curious though: most of the drawbacks to using C and difficulties with developing an OS are around debugging.

I assume that the switch to Rust eliminated a certain class of memory error but is debugging still a pain? Or is there less of it than before the switch making debugging more tolerable?

llenotre
2 replies
3h38m

A lot of memory and concurrency issues have been eliminated. It is still a pain to debug, but a lot less than it was before though.

As an example, there is little chance that you forget to use a mutex, since the compiler would remind you with an error.

This is not a silver bullet though; things such as deadlocks are still present, especially around interrupts.

To give an example: if you lock a mutex and then an interrupt happens, the code that holds the mutex stops running until the interrupt handler is done. If the interrupt handler itself tries to lock the same mutex, then you have a deadlock, and the type system cannot help you with this kind of problem.

The solution is to disable interrupt handling while the mutex is locked, but the compiler cannot enforce it.
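
To illustrate the pattern (a minimal sketch, not Maestro's actual code): a spinlock whose guard disables interrupts before acquiring the lock and restores the previous state when dropped, so an interrupt handler can never try to take a lock that the interrupted code already owns. The names (IrqSpinlock, IrqGuard) and the arch helpers at the bottom are made up for illustration; on x86 the helpers would wrap the usual cli/sti-style routines.

    // Illustrative sketch (not Maestro's actual implementation): a spinlock
    // whose guard keeps interrupts disabled for as long as the lock is held,
    // so an interrupt handler can never deadlock on a lock that the
    // interrupted code already owns.
    use core::cell::UnsafeCell;
    use core::sync::atomic::{AtomicBool, Ordering};

    pub struct IrqSpinlock<T> {
        locked: AtomicBool,
        data: UnsafeCell<T>,
    }

    // Safety: access to `data` is serialized by `locked`.
    unsafe impl<T: Send> Sync for IrqSpinlock<T> {}

    pub struct IrqGuard<'a, T> {
        lock: &'a IrqSpinlock<T>,
        saved_flags: usize, // interrupt state before the lock was taken
    }

    impl<T> IrqSpinlock<T> {
        pub const fn new(value: T) -> Self {
            Self { locked: AtomicBool::new(false), data: UnsafeCell::new(value) }
        }

        pub fn lock(&self) -> IrqGuard<'_, T> {
            // Disable interrupts *before* spinning, remembering the old state.
            let saved_flags = save_flags_and_disable_interrupts();
            while self.locked.swap(true, Ordering::Acquire) {
                core::hint::spin_loop();
            }
            IrqGuard { lock: self, saved_flags }
        }
    }

    impl<T> core::ops::Deref for IrqGuard<'_, T> {
        type Target = T;
        fn deref(&self) -> &T { unsafe { &*self.lock.data.get() } }
    }

    impl<T> core::ops::DerefMut for IrqGuard<'_, T> {
        fn deref_mut(&mut self) -> &mut T { unsafe { &mut *self.lock.data.get() } }
    }

    impl<T> Drop for IrqGuard<'_, T> {
        fn drop(&mut self) {
            // Release the lock first, then restore the previous interrupt state.
            self.lock.locked.store(false, Ordering::Release);
            restore_interrupt_state(self.saved_flags);
        }
    }

    // Hypothetical arch helpers: on x86 these would read EFLAGS/RFLAGS,
    // execute `cli`, and later `sti` only if interrupts were enabled before.
    fn save_flags_and_disable_interrupts() -> usize { 0 }
    fn restore_interrupt_state(_flags: usize) {}

The important design detail is saving the previous interrupt state rather than unconditionally re-enabling interrupts, so the lock still behaves correctly when taken from a context that already has interrupts disabled.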

claytonwramsey
0 replies
2h25m

If you’re willing to implement your own mutex, it actually is possible to enforce! You could make disabling interrupts emit a token and then require the mutex to accept that token as a parameter to its locking behavior.
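
A rough sketch of that token idea (a hypothetical API, not something from Maestro; the IrqDisabled/TokenMutex names and the arch helpers are invented for illustration): entering an interrupts-disabled section hands out a token, and the mutex will only lock while borrowing that token, so the compiler rejects any attempt to lock it from a context where interrupts may still fire.

    // Hypothetical sketch of the token approach: a zero-sized proof that
    // interrupts are disabled, which the mutex demands before it will lock.
    use core::cell::UnsafeCell;
    use core::marker::PhantomData;
    use core::sync::atomic::{AtomicBool, Ordering};

    /// Token proving interrupts are disabled. The raw-pointer marker makes it
    /// !Send/!Sync so it cannot escape to another context; interrupts are
    /// re-enabled when it is dropped.
    pub struct IrqDisabled(PhantomData<*mut ()>);

    impl IrqDisabled {
        pub fn enter() -> Self {
            disable_interrupts(); // placeholder for e.g. `cli` on x86
            IrqDisabled(PhantomData)
        }
    }

    impl Drop for IrqDisabled {
        fn drop(&mut self) {
            enable_interrupts(); // placeholder for e.g. `sti` on x86
        }
    }

    pub struct TokenMutex<T> {
        locked: AtomicBool,
        data: UnsafeCell<T>,
    }

    unsafe impl<T: Send> Sync for TokenMutex<T> {}

    impl<T> TokenMutex<T> {
        pub const fn new(value: T) -> Self {
            Self { locked: AtomicBool::new(false), data: UnsafeCell::new(value) }
        }

        /// Locking demands a borrow of the token, so the guard cannot outlive
        /// the interrupts-disabled region.
        pub fn lock<'a>(&'a self, _proof: &'a IrqDisabled) -> TokenGuard<'a, T> {
            while self.locked.swap(true, Ordering::Acquire) {
                core::hint::spin_loop();
            }
            TokenGuard { lock: self }
        }
    }

    pub struct TokenGuard<'a, T> {
        lock: &'a TokenMutex<T>,
    }

    impl<T> core::ops::Deref for TokenGuard<'_, T> {
        type Target = T;
        fn deref(&self) -> &T { unsafe { &*self.lock.data.get() } }
    }

    impl<T> Drop for TokenGuard<'_, T> {
        fn drop(&mut self) {
            self.lock.locked.store(false, Ordering::Release);
        }
    }

    // Placeholder arch helpers.
    fn disable_interrupts() {}
    fn enable_interrupts() {}

    // Usage: the critical section is tied to the interrupts-disabled region.
    // let irq = IrqDisabled::enter();
    // let guard = some_mutex.lock(&irq);
    // ... use *guard ...
    // (dropping the token while the guard is still alive does not compile)

With this shape, calling lock without first constructing the token simply does not compile, which is the property being asked for; the trade-off is that every caller has to thread the token through its API.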

agentultra
0 replies
2h5m

I suspect that sort of liveness property (and likely some safety properties in unsafe code) cannot be encoded in Rust's type system, and you'd have to use a model checker at some point.

Still, it's cool to see such a system used and providing immediate benefits. Happy hacking!

willangelo
2 replies
2h17m

I really like the idea of building a kernel, especially for learning purposes. Curious about the resources you used to understand the whole kernel/OS thing.

llenotre
1 replies
2h14m
willangelo
0 replies
2h12m

Awesome, thank you!

goodpoint
2 replies
6h37m

Writing alternatives to GPL software under MIT/Apache licenses is really harmful for the FOSS ecosystem.

We need to protect end users from more and more proprietarization, tracking and privacy breaching, SaaS and untrusted IoT devices.

pas
1 replies
5h57m

The road paved with good intentions and all.

Sure, users are 1-bit entities in need of protection, no questions 'bout that, but given that premise they are best served by good software that helps them get their job done. If kick-ass GPL software can do that, great. They will even pay for it. If not? They will pay for the non-OSI product that bundles the GPL code, and will laugh at GPL enforcement attempts.

Licenses are intellectually cute, but unless it's well-enforced AGPL3++ it doesn't matter much. (See the recent thread about 3D printer https://news.ycombinator.com/item?id=38768997 )

goodpoint
0 replies
5h29m

Such a snarky tone seems unnecessary on HN.

> unless it's well-enforced AGPL3++

The GPL has been successfully enforced on various occasions, and it can be enforced effectively, especially when large companies need to protect their R&D investments from freeloading competitors.

A new, stronger "AGPL3++" can be written and enforced. Many companies have been experimenting with new licenses to find more sustainable options than the status quo.

drtgh
2 replies
4h39m

This sounds more than great.

Unrelated but at the same time related; feel absolutely free to ignore this message.

Linux needs a HIPS with a firewall. I mention it here because this needs to be supported by a/the kernel: it is needed to limit the functions that allow process injection, and also to provide a way to channel all process execution through a supervised mode.

As an [put operating system name here] user, I need (desire) to know when a process/program wants to access the network or internet, whether it wants to act as a server, on what port, which IPs it wants to call at that moment, and to be able to block the operation before it happens, limit which IPs the program is or is not allowed to serve, and sniff the program's behavior.

At that moment/event, I need to know how the process/program was launched and what parent process launched it; to know whether the process wants to inject something into another process's own resources, or wants to access system resources it would not normally touch. And before it happens, I need to be able to block such attempts at folder/file/disk access, keyboard capture, screenshots, system configuration files, console commands and so on.

If that program wants to launch another program, or a service and so on, I need to control whether it is even allowed to launch an executable in its own folder. Absolutely supervise the program and its system access.

As a user, I need to be prompted about all of this before it happens, with enough information to grant or deny permission, either temporarily for that moment or session, or saved as a decision to be applied the next time the program runs.

Being able to configure it later is essential, with a UI more or less from a uMatrix point of view, designed for usability.

When one runs a program, the gears of the HIPS are always running:

    - Why is this program trying to inject into the browser's memory? Of course I do not allow it; more than that, I kill the process right now. System scan now, we are in trouble. Logs, where are the logs!! Damn, the next two days are going to be miserable... I'll probably format the whole system once I find out where this came from.

    - Why is this trying to connect to the internet? What's more, this IP is from XXXXX, isn't it? Sorry, I do not allow it; run without these requests or die.

    - What, this is requesting DNS? And now it is requesting a local network IP address? Houston...

    - Ehhh, what are you doing with that keyboard capture attempt? Unnecessary, akta gammat.

    - OK, server installed and running for the first time, but only on this specific port, and only the loopback IP is allowed to access it: this computer and no one else. That was fast.

    - OK, I allow you to access that internet IP, but only this time; keep asking the next time you run, I'll decide.

    - Thanks for the warning about the port scan; I guess with IPv6 this would be even worse. Thankfully I have all the services limited to IPv4 localhost, but I'll keep an eye on those bots if they insist too much.

    - and so on.
This does not exist in Linux. Currently it is a Windows-users thing, after installing and configuring tools, with the exception of the console command filtering and the uMatrix-like UI, which I added because they are also necessary (in Windows, HIPS configuration interfaces are just... very rustic and hidden; they don't have usability in mind, it is like an available legacy feature, unfortunately).

Whatever. In Linux, this requires custom kernel modifications, and the whole HIPS-with-firewall does not exist; and ironically, when separated from one another the pieces are just useless.

So, humbly but in a selfish way, I would ask you to consider designing the kernel with this in mind. (I do not mean designing the HIPS-with-firewall application itself.)

As I said at the start, feel absolutely and totally free to ignore this message.

samus
0 replies
1h21m

It already exists: SELinux or AppArmor. They build on infrastructure that allows implementing other solutions to that effect.

However, on a typical system there is so much going on that this is unlikely to be of much use to anybody not willing to spend their time reviewing the arcane internals of their applications. The above is not how I'd want to spend my day at the computer.

Android and iOS present a middle ground. But even their permission requests get tiresome after some time, and users are pretty quickly seduced into just allowing everything.

flohofwoe
0 replies
4h30m

This sounds like an absolute nightmare from a user perspective. The current popup-galore on Windows and macOS when running a program for the first time is already bad enough.

rvz
1 replies
5h29m

Some words of encouragement in the sea of pessimism on HN which brought down the previous attempt at this [0]. Keep going, ignore the FUD and continue where others have left off.

We need alternative and safer kernels, and attempts like this should be encouraged. Rust is suitable for that guarantee.

Keep going.

[0] https://news.ycombinator.com/item?id=28986377

llenotre
0 replies
2h34m

Thank you very much! Even if nobody liked the project, I would not be planning to stop it. I am doing this as a hobby first!

Getting even one user other than me would be terribly difficult, but if it happens that would be super cool! If it does not happen, then I just have my own system and I am happy with it anyway!

potato24
1 replies
4h35m

This is obviously impressive. Did you think from the beginning that monolithic/module-based like Linux was the way to go, or did you consider making it a hybrid/microkernel?

llenotre
0 replies
3h32m

The monolithic/module thing was imposed by the subjects at my school (since it started as a school project).

However, a part of me feels like it could make sense to do a big refactor to turn all of this into a microkernel. But I am not willing to do this until I have a plan to do it right.

By the way, the 32-bit thing was also imposed by the school. I am now wondering whether it is still relevant to support it, or whether to just support 64 bits only...

pizza234
1 replies
6h24m

I think this had already been attempted by the now-discontinued project [Kerla](https://github.com/nuta/kerla).

llenotre
0 replies
3h36m

I didn't know this project. I will check it out!

lucasyvas
1 replies
3h7m

Tangent, but I love this Gource thing that the author made the contribution video with. I'd never seen it before but had an idea to try making something like it a couple of years back - no original ideas it seems!

llenotre
0 replies
2h18m

On my side, I discovered it a while ago with this video: https://www.youtube.com/watch?v=zRjTyRly5WA

bcye
1 replies
7h43m

Small feedback: On mobile the back button (and nav bar) block 1/6th of the page, probably could use a bit less padding

llenotre
0 replies
4h54m

Thank you for the feedback. I will fix that!

Havoc
1 replies
1h8m

Is there some sort of organised push for dropping copyleft?

2nd post today going down that route

bigstrat2003
0 replies
1h3m

Not everyone likes copyleft licenses. It's as simple as that.

mgoetzke
0 replies
7h44m

Great. Hope he keeps doing this until he finds enough supporters