Not to mention MINIX is hidden away in almost every modern Intel CPU as part of its Management Engine. This little known fact makes it one of the most widely distributed operating systems.
It's kind of sad systems research pretty much stopped at this point. I really was hoping that by 2024 I'd be running a distributed operating system where processes could be freely migrated between my phone, desktop, laptop and NAS without too much of a hitch.
This type of thing always seemed, to me, to fall into the very cool but pointless basket.
A lot of systems researchers were absolutely obsessed with clustering, network transparency, distributed systems, and viewed them as the pinnacle of the operating system. I never understood why. I completely understand the coolness factor, mind you; I just never could see why it was so important that your server-laptop-phone-network system behaved as a single system.
I think a lot of effort was wasted chasing that dragon. Wasted is probably the wrong word, because research into cool things is good and probably created useful things along the way. But I don't feel there was ever enough justification for it, and the effort could possibly have been better spent.
The alternative of having multi-system tools and programming models that allow you to manage multiple systems without having them appear as a single image at the lowest level didn't get much love from academia after TCP/IP, and was largely developed by industry.
Very strongly disagree with you here.
We should have had a distributed OS like Amoeba/Plan9/Inferno/etc. allowing us to manage all our chosen set of devices using a single uniform interface i.e. "A Namespace" (in Plan9/Inferno speak). Such namespaces can themselves be connected into "Hierarchical Namespaces" and so on. This is a natural and easy way to let users keep their control over their devices while still being connected to the "Greater Internet".
But the Industry manipulated us into the Cloud model so that they could retain control and make money off of us. It was all Business to the detriment of a better User Experience via Technology.
> We should have had a distributed OS like Amoeba/Plan9/Inferno/etc. allowing us to manage all our chosen set of devices using a single uniform interface i.e. "A Namespace" (in Plan9/Inferno speak). Such namespaces can themselves be connected into "Hierarchical Namespaces" and so on. This is a natural and easy way to let users keep their control over their devices while still being connected to the "Greater Internet".
Why should we have?
> But the Industry manipulated us into the Cloud model so that they could retain control and make money off of us. It was all Business to the detriment of a better User Experience via Technology.
The choice was not cloud vs. a distributed single-system OS; they were, and are, orthogonal.
> Why should we have?
Because that is what an OS is supposed to do, viz. provide a uniform interface and transparent access to various Compute, Storage and Network resources wherever they might be. A Distributed OS (https://en.wikipedia.org/wiki/Distributed_operating_system) is a natural extension of a Single Node OS. Note that we have in a sense realized our distributed OS in the IaaS and PaaS layers of a Cloud network. However, they are done in such a manner as to take control away from us, unless of course you use some open source offerings, which are much more complex to set up than a distributed OS should be.
If you really wanted to do this, it would be fairly trivial to implement with Erlang/Elixir. We have the technology - just no motivation (i.e. no profit in it).
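A minimal sketch of what that could look like with built-in Erlang/OTP distribution (the node names, addresses and cookie below are made up). Remote spawning and location-transparent messaging come out of the box; actually migrating a live process between nodes would still need application-level state handoff, which the BEAM won't do for you.

    # Hypothetical setup: start each machine as a named node with a shared cookie:
    #   laptop$ iex --name laptop@192.168.1.2 --cookie demo
    #   nas$    iex --name nas@192.168.1.3 --cookie demo

    # On the laptop node: connect to the NAS node.
    true = Node.connect(:"nas@192.168.1.3")

    # Spawn a process on the remote node; the closure is shipped over and run there.
    pid =
      Node.spawn(:"nas@192.168.1.3", fn ->
        receive do
          {:ping, from} -> send(from, {:pong, Node.self()})
        end
      end)

    # Pids are location-transparent: send/receive work across nodes unchanged.
    send(pid, {:ping, self()})

    receive do
      {:pong, remote} -> IO.puts("reply from #{remote}")
    end

The hard part is getting a BEAM runtime onto every device, not the distribution layer itself.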
Hey! Don't let out the secrets :-) This is something I have been thinking about for a long time (but have done nothing practical so far). The problem is how to bridge/shim between Erlang and those platforms which do not support it, e.g. Android (though some people seem to have done it - https://github.com/JeromeDeBretagne/erlanglauncher). Joe Armstrong actually called the Erlang/OTP system an AOS (Application Operating System) in his paper, i.e. it contains much of the functionality of a traditional OS, but people seem to ignore that and insist on calling it "just another language".
There is still a lot of research and innovation, but it doesn't always come in the form of completely new software projects. The cost of trying to build a new OS is simply massive. You need to be compatible with existing useful software if you want to do anything other than an appliance. Anything that provides a new paradigm shift that requires changing existing software has a huge slog ahead of it to become successful. That said, there is tons of incremental progress in operating systems.
I think a lot of folks have thought about the idea of a truly distributed operating system. I'm pretty sure existing operating systems will eventually evolve in that direction. You already see bits and pieces of it popping up.
You should be looking at infrastructure related roles for this stuff; live migrations of VMs and containers are regularly done when you drain a VM or Kubernetes minion, for example.
For consumers, they mostly care about the user experience. Having software that syncs their contents to a server thus making it available to all devices has much lower overhead than trying to live migrate a process over unreliable networks.
> running a distributed operating system where processes could be freely migrated between my phone, desktop, laptop and NAS without too much of a hitch.
This was what I always wanted ever since I read Tanenbaum's "Modern Operating Systems" and in particular his "Amoeba distributed OS" - https://en.wikipedia.org/wiki/Amoeba_(operating_system). Also see Plan 9/Inferno from Bell Labs. But instead what we got (due to industry shenanigans) is this garbage/duct-tape of distribution gone crazy in the "Cloud".
The research has not stalled at all, the incentive to make consumer devices has because of the capture by tech giants. If someone wanted to implement an interesting idea like this, they would be harshly judged for not being able to compete with pixels and iphones for the rest of the stack. See Rabbit for example.
What you are describing is the bread and butter of modern systems research and all the large cloud providers internally implement this kind of thing.
Sounds like you, like myself, have taken Rob Pike's take from long ago to heart: http://www.herpolhode.com/rob/utah2000.pdf
While plenty of things have happened since that paper, I have this sinking feeling that he was right and we just stopped trying to really expand and explore what’s possible. But it may be more a matter of the state of academia than about the domain itself. It’s not like people were getting a bunch of conference invitations for GPGPU applications with ML until hype freight trains hit. This sobering reality of academic hegemony and grant chasing kept me from ever getting terribly interested in systems research unfortunately.
I was failing badly in my computer architecture courses. Received a 5% score in one of the mid-terms. Switched from the recommended book to Andrew's book and did nothing apart from read it everyday for 2 hours. Received 100% in the final. Such an amazingly approachable book. :-)
Out of curiosity, which was the recommended book?
Some horrible local author whose name I now forget. Andrew's book was Structured Computer Organization.
Not OP, but if it was computer architecture I imagine it would be "Structured Computer Organization".
One of my favourite books on computers and, looking at the comments, for many people here too.
Source: this book got me top marks too
Wow. Which book?
I had Tanenbaum books in two of my CS courses too, two decades back. They were great textbooks; I found them quite accessible as well.
It's funny to see how they highlight that it inspired Linux, while Tanenbaum heavily criticized it for not being a microkernel :D
Well, it “inspired” Linux because Linus was unhappy with Minix. Linus wanted UNIX and Minix was not what he was looking for. I don’t think emulating Minix itself was ever his goal. He chose the Minix file system originally but this was just pragmatic as that is what his drive was formatted with.
The earliest versions of Linux were written on Minix though. Credit where credit is due.
Interestingly, Linus was unaware of BSD. He has said that, if he had known about it, he may never have written Linux to begin with.
The whole flame war between the two is worth a read [1], at least for historical reasons; very different points of view. And while Linus disagreed with Tanenbaum's POV, I think Tanenbaum's criticism of Linux actually made it better.
[1] https://groups.google.com/g/comp.os.minix/c/wlhw16QWltI#9f3c...
My favorite thing about that whole thread is the unanimous consensus among everyone involved that we will all be running GNU/Hurd in 2 years so these stopgap OSes are just academic hobbies.
They meant 2 centuries; in the meanwhile we'll have to deal with these stopgap academic hobby OSes /j
When Linux was written BSD was still encumbered by non-BSD-licensed AT&T code. That changed a year or so later.
IIRC the very first installations of Linux started with Minix as a base, which was then "patched" into Linux. So it was more than just the filesystem. But yes, Torvalds wasn't happy with Minix, and Tanenbaum wasn't happy with Linux.
I want to express my appreciation to Andrew Tanenbaum for making possible the world's most popular OS, Intel ME. This system has been made absolutely right in every piece except the license. Minix under the GPL would not have let megacorporations backdoor every functional x86 chip on the planet.
What keeps it from becoming a free software backdoor?
GPL would theoretically force ME to open its source code.
A different license wouldn't change that. They would just find some way to do it with BSD or some other OS.
> right in every piece except the license.
But then it would not be the "most popular OS", right? What's your logic here?
Is MINIX abandonware now? Many years ago I tried to install the release that works (or comes bundled) with a light window manager, but it was not trivial and it looked pretty abandoned even back then.
MINIX3's development stalled years ago.
Basically, around MINIX 3.2.0 (just before I started contributing) the OS ditched its homegrown userland and adopted the NetBSD source tree + pkgsrc. While that boosted the software compatibility of MINIX3 in the short term, the maintenance burden of keeping up with upstream with such a large diff proved unsustainable in the long term, especially after the grant money dried up.
In hindsight, my opinion is that MINIX3 should've gone with NetBSD binary compatibility. The NetBSD syscall table would've been a far slower moving target to keep up with than the entire NetBSD source tree.
The OS also had a significant amount of tech debt, especially in the microkernel which was uniprocessor and 32-bit only, as well as outdated hardware support which meant nobody was daily-driving it anymore. It also was an aging design: while the system was divided up into user-mode servers with message-based communication, you couldn't containerize a process by spawning a parallel userland ecosystem for example because it wasn't capability-based or namespaceable.
It's too bad really, because the base system had really impressive capabilities. It could transparently survive crashes of stateless drivers, even when stress-testing it by injecting faults into the drivers at runtime. You could live-update the various system services at runtime without losing state or impacting clients. Some really good papers came out of MINIX3 [1].
I've ranted in more detail before, both on HN [2] as well as on Google Groups [3]. I do not fault the maintainers for the current state of affairs, because keeping up the MINIX3 userland against modern software standards was a major maintenance burden, so adopting NetBSD's one way or another was inevitable. At any rate, there are other micro-kernel based operating systems [4], some under active development, so MINIX's spirit lives on.
[1] https://wiki.minix3.org/doku.php?id=publications
[2] https://news.ycombinator.com/item?id=34916261
[3] https://groups.google.com/g/minix3/c/qUdPZ0ansVw/m/7LuOv0YOA...
Still waiting for that Minix 3.3.0 release, after so many RCs.
I followed the project somewhat, and I understand the main issue has been the lack of someone at the helm pursuing this 3.3.0 release.
The situation is such that the release-blocking bugs were fixed, and yet the release hasn't happened, because nobody is willing to put in the time and effort to make it happen.
Should the release happen, and somebody be willing to review and merge changes and organize a regular schedule of releases (even with a long period, such as yearly), the system would no doubt get some life back.
Not even a commit in years. Yes, I would say it is dead.
Does anyone remember the legendary Torvalds-Tanenbaum debates? What exactly were they about?
Err, perhaps I'm missing something, but if the ACM Software System Award is presented to an institution or individual(s) recognized for developing a software system that has had a lasting influence, how come Linus hasn't got his yet?
I wonder if they simply don't want to reward his toxicity.
Perhaps a consolation prize that Linus doesn't need...
I read Operating Systems: Design and Implementation in 1988 or 1989, and it was an insightful and pleasing experience. I only wished, at the time, that there was some Unix-like OS that was "free" (for some intuitive value of the word "free", rather than the formal definition, which I hadn't heard of yet at the time). This could have been Minix.
It was one of my degree course books. Thoroughly enjoyed it!
One of the best textbooks I had to read for my degree, back in the 80s. The appendix containing the Minix source code was my first exposure to a large body of well-written C code.
I have read Tanenbaum's book twice. Really great book. Very dense in information but enjoyable as well. That and the Common Lisp Reference Manual were at some point my favorite CS readings. I was reading them in printed form.
Which one? He has written more than one great book. :-)
Yeah, I didn't specify; I think it is "Modern Operating Systems". I read it in Greek and I don't remember the title, maybe the title was translated a little bit differently. It was not the Minix book, I have not read that unfortunately.
Now, however, I am sold on the idea of the Lisp Machine. Hopefully someday a Lisp OS and hardware will be a viable way to use a computer.
His book “Computer Networks” was one of my favourites in my CompSci study days. Many years later I gave lectures on Distributed Systems at a business school and based the material on the book. Still feels relevant, even today.
Although this is a thread about Tanenbaum, personally I feel that Data Communications and Networking by Forouzan helps explain things in a better way and goes into the details of each network layer.
Modern Operating Systems is great though, and when I was in college I recommended it to peers, whose feedback was along the same lines.
This was my favourite textbook of my entire undergrad CS studies. I still have it on my shelf to this day. I've never gone deep into networking but the broad knowledge has stayed with me and comes in useful again and again. I would say it sets me apart from many other engineers.
I'll never forget this. I was listening to a talk by Reed Hastings (Netflix founder/CEO) at (I think it was) Stanford. He was explaining how he came up with the idea of Netflix. A student asked: "When did you realize you had to switch to the internet?" To which he replied: "That was the idea from the beginning. We knew networks were going to become what they are today. Look, there's a saying in a CS textbook that goes: 'never underestimate the bandwidth of a truck full of tapes over the interstate'. We knew we had to ship the DVDs first until at some point the network would reach our desired level."
While I was watching that I said: "DUDE! I remember that quote (and that illustration)". Went to my textbook and there it was. In Tanenbaum's networking textbook.
Aside from the anecdote, this guy has had a huge influence in the whole industry (not even mentioning the Kernel debates).
https://en.m.wikipedia.org/wiki/Sneakernet
Has some pointers to the orig source
Another awesome fact about Tanenbaum is that he was the person behind electoral-vote.com. Prior to everyone having their own model and Nate Silver (err, should I say Poblano?) running the table in 2008, this was the place to go understand the 2004 US Presidential Election between Bush and Kerry. Hugely helpful for many people to understand polling and statistics.
> he was the person behind electoral-vote.com
electoral-vote.com is still going strong: https://www.electoral-vote.com/
I'm sure you know this, but I want to emphasize it for anyone who is not aware.
Prof Tanenbaum has a co-writer now (Prof Bates, history at UCLA/Cal Poly), and the site is published every day (used to be weekdays only and only during election cycles).
Well-deserved, congrats Andrew. I still have his distributed systems textbooks from way back when, and still wish Minix had won and its microkernel model had become the basis of the FOSS *nix ecosystem.
Also in case anyone is not aware, Andrew runs the election science blog Electoral Vote [1], using an electoral college poll model to analyze and predict US elections. One of the better US political sites out there.
[1]: https://www.electoral-vote.com/evp2024/Info/welcome.html
People should know that electoral-vote.com is not just an election science blog at this point; Andrew and his co-writer Christopher Bates publish a very cogent summary each morning of the previous day's political news and its possible effects on US politics. You could read only their dispassionate but witty daily posts and be a reasonably well informed American citizen.
Man, his Computer Networks books were dense but they had a lot of good stuff in them. They also had really good and fun looking covers.
I'm genuinely surprised this hasn't happened already!
Well, he wrote complex textbooks that spoiled my college days.
Every student should read Tanenbaum's "Structured Computer Organization". It was the first book which showed me the logical layering involved in a "Computer System" which is absolutely essential to understanding this field.
He wrote great books. I am ashamed to admit his OS book served as a monitor support on my desk for some time.
Hell yeah! Well deserved. I had a blast with some of his books. Especially _Operating Systems: Design and Implementation_ and _Computer Networks_. Legend
Still have a CD-ROM with a copy of MINIX 3, from when he gave a talk at my university. His books on OS and networking are very approachable and a fun read!
Along with K&R (and Kernighan and Plauger's "Software Tools"), the Dragon book, Bentley's Programming Pearls, and Holzmann's Beyond Photography, AST's books were the most formative in my life (I started coding in 1976 but was self-taught until the early 80s, when I got to college and read all these brilliant works). Long overdue recognition; so many people benefited from the lucidity of these minds.
This is a richly deserved award for a great educator who makes computer science both accessible and enjoyable.
Structured Computer Organization is supposed to be a textbook, but it's written so well I found myself reading it cover to cover like a thriller.
You won't find many people saying that about Knuth for example (not to say anything against Knuth who is amazing in his own way).
I see AST's books, in particular the hands-on Minix ones, as sitting on the same "plane" as the philosophy espoused in The Night Watch paper. Ultimately the paper is about a level of comfort with reality that is at its core rooted in familiarity with, rather than ignorance due to, abstractions, and having learned fearlessness rather than helplessness. While it is highly unlikely you will be having a debugging session that has you executing kernel-level code alongside having an oscilloscope/logic probe hooked up to the pins of a processor chip to monitor data lines (though we all know someone who does this without a second thought), having this level of knowledge and comfort with being ever so slightly closer to the silicon, the data sheet of the processor somewhere nearby, the memory segmentation modes not too alien of a concept, is a great boon to a software developer. It is a leap that I think everyone should try just once, and with it, abolish any notion of mental barriers that prevent one from understanding how things really work.
For those who don't know, his books were used as textbooks in South American universities for years during the 90s.
I was myself taught Computer Architecture in 1991 using the Tanenbaum SCO book, and many years later taught the Computer Architecture course for four years using the SCO book (a later edition, but still!). A true classic, and if anything, it is a wonder that Tanenbaum had not already received the award.
So what is the outcome of the kernel war? Performant microkernels have settled it, no?
The Amsterdam Compiler Kit is also his work (along with Ceriel Jacobs):
https://github.com/davidgiven/ack
Just as Minix perhaps could've been Linux, the Amsterdam Compiler Kit could've been gcc, but for licensing issues: https://www.theguardian.com/books/2001/apr/10/firstchapters....
It's so sad that all the work on Minix3 has stopped.
I came across Tanenbaum's Operating Systems book during my CS degree, and it had a huge influence on me. Till then I was a huge Windows nerd, and after reading the book I felt like I was being cheated by Windows, like I was being denied something that was rightfully mine.
I hated my labs as it had only Windows, started exploring *nix systems post class and never went back.
Thank you, Mr. Andrew, and congratulations.
IIRC, Structured Computer Organization played the same role for me, back in the day, that I think From Nand to Tetris has for many of you all.
Why did they use Minix and not e.g. L4 or sel4?
Which version of Minix did they actually use? There is Minix v3.1 (released in 2005 with the book), 3.2 (released in 2012) and 3.3 (released in 2014).
The original L4 (I believe) wasn't commercially available and seL4 is GPL-licensed. Minix has a BSD license, so maybe that's why.
The original L4 was written in assembler and replaced by different other implementations long before the ME platform was developed. Pistachio was in development around that time and available under BSD.
100% because of the license
As to why, no idea. I guess some engineer was just familiar with it from their undergrad days like the rest of us.
And which version? I know it's MINIX 3, but beyond that? No idea. They probably heavily modified it, and as Minix is not GPL, Intel never published it. Based on the timelines it's likely 3.1, as the ME platform has been around since approximately 2007 IIRC.
Then I guess it's one of the 3.1.3x versions released in 2007 (see https://github.com/Stichting-MINIX-Research-Foundation/minix...), or maybe 3.1.2 from 2006, depending on how long they had to implement the ME.
Besides what others pointed out, Minix3 is engineered for fault tolerance foremost. seL4 has different goals.
AIUI seL4 is just a kernel, so adding all the "management engine crap" - networking stacks, drivers etc. - would be a lot of work. Minix came with 'batteries' included.
Or Minix 2?
I'm curious if this is actually true, considering various ARM MCUs and SOCs seem to dominate in quantity. Considering these largely run some sort of Linux or RTOS, I'd be curious to see if MINIX or Linux is more widespread?
ACM should call on AST to disavow this before receipt of any award.
Sometimes I wonder how the world would be today if MINIX had been distributed with a FLOSS license similar to Linux's. I think the Linus Torvalds vs. Andrew Tanenbaum debate could have been a pivotal moment in tech history; either way, MINIX missed a huge opportunity to step up in history.