Besides all his innumerable accomplishments, he was also a hero to Joe Armstrong and a big influence on his brand of simplicity.
Joe would often quote Wirth as saying that yes, overlapping windows might be better than tiled ones, but not better enough to justify their cost in implementation complexity.
RIP. He is also a hero to me for his 80th birthday symposium at ETH, where he showed off his new port of Oberon to a homebrew CPU running on a random FPGA dev board with USB peripherals. My ambition is to be that kind of 80-year-old one day, too.
Wirth was such a legend on this particular aspect. His stance on compiler optimizations is another example: only add optimization passes if they improve the compiler's self-compilation time.
Oberon also (and also deliberately) only supported cooperative multitasking.
Cooperative multitasking won in the end.
It just renamed itself to asynchronous programming. That's quite literally what an 'await' is.
async/await has the advantage over cooperative multitasking that it has subroutines of different 'colors', so you don't accidentally introduce concurrency bugs by calling a function that can yield without knowing that it can yield
i think it's safe to say that the number of personal computers running operating systems without preemptive multitasking is now vanishingly small
as i remember it, oberon didn't support either async/await or cooperative multitasking. rather, the operating system used an event loop, like a web page before the introduction of web workers. you couldn't suspend a task; you could only schedule more work for later
And these fancy new names aren't there just for hiding the event loop? :)
Sort of and sort of not.
The key thing about 2023-era asynchronous versus 1995-era cooperative multitasking is code readability and conciseness.
Under the hood, I'm expressing the same thing, but Windows 3.1 code was not fun to write. Python / JavaScript, once you wrap your head around it, is. The new semantics are very readable, and rapidly improving too. The old ones were impossible to make readable.
You could argue that it's just syntactic sugar, but it's bloody important syntactic sugar.
Exactly. The way I think about it, the "async" keyword transforms function code so that local variables are no longer bound to the stack, making it possible to pause function execution (using "await") and resume it at an arbitrary time. Performing that transformation manually is a fair amount of work and it's prone to errors, but that's what we did when we wrote cooperatively multitasked code.
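A rough sketch of the contrast in Python (asyncio; `fetch_and_shout` and friends are made-up names purely for illustration, not any particular library's API):

    import asyncio

    # async/await version: ordinary locals survive across the pause point
    async def fetch_and_shout():
        greeting = "hello"                 # plain local variable
        await asyncio.sleep(0.1)           # execution pauses here, resumes later
        return greeting.upper()            # the local is still there

    # hand-rolled equivalent: the code after the pause has to move into a
    # callback, and anything it needs must be captured by the closure by hand
    def fetch_and_shout_cb(loop, on_done):
        greeting = "hello"
        def rest_of_function(_):           # the "second half", written out manually
            on_done(greeting.upper())
        loop.call_later(0.1, rest_of_function, None)

    async def main():
        print(await fetch_and_shout())     # -> HELLO
        loop = asyncio.get_running_loop()
        done = loop.create_future()
        fetch_and_shout_cb(loop, done.set_result)
        print(await done)                  # -> HELLO again, the long way around

    asyncio.run(main())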
That's my point, we still do that. And based on your phrasing we're forgetting it :)
Sure, that's a good way to look at it. Another way to look at it: because the process of transforming code for cooperative multitasking is now much cleaner and simpler, it's fine to use new words to describe what to do and how to do it.
Coroutines are better than both. Particularly in reasoning about code.
I never left 1991 and I haven't seen anything that has made me consider leaving ConcurrentML except for the actor model, but that is so old the documentation is written on parchment.
if the implied contrast is with cooperative multitasking, it's exactly the opposite: they're there to expose the event loop in a way you can't ignore. if the implied contrast is with setTimeout(() => { ... }, 0) then yes, pretty much, although the difference is fairly small—implicit variable capture by the closure does most of the same hiding that await does
Not asking about old JavaScript vs new JavaScript. Asking about explicit event loop vs hidden event loop with fancy names like timeout, async, await...
do you mean the kind of explicit loop where you write
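(roughly this kind of thing; a generic python sketch with a stand-in queue and made-up event names, not yeso's actual api:)

    import collections, queue

    Event = collections.namedtuple("Event", "kind data")
    events = queue.Queue()                 # stand-in for the window system's queue
    events.put(Event("key", "q"))
    events.put(Event("quit", None))

    # the hand-written dispatch loop: the event loop is right there in your code
    while True:
        ev = events.get()                  # block until the next event arrives
        if ev.kind == "key":
            print("key pressed:", ev.data)
        elif ev.kind == "quit":
            break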
or the equivalent in yeso?

async/await doesn't always hide the event loop in that sense; python asyncio, for example, has a lot of ways to invoke the event loop or parts of it explicitly, which is often necessary for integration with software not written with asyncio in mind. i used to maintain an asyncio cubesat csp protocol stack where we had to do this

to some extent, though, this vitiates the concurrency guarantees you can otherwise get out of async/await. software maintainability comes from knowing that certain things are impossible, and pure async/await can make concurrency guarantees which disappear when a non-async function can invoke the event loop in this way. so i would argue that it goes further than just hiding the event loop. it's like saying that garbage collection is about hiding memory addresses: sort of true, but false in an important sense
What worries me is we may have a whole generation who doesn't know about the code you posted above and thinks it's magic or, worse, real multiprocessing.
okay but is that what you meant by 'hiding the event loop' or did you mean something different
(To set the tone clearly - this seems like an area where you know a _lot_ more than me, so any questions or "challenges" below should be considered as "I am probably misunderstanding this thing - if you have the time and inclination, I'd really appreciate an explanation of what I'm missing" rather than "you are wrong and I am right")
I don't know if you're intentionally using "colour" to reference https://journal.stuffwithstuff.com/2015/02/01/what-color-is-... ? Cooperative multitasking (which I'd never heard of before) seems from its Wikipedia page to be primarily concerned with Operating System-level operations, whereas that article deals with programming language-level design. Or perhaps they are not distinct from one another in your perspective?
I ask because I've found `async/await` to just be an irritating overhead; a hoop you need to jump through in order to achieve what you clearly wanted to do all along. You write (pseudocode) `var foo = myFunction()`, and (depending on your language of choice) you either get a compilation or a runtime error reminding you that what you really meant was `var foo = await myFunction()`. By contrast, a design where every function is synchronous (which, I'd guess, more closely matches most people's intuition) can implement async behaviour when (rarely) desired by explicitly passing function invocations to an Executor (e.g. https://www.digitalocean.com/community/tutorials/how-to-use-...). I'd be curious to hear what advantages I'm missing out on! Is it that async behaviour is desired more often in other problem areas I don't work in, or that there's some efficiency provided by async/await that Executors cannot provide, or something else?
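For concreteness, here is a rough Python analogue of that Executor pattern (my own sketch using `concurrent.futures`, not the code from the linked tutorial):

    from concurrent.futures import ThreadPoolExecutor

    def my_function():                     # an ordinary, synchronous function
        return sum(range(1_000_000))       # stand-in for slow work (I/O, network, ...)

    # every function stays synchronous; concurrency is opted into at the call site
    with ThreadPoolExecutor(max_workers=4) as pool:
        future = pool.submit(my_function)  # run it in the background
        # ... do other, unrelated work here ...
        foo = future.result()              # block only when the value is needed
        print(foo)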
Then what you want are coroutines[1], which are strictly more flexible than async/await. Languages like Lua and Squirrel have coroutines. I and plenty of other people think it's tragic that Python and JavaScript added async/await instead, but I assume the reason wasn't to make them easier to reason about, but rather to make them easier to implement without hacks in existing language interpreters not designed around them. Though Stackless Python is a CPython fork that adds real coroutines, also available as the greenlet module in standard CPython [2]; amazing that it works (see the sketch below the footnotes).
[1] Real coroutines, not what Python calls "coroutines with async syntax". See also nearby comment about coroutines vs coop multitasking https://news.ycombinator.com/item?id=38859828
[2] https://greenlet.readthedocs.io/en/latest/
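A minimal greenlet sketch (assuming `pip install greenlet`), just to show the flavor of real coroutines, each with its own stack, switching between one another:

    from greenlet import greenlet

    def ping():
        for i in range(3):
            print("ping", i)
            pong_glet.switch()     # suspend here; resume pong where it left off

    def pong():
        for i in range(3):
            print("pong", i)
            ping_glet.switch()     # and switch back again

    ping_glet = greenlet(ping)
    pong_glet = greenlet(pong)
    ping_glet.switch()             # prints: ping 0, pong 0, ping 1, pong 1, ...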
Bliss 36 and siblings had native coroutines.
We used coroutines in our interrupt-rich environment in our real-time medical application way back when. This was all in assembly language, and the coroutines vastly reduced our multithreading errors to effectively zero. This is one place where C, claimed to be close to the machine, falls down.
interesting, i didn't even realize bliss for the pdp-10 was called bliss-36
how did coroutines reduce your multithreading errors
well some of the things i know are true but i don't know which ones those are; i'll tell you the things i know and hopefully you can figure out what's really true
yes! i'm referencing that specific rant. except that what munificent sees as a disadvantage i see as an advantage
there's a lot of flexibility in systems design to move things between operating systems and programming languages. dan ingalls in 01981 takes an extreme position in 'design principles behind smalltalk' https://www.cs.virginia.edu/~evans/cs655/readings/smalltalk....
in the other direction, tymshare and key logic's operating system 'keykos' was largely designed, norm hardy said, with concepts from sigplan, the acm sig on programming languages, rather than sigops
sometimes irritating overhead hoops you need to jump through have the advantage of making your code easier to debug later. this is (i would argue, munificent would disagree) one of those times, and i'll explain the argument why below
in `var foo = await my_function()` usually if my_function is async that's because it can't compute foo immediately; the reasons in the examples in the tutorial you linked are making web requests (where you don't know the response code until the remote server sends it) and reading data from files (where you may have to wait on a disk or a networked fileserver). if all your functions are synchronous, you don't have threads, and you can't afford to tie up your entire program (or computer) waiting on the result, you have to do something like changing my_function to return a promise, and putting the code below the line `var foo = await my_function()` into a separate subroutine, probably a nested closure, which you pass to the promise's `then` method. this means you can't use structured control flow like statement sequencing and while loops to go through a series of such steps, the way you can with threads or async
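in python terms the difference looks roughly like this (just a sketch; `fetch` is a made-up stand-in for the slow operation, not a real api):

    import asyncio

    async def fetch(url):                     # stand-in for a web request
        await asyncio.sleep(0.1)
        return "response from " + url

    # with await: ordinary structured control flow, a plain loop over the steps
    async def crawl():
        for url in ["a", "b", "c"]:
            page = await fetch(url)
            print(page)

    # without await: everything below the "await line" moves into nested
    # callbacks, and the loop has to be rebuilt by hand out of them
    def crawl_cb(urls, loop):
        done = loop.create_future()
        def step(i):
            if i == len(urls):
                done.set_result(None)
                return
            task = loop.create_task(fetch(urls[i]))
            task.add_done_callback(lambda t: (print(t.result()), step(i + 1)))
        step(0)
        return done

    async def main():
        await crawl()
        await crawl_cb(["a", "b", "c"], asyncio.get_running_loop())

    asyncio.run(main())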
so what if you use threads? the example you linked says to use threads! i think it's a widely accepted opinion now (though certainly not universal) that shared-mutable-memory threading is the wrong default, because race conditions in multithreaded programs with implicitly shared mutable memory are hard to detect and prevent, and also hard to debug. you need some kind of synchronization between the threads, and if you use semaphores or locks like most people do, you also get deadlocks, which are hard to prevent or statically detect but easy to debug once they happen
async/await guarantees you won't have deadlocks (because you don't have locks) and also makes race conditions much rarer and relatively easy to detect and prevent. mark s. miller, one of the main designers of recent versions of ecmascript, wrote his doctoral dissertation largely about this in 02006 http://www.erights.org/talks/thesis/index.html after several years working on an earlier programming language called e based on promises like the ones he later added to js; but i have to admit that, while i've read a lot of his previous work, i haven't read his dissertation yet
cooperative multitasking is in an in-between place; it often doesn't use locks and makes race conditions somewhat rarer and easier to detect and prevent than preemptive multitasking, because most functions you call are guaranteed not to yield control to another thread. you just have to remember which ones those are, and sometimes it changes even though your code didn't change
(in oberon, at least the versions i've read about, there was no way to yield control. you just had to finish executing and return, like in js in a web page before web workers, as i think i said upthread)
that's why i think it's better to have colored functions even though it sometimes requires annoying hoop-jumping
You will get them in .NET and C++, because they map to real threads being shared across tasks.
There is even a FAQ maintained by the .NET team regarding gotchas like not calling ConfigureAwait with the right thread context in some cases where it needs to be explicitly configured, like a task moving between foreground and background threads.
It hasn't won. Threads are alive and well and I rather expect async has probably already peaked and is back on track to be a niche that stays with us forever, but a niche nevertheless.
Your opinion vs. my opinion, obviously. But the user reports of the experience in Rust are hardly even close to unanimous praise, and I still say it's a mistake to sit down with an empty Rust program and immediately reach for "async" without considering whether you actually need it. Even in the network world, juggling hundreds of thousands of simultaneous tasks is the exception rather than the rule.
Moreover, cooperative multitasking was given up at the OS level for good and sufficient reasons that I see no evidence that the current thrust in that direction has solved. As you scale up, the odds of something jamming your cooperative loop monotonically increase. At best we've increased the scaling factors, and even that just may be an effect of faster computers rather than better solutions.
in the 02000s there was a lot of interest in software transactional memory as a programming interface that gives you the latency and throughput of preemptive multithreading with locks but the convenient programming interface of cooperative multitasking; in haskell it's still supported and performs well, but it has been largely abandoned in contexts like c#, because it kind of wants to own the whole world. it's difficult to add incrementally to a threads-and-locks program
i suspect that this will end up being the paradigm that wins out, even though it isn't popular today
I was considering making a startup out of my simple C++ STM[0], but the fact that, as you point out, the transactional paradigm is viral and can't be added incrementally to existing lock-based programs was enough to dissuade me.
[0] https://senderista.github.io/atomik-website/
nice! when was this? what systems did you build in it? what implementation did you use? i've been trying to understand fraser's work so i can apply it to a small embedded system, where existing lock-based programs aren't a consideration
It grew out of an in-memory MVCC DB I was building at my previous job. After the company folded I worked on it on my own time for a couple months, implementing some perf ideas I had never had time to work on, and when update transactions were <1us latency I realized it was fast enough to be an STM. I haven't finished implementing the STM API described on the site, though, so it's not available for download at this point. I'm not sure when I'll have time to work on it again, since I ran out of savings and am going back to full-time employment. Hopefully I'll have enough savings in a year or two that I can take some time off again to work on it.
that's exciting! i just learned about hitchhiker trees (and fractal tree indexes, blsm trees, buffer trees, etc.) this weekend, and i'm really excited about the possibility of using them for mvcc. i have no idea how i didn't find out about them 15 years ago!
Then you may be interested in this paper which shows how to turn any purely functional data structure into an MVCC database.
https://www.cs.cmu.edu/~yihans/papers/concurrency.pdf
thank you!
Sounds nifty. Did this take advantage of those Intel (maybe others?) STM opcodes? For a while I was stoked on CL-STMX, which did (as well as implementing a non-native version to the same interface).
No, not at all. I'm pretty familiar with the STM literature by this point, but I basically just took the DB I'd already developed and slapped an STM API on top. Given that it can do 4.8M update TPS on a single thread, it's plenty fast enough already (although scalability isn't quite there yet; I have plenty of ideas on how to fix that but no time to implement them).
Since I've given up on monetizing this project, I may as well just link to its current state (which is very rough, the STM API described in the website is only partly implemented, and there's lots of cruft from its previous life that I haven't ripped out yet). Note that this is a fork of the previous (now MIT-licensed) Gaia programming platform (https://gaia-platform.github.io/gaia-platform-docs.io/index....).
https://github.com/senderista/nextdb/tree/main/production/db...
The version of this code previously released under the Gaia programming platform is here: https://github.com/gaia-platform/GaiaPlatform/blob/main/prod.... (Note that this predates my removal of IPC from the transaction critical path, so it's about 100x slower.) A design doc from the very beginning of my work on the project that explains the client-server protocol is here (but completely outdated; IPC is no longer used for anything but session open and failure detection): https://github.com/gaia-platform/GaiaPlatform/blob/main/prod....
I read that as octal; so 1024 in decimal. Not a very interesting year, according to Wikipedia.
https://en.wikipedia.org/wiki/1024
Meanwhile, in JS/ECMAScript land, async/await is used everywhere and it simplifies a lot of things. I've also used the construct in Rust, where I found it difficult to get the type signatures right, but in at least one other language, async/await is quite helpful.
Await is simply syntactic sugar on top of what everybody was forced to do already (callbacks and promises) for concurrency. As a programming model, threads simply never had a chance in the JS ecosystem because on the surface it has always been a single-threaded environment. There's too much code that would be impossible to port to a multithreaded world.
It has mostly won for individual programs, but very much not for larger things like operating systems and web browsers.
Mostly won for CRUD apps (yes and a few others). Your DAW, your photo editor, your NLE, your chatbot girlfriend, your game, your CAD, etc might actually want to use more than one core effectively per task. Even go had to grow up eventually.
Indeed; however, the experience with crashes and security exploits has proven that scaling processes, or even distributing them across several machines, scales much better than threads.
preemptively scheduled processes, not cooperatively scheduled
Ah, missed that.
It's moving in more and more.
A core problem is that it's now clear most apps have hundreds or thousands of little tasks going, increasingly bound by network, IO, and similar. Async gives nice semantics for implementing cooperative multitasking, without introducing nearly as many thread coherency issues as preemptive.
I can do things atomically. Yay! Code literally cooperates better. I don't have the messy semantics of a Windows 3.1 event loop. I suspect it will take over more and more into all walks of code.
Other models are better for either:
- Highly parallel compute-bound code (where SIMD/MIMD/CUDA-style models are king)
- Highly independent code, such as separate apps, where there are no issues around cooperation. Here, putting each task on a core, and then preemptive, obviously wins.
What's interesting is all three are widely used on my system. My tongue-in-cheek comment about cooperative multitasking winning was only a little bit wrong. It didn't quite win in the sense of taking over other models, but it's in widespread use now. If code needs to cooperate, async sure beats semaphores, mutexes, and all that jazz.
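A tiny sketch of the "cooperates better" point above (asyncio; the shared counter needs no lock because no other task can run between statements that don't await):

    import asyncio

    counter = 0

    async def worker():
        global counter
        for _ in range(100_000):
            # no await between the read and the write, so no other task can
            # interleave here; with preemptive threads this would need a lock
            counter += 1
        await asyncio.sleep(0)        # explicit cooperative yield point

    async def main():
        await asyncio.gather(*(worker() for _ in range(10)))
        print(counter)                # always 1_000_000

    asyncio.run(main())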
Async programming is not an alternative to semaphores and mutexes. It is an alternative to having more threads. The substantial drawback of async programming in most implementations is that stack traces and debuggers become almost useless; at least very hard to use productively.
In the last 15 to 20 years asynchronous programming --- as a form of cooperative multi-tasking [1] --- did gain lots of popularity. That was mainly because of non-scalable thread implementations in most language runtimes, e.g. the JVM. At the same time the JS ecosystem needed to have some support for concurrency. Since threads weren't even an option, the community settled first on callback hell and then on async/await. That former reason for asynchronous programming's alleged win is currently being reversed: the JVM has introduced lightweight threads that have the low runtime cost of asynchronous programming and all the niceties of thread-based concurrency.
[1]: Asynchronous programming is not the only form of cooperative multi-tasking. Usually cooperative multi-tasking systems have a special system call yield(), which gives up the processor, in addition to I/O-induced context switches.
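A toy sketch of that yield() style, using Python generators as the tasks and a hand-rolled round-robin scheduler (purely illustrative, not any real system's API):

    from collections import deque

    def task(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield                      # the cooperative yield(): give up the CPU

    def run(tasks):
        ready = deque(tasks)           # trivial round-robin scheduler
        while ready:
            t = ready.popleft()
            try:
                next(t)                # run the task until its next yield
                ready.append(t)        # still alive: back of the queue
            except StopIteration:
                pass                   # task finished

    run([task("A", 3), task("B", 2)])
    # A: step 0 / B: step 0 / A: step 1 / B: step 1 / A: step 2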
In .NET and C++, asynchronous programming is not cooperative; it hides the machinery of a state machine mapping tasks onto threads, it gets preempted, and you can write your own scheduler.
But isn't the separation of the control flow into chunks, either separated by async/await or by the separation between call and callback, a form of cooperative thread yielding on top of preemptive threads? If that isn't true for .NET, then I'd be really interested to understand what else it is doing.
No, it is a state machine that generates an instance of a Task from the Task Parallel Library, and automates the Run()/Get() invocations on it.
Assuming your type isn't an Awaitable, with magic methods to influence how the compiler actually generates the state machine.
Is this the same as coroutines as in Knuth's TAOCP volume 1?
Sorry, my knowledge is weak in this area.
not exactly; see https://en.wikipedia.org/wiki/Cooperative_multitasking
Thanks, will check that.
The quick answer is that coroutines are often used to implement cooperative multitasking because it is a very natural fit, but it's a more general idea than that specific implementation strategy.
interesting, i would have said the relationship is the other way around: cooperative multitasking implies that you have separate stacks that you're switching between, and coroutines are a more general idea which includes cooperative multitasking (as in lua) and things that aren't cooperative multitasking (as in rust and python) because the program's execution state isn't divided into distinct tasks
i could just be wrong tho
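for example, a generator used as a little streaming pipeline is a coroutine in the broad sense, but there are no tasks or scheduling anywhere in sight (just an illustrative sketch):

    def running_mean():
        total, count = 0.0, 0
        while True:
            x = yield (total / count if count else None)   # suspend, hand back a value
            total, count = total + x, count + 1

    avg = running_mean()
    next(avg)                  # prime the coroutine to its first yield
    print(avg.send(10))        # 10.0
    print(avg.send(4))         # 7.0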
Yeah thinking about it more I didn’t intend to imply a subset relationship. Coroutines are not only used to implement cooperative multitasking, for sure.
Not in the Java, .NET, and C++ case, as it is mapped to tasks managed by threads, and you can even write your own scheduler if so inclined.
Also (AFAIK) not in JavaScript. An essential property of cooperative multitasking is that you can say “if you feel like it, pause me and run some other code for a while now” to the OS.
Async only allows you to say “run foo now until it has data” to the JavaScript runtime.
IMO, async/await in JavaScript are more like one shot coroutines, not cooperative multitasking.
Having said that, the JavaScript event loop is doing cooperative multitasking (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Even...)
I always knew my experience with RISC OS wouldn't go to waste!
What an elegant metric! Condensing a multivariate optimisation between compiler execution speed and compiler codebase complexity into a single self-contained meta-metric is (aptly) pleasingly simple.
I'd be interested to know how the self-build times of other compilers have changed by release (obviously pretty safe to say, generally increasing).
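The rule itself is almost small enough to write down as a benchmark harness (a hypothetical sketch; `self_compile_time`, the compiler command, and the source path are all made up):

    import subprocess, time

    def self_compile_time(compiler_cmd, compiler_source):
        """Time one self-compilation of the compiler from its own source."""
        start = time.perf_counter()
        subprocess.run([compiler_cmd, compiler_source], check=True)
        return time.perf_counter() - start

    # Wirth's acceptance rule, roughly: a new optimization pass goes in only if
    # the compiler containing it recompiles itself faster than the old one did.
    def accept_pass(old_compiler, new_compiler, source="compiler.mod"):
        return self_compile_time(new_compiler, source) < self_compile_time(old_compiler, source)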
Hmm, but what if the compiler doesn't use the optimized constructs, e.g. floating point optimizations targeting numerical algorithms?
Simple fix: floating-point indexes to all your tries. Or switch to base π or increment every counter by e.
That’s not a simple fix in this context. Try making it without slowing down the compiler.
You could try to game the system by combining such a change that slows down compilation with one that compensates for it, though, but I think code reviewers of the time wouldn’t accept that.
Life was different in the '80s. Oberon targeted the NS32000, which didn't have a floating point unit. Let alone most of the other modern niceties that could lead to a large difference between CPU features used by the compiler itself and CPU features used by other programs written using the compiler.
That said, even if the exact heuristic Wirth used is no longer tenable, there's still a lot of wisdom in the pragmatic way of thinking that inspired it.
Speaking of that, if you were ever curious how computers do floating point math, I think the first Oberon book explains it in a couple of pages. It’s very succinct and, for me, one of the clearest explanations I’ve found.
Rewrite the compiler to use an LLM for compilation. I'm only half joking! The biggest remaining technical problem is the context length, which severely limits the input size right now. Also, the required humongous model size.
probably use a fortran compiler for that instead of oberon
You cannot add a loop-skew optimization to the compiler before the compiler itself needs a loop-skew optimization. Which it never will, because it is code that does matrix operations that needs loop skewing, and a compiler does none.
In short, the compiler is not an ideal representative of the user programs it needs to optimize.
Wirth ran an OS research lab. For that, the compiler likely is a fairly typical workload.
But yes, it wouldn’t work well in a general context. For example, auto-vectorization likely doesn’t speed up a compiler much at all, while adding the code to detect where it’s possible will slow it down.
So that feature can never be added.
On the other hand, this may lead to better designs. If, instead, you add language features that make it easier for programmers to write vectorized code explicitly, that might end up being better for them: they would have to write more code, but they would also have to guess less about whether their code would end up being vectorized.
perhaps you could write the compiler using the data structures used by co-dfns (which i still don't understand) so that vectorization would speed it up, auto- or otherwise
Perhaps Wirth would say that compilers are _close enough_ to user programs to be a decent enough representation in most cases. And of course he was sensible enough to also recognize that there are special cases, like matrix operations, where it might be wirthwhile.
EDIT: typo in the last word but I'm leaving it in for obvious reasons.
His stance should be adopted by all language authors and designers, but apparently it's not. The older generation of programming language gurus like Wirth and Hoare were religiously focused on simplicity, hence they never took compilation time for granted, unlike most popular modern language authors. C++, Scala, Julia and Rust are all behemoths in terms of complexity of language design and hence have very slow compilation times. Popular modern languages like Go and D are a breath of fresh air with their lightning-fast compilation due to the inherent simplicity of their design. This is to be expected, since Go is just a modern version of Modula and Oberon, and D was designed by a former aircraft engineer, a field where simplicity is mandatory, not optional.
Do you happen to remember where he said that? I've been looking for a citation and can't find one.
I think that some of the text in "16.1. General considerations" of "Compiler Construction" are sorta close, but does not say this explicitly.
Someone on reddit found it! https://www.reddit.com/r/programming/comments/18xqea3/niklau...
The author cited, Michael Franz, was one of Wirth's PhD students, so what he relates is an oral communication from Wirth that may very well never have been put in writing. It does seem entirely consistent with his overall philosophy.
Wirth also had no compunction about changing the syntax of his languages if it made the compiler simpler. Modula-2 originally allowed undeclared forward references within the same file. When his implementation moved from the original multi-pass compilers (e.g. Logitech's compiler had 5 passes: http://www.edm2.com/index.php/Logitech_Modula-2) to a single-pass compiler http://sysecol2.ethz.ch/RAMSES/MacMETH.html he simply started requiring that forward references had to be declared (as they used to be in Pascal).
I suspect that Wirth not being particularly considerate of the installed base of his languages, and not very cooperative about participating in standardization efforts (possibly due to burnout from his participation in the Algol 68 process), accounts for the ultimately limited commercial success of Modula-2 & Oberon, and possibly for the decline of Pascal.
That's fascinating. I'd imagine there are actually two equilibria/stable states possible under this rule: a small codebase with only the most effective optimization passes, or a large codebase that incorporates pretty much any optimization pass.
A marginally useful optimization pass would not pull its weight when added to the first code base, but could in the second code base because it would optimize the run time spent on all the other marginal optimizations.
Though the compiler would start out closer to the small equilibrium in its initial version, and there might not be a way to incrementally move towards the large equilibrium from there under Wirth's rule.
Note that Oberon descendants like Active Oberon and Zonnon do have preemptive multitasking.
In the past, this policy of Wirth's has been cited when talking about Go compiler development.
Go team member Robert Griesemer did his PhD under Mössenböck and Wirth.
This was a fantastic talk. https://www.youtube.com/watch?v=EXY78gPMvl0
Thank you for sharing. I was there and didn’t expect to see this again. :)
He had the crowd laughing and cheering, and the audience questions in the end were absolutely excellent.
Always glad to be of service.
I think I last watched it during the pandemic and was inspired to pick up reading more about Oberon. A demonstration / talk like that is so much better when the audience are rooting for the presenter to do well.
A Wirthwhile ambition. :)
Sorry, couldn't resist.
I first wrote it as "worthwhile", but then the pun practically fell out of the screen at me.
I love Wirth's work, and not just his languages. Also his stuff like Algorithms + Data Structures = Programs, and stepwise refinement. Like many others here, Pascal was one of my early languages, and I still love it, in the form of Delphi and Free Pascal.
RIP, guruji.
Edited to say guruji instead of guru, because the ji suffix is an honorific in Hindi, although guru is already respectful.
According to his daughter (she runs a grocery store, and my wife occasionally talks to her), he kept on tinkering at home well past 80.
i hope you are! we miss you