This PumpkinOS project is pretty incredible. I can't imagine how much effort it would take to be compatible with all the system calls that the average Palm app would expect. I remember Palm did some truly weird things with memory: anything moderately large would need to be put into a special memory block that the OS could rearrange at will, and one would need to lock the block's handle to keep it stable while accessing it. Stuff like that must have been challenging (and fun) to implement in PumpkinOS.
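For anyone who never wrote Palm code, the idiom looked roughly like this. A minimal sketch using the classic memory manager calls; `CopyIntoMovableChunk` is just an illustrative name, not a real API:

```c
#include <PalmOS.h>

/* Minimal sketch of the pattern described above: allocate a movable chunk,
   lock it to get a stable pointer, use it, then unlock so the OS is free to
   move the chunk again when it compacts the heap. */
static Err CopyIntoMovableChunk(const Char *src, UInt32 len)
{
    MemHandle h;
    Char *p;

    h = MemHandleNew(len);        /* movable chunk, referenced by handle */
    if (h == NULL)
        return memErrNotEnoughSpace;

    p = MemHandleLock(h);         /* pin it; the pointer is stable until unlock */
    MemMove(p, src, len);         /* safe to use the raw pointer here */
    MemHandleUnlock(h);           /* the chunk may be relocated after this */

    /* Normally you'd keep the MemHandle around and re-lock whenever you need
       access; here we just free it again. */
    MemHandleFree(h);
    return errNone;
}
```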
This brings me back. I used to make little games for Palm OS, and I was so excited for the next version of the OS which would let one use the (then new) Palm OS Development Suite to make programs. It was also the last OS I've used where an app had a central event loop. Everything else today has UI frameworks that handle it for you. Things are easier now, but I still miss it.
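The central loop itself was tiny. Roughly like this (a sketch; `AppHandleEvent` stands in for whatever form handling your particular app did):

```c
#include <PalmOS.h>

/* Placeholder for the app's own handler (normally loads forms, sets their
   event handlers, etc.) - illustrative only. */
static Boolean AppHandleEvent(EventType *event)
{
    return false;
}

/* Rough shape of the central event loop every Palm OS app ran from PilotMain:
   pull an event, give the system and menu code first crack at it, then let
   the active form's handler deal with whatever is left. */
static void EventLoop(void)
{
    EventType event;
    UInt16 error;

    do {
        EvtGetEvent(&event, evtWaitForever);     /* block until something happens */

        if (SysHandleEvent(&event))              /* hard keys, power, Graffiti, ... */
            continue;
        if (MenuHandleEvent(0, &event, &error))  /* menu bar handling */
            continue;
        if (AppHandleEvent(&event))              /* app-specific handling (placeholder) */
            continue;

        FrmDispatchEvent(&event);                /* route to the active form's handler */
    } while (event.eType != appStopEvent);
}
```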
Didn't 16-bit Windows and classic Mac OS do something similar? If you're doing multitasking on a system without an MMU then I think that kind of live heap defragmentation would have been practically required.
Classic MacOS did, but it's definitely not something needed for multitasking without an MMU. For instance AmigaOS didn't do this, but instead effectively had a single shared heap.
How do you free the shared heap when an application quits?
Very carefully.
It's in fact one of the biggest issues with AmigaOS, and the one that made it incredibly hard to add proper MMU support. The OS is heavily message-passing based, and it is often not at all clear "from the outside" who owns a given structure passed via a message port (which is little more than a linked list), so the OS doesn't even know which task (process/thread - the distinction was pretty meaningless given the lack of memory protection) owns a given piece of memory.
Later versions added some (optional) resource tracking to make it easier to ensure resources are freed, but if an application crashed or was buggy you'd frequently leak memory, and eventually have to reboot. It was not great, but usually less awful than it sounds with sufficiently defensive strategies.
[I have at various points, e.g. when doing some work on AROS way back, argued that it is quite likely possible to largely untangle this: partly because for a lot of cases the ownership changes are clear and rules that fit actual uses can be determined, and partly because the set of extant AmigaOS apps is small enough that you could "just" add some new calls that do ownership tracking, declare the old ones legacy, and map the ownership changes for the rest one by one and either patch them or, say, add a data file for the OS to use to apply heuristics. Had the remaining userbase been larger, maybe it'd have been worth it.]
That situation doesn't prevent an MMU and virtual memory. It prevents multiple address spaces. A separate address space per process is not a requirement for virtual memory as such; it is a requirement for getting some of the protection benefits of virtual memory. Not all the benefits. With a single address space for all applications, there can still be user/kernel protection: userland not being able to trash kernel data structures. (Of course, with important system functions residing in various daemons, when those processes get trashed, it's as good as the system being trashed.)
It doesn't "prevent" an MMU and virtual memory, you're right, but it does severely limits what you can do with it hence why I wrote "proper" MMU support. There are virtual memory solutions for AmigaOS, though rarely used. There are also limited MMU tools like Enforcer, but it was almost only used by developers. AmigaOS4 has some additional MMU use, and there has been work on trying to add some more protection elsewhere as well, but it is all fairly limited.
Specifically in terms of the comment I replied to: you categorically cannot automatically free memory when a task (process/thread) ends in AmigaOS without application-specific knowledge, because some memory "handoffs" are intentional and freeing them would risk causing crashes.
Yes, you could if the OS was designed for it, and it was done at a point where most of the application developers were still around to fix the inevitable breakage.
The problem with doing this in AmigaOS without significant API changes or auditing/patching of old code is that there is no clear delineation of ownership for a lot of things.
This includes memory in theory "owned" by the OS, that a lot of applications have historically expected to be able to at least read, and often also write to.
You also e.g. can't just redefine the "system calls" for manipulating lists and message queues to protect everything because those are also documented as ways to manipulate user-level structures - you can define your own message ports and expect them to have a specific memory layout.
More widely, it includes every message sent to or received from the OS, where there's no general rule about who owns which piece of the message sent/received. E.g. a message can - and often will - include pointers to other structures, where inclusion in the message may or may not imply an ownership change or "permission" to follow the pointers and poke around in internals.
Addressing this would mean defining lifecycle rules for every extant message type, figuring out which applications break those assumptions, and figuring out how to deal with them. It's not a small problem.
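To make the ownership problem concrete, here's a hypothetical sketch of the kind of exec message that causes the trouble. The `MyMessage`/`SendBuffer` names are made up; the exec calls and structures are the real ones:

```c
#include <exec/types.h>
#include <exec/nodes.h>
#include <exec/ports.h>
#include <exec/memory.h>
#include <proto/exec.h>

/* Hypothetical message type: the exec Message header plus a raw pointer to a
   buffer.  Exec only tracks the list node; nothing anywhere records whether
   the buffer (or the message itself) now "belongs" to the receiver or is
   still the sender's to free - that's purely per-protocol convention. */
struct MyMessage {
    struct Message msg;      /* standard exec message header (a list node) */
    APTR           buffer;   /* who frees this?  The OS has no idea.       */
    ULONG          length;
};

static void SendBuffer(struct MsgPort *dest, struct MsgPort *reply,
                       APTR buf, ULONG len)
{
    struct MyMessage *m = AllocMem(sizeof(*m), MEMF_PUBLIC | MEMF_CLEAR);
    if (m == NULL)
        return;

    m->msg.mn_Node.ln_Type = NT_MESSAGE;
    m->msg.mn_ReplyPort    = reply;
    m->msg.mn_Length       = sizeof(*m);
    m->buffer              = buf;      /* handing over a raw pointer */
    m->length              = len;

    PutMsg(dest, &m->msg);             /* the receiver GetMsg()s this later; whether it
                                          ReplyMsg()s, frees the buffer, or keeps it
                                          around indefinitely is convention only */
}
```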
Mac OS, Win16 and PalmOS all had shared heaps too. This is precisely why you need defragmentation (after an application quits, the heap is a fragmented mess, full of holes) and therefore some system so that the other applications keep "movable handles" to heap blocks instead of raw pointers (which would become invalid after the heap undergoes one round of defragmentation).
If an OS does not do this you are basically indirectly setting a limit to its uptime, as eventually this global heap's fragmentation will prevent launching any new programs.
Having local heaps does not solve this, as you still have to allocate these local heaps from somewhere. Having an MMU will allow you to do transparent defragmentation without handles, as raw pointers (virtual addresses) become your handles. Having an MMU with a fixed page size will allow you to outright avoid the need for defragmentation.
Mac OS didn’t. It had a system heap and a heap for the running application. Once it supported running multiple applications simultaneously, each of them had its own heap (https://www.folklore.org/Switcher.html: “One fundamental decision was whether or not to load all of the applications into a single heap, which would make optimal use of memory by minimizing fragmentation, or to allocate separate "heap zones" for each application. I decided to opt for separate heap zones to better isolate the applications, but I wasn't sure that was right.”)
That’s why MultiFinder had to know how much RAM to give to each application. https://en.wikipedia.org/wiki/MultiFinder#MultiFinder: “MultiFinder also provides a way for applications to supply their memory requirements ahead of time, so that MultiFinder can allocate a chunk of RAM to each according to need” (Wikipedia doesn’t mention it, but MultiFinder also allowed users to increase those settings)
Yes. The idea wasn't to get away with not having an MMU, though - it was to get away with shipping the Mac with an ungodly low amount of RAM for a machine with a GUI. I believe the original idea was to ship with like 64k or something?
Obviously, given the state of mobile hardware back then, relocatable blocks were similarly necessary in order to save RAM.
For anyone wondering: no, this isn't the thing that made classic Mac OS unfit for multitasking. The MMU is necessary to keep applications from writing to other apps' heaps, not to do memory defragmentation. You can do some cool software-transparent defragmentation tricks with MMUs, but if you're deciding what the ABI looks like ahead of time, then you can just make everyone carry double-pointers to everything.
Well, there's also the fact that the MC68000 in the original Mac didn't have an MMU, and it was difficult to add an external MMU to a 68000 system [1]. You could use an MMU sanely starting with the MC68010, and it wasn't until I think the MC68030 that the CPU came with an integrated MMU.
[1] Because exceptions on the 68000 didn't save enough information to restart the faulting instruction. You could get around this, but it involved using two 68000s in an insane hack ...
Just to muddy the waters some more there was also an EC variant¹ of the 030 without the MMU.
The EC variant was available right through to the 060, and I'd be curious to know how prevalent the line was. I suspect the EC versions far outnumbered the "full" chips, because they appeared in all kinds of industrial systems. I'm basing that entirely on working for a company that was still shipping products with MMU-less 68k and coldfire this century, not any real data.
¹ https://en.wikipedia.org/wiki/Motorola_68030#Variants
Yeah, the way to port classic MacOS apps to native OS X apps was called Carbon, and it was basically 80% of the classic MacOS toolbox just ported to OS X, Handles and QuickDraw and all. Classic MacOS apps written to CarbonLib would "just run" natively in OS X (and the same binary in Classic MacOS). Carbon even kept working on Intel MacOS, but they finally killed it with the 32-bit deprecation a year or two before Apple Silicon was released.
Apple could have worked multitasking into classic MacOS if they really wanted to, but their management was totally dysfunctional in the '90s: nobody saw any point in investing in boring old MacOS since there was always a revolution just around the corner in the form of Pink, Taligent, Copland, etc., projects which, due to the aforementioned management, never went anywhere.
They did, and they couldn’t. Most users had some code running that patched system calls locally or globally or that peeked into various system data structures, and all applications assumed the system used cooperative multitasking. Going from there to a system with preemptive multitasking would mean breaking a lot of code, or a Herculean effort to (try to) hack around all issues that it caused with existing applications. I think that would have slowed down the system so much that it wasn’t worthwhile making the effort.
Having said that, MacOS 9 had a preemptive multitasking kernel. It ran all ‘normal’ Mac applications cooperatively in a single address space, though. Applications could run ’tasks’ preemptively, but those tasks couldn’t do GUI stuff (https://developer.apple.com/library/archive/documentation/Ca...)
Yes. Both the 6809 based design and the first 68k one targeted 64k. See https://folklore.org/Five_Different_Macs.html
I actually installed an aftermarket MMU in my Macintosh II, since it's required for A/UX.
https://retrocomputing.stackexchange.com/questions/10931/wha...
https://aux-penelope.com
Palm was started by ex-Apple people and borrowed a lot of ideas and mistakes from the Mac.
Well - it is even more than that. They basically used Apple-style code resources to define PalmOS apps. You used to be able to compile Think Pascal code on a Mac, and some guy worked out how to convert the result into a PalmOS app just by tweaking the code resources in the compiled MacOS binary. It was quite mind-bending to me as a 20-something PalmOS fanboy with a day job doing Delphi. This was in like 1998/1999 or so. I even went as far as emulating MacOS just to play with it. I don't know if his code still exists online, but the tool was called SARC (Swiss Army Resource Compiler) if anyone cares to search for it.
I don't think Palm did a lot to change the exe format in the early days. And they used the same CodeWarrior 68K compiler that also targeted MacOS at the time.
More saliently than that, Palm started out as a vendor of Newton apps, before it started making its own Newton-killer hardware.
Palm's Graffiti started out as an alternate text input system for the Newton. It was an Apple software vendor long before it was an Apple rival, and its design is influenced by the Newton more proximally than the Mac.
16-bit Windows did, but it required an MMU anyway, at least since Windows 3: that was its big feature, 16-bit protected mode plus a VM mode for running MS-DOS.
Windows 3.0 supported three modes of operation: real mode (8086 minimum), standard mode (286 minimum), and 386 Enhanced mode (386 minimum). Real mode was pretty limited, and a lot of apps could not fit in its rather small memory, but it was not completely useless. I believe real-mode Windows apps could use EMS, although I'm not sure many actually did.
In Windows 3.1, real mode was removed, and only standard and 386 Enhanced were supported. So, 3.1 was the first version to “require an MMU”, if by that you mean a 286 or higher
Yes, you're right, but I never knew anyone who would get Windows 3 only to keep using it as Windows 2.
I assume this is what `{Local,Global}{Lock,Unlock}` were for when combined with `{Local,Global}Alloc({L,G}MEM_MOVEABLE)`
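Exactly that. The idiom looked roughly like this (the calls survive in Win32, though blocks no longer actually move there; `MoveableBlockExample` is just an illustrative name):

```c
#include <windows.h>
#include <string.h>

/* Sketch of the classic moveable-memory idiom: allocate a moveable block,
   lock its handle to get a usable pointer, unlock when done so the system
   could (in Win16) relocate it during heap compaction. */
void MoveableBlockExample(void)
{
    HGLOBAL h = GlobalAlloc(GMEM_MOVEABLE, 256);   /* handle, not a pointer */
    if (h == NULL)
        return;

    char *p = (char *)GlobalLock(h);   /* pin the block; pointer valid until unlock */
    if (p != NULL) {
        strcpy(p, "hello from a moveable block");
        GlobalUnlock(h);               /* in Win16, the block may move again now */
    }
    GlobalFree(h);                     /* free via the handle, not the pointer */
}
```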
Similar idioms occasionally persist in modern code - e.g. when dealing with FFI in GCed languages (C#'s `fixed` statement pins memory in place).
The lock/unlock metaphor is also used when sharing buffers with video or a/v apis in Windows.
And the "safe array" type from COM/OLE.
Windows is still like that if you use Win32 APIs directly.
All GUI toolkits ever made are like that, but in most of the modern ones, this queue and loop are usually internal and you can only infer their existence by looking at the stack in a debugger or when something crashes.
Which is basically the only option for C and C++ developers, when using vanilla Visual Studio, unless they want to write libraries to be consumed by .NET instead, or use a third party framework.
It is either raw Win32 or MFC, don't even bother with WinUI.
Windows doesn't actually have a central event loop, which makes it pretty unique.
macOS/iOS/etc. have a central event loop in Cocoa - what's more, only the initial thread is allowed to talk to the WindowServer!
Xlib pretty much enforced a single event loop per connection - XCB allowed more.
In comparison, Win32 applications can create an event loop ("message pump") per thread, and you can make GUI calls completely independently on each of them.
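Roughly like this, per thread (a sketch; `GuiThread` is just an illustrative name):

```c
#include <windows.h>

/* Sketch of a per-thread Win32 message pump.  Any thread that creates a
   window gets its own message queue; this loop drains it until WM_QUIT. */
static DWORD WINAPI GuiThread(LPVOID param)
{
    (void)param;

    /* ... RegisterClass / CreateWindow for this thread's windows ... */

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) {   /* blocks; returns 0 on WM_QUIT */
        TranslateMessage(&msg);                  /* generate WM_CHAR from key messages */
        DispatchMessage(&msg);                   /* call the window procedure */
    }
    return 0;
}
```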
One nice thing about modern hardware would be that you wouldn't exactly be memory-constrained. You'd get to implement a complicated API with whatever large chunk of memory you wanted, since 128 MB of RAM, or however much they came with, is peanuts today.
The first Palm (Pilot 1000) had 128 kB. I think the biggest 68k Palm was the Palm Vx with 8MB. Towards the end of the (Intel) ARM Palms, they did have 128 MB models though.
I think only the latest Treo had 128MB - the last PDA (Lifedrive) had 64MB, the TX 32MB.
(One should remember, though, that there wasn't mass storage + RAM as we typically think of it - the memory of Palm devices was storage and active memory in one, battery-backed until the very latest models. There wasn't a filesystem as such. So all this memory should be thought of as memory for applications, not like storage in an Android device.)
I loved Palm games! Those were the best mobile games, nothing modern compares to them at all.
That’s extremely easy on modern hardware with gigabytes of RAM (compared to 2 megabytes on the Palm III): just use malloc, never move memory around, and make locking and unlocking such blocks no-ops. If there is an OS call to determine lock state, you’ll have to store that somewhere, but that isn’t difficult, either.
It also isn’t hard to implement the way they did back then; it ‘just’ complicates using it.
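Something like this, in other words. A hypothetical sketch of the malloc-backed approach, not how PumpkinOS actually does it (the `EmuHandle`/`emu_*` names are made up):

```c
#include <stdlib.h>

/* Back each Palm-style "handle" with a plain malloc'd block that never moves,
   and keep a lock counter only so lock-state queries can answer something
   sensible. */
typedef struct {
    void     *ptr;        /* never relocated on a modern host */
    unsigned  lockCount;  /* tracked purely for API fidelity  */
} EmuHandle;

EmuHandle *emu_MemHandleNew(size_t size)
{
    EmuHandle *h = malloc(sizeof *h);
    if (h == NULL)
        return NULL;
    h->ptr = malloc(size);
    h->lockCount = 0;
    if (h->ptr == NULL) {
        free(h);
        return NULL;
    }
    return h;
}

/* Lock/unlock are effectively no-ops: the pointer is always stable. */
void *emu_MemHandleLock(EmuHandle *h)   { h->lockCount++; return h->ptr; }
void  emu_MemHandleUnlock(EmuHandle *h) { if (h->lockCount) h->lockCount--; }
void  emu_MemHandleFree(EmuHandle *h)   { free(h->ptr); free(h); }
```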