
Windows NT for Power Macintosh

stahi
108 replies
1d3h

Nostalgia reminds me of this: https://lowendmac.com/2014/next-openstep-and-the-triumphant-...

Amelio and the rest of his senior staff began searching for a way out. They needed a new operating system to buoy their attempts to compete with the Wintel juggernaut. The same search had taken place several times before, but now the company was desperate.

Limited Options

Ultimately the list of possible targets was narrowed down to five options:

1. License Windows NT from Microsoft and bolt a Mac-like interface onto it.
2. License Solaris from Sun and bolt a Mac-like interface onto it.
3. Narrow the focus of the flagging Copland project and release it in a year and a half.
4. Acquire Be and use BeOS.
5. Acquire NeXT and use OpenStep.

faefox
51 replies
1d3h

What I'd give for a glimpse of the timeline where Apple bought Be instead of NeXT.

Damogran6
18 replies
1d2h

I LOVE LOVE LOVED BeOS. It resonated with me and it was a great experience... but it wasn't multiuser, and it lacked some other stuff that I forget now but that seemed necessary for a system to have long-term success. But man did it cut out the cruft and wring out ALL of the HP of the hardware of the time.

pavlov
17 replies
1d1h

Why was multiuser so important? Apple’s most successful operating system in the Jobs 2.0 era isn’t multiuser either.

BeOS would have been a fine foundation for smartphones and tablets. But of course it’s an open question whether Apple could have got that far in the 2000s without the return of Jobs. I suspect the company would have been acquired or merged with some unsuitable suitor like Sun.

xattt
5 replies
1d

Multi-user would have likely been bolted on one way or another at some point, as had happened when other OSes gained fundamental new features.

kbolino
4 replies
22h17m

Was any operating system actually able to go from single-user to multi-user so easily? Windows NT and OS X were totally rewritten from the ground up relative to their single-user predecessors Windows 9x and classic Mac OS.

nullindividual
1 replies
17h20m

NT had no predecessor (at least from Microsoft; architecturally its predecessor would be VMS). It was ground-up multi-user. Mac OS [Classic] was not OS X's predecessor, either -- it was NeXTStep.

kbolino
0 replies
16h6m

You are right on the technical lineage, but I was referring to how these products were (ultimately) presented commercially. The facts that both are older than one might think, and that they were developed with specific goals from the start, more clearly illustrate, I think, that you can't just "bolt on" such fundamental differences.

hypercube33
1 replies
17h59m

NT is closer to a mainframe OS than 9x, and it came out in the Win3 era

kbolino
0 replies
16h13m

Yeah, I actually used NT 3.51 for some time. However it was the NT line that replaced the 9x line, at least from the consumer perspective.

bee_rider
5 replies
1d

I agree that iOS isn’t multi-user in any real, like, multiple user accounts intended to be used by real people sense.

I wonder, though: it is based on macOS somehow, right? Which is based on BSD. Could there be some leftover multi-user plumbing sticking around in a technical sense?

PlutoIsAPlanet
3 replies
23h52m

Yes, iOS still uses users in the technical "Unix" sense; they're just not mapped to actual physical users, but instead to various services.

Android is in a similar boat. They're still an important way to manage filesystem access of programs.
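
A minimal sketch of what that means mechanically (generic POSIX C, nothing iOS-specific; the path is just an arbitrary root-only example):

    /* Every process runs as some uid; the kernel checks file permissions
       against it. Compiles on any Unix-like system. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        /* Which "user" is this process? On iOS, processes reportedly run
           as "mobile" or "root"; uids mostly separate services. */
        printf("uid=%d euid=%d\n", (int)getuid(), (int)geteuid());

        /* The kernel grants or denies filesystem access based on that uid. */
        const char *path = "/etc/sudoers"; /* arbitrary root-only example */
        if (access(path, R_OK) == 0)
            printf("%s: readable by this uid\n", path);
        else
            perror(path);
        return 0;
    }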

anyfoo
1 replies
23h34m

iOS barely uses that. Processes commonly run as “mobile” or “root”, but it does not matter very much. POSIX users and access permissions are archaic, and, in my opinion, don’t match with how almost any device is being used nowadays. iOS implements its own concepts through entitlements, containers, vaults, sandboxes etc. (Look up the “Apple Platform Security Guide” for details.)

nullindividual
0 replies
17h13m

Shared iPad security in iPadOS

[...] User data is partitioned into separate directories, each in their own data protection domains and protected by both UNIX permissions and sandboxing.

POSIX users are quite important.

meibo
0 replies
23h39m

Android isn't in a similar boat: AOSP has full, isolated multi-user support that is realized through Unix users. You can create new users through Settings, and they have their own home screen, apps and files.

Most vendors have this enabled now and things like work accounts/MDM use the same system.

https://source.android.com/docs/devices/admin/multi-user
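
Concretely, AOSP derives each app's kernel uid from the Android user id plus the app id (see android.os.UserHandle in the AOSP source). A rough C sketch of that documented scheme; treat the exact constants as illustrative:

    #include <stdio.h>

    #define PER_USER_RANGE 100000  /* uid space reserved per Android user */

    /* Combine an Android user id and an app id into a kernel uid,
       mirroring what UserHandle.getUid() does in AOSP. */
    static int android_uid(int user_id, int app_id)
    {
        return user_id * PER_USER_RANGE + app_id;
    }

    int main(void)
    {
        /* The same app installed for user 0 and user 10 gets distinct
           uids, so per-user file isolation falls out of ordinary Unix
           permissions. App ids for regular apps start at 10000. */
        printf("user 0:  uid %d\n", android_uid(0, 10057));  /* 10057 */
        printf("user 10: uid %d\n", android_uid(10, 10057)); /* 1010057 */
        return 0;
    }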

asveikau
0 replies
15h30m

In the very early public slides announcing the iPhone, I believe Jobs referred to iOS as "Mac OS X".

Damogran6
2 replies
23h21m

It allows applications to run in different privilege contexts and allows you to inherit privileges on a network, off the top of my head.

Unless you _want_ your solitaire game to have the ability to enumerate your contacts and send mail.

pavlov
0 replies
21h50m

macOS and iOS use a completely different mechanism to ensure a game doesn’t read my contacts or make network requests. It has nothing to do with Unix-style users.

They could have built that on top of BeOS just as well as on NextStep.

anthk
0 replies
20h52m

Unix permissions will allow you to do that perfectly fine.

ranger_danger
0 replies
21h16m

Why was multiuser so important?

Because computing devices, and access to them, were not so ubiquitous back then. Families all had to share a single computer. Business users had to share access to large servers. There were no smartphones. Some people had to travel to an educational setting just to see or use a computer.

randkyp
0 replies
14h32m

Heh, funny you mention that, considering Be's pivot to BeIA. Some Be engineers also worked on the (unreleased) Palm OS Cobalt, and eventually, Android. (And then Fuchsia, but I don't think that OS will ever hit smartphones.)

postmodest
11 replies
1d1h

I suspect that without Jobs, Apple is dead by 2002. Turned into an arm of, like, RIM. Then Google comes out with the Android phone and it's all over for everyone including Microsoft.

nailer
4 replies
1d1h

The HTC G1, the original Android phone, was basically another Sidekick. We'd have had this shitty Android and WinMo for a decade longer, and eventually things would have gotten good.

selectodude
1 replies
1d

Sure, but unlike the iPhone, that thing was an unbelievable piece of shit.

pathartl
0 replies
1d

It's not hard to extrapolate that something would have entered the market. I assume it would only have been a couple of years later.

qingcharles
0 replies
14h45m

Microsoft had been doing it since 2000 but never made it cool. Here's one from 2004:

https://www.theregister.com/2004/11/03/review_hp_ipaq_6340/

I had one since 2000 as a daily driver and I loved the thing, but the app library was horrible.

The funny thing is, iPhone launched with no support for 3rd party apps. It took damned near a year for the first apps to arrive.

hylaride
4 replies
1d

I suspect if Apple and NeXT never "merged", the phone market would be RIM vs (a very different) Android, but we'd be worse off.

Blackberries were OK at the time, but my god, RIM was so culturally conservative, and Google at the time had so little design sense, that we'd probably be 10 years behind where we are now - and if we even had software keyboards they'd be terrible.

pjmlp
1 replies
22h42m

Outside the US, it would have kept being all about J2ME, Symbian, Windows CE/Pocket PC, Bada OS.

Which keep being forgotten in these kinds of discussions.

hypercube33
0 replies
17h57m

RIP PalmOS, you died before your time. Best mobile gaming OS I've used, and it ran for a week on some AA batteries.

insane_dreamer
1 replies
18h14m

Nokia would still be around

qingcharles
0 replies
14h51m

They probably would still be the #1 phone manufacturer, still making terrible Symbian phones with terrible developer support. I was working for Nokia (and a shareholder) on their future phones c. 2004-2005 and it was horrible all the way down. Apple deserved to eat everyone's lunch.

aswanson
0 replies
21h40m

Hard to argue with that. Still remember a co-intern telling me "Steve Jobs returned to Apple!" and just shrugging.

snakeyjake
7 replies
22h39m

Apple would have died.

I love BeOS.

I love BeOS more than you.

I love BeOS more than Jean-Louis Gassée.

BeOS wasn't ready. I was there: I used BeOS as my daily driver on a maxxed-out Power Macintosh 6400. I still have that machine today, running BeOS. I also have a second machine, a dual PII, running BeOS.

BeOS wasn't ready.

Apple was making a decision in 1996 for a deal that had to be struck and struck fast (1997). I started using BeOS in 1997 with one of the first Power Macintosh releases.

Those days (my days) it wasn't ready-- you had to pile patch on top of patch on top of community fix on top of some set of drivers some dude in Boise made in order to get things functional. I spent more time browsing BeOS listservs and download sites fixing things (at 56k) than using it.

Everyone is remembering fondly the R4.5/R5 days in late 1999. Three years after the December 1996 announcement of the NeXT acquisition. By then BeOS was... better. It was running on PCs and had a much larger user community. In 1996/7? Just a handful of BeBox owners and Mac users dumb enough (me) to try it.

Management would have been professionally negligent to choose it over NeXTStep.

In 1996, when Apple was up against the wall, NeXTStep was almost ten years old. All they had to do was buy the company, port it to PPC, and change some copyright messages and icons around. Took about two years.

BeOS would have required way more time, money, and expertise.

BeOS had no multi-user, no security (at all), an "aspirational" level of posix compliance, drivers that were a disaster, a network stack specifically designed to frustrate you, and no "companies that actually matter and no Gobe doesn't count" application support.

I get it. It was pretty. It was fast. French electro DJs loved it because it was low latency, and some audio tools were ported to it.

A userbase of French electro DJs doesn't pay the bills.

temac
1 replies
22h8m

The C++ API bound forever to the gcc 2.95 ABI, IIRC, was also an "interesting" choice. Sure, with success they would certainly have engineered a way forward, out of necessity, but if I had had to give my opinion about that for Apple at the time...

timschmidt
0 replies
19h51m

The Haiku project folks worked out ABI compatibility for newer GCC releases. Current builds of Haiku support binary backwards compatibility with the 2.95 ABI while also supporting newer GCC ABIs at the same time. So at least we know it is (and would have been) possible.

DaoVeles
1 replies
15h0m

I have a spare machine running Haiku OS, the open source successor to BeOS, for doing some offline work. Even today it simultaneously feels 20 years ahead and 20 years behind.

A good example: the interface is incredibly snappy, but there's no WiFi. I love the vision of BeOS, but it probably would have been the death of Apple if they had gone with it.

1oooqooq
1 replies
16h14m

i think you missed the greater point. apple with beos would not have turned into the enterprise dead end that it is today.

that market was what jobs had to pivot to (there's a great video where he pushes the rationalization of going to academia and Enterprise workstations) so that next could survive. and it infected apple from the inside.

the timeline everyone missed with beos is exactly apple not becoming the mix of ibm and Microsoft it is today.

scarface_74
0 replies
15h5m

Everyone also forgets that JLG was a horrible manager and his decision to keep Macs obscenely high priced was what ultimately almost killed Mac.

And Steve Jobs, as the founder of Apple, had political capital that JLG never would have had.

While I know that the measly $150 million that Microsoft invested in Apple didn’t “save it”, only SJ could make peace with MS (only Nixon could go to China) and make the tough choices.

Founders are given a lot more leeway to make huge changes than any other manager.

lproven
0 replies
7h10m

I agree both with @snakeyjake and with @faefox.

I'd love to see what Apple would have done... but at the same time I fear very much it would have killed Apple.

BeOS was amazing, but its dev tools were nothing special while NeXT's were state of the art, industry-beating. That means no way to win Mac developers over to the new OS.

BeOS was as insecure as Classic MacOS, so no way to crack the server market (not that that worked) and no way to build nice locked-down little gadgets like iPhones.

And while Be had JLG, a very smart cookie, NeXT had Jobs, who had more vision in a day than JLG in a year.

Me, I wish a disappointed Be had done a deal with Acorn and ported to Arm.

ARMs in the hundreds-of-MHz range were there and gigahertz-class was coming soon. Acorn had prototype multi-processor machines. Be had the best SMP support in the business in the late 1990s.

They could have reskinned BeOS to look more RISC OS-like, run RO in a VM like OS X ran Classic in a VM, and offered both the thinnest, lightest Web-capable laptops in the world and compellingly priced multiprocessor desktop workstations that didn't need a dozen cooling fans and could have gone to 4-way or even 8-way at an affordable price, which x86 and PowerPC chips could not touch.

scelerat
4 replies
1d3h

I can’t think of a better acqui-hire than NeXT + Steve Jobs

faefox
1 replies
1d2h

For sure, it was pretty clearly the right call at the time and objectively so with the benefit of hindsight. But BeOS was just too good to be allowed (forced, really) to die on the vine the way that it did.

tiledjinn
0 replies
1d2h

No need for hindsight. BeOS is still alive. A Haiku for you.

actionfromafar
1 replies
1d2h

You mean when NeXT bought Apple and forced Apple to finance the deal.

A.K.A. that time when Steve Jobs acqui-hired Apple.

alwillis
0 replies
1d1h

Apple looked at it as an acquisition, while NeXT looked at it as a merger.

NeXT didn’t buy Apple obviously, but Steve Jobs and his lieutenants took over the most important positions within Apple.

Which made sense given the situation Apple found itself in.

firecall
1 replies
16h27m

Many moons ago I read that Apple came super close to buying Be, but Jean-Louis Gassée wanted too much money and negotiations fell apart.

I reckon it was the star power of Jobs more than the OS as such that saved Apple in the late 90s.

Jobs' ability to do the deal with MS to get MS Office on the Mac and a cash injection of $150m saved Apple at the time.

We were well into the 2000s before OSX really began to make a difference to Apple's fortunes, and no one could have predicted that OSX would be the foundation for the iPhone OS.

It wasn't even a given that OSX would be the OS for the iPhone during development.

But absolutely, NeXT was the best choice, even if the OSX Beta and 10.0 were almost unusably slow and buggy!

I was at the Paris Mac Expo when Jobs launched the beta, and the excitement was amazing!

Same year as the Key Lime iBooks IIRC. They rose up out of the floor on a pedestal if I'm remembering it right LOL.

I worked at Apple in the 90s, and saw all the demos of early releases of OSX and the confusing stack of platforms and tools. Jobs did the right thing ditching that!

Finally, I remember being at the Mac Expo in London and Be was there. We were standing around admiring the cool light bars on the front of the BeBox and its multitasking abilities.

If I'm remembering right, the lights on the front went up and down to indicate CPU usage. I could be imagining that?

Someone correct me if I'm misremembering :-)

scarface_74
0 replies
15h3m

The $150 million had nothing to do with saving Apple. They had already secured a billion-dollar line of credit, and besides, the same quarter Apple spent $100 million to buy out Power Computing's Mac assets and license.

It was years before Apple became profitable, and the $150 million was a drop in the bucket.

pzautke
0 replies
1d1h

As someone who loved BeOS at the time, past discussion here on HN leads me to believe that Apple's adoption of it would have had some real challenges.

https://news.ycombinator.com/item?id=22002062

pndy
0 replies
1d1h

Right innit? It's not just Apple using BeOS but wondering how everything else would be different. You actually need to... think different at this point, aye?

pjmlp
0 replies
1d

Assuming Apple would still be successful, and not close shop.

It would have settled C++ as the main systems language on desktop OSes, across both Windows and a BeOS-based Mac.

Objective-C would have died, and Swift would never have come to be.

Clang and LLVM probably would never have gotten Apple's sponsorship.

POSIX would have died on the desktop, as BeOS, like other non-UNIX OSes, wasn't that keen on being UNIX-like.

There wouldn't be a flock of Linux and BSD developers rushing out to buy Apple hardware instead of sponsoring OEMs shipping PCs with those distributions.

nine_k
0 replies
1d2h

Some other company would release the iPhone then!

aryonoco
0 replies
5h21m

We kind of live in an alternate version of that alternative reality.

A lot (most?) of Be's engineers ended up at a company called Danger (eventually bought by Microsoft), and several of them went on to be part of the original core team of Android. Some BeOS technologies even ended up in Android, such as its Binder, which, from memory, the Be engineer working on it open sourced just before Palm bought Be; he then used the same code in Android.

mepian
25 replies
1d3h

They did end up releasing salvageable bits and pieces of Copland as Mac OS 8.

cmrdporcupine
10 replies
1d2h

I think of it as a time when people became so entranced with the idea of the emerging object-oriented zeitgeist that they imagined that making things OO and component driven would just hand-wave towards success. People rushed to make infrastructure to support this model, and, well, it was no magic bullet. Complexity, fragility, and a lot of hard work remained.

exe34
9 replies
1d1h

the thing I'd like to see is for software to expose functional/"rest"-like interfaces. similar with websites and the content/user data.

if you look at what accessibility software achieves by simply looking at the screen/hooking into the OS, now imagine any and all software could be connected like that.

some of this functionality is now provided in individual silos, e.g. an email with an appointment in gmail can go into your Google calendar. a ticket from your train company can be added to Google wallet.

but if you look at a flight booking system and want to compare the total price of a given set of dates for travel, including hotel and other things at different times - you're back to doing it on paper (or using somebody's website where 20% of the flights or hotels you want aren't included).

postmodest
3 replies
1d1h

Isn't that the SmallTalk model, but everyone decided to use C++ instead?

pjmlp
2 replies
1d

Not everyone, IBM OS/2 SOM had support for Smalltalk, C and C++.

Smalltalk was OS/2's .NET, so to speak.

Unfortunately it all died alongside OS/2, followed by IBM's pivot to Java.

hypercube33
1 replies
17h51m

Java on the AS/400 is the weirdest thing. The whole platform was designed to compile once to near-assembly and then run cached bytecode or whatever. Java is just too late: compilation at runtime is clunky in comparison, imo.

I believe Tribes 2 did something similar, just at a higher level.

pjmlp
0 replies
12h3m

TIMI is usually compiled at installation time; the bytecode format is used only as a portable executable format.

While initially Java on AS/400 did take advantage of TIMI, IBM eventually moved away from it, into a regular JVM design on what is now IBM i.

Most likely because IBM z doesn't use TIMI, rather language environments, and they want a more straightforward design; or because, given Java's dynamic capabilities, using TIMI wasn't the best approach.

On AS/400 (IBM i), only Metal C, and the underlying kernel/firmware, written in a mix of PL.8, Modula-2 and C++, are pure native code.

nullindividual
2 replies
1d1h

the thing I'd like to see is for software to expose functional/"rest"-like interfaces. similar with websites and the content/user data.

You mean, COM?

Dwedit
1 replies
17h34m

ActiveX is raising its arm through the soil

nullindividual
0 replies
3h30m

Office is probably the largest consumer-facing application to use COM today. It's honestly a great technology -- which came from the Cairo project.

bitwize
0 replies
1h19m

if you look at what accessibility software achieves by simply looking at the screen/hooking into the OS

Unfortunately that's kind of a pipe dream... in the real world, the first organization to flagrantly violate the OS vendor's human interface guidelines is the OS vendor itself, who ships some application with custom-schmustom, owner-drawn widgets that cannot be queried according to the standard OS APIs for querying them for content. So the accessibility software literally has to OCR the screen to determine what's written in the text fields and buttons.

Source: Knew a guy who worked on Dragon NaturallySpeaking, he had some war stories.

now imagine any and all software could be connected like that.

As other commenters mentioned, there's COM, but that's a hairball to code for and a hairball to configure. The nicest systems that worked that way historically were the Lisp machine and Smalltalk environments, but the closest day-to-day software is probably Emacs. Everything in Emacs really can be connected together through the power of text buffers and Lisp. Objective-C is Smalltalk semantics retrofitted onto C, and originally NEXTSTEP, later macOS, really was a promising environment for this sort of integration... but in recent years Apple has been so focused on "apps" that it's doubtful they can keep supporting that vision.

temac
3 replies
21h36m

It's funny how cancelled projects somehow make people almost more "nostalgic" than projects that actually shipped. One reason may be that cancelled projects don't need to be completed, to ship with reasonably good quality, to have an easy-to-use interface, to be sustainable for a long enough period of time, etc.

aleph_minus_one
1 replies
20h43m

I rather see the reason as being that many such cancelled projects had very elegant architectures/structures, which is something that hackers love.

If you want to ship a product so that it becomes commercially successful, you often have to "deeply taint" this clear vision to make the product compatible with the multitude of very inelegant (industry) standards that customers expect/require.

Thus the nostalgia is not about shipping vs not shipping, but about keeping an elegant vision versus being willing to taint it to be compatible with a "depraved" world.

qingcharles
0 replies
14h41m

Right. Most cancelled projects are beautifully architected and get to disregard the tons of installed products with all their decades of cruft.

Then you start trying to bring the New Thing(r) into the Real World(tm), and there are 5000 edge cases the original product dealt with in a 42-page if/else block, for 270 different OEM patches that were sold by some JV team in another building who have never written a line of code in their lives.

nemomarx
0 replies
20h45m

There's a concept in political theory about why people become nostalgic for lost causes, failed movements, etc - kind of a backwards looking utopianism?

If only we had done X, if only this group had succeeded instead of failed in the past... I do agree that it's a way to avoid looking at the actual implementations and details of a real thing, and just go to what might have been directly.

alsetmusic
8 replies
1d2h

I saw Copland at MacWorld Expo Boston at Apple’s booth. It kernel panicked. We didn’t get to use it.

But wow do I remember how exciting it was the first time I read about it in MacUser magazine. Basically some GUI mockups and promises of cool under-the-hood improvements, possibly the first time I was excited about future computing. Thank goodness they bought NeXT instead. They had no ability to manage projects at scale at the time.

Too bad TimApple has no sense of how to do anything other than chase growth; we need another product person at the helm. Oh well.

karlgkk
6 replies
1d1h

I'm tired of hearing "Tim Apple's Apple Can't Engineer".

VisionOS has some of the most incredible, coolest engineering I've seen in a long time, from top to bottom. Even if you're not interested in AR/VR, the writeups and sessions about the dual processor OS layout they've used, the scene and room graphing technology they've built, and the challenges of building ultra low latency pass through scene reconstruction and mapping are incredible.

Vision Pro might honestly be one of the technically coolest things built in the last 10 years, and I really hope they get the price down and fix some of the UX (weight, battery, etc), because it is something I use every day (and in fact, am writing this post on right now). It's for sure a Beta/DevKit/etc, and I wouldn't recommend that the casual person buy one, but again, on engineering chops alone, it is a masterpiece.

jmb99
4 replies
1d

and I really hope they get the price down

Very much agreed. I get why it’s expensive and I’m no stranger to the apple tax (and usually don’t mind it all that much), but I got the email today that it’s now available in Canada and $5k for a headset is… steep.

karlgkk
2 replies
16h13m

For what it's worth, the entry level PowerBook 100 in 1991, when adjusted for inflation, cost about $5500 USD. An entry level iBook in 2003 would cost about $2700 USD.

Yeah, it's expensive, but it's also astonishing how much prices have come down.

scarface_74
0 replies
14h57m

We don’t even have to adjust for inflation. I (well, my parents) paid $4,000 for a Mac LC II with a //e card, a crappy 12-inch monitor, a LaserWriter LS printer, a 5-1/4 drive for the //e card, and SoftPC. It had 10MB RAM.

qingcharles
0 replies
14h36m

It's so easy for people to look at the dollar figure for something they bought in the 80s or 90s and not account for inflation and make a pithy comment, but when you back-calculate the actual price my parents paid for some things, it blows my mind how cheap technology is now.

$5500 for an entry level laptop! You can get an M2 MacBook for $800 today, and I couldn't even begin to describe how much more tech is inside that thing for 20% of the price.

adamomada
0 replies
14h14m

Hey you could have got the PowerBook G3 Lombard mentioned in the OP instead for $5k. pretty lucky you can get a magic headset instead eh

lproven
0 replies
6h38m

one of the technically coolest things built in the last 10 years

Sadly, that is a low bar to clear.

nrr
0 replies
23h49m

"Too bad TimApple has no sense of how to do anything other than chase growth …" I feel like this is more emblematic of the wider industry trend than Apple's own strategy that they chose to impose on us.

The focus shifted very hard over the past 15-20 years from selling people new and novel gadgets and software to turning us ordinary folks into "monthly active users." Personal computing got a lot less personal, and Apple leaned into it to stay relevant.

lizknope
10 replies
1d1h

A/UX was still a licensed version of Unix. This post says the licensing fee was $1,000 per seat.

https://www.reddit.com/r/Apple_AUX/comments/11wks6i/what_was...

I remember the AT&T / BSD lawsuit in 1992. Maybe Apple looked at NeXTSTEP and figured that, since it is based on the Mach microkernel and BSD, they would be able to fix the 10 or so files to avoid the AT&T / Unix System Labs licensing fees, like Free/NetBSD did.

In the 1980's Microsoft had their own port of Unix named Xenix. Why didn't they push this instead of the very limited MS-DOS?

https://www.abortretry.fail/p/the-history-of-xenix

In March of 1982, Western Electric reduced the royalty fees that it charged for UNIX with the introduction of UNIX System III (a combination of UNIX V7 and PWD). This was done by raising the source license cost, but lowering the royalty per user license to $100 (about $318 in 2023) from $250. This meant that Microsoft’s prepayment of $200000 in 1980 (around $700k in 2023) for a discount in volume was voided if they wished to use the newer system. That same month, on the 10th, Paul Allen held a seminar in NYC where he outlined plans for MS-DOS 2.0.

mixmastamyk
4 replies
17h12m

Yep, that must have been it. Porting it to BSD might have been more work and opened them up for a lawsuit.

How was Solaris an option then?

lizknope
3 replies
16h24m

Good question but I don't know.

Apple did not create A/UX. It was created by UniSoft

https://en.wikipedia.org/wiki/UniSoft

https://virtuallyfun.com/2021/09/19/so-what-is-the-deal-with...

I'm guessing that Apple would have to pay a fee to both UniSoft and AT&T and maybe Sun was much larger and a partner with AT&T / Unix System Labs on SysV so maybe they could resell Solaris for cheaper.

Sun open sourced Solaris in 2008 for 2 years. Maybe AT&T / USL didn't care by then or maybe Sun's licensing agreement allowed them to do it. I don't know.

icedchai
1 replies
15h50m

A/UX was also ridiculously insecure in its default configuration. It had a "guest" user, no password. I remember labs of Mac II or IIx systems at a local university, with public IP addresses, guest accounts, no passwords. A local BBS guy posted a message on how to get "free" internet access through the university's dialups. I think it was at least a month before things got locked down.

mixmastamyk
0 replies
15h43m

It was the time; only trustworthy people were on the net so far. I told the story here about my mid-nineties job where I ran Quake servers on the LAN before we had a firewall.

mixmastamyk
0 replies
15h49m

Interesting what-ifs. Perhaps offering the money they spent on Copland to AT&T could have bulk-licensed it.

AshamedCaptain
4 replies
1d

Why didn't they push this instead of the very limited MS-DOS?

Hardware requirements.

For a (quite long) while the plan was to keep DOS for microcomputers and Xenix for the serious stuff. DOS gained some extensions that made it very Xenix/UNIX-like (like subdirectories, pipes, "-" as command line switch char, device files in "\dev" instead of a global namespace, etc.).

hjdiiokjs
1 replies
14h30m

While MS-DOS had subdirectories, pipes were not supported at the kernel level. It was simply a convenience of COMMAND.COM that simulated pipes by running each command sequentially. The command switch character was the slash (/) until the end, with the exception of a few non-standard third-party programs insisting on the dash. Device names were always in a global namespace, and \dev never existed in any form.

Did you copy-paste a hallucinating LLM?

lproven
0 replies
6h7m

No, it looks fair.

MS-DOS 1.x did not have subdirectories.

That came in with MS-DOS 2.x which is when the very rudimentary level of Xenix compatibility came in: device names such as `LPT1:` or `COM2:` could be prefixed as `/dev/lpt1` or `/dev/com2`. Pipes simulated in COMMAND.COM.

I don't recall DOS ever accepting command switches prefixed with `-` instead of `/` as standard, though.

progmetaldev
0 replies
20h55m

I think for the time period, hobbyists can't be written off either. My dad got into Commodore and Atari in the late 70's/early 80's, and sparked my interest in software. Computing already wasn't cheap, but if the home consumer had needed to pay for extra licensing fees, I could easily see adoption being far more limited to business. I think a lot of developers, myself included, got their start as hobbyists and never would have caught the "itch" to develop if not for this early exposure at home.

adamomada
0 replies
14h9m

Hmm I remember switches always had to begin with “/“ in DOS

rjsw
2 replies
1d2h

A/UX runs on 68k NuBus machines, not PowerPC PCI ones.

mixmastamyk
1 replies
1d1h

Unix itself does—seems like a short porting effort compared to the other choices. They had emulation available as well.

lukeh
0 replies
18h29m

A/UX 4.0 I believe was going to be based on OSF/1. The MkLinux port was also useful in the Mach 3.0 migration (as were the open source BSDs which refreshed the 4.4BSD code Rhapsody was based on, from NEXTSTEP 4.0 which only ever shipped as an alpha).

lproven
0 replies
6h9m

Why was A/UX not in the running?

I asked this myself.

The thing that I didn't think about was that A/UX combined 680x0 UNIX™ code with 680x0 MacOS code in very clever ways. MacOS provided the windowing, terminal emulators, the filesystem browser, etc. Unix provided the underlying kernel, the filesystem, networking, etc.

(This is a hand-wavey simplification for illustrative purposes.)

The point being, intimately intertwined, closely integrated.

But classic MacOS only ran on 680x0 and to port it to PowerPC, Apple invented a clever hack built around partial emulation: a nanokernel containing a 680x0-to-PowerPC interpreter (with a way for 680x0 code to call PowerPC code), on top of which ran the MacOS Toolbox ROM image, on top of which ran most of MacOS in 680x0 code. Then they profiled that, and converted the most performance-critical parts -- and only those -- to PowerPC code.

When I ran a PowerMac with classic MacOS, I had a little indicator applet in the menu bar. It was red when executing 680x0 code and turned green for PowerPC code. (Not very helpful for colour blind folks, but hey, fine for me.)

It was red almost all the time. Only years later when substantial PowerPC-native apps started to appear did it sometimes stay green long enough to see.

It wouldn't have been so hard to port the Unix bits of A/UX to PowerPC-native, but that would have ruined the integration between the Unix code and the MacOS code. Getting MacOS all running natively would have been a massive task. Apple never did it; it replaced the entire OS with NeXTstep.

Getting 680x0 MacOS code integrating with PowerPC A/UX code would have been a major technical challenge.

It sounded easy and obvious but it was an even bigger project than Copland, and Apple wisely decided against it.

anyfoo
4 replies
23h45m

License Windows NT from Microsoft and bolt a Mac-like interface onto it.

Gosh, am I glad that that didn't happen. I wonder how serious an option that really was; it seems wildly out of place compared to the other propositions, which are all reasonable.

cnasc
1 replies
22h17m

IIRC from reading Showstopper, NT was intended to afford that sort of different frontend swapping. It just happened that Windows became the primary focus in the end.

stg24
0 replies
21h32m

It might have been necessary to stick with the NT 3.51 codebase. I think they broke the clean separation of the GUI from the rest of the OS in 4.0, which was launched that year, for performance reasons.

lemoncucumber
0 replies
18h20m

Google for Mac OS X Server 1.0 screenshots... the UI was a bizarre mashup of classic Mac OS and NeXTSTEP; you can see they were partway through bolting a Mac-like interface on.

When Mac OS X was eventually released it looked utterly different.

GeekyBear
3 replies
1d2h

Next's APIs were also ported to run on top of Windows NT and Sun's Solaris.

In addition, the entire NextStep OS ran bare metal on several CPU architectures.

originally ran only on NeXT's Motorola 68k-based workstations and that was then ported to run on 32-bit Intel x86-based "IBM-compatible" personal computers, PA-RISC-based workstations from Hewlett-Packard, and SPARC-based workstations from Sun Microsystems.

https://www.wikipedia.org/wiki/OpenStep

SoftTalker
1 replies
1d2h

Yep, I used to run Apple's (maybe it was still NeXT's at the time?) Mail app on Windows NT. I can't recall where I got it; maybe it was part of some goodies included with WebObjects?

dkadams
0 replies
1d

Yes. WebObjects development was possible on NT due to “YellowBox”. It shipped with ProjectBuilder, EOModeler, Mail, and other NeXT apps.

You could even do Objective-C monkeypatching shenanigans on YellowBox, which I remember being needed at the time to get the scroll wheels on certain mice to work with YellowBox apps.

http://www.roughlydrafted.com/RD/RDM.Tech.Q1.07/4B800F78-0F7...

pjmlp
0 replies
22h40m

It was those ports to Sun's Solaris that influenced Java's design, and later on Java EE as well.

tarsinge
2 replies
6h53m

I have read these kinds of stories many times, but I'm starting to realize how much the relative engineering success of MacOS 8 and 9 is overlooked in the now "classic" storytelling centered around Jobs and his comeback. Mac OS X was not really usable until 10.1 in 2001, yet the iMac, iBook and PowerBook G3 were successes, so the OS obviously played a role; but reading these stories, it's kind of absent.

My memories are surely tainted by nostalgia, because from 1997 to 2001 I was a teen and avid Mac user, but from what I remember, between the mid-90s Performa on 7.5 and the late-90s iMac on 9 it felt like a lot happened. While not the future, as an end user 8.6/9 felt relatively modern, especially compared to Windows 95/98. And even in the mid-90s, even if from a business POV the future was bleak for Apple, from an end-user one 7.5 was not really worse than 95.

aryonoco
1 replies
5h26m

I'd argue that OS X (to use the nomenclature of the time) 10.0 was alpha quality software, and even 10.1 was at best what we would now consider beta software. OS X was not usable until 10.2 (and that's when much major software, such as Photoshop and even Apple's own Final Cut Pro, received its first native OS X port), but even 10.2 didn't reach feature parity with classic MacOS. It wasn't until 10.3 Tiger that you could safely recommend to friends and family to upgrade to OS X.

TazeTSchnitzel
0 replies
35m

Tiger was 10.4

smm11
1 replies
1d2h

I still think Solaris/Mac stood a chance; the hardware would have been miles better than anything else.

vt240
0 replies
21h42m

The best 68k Mac I ever had was MAE in Solaris 9 on my SunBlade 2500. Fully patched up Solaris 9 had all the filesystem and disk i/o improvements, and none of the libc changes that happened in 10 which broke all backwards compatibility.

fny
21 replies
1d5h

Very, very cool, and I can't wait to relive giving up all my DOS games!

TickleSteve
20 replies
1d4h

This isn't an x86 emulator that can run a DOS box; this is NT running natively on PPC.

So... no DOS games I'm afraid.

nullindividual
15 replies
1d4h

Macintosh PowerPC.

NT4 was already compatible with PowerPC systems. The support ended with NT4 Service Pack 2; no further SPs had PPC support.

FMecha
9 replies
1d4h

Because desktop PowerPC was unpopular for anything other than the Mac, and PPC NT wasn't designed for Open Firmware. Though you can blame Microsoft's monopoly tactics, in which loyalty to x86 was the prime directive; alternative OSes on PPC Macs were not a widely accepted idea either - people stuck with classic Mac OS, in (ultimately) vain hopes that Copland/Taligent would deliver.

nullindividual
8 replies
1d3h

Where are you getting this information?

PReP/CHRP was a venture by Apple, IBM, and Motorola (but primarily IBM) to allow companies to build OSes for a common platform. Apple didn't want to participate in the end. But like the other ports of NT, to MIPS and Alpha (Alpha enjoyed much more support, until Compaq dropped it), the popularity was well beneath that of the dirt-cheap-in-comparison x86 platform.

https://en.wikipedia.org/wiki/PowerPC_Reference_Platform

https://en.wikipedia.org/wiki/Common_Hardware_Reference_Plat...

This had nothing to do with Microsoft's monopoly. And nothing to do with exclusively 'desktop' PPC -- remember, NT4 Server also had PPC support through SP2. IBM's intention was for servers, not desktops (though they did produce compatible laptops/desktops), to support operating systems such as AIX and S[l]o[w|l]aris.

AIX isn't exactly a desktop OS.

yjftsjthsd-h
5 replies
1d3h

AIX isn't exactly a desktop OS.

If so, only artificially; AIX was born on a workstation and seems to still support running a graphical desktop environment.

lizknope
2 replies
1d1h

AIX was the first Unix I ever used in 1991 on an IBM RT with the ROMP CPU

https://en.wikipedia.org/wiki/IBM_RT_PC

I wanted one on my desktop so bad. DOS/Win3.1 PCs were just so bad to me. Then I saw SunOS and really just wanted any Unix system on my desk. I bought a PC in 1994 just to install Linux.

lproven
0 replies
6h0m

AIX was the first Unix I ever used in 1991 on an IBM RT with the ROMP CPU

Nice. Me too. But it was about 1989. The first machine I ever compiled a C program on; it took me ages to find `a.out` and work out that that was my binary. I was more used to DOS compilers that turned $SOURCENAME.$ext into $sourcename.EXE.

icedchai
0 replies
23h59m

I had one of these back in the early 90's, bought used, of course. It was running AIX 2.x. I moved on to Linux pretty quick.

nullindividual
1 replies
1d3h

Just like NT4 Server :-)

Intended use vs. potential use.

And yeah, NT4 Server was just NT4 Workstation with some registry fiddly bits flipped.

yjftsjthsd-h
0 replies
1d2h

Right; NT was and is likewise quite capable of being a desktop or server OS. The difference is that MS actually pushed that, and NT 10.0 is still actively used in both contexts.

banish-m4
0 replies
18h0m

An interesting footnote is that installing the web browser package on AIX caused a telemetry dial-home to Big Blue. I found this because it was keeping our ISDN BRI leased line up constantly and costing us money. Blocking that address solved it.

FMecha
0 replies
1d1h

Where are you getting this information?

From personal perception every time I read articles on Taligent, Copland, and Workplace OS (which did result in a demo PowerPC port of OS/2).

This had nothing to do with Microsoft's monopoly.

Admittedly I had that comment in mind to pre-empt any potential narrativeposting.

duffyjp
4 replies
1d3h

Someone gave me an NT4 CD when I was a kid and on the jewel case it said it supported x86, PPC, I think some other architectures too. I was disappointed I could never get our PowerMac to boot from it. :P

nullindividual
3 replies
1d3h

Alpha and MIPS are the other two architectures.

Sad trombone noises for Alpha

duffyjp
2 replies
1d

I had a job in 2006 where the sysadmin ran the entire shop on a single OpenVMS server with two 500Mhz Alpha CPUs. It was our file server, AD server (somehow), DB server, web server, everything.

I just looked it up, and that CPU came out in 1996, at the same time Intel released their 200MHz Pentium. Alpha was way ahead of its time.

hypercube33
0 replies
10h34m

Somewhere there is a blog about how that CPU had some hardware bug that prevented it from hitting 1GHz. If it didn't have that, it'd have been a 1GHz 64-bit processor in a desktop early on. I really cannot find the blog. It was down a rabbit hole of how Intel and AMD both picked some IBM mainframe to base their designs off, but AMD went for an older, quirky design that allowed a lot of advantages. Anyway, search engines are awful now.

cobbaut
0 replies
12h8m

Yes, and 64-bit!

xbar
0 replies
1d4h

Read it again.

rezmason
0 replies
22h27m

I wonder what software is available for NT4 on a Power Macintosh.

my123
0 replies
1d4h

NT on PPC had NTVDM afaik (with the licensed SoftPC x86 emulator)

Narishma
0 replies
1d4h

You misread the comment. They don't want to run DOS games, they want to relive the inability to run them when switching to Windows NT.

lawnchair
15 replies
1d3h

Windows NT was fascinating. If you want to read a great book about it, check out Showstopper.

dboreham
13 replies
1d3h

Was?

badgersnake
11 replies
1d2h

Dunno why this is getting downvoted. Modern Windows is still very much a Windows NT derivative.

redox99
10 replies
1d2h

Is it a derivative? I'd say it's the latest version of Windows NT.

badgersnake
7 replies
1d2h

It’s the same codebase that was my point. Whether it’s still called Windows NT or whether that name died in the 90s can probably be argued either way.

sedatk
2 replies
1d

NT is still the name of the kernel. (ntoskrnl.exe)

mycall
1 replies
23h36m

I wonder if that is still true for Azure OS.

lizknope
1 replies
1d1h

I think XP is when the DOS-based Win95/98/ME line was merged with NT

mewse-hn
0 replies
1d

The Win95 line wasn't really merged, more discontinued. XP had improved DOS/legacy compatibility vs win2000 but not much.

nullindividual
0 replies
1d1h

The marketing and front-facing name clearly died. The internal name did not.

marshray
0 replies
18h32m

Much of that pre-2000 code remains. But so much more has been added in the last 1/4 century that "the same codebase" is somewhat a matter of perspective.

pndy
1 replies
1d1h

That's still the same NT line, tho they changed how versions are named after Windows 8.1, which was NT 6.3. The initial Windows 10 release was bumped to NT 10.0, but further versions use year-month and year-half formats

[1] - https://en.wikipedia.org/wiki/List_of_Microsoft_Windows_vers...

Kwpolska
0 replies
9h7m

The kernel version number stopped having any meaning in 2000.

Hnrobert42
0 replies
20h36m

Often people conjugate based on their perspective. As in, it was a great book at the time they read it.

eigenvalue
0 replies
1d3h

That was a great book, up there with Masters of Doom and The Soul of a New Machine.

pwenzel
10 replies
1d2h

Man, I loved Windows NT back in the day. It was light enough that I could run it on fairly low end late-90s hardware, and it was substantially more stable than Windows 95.

pjmlp
9 replies
1d

Had Microsoft been serious with the POSIX subsystem, I most likely would never have bothered with Linux for our UNIX assignments.

Instead I dual booted between both of them.

hylaride
3 replies
23h41m

It's a shame. NT did support POSIX, but in practice it was designed just to make it easier to port UNIX apps. At a co-op job I had back in the day (~2000), I had to fight with Veritas' backup software on NT that wrote to tapes. I don't recall what it was (a device mapping to the drive, maybe?), but you could see the UNIX foundations of the software.

pjmlp
2 replies
23h1m

Ironically they had multiple iterations of it - the POSIX subsystem in NT, MKS Toolkit, Interix, Windows Services for UNIX - for various levels of "POSIX support", dropped everything with Windows Vista, only to create WSL out of the ashes of Project Astoria.

And before they got their golden goose with the MS-DOS/IBM deal, they were in the UNIX business with Xenix.

A few reasons why Linux would probably never have taken off, had Microsoft stayed closer to their UNIX-related projects.

temac
1 replies
21h26m

There was actually "SUA" after "SFU", only removed in Windows 8.1 and Windows Server 2012 R2.

banish-m4
0 replies
17h54m

It was mostly for running NFS to talk to other UNIX boxes, not to run UNIX software.

To make a Windows NT box UNIX-ish, there was MKS Toolkit. It predates Cygwin, msys, and djgpp.

EvanAnderson
3 replies
23h4m

Had Microsoft been serious with the POSIX subsystem, I most likely would never have bothered with Linux for our UNIX assignments.

Arguably they did have a serious POSIX subsystem w/ the Interix (nee OpenNT) acquisition but they didn't do anything with it. I remember building GNU software under Interix and having a blast with it.

I still wish there was a distribution of NT that booted text-mode and had an Interix-based userland. That would be tons of fun.

pjmlp
2 replies
22h50m

Nowadays that would be Windows Server Core with WSL, which unfortunately isn't yet supported.

EvanAnderson
1 replies
22h16m

Interix was so much more orthogonal to the design of NT-- so much more elegant. WSLv1 was more elegant than WSLv2. WSLv2 feels like a minor step up from just running a Linux VM.

pjmlp
0 replies
22h9m

Agreed, that is what happens when Linux syscalls are more relevant than regular UNIX.

See the usual complaints about macOS not being Linux, from people who probably should be supporting Linux OEMs instead of buying Apple laptops.

progmetaldev
0 replies
20h46m

Absolutely, I had a university instructor that taught us both Windows NT and (I believe) Red Hat Linux in 1998, for running web servers. Linux made much more sense when thinking about a multi-user system running various services. I ended up dual booting Windows and Slackware to learn Linux, while also being able to use the software I was already familiar with on Windows. If Microsoft had gone deeply into POSIX for their consumer systems, I don't believe I would have switched over to Linux except for quality of life improvements that may have come about (which may have never happened).

UniverseHacker
8 replies
1d1h

Can someone explain the context here? Surely NT is closed source and was never developed for Macs, so what actually is this? And what are the chances you can get software for it? Presumably most NT software was compiled for Intel only and closed source.

nullindividual
4 replies
1d1h

NT4 was developed for PowerPC architectures (and MIPS, Alpha). A large portion of the NT4 source code was leaked [0]. I'm not saying the author used the NT4 source code in any fashion, but I would imagine using such source would make life much easier to accomplish this task.

Or the author clean-room reverse-engineered the bootloader. Or there is enough information out there to forgo the need for any internal knowledge of Windows code.

The PPC code base was never explicitly targeted at Macs, but at other systems from IBM/Motorola; because it is a 'common' platform, though, the binaries themselves on the NT4 ISO do not need modification.

And yes, you can find the source code on GitHub. In multiple repos!

[0] https://www.neowin.net/news/exclusive-windows-2000--windows-...

temac
1 replies
1d

NT has a dynamic HAL layer to adapt it to platforms. I'm not sure if it was publicly documented, but in any case, yes, the source code leak doesn't hurt that kind of port, even if you "just" have to write a HAL.

sumtechguy
0 replies
1d

The current PE exe format supports it, as well as a few Alpha, ARM, and MIPS formats, Itanium, and of course x86. The big part would be finding a compiler that spits out the right code. The preceding NE format also did a few variations.
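
If you're curious how one executable format covers all those machines: every PE binary's COFF header carries a 16-bit machine field right after the "PE\0\0" signature. A minimal C sketch that prints it, with constants from the public PE/COFF spec and a little-endian host assumed for brevity:

    #include <stdint.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s file.exe\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror(argv[1]); return 1; }

        uint8_t mz[2] = {0}, sig[4] = {0};
        uint32_t e_lfanew = 0;
        uint16_t machine = 0;

        /* DOS stub starts with "MZ"; offset 0x3C holds the PE header offset. */
        if (fread(mz, 1, 2, f) != 2 || mz[0] != 'M' || mz[1] != 'Z' ||
            fseek(f, 0x3C, SEEK_SET) != 0 || fread(&e_lfanew, 4, 1, f) != 1 ||
            fseek(f, (long)e_lfanew, SEEK_SET) != 0 ||
            fread(sig, 1, 4, f) != 4 || sig[0] != 'P' || sig[1] != 'E' ||
            fread(&machine, 2, 1, f) != 1) {
            fprintf(stderr, "not a PE file?\n");
            fclose(f);
            return 1;
        }
        fclose(f);

        switch (machine) {  /* values from the PE/COFF specification */
        case 0x014C: puts("i386");       break;
        case 0x0166: puts("MIPS R4000"); break;
        case 0x0184: puts("Alpha AXP");  break;
        case 0x01F0: puts("PowerPC");    break;
        default:     printf("machine 0x%04X\n", machine);
        }
        return 0;
    }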

hylaride
0 replies
23h44m

Yeah, I think the only commercial OEM releases of NT that got sold were x86 and Alpha; though there was support for a handful of specific MIPS and PPC machines, I don't think there were any OEM sales of them with it. I do know that only x86 and Alpha had all the service packs released. There were also at one time internal MS builds on SPARC, but that didn't last long. Alpha did have some pre-release builds of Windows 2000 that went out, but there was no final release.

A fun fact: because where he worked at the time was a DEC shop, the creator of PuTTY originally created it because he got an Alpha Windows machine and there was no compatible terminal emulator for Alpha NT. He supported Alpha builds well into the 2000s and IIRC only dropped them when the machine he was still using for builds finally died.

aeyes
0 replies
1d

Isn't this just drivers and a HAL? Microsoft built the OS in such a way that such ports would be possible, so there should be some API documentation out there. The question is how complete it is, because everybody just used the MS-provided HAL.

fredoralive
1 replies
1d

NT was designed to be portable, so the machine specific parts are cleanly separated, even on the same CPU. So if you port only those parts, the existing binaries for an architecture should work. The three bits are:

1) ARC boot firmware. NT was developed on non-x86 systems like the i860 and then MIPS, and ARC is the native boot firmware used (on x86, NTLDR emulated it prior to Vista). It's similar to a PC BIOS / UEFI, and a PCI PowerMac would use OpenFirmware. As well as providing an ARC-compatible environment that loads over OpenFirmware, this project seems to do some fun tricks so the boot firmware can pretend there's a storage device, so that driver "floppies" can be loaded during the initial stages of setup (i.e. an F6 disc).

2) A HAL. The main NTOSKRNL is hardware agnostic, so the idea is that there's one binary for each CPU architecture. But the kernel needs to interface with actual timers, busses etc., so that interface code is in HAL.DLL and the appropriate one is copied by setup. (For example, see https://www.geoffchappell.com/studies/windows/km/hal/history... for a list of x86-32 ones in older versions of Windows, with various HALs for NEC PC-98, IBM MCA, and various early multiprocessing systems; nowadays there's just one AMD64 one that's mostly in the kernel itself.) So the main kernel is unaltered, and halgoss handles the Mac-specific stuff.

3) Device drivers. Once NT is up and running it does need actual drivers.

The specs for 1) are known, so it can presumably be emulated, and I guess there's a DDK for 3) (or it can be deduced from another DDK). I guess 2) is probably the one that would need the most insider knowledge; I'm not sure if the leaked NT sources go down to that level, as I've never looked at them.

As for compatibility: 32-bit PowerPC Win32 binaries, 16-bit x86 Win16 binaries, and whatever x86 DOS stuff will run in an NT4 DOS box. No x86 Win32; only Alpha had an emulator for Win32 x86 stuff (until the ARM stuff).
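
If it helps to picture 2), here's a toy model in C. Purely illustrative: NT's real HAL interface is far larger and this isn't it. The architecture-agnostic "kernel" touches timers and interrupts only through a per-platform table of function pointers, so bringing up a new machine (say, a PCI Power Mac, the role halgoss presumably plays here) means supplying a new table rather than a new kernel:

    #include <stdio.h>

    /* Per-platform interface table: the "HAL" of this toy model. */
    struct hal_ops {
        const char *name;
        unsigned (*read_timer)(void);        /* platform timer source */
        void     (*mask_interrupts)(int on); /* platform interrupt control */
    };

    /* One hypothetical platform implementation. */
    static unsigned mac_read_timer(void) { static unsigned t; return ++t; }
    static void mac_mask_interrupts(int on)
    {
        printf("[mac hal] interrupts %s\n", on ? "masked" : "unmasked");
    }

    static const struct hal_ops mac_hal = {
        "powermac", mac_read_timer, mac_mask_interrupts
    };

    /* "Kernel" code: no hardware details in here, only hal_ops calls. */
    static void kernel_tick(const struct hal_ops *hal)
    {
        hal->mask_interrupts(1);
        printf("kernel: tick %u via %s HAL\n", hal->read_timer(), hal->name);
        hal->mask_interrupts(0);
    }

    int main(void)
    {
        kernel_tick(&mac_hal);
        return 0;
    }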

mycall
0 replies
23h33m

I remember migrating whole Windows installations using image copies; taking a fresh install's HAL files and copying them onto the images and voilà, system transfer completed (device drivers would need reinstalling, though).

xattt
5 replies
1d2h

This is one of those things that I’d have nerdy arguments about with my nerdy friends in middle school.

technothrasher
4 replies
1d2h

Dude, shut up, the Amiga is totally better. Cooperative multitasking is lame. Have you seen HAM mode??

xattt
3 replies
1d

No, Amiga sucks, it can’t even run UNIX! Do you even know what it is? My dad uses it for work.

Locutus_
0 replies
20h13m

Hah! We had ixemul.library and the GeekGadgets distro back in the day! A full bizarre GNU+BSD userland on top of AmigaOS!

ngneer
2 replies
1d4h

Any screenshot?

Connector2542
2 replies
1d3h

how far away is it from being a (even as a joke) daily driver?

Palomides
1 replies
1d2h

considering how there's basically zero software for PPC Windows even if it were running smoothly, very far

roytam87
0 replies
1d2h

but at least there is VC4 RISC Edition available for PowerPC, so you can compile something on it.

walterburns
1 replies
1d2h

If I had a dime for every person that's asked for this.

actionfromafar
0 replies
1d2h

You'd have a dime?

the_panopticon
1 replies
1d3h

Fascinating work. The ARC standard https://en.wikipedia.org/wiki/ARC_(specification) was used to boot the DEC Alpha Windows machines, along with MIPS, etc. Anyone know of other open source variants of that in the wild? At Intel in 1998, the original EFI spec was modeled on and inspired by ARC. The Intel Boot Initiative (IBI) in fact looked mostly like ARC. EFI (now UEFI) is sort of ARC + installable GUID-based interfaces (aka protocols) a la MS COM https://en.wikipedia.org/wiki/Component_Object_Model. Page 8 of https://www.intel.com/content/dam/www/public/us/en/documents... recounts some of those travails.

whalesalad
0 replies
22h2m

Finally, I can run a domain controller off an old G3.

nxobject
0 replies
21h17m

That's absurd. That's it, that's the comment. Hack of the Year award, stat!

gcp123
0 replies
1d4h

Wow, just when I was looking for a reason to fire up my old 1998 bondi blue iMac G3, this pops up. What a weird, wild, and specific project!

dmitrygr
0 replies
23h27m

Writing a new NT HAL is a very impressive achievement. A tip of my hat to you, sir!

Docs are spotty at best, and I am sure many bugs aren't known, as existing HALs simply got lucky enough not to expose them.

Lammy
0 replies
22h24m

Neat. I am totally going to try this on one of my spare Blue&White G3s.