
Windows NT vs. Unix: A design comparison

runjake
159 replies
22h14m

The NT kernel is pretty nifty, albeit an aging design.

My issue with Windows as an OS is that there's so much cruft, often carried over from Microsoft's older OSes, stacked on top of the NT kernel, effectively circumventing its design.

You frequently see examples of this in vulnerability write-ups: "NT has mechanisms in place to secure $thing, but unfortunately, this upper level component effectively bypasses those protections".

I suspect Microsoft would like to, if they considered it "possible", but they really need to move away from the Win32 and MS-DOS paradigms and rethink a more native OS design based solely on NT and its evolving principles.

robotnikman
66 replies
20h30m

Backwards compatibility, though, is one of the major features of Windows as an OS. The fact that a company can still load software made 20 years ago by a company that is no longer in business is pretty cool (and I've worked at such places, using ancient software on some Windows box; sometimes there's no time or money for alternatives).

sedatk
22 replies
20h18m

And it's 30+ years at this point (NT was released in 1993). Backwards compatibility is certainly one of the greatest business values Microsoft provides to its customers.

MarkSweep
9 replies
18h53m

If you include the ability of 32-bit versions of Windows to run 16-bit Windows and DOS applications with NTVDM, it is more like 40+ years.

https://en.wikipedia.org/wiki/Virtual_DOS_machine

(Math on the 40 years: Windows 1.0 was released in 1985, and the last consumer version of Windows 10 (which is the last Windows NT version to support 32-bit installs and thus NTVDM) goes out of support in 2025. DOS was first released in 1981, more than 40 years ago. I don’t know when it was released, but I’ve used a pretty old 16-bit DOS app on Windows 10: a C compiler for the Intel 80186.)

sedatk
5 replies
17h32m

True. I just assumed that 16-bit support got dropped since Windows 11 was 64-bit only.

EvanAnderson
4 replies
13h31m

Microsoft decided not to type "make" for NTVDM on 64-bit versions of Windows (I would argue arbitrarily). It has been unofficially built for 64-bit versions of Windows as a proof-of-concept: https://github.com/leecher1337/ntvdmx64

badgersnake
2 replies
11h52m

That’s okay, and if people want to test their specific use case on that and use it then great.

It’s a pretty different amount of effort from Microsoft having to run a full 16-bit regression suite, make everything work, and then support it for the fewer and fewer customers using it. And you can run 32-bit Windows in a VM pretty easily if you really want to.

Timwi
1 replies
10h17m

Or you can run 16-bit Windows 3.1 in DOSBox.

badgersnake
0 replies
8h21m

Sure, but again that’s on you to test and support.

skissane
0 replies
6h19m

> Microsoft decided not to type "make" for NTVDM on 64-bit versions of Windows (I would argue arbitrarily).

I recently discovered that Windows 6.2 (more commonly known as Windows 8) added an export to Kernel32.dll called NtVdm64CreateProcessInternalW.

https://www.geoffchappell.com/studies/windows/win32/kernel32...

Not sure exactly what it does (other than obviously being some variation on process creation), but the existence of a function whose name starts with NtVdm64 suggests to me that maybe Microsoft actually did have some plan to offer a 64-bit NTVDM, but only abandoned it after they’d already implemented this function.

winter_blue
2 replies
15h50m

> I’ve used a pretty old 16-bit DOS app on Windows 10: a C compiler for the Intel 80186

It’s amazing that stuff still runs on Windows 10. I’m guessing Windows 10 has a VM layer both for 32-bit and 16-bit Windows + DOS apps?

JonathonW
0 replies
14h6m

Windows 10 only does 16-bit DOS and Windows apps on the 32-bit version of Windows 10, so it only has a VM layer for those 16-bit apps. (On x86, NTVDM uses the processor's virtual 8086 mode to do its thing; that doesn't exist in 64-bit mode on x86-64 and MS didn't build an emulator for x86-64 like they did for some other architectures back in the NT on Alpha/PowerPC era, so no DOS or 16-bit Windows apps on 64-bit Windows at all.)

ozim
6 replies
19h47m

People tend to forget that it already is 2024.

xattt
3 replies
18h46m

Short of driver troubles at the jump from Win 9x to 2k/XP, and the shedding of Win16 compatibility layers at the time of release of Win XP x64, backwards compatibility had always been baked into Windows. I don’t know if there was any loss of compatibility during the MS-DOS days either.

It’s just expected at this point.

anthk
2 replies
11h52m

On DOS, if you borrow ReactOS' NTVDM under XP/2003 and maybe Vista/7 under 32-bit (IDK about 64-bit binaries), you can run DOS games in a much better way than with Windows' counterpart.

anthk
0 replies
7h14m

I think they recently improved NTVDM a lot.

ExoticPearTree
1 replies
11h1m

Not long ago, a link was posted here to a job advert from the German railway looking for a Win 3.11 specialist.

As I see it, the problem is the laziness/cheapness of companies when it comes to upgrades, and vendors' reluctance to get rid of dead stuff for fear of losing business.

APIs could be deprecated/updated at set intervals, like Current -2/-3 versions back and be done with it.

wongarsu
0 replies
4h57m

Lots of hardware is used for multiple decades, but has software that is built once and doesn't get continuous updates.

That isn't necessarily laziness, it's a mindset thing. Traditional hardware companies are used to a mindset where they design something once, make and sell it for a decade, and the customer will replace it after 20 years of use. They have customer support for those 30 years, but software is treated as part of that design process.

That makes a modern OS that can support the APIs of 30-year-old software (so 40-year-old APIs) valuable to businesses. If you only want to support 3 versions that's valid, but you will lose those customers to a competitor who has better backwards compatibility.

dartharva
4 replies
13h42m

But only to a degree, right? Ideally the OS only needs to support the last two decades of software; beyond that you can just use emulators.

da_chicken
1 replies
10h35m

Pretty much the only 16-bit software that people commonly encounter is an old setup program.

For a very long time those were all 16-bit because they didn't need the address space and they were typically smaller when compiled. This means that a lot of 32-bit software from the late 90s that would otherwise work fine is locked inside a 16-bit InstallShield box.

aleph_minus_one
0 replies
7h0m

> Pretty much the only 16-bit software that people commonly encounter is an old setup program.

I know quite a lot of people who are still quite fond of some old 16-bit Windows games which - for this "bitness reason" - don't work on modern 64 bit versions of Windows anymore. People who grew up with these Windows versions are quite nostalgic about applications/games from "their" time, and still use/play them (similar to how C64/Amiga/Atari fans are about "their" system).

badsectoracula
0 replies
12h33m

Software is written against APIs, not years. The problem with this sort of thinking is that software written, say, 10 years ago might still be using APIs from more than 20 years ago. If you decide to break/remove the more-than-20-year-old APIs, you not only break the more-than-20-year-old software but also the 10-year-old software that used those APIs, as well as any other software, older or newer, that did the same.

(Also, I'm using "API" for convenience here; replace it with anything that can affect backwards compatibility.)

EDIT: a simple example in practice: WinExec was deprecated when Windows switched from 16-bit to 32-bit several decades ago, yet programs still use it to this day.
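The arrangement that keeps a deprecated entry point like WinExec alive can be sketched as a thin shim over its replacement. This is only an illustration of the forwarding idea; all names, structs, and signatures below are hypothetical, not the real Win32 ones.

```c
#include <string.h>

/* Hypothetical versions of a "legacy" and a "modern" process-launch API,
   to show how the old call survives as a wrapper over the new one. */
typedef struct { char cmd[64]; int show; } launch_args;

/* "Modern" call with the extended signature. */
int create_process_ex(const char *cmd, int show, launch_args *out) {
    if (!cmd || strlen(cmd) >= sizeof out->cmd)
        return 0;                   /* failure */
    strcpy(out->cmd, cmd);
    out->show = show;
    return 1;                       /* success */
}

/* "Legacy" call kept alive for old binaries: same contract as before,
   defaults filled in, forwarded to the modern API. */
int win_exec_compat(const char *cmd, launch_args *out) {
    return create_process_ex(cmd, /*show=*/1, out);
}
```

Because the old symbol keeps exporting the old contract, binaries that were never recompiled keep working, which is exactly why such APIs can never really be removed.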

NegativeLatency
0 replies
13h31m

Maybe, but your app could also be an interface to some super expensive scientific/industrial equipment that does weird IO or something.

flohofwoe
15 replies
11h40m

If you look at more recent Windows APIs, I'm really thankful that the traditional Win32 APIs still work. On average the older APIs are much nicer to work with.

bboygravity
11 replies
10h8m

Nicer to work with?

I can't think of any worse API in the entire world?

flohofwoe
5 replies
9h32m

There are some higher level COM APIs which are not exactly great, but the core Win32 DLL APIs (kernel32, user32, gdi32) are quite good, also the DirectX APIs after ca 2002 (e.g. since D3D9) - because even though the DirectX APIs are built on top of COM, they are designed in a somewhat sane way (similar to how there are 'sane' and 'messy' C++ APIs).

Especially UWP and its successors (I think it's called WinRT now?) are objectively terrible.

frabert
3 replies
6h44m

I think that particular pattern is a perfectly reasonable way to let the user ingest an arbitrarily long list of objects without having to do any preallocations -- or indeed, any allocations at all.

mananaysiempre
1 replies
6h22m

Yes, but the inversion of control is unpleasant to deal with—compare Find{First,Next}File which don’t require that.

wongarsu
0 replies
5h9m

Which is a pattern that also exists in the Win32 API, for example in the handle = CreateToolhelp32Snapshot(), Thread32First(handle, out), while Thread32Next(handle, out) API for iterating over a process's threads.

I also find EnumChildWindows pretty wacky. It's not too bad to use, but it's a weird pattern and a pattern that Windows has also moved away from since XP.

https://learn.microsoft.com/en-us/windows/win32/toolhelp/tra...
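The two calling styles being contrasted here (callback enumeration à la EnumChildWindows vs. a First/Next pair à la FindFirstFile or Thread32First) can be sketched in portable C. The names below are illustrative stand-ins, not real Win32 signatures:

```c
/* Style 1: callback-based - the API drives the loop and calls you back
   for each item (inversion of control). */
typedef int (*enum_cb)(int item, void *ctx);

static void enum_items(const int *items, int n, enum_cb cb, void *ctx) {
    for (int i = 0; i < n; i++)
        if (!cb(items[i], ctx))   /* returning 0 stops the enumeration */
            return;
}

static int sum_cb(int item, void *ctx) {
    *(int *)ctx += item;          /* accumulate into caller-provided state */
    return 1;
}

/* Style 2: First/Next pair - the caller drives the loop; no callback or
   context smuggling needed. */
typedef struct { const int *items; int n; int pos; } item_iter;

static int item_first(item_iter *it, const int *items, int n, int *out) {
    it->items = items; it->n = n; it->pos = 0;
    if (it->pos >= it->n) return 0;
    *out = it->items[it->pos++];
    return 1;
}

static int item_next(item_iter *it, int *out) {
    if (it->pos >= it->n) return 0;
    *out = it->items[it->pos++];
    return 1;
}

/* Both styles visit the same items; returns the sum, or -1 on mismatch. */
int sum_both_ways(const int *items, int n) {
    int cb_sum = 0;
    enum_items(items, n, sum_cb, &cb_sum);

    int iter_sum = 0, v;
    item_iter it;
    for (int ok = item_first(&it, items, n, &v); ok; ok = item_next(&it, &v))
        iter_sum += v;

    return cb_sum == iter_sum ? cb_sum : -1;
}
```

The inversion-of-control complaint above is about style 1: your loop body ends up in a separate function with its state threaded through a `void *`, whereas style 2 keeps the loop in the caller's hands.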

wruza
0 replies
3h37m

Because allocating well under a hundred handles is the biggest problem we have.

WinAPI is awful to work with for reasons above anyone’s comprehension. It’s just legacy riding on legacy, with initial legacy made by someone 50% following stupid patterns from the previous 8/16 bit decade and 50% high on mushrooms. The first thing you do with WinAPI is abstracting it tf away from your face.

wruza
0 replies
3h57m

Yeah, I like the smell of cbSize in the RegisterClassExA. Smells like… WNDCLASSEXA.lpfnWndProc.

Nothing can beat WinAPI in nicety to work with, just look at this monstrosity:

  gtk_window_new(GTK_WINDOW_TOPLEVEL);
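For what it's worth, the cbSize field being mocked here is a manual struct-versioning scheme: the caller stamps the struct with the size it was compiled against, so the API can tell which revision it received. A minimal sketch of the idea, with a hypothetical struct rather than the real WNDCLASSEXA layout:

```c
#include <stddef.h>

/* Hypothetical versioned parameter struct: cbSize lets the callee know
   which fields the caller actually has, so fields can be appended in
   later releases without breaking old binaries. */
typedef struct {
    size_t cbSize;      /* caller sets this to sizeof(the struct it built) */
    const char *name;   /* original field */
    int extra;          /* imagine this was appended in a later release */
} PARAMS_V2;

const char *describe(const PARAMS_V2 *p) {
    if (p->cbSize >= sizeof(PARAMS_V2))
        return "v2 caller";                     /* safe to read 'extra' */
    if (p->cbSize >= offsetof(PARAMS_V2, extra))
        return "v1 caller";                     /* older binary: skip 'extra' */
    return "bad size";                          /* uninitialised cbSize */
}
```

Clunky, but it is one way to evolve a C ABI in place, which is the backwards-compatibility trade-off this whole thread is about.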

sumtechguy
0 replies
6h36m

The 'win32' API calls are decent relative to themselves, if you understand their madness. For every positive call there is usually an anti-call and worker calls: open a handle, use the handle with its helper calls, close the handle. You must understand their three-step pattern for it all to work. There are a few exceptions, but usually those are cases where the system has given you a handle and you must deal with it via the helper calls; there the system usually handles the open/close part itself.

Now you get into the COM/.NET/UWP stuff and the API gets a bit more fuzzy on that pattern. The win32 API is fairly consistent in its madness. So are the other API stacks they have come up with. But only in their own mad world.

Also, of the documentation, the older Win32 docs are usually decently written and self-consistent. The newer stuff, not so much.

If you have the displeasure of mixing APIs you are in for a rough ride as all of their calling semantics are different.
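The open/use/close discipline described above has the same shape as plain C stdio, which makes for a portable sketch (the analogy to CreateFile / ReadFile / CloseHandle is mine, not Microsoft's):

```c
#include <stdio.h>

/* The three-step handle pattern: a "positive" call acquires the handle,
   worker calls drive it, the "anti" call releases it.
   Returns the byte count of the file, or -1 if the open fails. */
int count_bytes(const char *path) {
    FILE *f = fopen(path, "rb");   /* 1. positive call: acquire the handle */
    if (!f) return -1;

    int n = 0;
    while (fgetc(f) != EOF)        /* 2. worker calls use the handle */
        n++;

    fclose(f);                     /* 3. anti call: release the handle */
    return n;
}
```

Once you internalize that every Win32 object follows this acquire/use/release rhythm, the "madness" becomes fairly predictable.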

sirwhinesalot
0 replies
9h36m

The various WinRT APIs are even worse. At least Win32 is "battle tested".

pjmlp
0 replies
7h28m

X Windows and Motif, for example.

marcosdumay
0 replies
4h33m

So, you don't use the newer Windows APIs?

alt227
2 replies
10h9m

> On average the older APIs are much nicer to work with

IMO this is because they are better written, by people who had deeper understanding of the entire OS picture and cared more about writing performant and maintainable code.

xeonmc
1 replies
7h10m

Well-illustrated in the article “How Microsoft Lost the API War”[0]:

    The Raymond Chen Camp believes in making things easy for developers by making it easy to write once and run anywhere (well, on any Windows box). 
    The MSDN Magazine Camp believes in making things easy for developers by giving them really powerful chunks of code which they can leverage, if they are willing to pay the price of incredibly complicated deployment and installation headaches, not to mention the huge learning curve. 
    The Raymond Chen camp is all about consolidation. Please, don’t make things any worse, let’s just keep making what we already have still work. 
    The MSDN Magazine Camp needs to keep churning out new gigantic pieces of technology that nobody can keep up with.  
[0] https://www.joelonsoftware.com/2004/06/13/how-microsoft-lost...

jimbokun
0 replies
5h4m

> making things easy for developers by giving them really powerful chunks of code which they can leverage, if they are willing to pay the price of incredibly complicated deployment and installation headaches, not to mention the huge learning curve.

I feel the same way about Spring development for Java.

Also reminds me of:

https://www.infoq.com/presentations/Simple-Made-Easy/

runjake
9 replies
19h49m

  > The backwards compatibility though is one of the major features of windows as an OS.
It is. That's even been stated by MSFT leadership time and time again.

But at what point does that become a liability?

I'm arguing that point was about 15-20 years ago.

efitz
5 replies
14h48m

There is another very active article on HN today about the launch of the new Apple iPhone 16 models.

The top discussion thread on that post is about “my old iPhone $version is good enough, why would I upgrade”.

It’s funny, if you ask tech people, a lot fall into the “I have to have the latest and greatest” but also a lot fall into the “I’ll upgrade when they pry the rusting hardware from my cold dead hands”.

For Microsoft, the driver for backwards compatibility is economic: Microsoft wants people to buy new Windows, but in order to do that, they have to (1) convince customers that all their existing stuff is going to continue to work, and (2) convince developers that they don’t have to rewrite (or even recompile) all their stuff whenever there’s a new version of Windows.

Objectively, it seems like Microsoft made the right decision, based on revenue over the decades.

Full disclosure: I worked for Microsoft for 17 years, mostly in and around Windows, but left over a decade ago.

ruthmarx
2 replies
10h39m

> It’s funny, if you ask tech people, a lot fall into the “I have to have the latest and greatest”

This is almost never a technical decision, but a 'showing off' decision IMO.

HPsquared
1 replies
9h41m

It's a "fun toys" decision.

ruthmarx
0 replies
7h43m

Often one and the same.

aleph_minus_one
1 replies
6h55m

> It’s funny, if you ask tech people, a lot fall into the “I have to have the latest and greatest” but also a lot fall into the “I’ll upgrade when they pry the rusting hardware from my cold dead hands”.

Not concerning the iPhone, but in general tech people tend to be very vocal about not updating when they feel that the new product introduces some new spying features over the old one, or when they feel that the new product worsens what they love about the existing product (there, their taste is often very different from the "typical customer").

dgfitz
1 replies
19h17m

New frameworks have vulnerabilities. Old OS flavors have vulnerabilities. OpenSSh keeps making the news for vulnerabilities.

I’d argue that software is never finished, only abandoned, and I absolutely did not generate that quote.

Stop. Just stop.

anthk
0 replies
11h51m

> OpenSSH

Yes, just stop... with the bullshit. OpenBSD didn't introduce those vulnerabilities. Downstream Linux distros (OpenSSH comes from OpenBSD, and they release a portable tarball too) adding non-core features and libraries did.

wvenable
0 replies
14h10m

It is not a liability because most of what you are talking about is just compatibility not backwards compatibility. What makes an operating system Windows? Fundamentally it is something that runs Windows apps. Windows apps existed 15-20 years ago as much as they exist today. If you make an OS that doesn't run Windows apps then it just isn't Windows anymore.

The little weird things that exist due to backwards compatibility really don't matter. They're not harming anything.

johannes1234321
6 replies
19h14m

It is a great achievement. But the question is: Is it really relevant? Couldn't they move the compatibility for larger parts to a VM or other independent Subsystem?

Of course even that isn't trivial, as one wants to share filesystem access (while I can imagine some overlay limiting access), might need COM and access to devices ... but I would assume they could push that a lot more actively. If they decided which GUI framework to focus on.

wvenable
1 replies
18h42m

Couldn't they move the compatibility for larger parts to a VM or other independent Subsystem?

A huge amount of the compatibility stuff is already moved out into separate code that isn't loaded unless needed.

The problem too, though, is users don't want independent subsystems -- they want their OS to operate as a singular environment. Raymond Chen has mentioned this a few times on his blog when this sort of thing comes up.

Backwards compatibility also really isn't the issue that people seem to think it is.

dredmorbius
0 replies
29m

Independent subsystems need not be ones the user must manage manually.

The k8s / containers world on Linux ... approaches ... this. Right now that's still somewhat manual, but the idea that a given application might fire off with the environment it needs without layering the rest of the system with those compatibility requirements, and also, incidentally, sandboxing those apps from the rest of the system (specific interactions excepted) would permit both forward advance and backwards compatibility.

A friend working at a virtualisation start-up back in the aughts told of one of the founders who'd worked for the guy who'd created BCPL, the programming language which preceded B, and later C. Turns out that when automotive engineers were starting to look into automated automobile controls, in the 1970s, C was considered too heavy-weight, and the systems were implemented in BCPL. Some forty years later, the systems were still running, in BCPL, over multiple levels of emulation (at least two, possibly more, as I heard it). And, of course, faster than in the original bare-metal implementations.

Emulation/virtualisation is actually a pretty good compatibility solution.

nitwit005
1 replies
11h52m

Generally speaking, the waste is only hard disk space. If no one ever loads some old DLL, it just sits there.

johannes1234321
0 replies
7h8m

Nobody loads it but the attacker. Either via a specially crafted program or via some COM service invoked from a Word document or something.

jimbokun
0 replies
4h59m

By moving the Win32 API onto Windows NT kernel, isn't that essentially what Microsoft did?

486sx33
0 replies
18h55m

I think that VM software like Parallels has shown us that we are just now at the point where VMs can handle it all and feel native. Certainly NT could use a rewrite to eliminate all the legacy stuff… but instead they focus on Copilot and nagging me not to leave windows edge internet explorer.

ksec
5 replies
10h55m

My question is why can't M$ ship the old OS running as a VM, and free themselves from backward compatibility on a newer OS.

galaxyLogic
2 replies
10h46m

Users will want to use applications that require features of the earlier OS version, and newer ones that require newer features. They don't want to have to switch to using a VM because old apps would only run on that VM.

wongogue
1 replies
10h31m

Putting apps from the VM on the primary desktop is something they have already done on WSLg. Launching Linux and X server is all taken care of when you click the app shortcut. Similar to the parent’s ask, WSL2/WSLg is a lightweight VM running Linux.

simonh
0 replies
8h31m

In many ways the old API layers are sandboxed much like a VM. The main problems are things like device drivers, software that wants direct access to external interfaces, and software that accesses undocumented APIs or implementation details of Windows. MS goes to huge lengths to keep trash like that still working with tricks like application specific shims.

mike_hearn
0 replies
7h12m

Backwards compatibility isn't their biggest problem to begin with, so that wouldn't be worth it. In effect they already did break it: the new Windows APIs (WinRT/UWP) are very different to Win32 but now people target cross platform runtimes like the browser, JVM, Flutter, etc. So it doesn't really matter that they broke backwards compatibility. The new tech isn't competitive.

tippytippytango
1 replies
11h36m

If 20 years is so ancient, why did they go by so fast....

lproven
0 replies
9h24m

Bad news. NT wasn't 20 years ago. It was 31 years ago.

hnfong
1 replies
16h48m

It's possible that wine is more "backwards compatible" than the latest version of Windows though.

And while wine doesn't run everything, at least it doesn't circumvent security measures put in place by the OS...

leftyspook
0 replies
6h49m

I've had more luck running games from 97-00 under wine than on modern Windows.

apatheticonion
42 replies
19h35m

My understanding is that the portion of revenue Microsoft makes from Windows these days is nearly negligible (under 10%). Both XBox and Office individually make more money for Microsoft than Windows, which indicates that they don't have a compelling incentive to improve it technically. This would explain their infatuation with value extraction initiatives like ads in Explorer and Recall.

My understanding is that the main thing keeping Windows relevant is the support for legacy software, so they'd be hesitant to jeopardize that with any bold changes to the kernel or system APIs.

That said, given my imagined cost of maintaining a kernel plus my small, idealistic, naive world view, I'd love it if Microsoft simply abandoned NT and threw their weight behind the Linux kernel (or, if the GPL is too restrictive, BSD; or alternatively wrote their own POSIX-compliant kernel like macOS).

Linux would be ideal given its features: containers, support for Android apps without emulation, abundance of supported devices, helpful system capabilities like UNIX sockets (I know they started to make progress there but abandoned further development), and support for things like ROCm (which only works on Linux right now).

Microsoft could build Windows on top of that POSIX kernel and provide a compatibility layer for NT calls and Win32 APIs. I don't even care if it's open source.

The biggest value for me is development capabilities (my day job is writing a performance sensitive application that needs to run cross-platform and Windows is a constant thorn in my side).

Cygwin, msys2/git-bash are all fantastic but they are no replacement for the kind of development experience you get on Linux & MacOS.

WSL1 was a great start and gave me hope, but is now abandonware.

WSL2 is a joke, if I wanted to run Linux in a VM, I'd run Linux in a VM.

I guess a less extreme option would be for Microsoft to extend NT to be POSIX compliant. If I could target unix syscalls during development and produce binaries that worked on Windows, I suppose I'd be happy with that.

delta_p_delta_x
16 replies
19h18m

> I'd love it if Microsoft simply abandoned NT and threw their weight behind the Linux kernel

I don't understand why people keep repeating this wish, rather than the arguably better, more competitive option: open-source the NT and Windows codebase, prepare an 'OpenWindows' (nice pun there, really) release, and simultaneously support enterprise customers with paid support licences, like places like Red Hat currently do.

> Cygwin, msys2/git-bash are all fantastic but they are no replacement for the kind of development experience you get on Linux & MacOS.

I couldn't disagree more. As someone who comes from a mostly-Windows pedigree, UNIX is... pretty backwards, and I look upon any attempt to shoehorn UNIX-on-Windows with a fair bit of disapproval, even if I concede that their individual developers and maintainers have done a decent job. Visual Studio (not Code) is a massively superior development and debugging tool to anything that the Unix crowd have cooked up (gdb? perf? Tell me when you can get flame graphs in one click).

jen20
4 replies
17h53m

> Visual Studio (not Code) is a massively superior development and debugging tool to anything that the Unix crowd have cooked up (gdb? perf? Tell me when you can get flame graphs in one click).

Hard disagree on the development aspect of VS, which (last time I used it, in 2015) couldn't even keep up with my fairly slow typing speed.

The debugging tools are excellent, but they are certainly not any more excellent than those in Instruments on macOS (which is largely backed by DTrace).

Timwi
2 replies
9h55m

2015 is 9 years ago. We shouldn't keep comparing Windows/Microsoft software from that long ago with modern alternatives on Linux and Mac.

That said, I agree that Visual Studio was extremely slow and clunky in the first half of the 2010s.

jen20
0 replies
4h55m

I didn’t compare it with a modern alternative. I compared its debugging tools to Instruments of the same vintage, and pointed out that the last time I tried VS it couldn’t keep up with basic typing.

Dalewyn
0 replies
4h44m

NT 10.0 hails from 2015 (Windows 10) and was re-released in 2021 (Windows 11).

pathartl
0 replies
17h41m

VS2022 is actually pretty damn slick. I use it on the daily and it's much more stable than any previous version. It's still not as fast as a text editor (I _do_ miss Sublime's efficiency), but even going back to 2019 is extremely hard.

cherryteastain
2 replies
19h7m

> open-source the NT and Windows codebase

May be very difficult or impossible if the Windows codebase has third-party IP (e.g. for hardware compatibility) with restrictive licensing

jen20
1 replies
17h51m

Sun managed it with Solaris (before Oracle undid that work) - indeed they had to create a license which didn't cause problems with the third party components (the CDDL).

p_l
0 replies
10h35m

The license was less about third-party components (GPLv2 would have worked for that too, even if it's a less understood area) and more because GPLv3 was late, Sun wanted a patent clause in the license, and AFAIK engineers rebelled against licensing that would have prevented the BSDs (or others) from using the code.

(For those who still believe "CDDL was designed to be incompatible with GPL", the same issues show up when trying to mix GPLv2 and GPLv3 code if you can't relicense the former to v3)

EvanAnderson
2 replies
18h51m

I have a fever dream vision of a "distribution" of an open source NT running in text mode with a resurrected Interix. Service Control Manager instead of systemd, NTFS (with ACLs and compression and encryption!), the registry, compatibility with scads of hardware drivers. It would be so much fun!

ruthmarx
1 replies
10h24m

Isn't ReactOS close enough?

EvanAnderson
0 replies
1h58m

I've kept meaning to look at ReactOS and put it off again and again. I felt Windows Server 2003 was "peak Windows" before Windows 7 so I'd imagine I'd probably like ReactOS.

dajtxx
1 replies
12h11m

I can imagine the effort of open-sourcing Windows would be prohibitive.

Having to go through every source file to ensure there is nothing to cause offense in there; there may be licensed things they'd have to remove; optionally make it buildable outside of their own environment...

Or there may be just plain embarrassing code in there they don't feel the need to let outsiders see, and they don't want to spend the time to check. But you can be sure a very small group of nerds will be waiting to go through it and shout about some crappy thing they found.

dijit
0 replies
11h4m

I'd venture that even more nerds would go through it and fix their specific problems.

It's always been quite clear that FOSS projects with sufficient traction are the pinnacle of getting something polished. No matter how architecturally flawed or how bad the design is, many eyes seem to make light work of all edge cases over time.

On the other hand, FOSS projects tend to lack the might of a large business to hit a particular business case or criticality, at least in the short term.

Open sourcing is probably impossible for the same reasons open sourcing Solaris was really difficult. The issues that were affecting Solaris affect Windows at least two orders of magnitude harder.

It's the smart play, though they'd lose huge revenues from servers that are locked in... but otherwise, Windows is a dying operating system. It's not the captive audience it once was, as many people are moving to web apps, games are slowly leaving the platform, and it's hanging on mostly due to inertia. The user-hostile moves are not helping to slow the decline either.

apatheticonion
1 replies
18h2m

That's an interesting idea. Some thoughts come to mind:

- The relatively low revenue of Windows for Microsoft means that they have the potential opportunity of increasing Windows profitability by dropping the engineering costs associated with NT (though on the flipside, they'd acquire the engineering cost of developing Linux).

- Open sourcing NT would likely see a majority of it ported into Linux compatibility layers which would enable competitors (not that this is bad for us as consumers, it's just not good for business)

- Adopting the Linux kernel and writing a closed source NT compatibility layer, init system, and closed source desktop environment means that the "desktop" and Microsoft aspects of the OS could be retained as private IP - which is the part that they could charge for. I know I'd certainly pay for a Linux distribution that has a well made DE.

> UNIX is... pretty backwards

I honestly agree. Many of the APIs show their age and, in the age of high level languages, it's frustrating to read C docs to understand function signatures/semantics. It's certainly not ergonomic - though that's not to say there isn't room to innovate here.

Ultimately, I value sameness. Aside from ergonomics, NT doesn't offer _more_ than POSIX and language bindings take care of the ergonomics issues with unix, so in many ways I'd argue that NT offers less.

> Visual Studio (not Code) is a massively superior development and debugging tool [...] Tell me when you can get flame graphs in one click

Just because the tooling isn't as nice to use now doesn't mean that Microsoft couldn't make it better (and charge for that) if they adopted Linux. This isn't something entirely contingent on the kernel.

delta_p_delta_x
0 replies
17h39m

I don't see why everything has to be Linux (which I will continue to maintain has neither the better kernel- nor user-mode).

Windows and NT have their own strengths as detailed in the very article that this thread links to. When open-sourced they could develop entirely independently, and it is good to have reasonable competition. Porting NT and the Windows shell to the Linux kernel for porting's sake could easily take years, which is wasted time and effort on satisfying someone's not-invented-here syndrome. It will mean throwing away 30+ years of hardware and software backward compatibility just to satisfy an imperfect and impractical ideal.

For perspective: something like WINE still can't run many Office programs. The vast majority of its development in recent years has been focused on getting video games to work by porting Direct3D to Vulkan (which is comparatively straightforward because most GPUs have only a single device API that both graphics APIs expose, and also given the fact that both D3D and Vulkan shader code compile to SPIR-V). Office programs are the bread and butter of Windows users. The OpenOffice equivalents are barely shadows of MS Office. To be sure, they're admirable efforts, but that only gets the developers pats on the back.

juunpp
0 replies
17h24m

Visual Studio (not Code) is a massively superior development and debugging tool to anything that the Unix crowd have cooked up (gdb? perf?)

VS is dogshit: full of bloat, with a UI that takes a PhD to navigate. CLion and QtCreator embed gdb/lldb and do the debugging just fine. perf also gets you more system metrics than Visual Studio does; the click-vs-CLI question is mostly just workflow preference. But if you're going to do a UI, at least don't do it the way VS does.

PeterStuer
6 replies
11h6m

In Microsoft's perfect world your local machines would just be lightweight terminals to their Azure mainframe.

alt227
4 replies
10h4m

This is coming. IMO in 20 years this will be how all devices, including phones, work.

alternatex
3 replies
9h33m

Without massive (exponential) battery/efficiency improvements it won't happen. Networking isn't something you can magically wave away. It has a cost.

alt227
1 replies
9h22m

Yes but by removing pretty much all processing on the end device and making it a thin client, you can extend battery life exponentially.

pxc
0 replies
5h56m

Is it actually the case that local computation on mobile devices is much more expensive than running the radios? I was just under the impression that peripherals like the speakers, radios, and display often burn up much more power than local manipulation of bits.

cesarb
0 replies
4h21m

You'd need massive networking improvements too. Telling someone "try next to the stairs, the cellular signal's better there" is an example I saw yesterday (it was a basement level), and that's not uncommon in my experience. You have both obstacles (underground levels, tunnels, urban canyons, extra thick walls, underwater) and distance (large expanses with no signal in the middle of nowhere); satellites help with the latter but not with the former. Local computing with no network dependencies works everywhere, as long as you have power.

pjmlp
0 replies
7h24m

Catching up with Google's Chromebook, so worshiped around here.

EvanAnderson
6 replies
19h22m

I guess a less extreme option would be for Microsoft to extend NT to be POSIX compliant...

Microsoft had this and abandoned it. I was building GNU software on NT in 2000 under Interix. It became Services for Unix and then was finally abandoned.

lproven
5 replies
9h20m

was finally abandoned.

Was finally _replaced_ by WSL.

pxc
4 replies
6h8m

By WSL1. But WSL2 is a VM running a Linux kernel, not POSIX compatibility for Windows.

There's still all kinds of pain and weirdness surrounding the filesystem boundary with WSL2. And contemporary Windows still has lots of inconsistency when trying to use Unix-style paths (which sometimes work natively and sometimes don't), and Unix-y Windows apps are still either really slow or full of hacks to get semi-decent performance. Often that's about Unix or Linux expectations like stat or fork, but sometimes other stuff (see for instance, Scoop's shim executable system that it uses to get around Windows taking ages to launch programs when PATH is long).

WSL2 also just isn't a real substitute for many applications. For instance, I'd like to be able to use Nix to easily build and install native Windows software via native toolchains on Windows machines at work. You can't do such a thing with WSL2. For that you need someone who actually knows Windows to do a Windows port, and by all reports that is very different from doing a port to a Unix operating system.

Idk if what people are asking for when they say 'POSIX compliant' with respect to Windows really has much to do with the POSIX standard (and frankly I don't think that matters). But they're definitely asking for something intelligible and real that Windows absolutely lacks.

EvanAnderson
3 replies
5h21m

But they're definitely asking for something intelligible and real that Windows absolutely lacks.

Interix was what Windows lacks, but it was abandoned. It wasn't a Linux compatibility layer like WSL1 (or just a gussied-up Linux VM like WSL2). It was a freestanding implementation of POSIX and building portable software for it was not unlike building software portable to various *nixes. GNU autotools had a target for it. I built software from source (including upgrading the GCC it shipped with).

It was much more elegant than WSL and was built in the spirit of the architecture of NT.

RiverCrochet
2 replies
4h29m

IIRC Interix was a separate "subsystem" in the Windows API model (psxss.exe), with Win32 covered by csrss.exe, and, believe it or not, there was an OS/2 one.

pxc
1 replies
3h46m

What does it say about the practical usefulness of this Windows facility that MS has, it seems, never maintained one of these 'personalities' long-term?

EvanAnderson
0 replies
1h56m

I think it's probably business case and revenue potential, not practical usefulness. I felt like Interix was plenty useful but probably couldn't earn its keep. I think that pluggable personalities even exists in NT speaks to the general Microsoft embrace / extend / extinguish methodology. They were a means to an end to win contracts.

therein
3 replies
19h2m

Cygwin, msys2/git-bash are all fantastic but they are no replacement for the kind of development experience you get on Linux & MacOS.

WSL1 was a great start and gave me hope, but is now abandonware.

Exactly my thoughts. I really admired the design and how far WSL1 got. It is just sad to see it abandoned.

WSL2 is a joke, if I wanted to run Linux in a VM, I'd run Linux in a VM.

I couldn't have said it better. If I wanted to run Linux in a VM, I'd run Linux in a VM, why are we pretending something special is going on.

wkat4242
1 replies
17h36m

Yes WSL1 was really special. Talking to the NT kernel from a Linux environment.

I think it was mainly the docker crowd that kept asking for compatibility there :(

pxc
0 replies
5h49m

Microsoft could have implemented the Docker API as part of WSL1 instead of loading up a real Linux kernel for it. That's how LX zones on illumos work for running Docker containers on non-Linux without hardware virtualization.

I'm sure it's tricky and hard (just like WSL1 and WINE are tricky and hard), but we know it's at least possible because that kind of thing has been done.

michalf6
0 replies
4h28m

Exactly my thoughts. I really admired the design and how far WSL1 got. It is just sad to see it abandoned.

But why was it better, other than aesthetic preference?

shiroiushi
0 replies
16h30m

threw their weight behind the Linux kernel (or if GNU is too restrictive

The GPL isn't too restrictive. Google has no issue with it on Android (which uses a modified Linux kernel). GPL doesn't mean you have to open-source everything, just the GPL components, which in the case of the Linux kernel, is just the kernel itself. MS already contributes a bunch of drivers (for their hypervisor) to the Linux kernel. They could easily make a Linux-based OS with their own proprietary crap on top if they wanted to.

support for Android apps without emulation

They wouldn't need CPU-level emulation, but the API would need some kind of compatibility layer, similar to how WINE serves this purpose for Windows applications on Linux.

Microsoft could build Windows on top of that POSIX kernel and provide a compatibility layer for NT calls and Win32 APIs.

They don't need to: they can just use WINE. They could improve that, or maybe fork it and add some proprietary parts like CodeWeavers does, or they could even just buy out CodeWeavers.

ruthmarx
0 replies
10h29m

I'd love if it Microsoft simply abandoned NT and threw their weight behind the Linux kernel

Oh hell no!

Diversity in operating systems is important, and the NT architecture has several advantages over the Linux approach. I definitely don't want just one kernel reigning supreme, not yet at least - although that is probably inevitable.

lproven
0 replies
9h21m

I'd love if it Microsoft simply abandoned NT and threw their weight behind the Linux kernel

This entire article is an elegant argument why this would be a terrible idea. Didn't you RTFA?

I guess a less extreme option would be for Microsoft to extend NT to be POSIX compliant

Way to expose and highlight your ignorance. :-( NT 3.1, the first release, was POSIX compliant in 1993, and every release since has been.

SideQuark
0 replies
15h54m

20+B annually is not “nearly negligible.” That’s more revenue than all but 3 other software companies: Oracle $46B, SAP $33B, and Salesforce $30B. It’s more annual revenue than Adobe and every other software company.

PeterStuer
0 replies
11h3m

"I guess a less extreme option would be for Microsoft to extend NT to be POSIX compliant"

There was a time MS sales touted NT as being more POSIX compatible than the UNIXes.

7bit
0 replies
10h7m

My understanding is that the main thing keeping Windows relevant is the support for legacy software, so they'd be hesitant to jeopardize that with any bold changes to the kernel or system APIs.

Without Windows, there would be no platform to sell Office on (macOS aside), but that's a side note.

The important piece you are missing is this: the entirety of Azure runs on an optimized variant of Hyper-V, hence all of Azure runs on Windows. That is SUBSTANTIAL!

486sx33
0 replies
18h54m

Pulling WSL from windows 10 was particularly nasty

phendrenad2
12 replies
20h24m

an aging design

What does that mean, exactly? Linux is also an "aging design", unless I missed a big announcement where they redesigned it at any point since 1994.

HPsquared
5 replies
20h12m

cf Terry Davis saying "Linux wants to be a 1970s mainframe".

jonathaneunice
2 replies
14h12m

Every new system wants to be a mainframe when it grows up. VMS, Unix, Linux, NT...they all started "small" and gradually added the capabilities and approaches of the Bigger Iron that came before them.

Call that the mainframe--though it too has been evolving all along and is a much more moving target than the caricatures suggest. Clustering, partitions, cryptographic offload, new Web and Linux and data analytics execution environments, most recently data streaming and AI--many new use modes have been added since the 60s and 70s inception.

Someone
1 replies
10h47m

Every new system wants to be a mainframe when it grows up. VMS, Unix, Linux, NT...they all started "small" and gradually added the capabilities and approaches of the Bigger Iron that came before them

MacOS started on desktop, moved from there to smartphones and from there to smartwatches. Linux also moved ‘down’ quite a bit. NT has an embedded variant, too (https://betawiki.net/wiki/Windows_NT_Embedded_4.0, https://en.wikipedia.org/wiki/Windows_XP_editions#Windows_XP..., https://en.wikipedia.org/wiki/Windows_IoT).

jonathaneunice
0 replies
6h0m

True. Every new system wants to be just about everything when it grows up. Run workstations, process transactions, power factories, drive IoT, analyze data, run AI...

"Down" however is historically a harder direction for a design center to move. Easier to add features--even very large, systemic features like SMP, clustering, and channelized I/O--than to excise, condense, remove, and optimize. Linux and iOS have been more successful than most at "run smaller, run lighter, fit into a smaller shell." Then again, they also have very specific targets and billions of dollars of investment in doing so, not just hopeful aspirations and side-gigs.

beeflet
0 replies
16h14m

TD had some interesting ideas when it came to simplifying the system, but I think the average person wants something inbetween a mainframe and a microcomputer.

In linux/unix there is too much focus on the "multiuser" and "timesharing" aspect of the system, when in the modern day you generally have one user with a ton of daemons, so you're forced to run daemons as their own users and then have some sort of init system to wrangle them all. A lot of the unixisms are not as elegant as they should be (see plan9, gobolinux, etc).

TempleOS is more like a commodore 64 environment than an OS: there's not really any sort of timesharing going on and the threading is managed manually by userspace programs. One thing I like is that the shell language is the same as the general programming language (HolyC).

anthk
0 replies
10h6m

Every modern OS wants to be that, even iOS, at least internally.

kbolino
2 replies
18h23m

Linux actually did have some pretty significant redesigns with some notable breaking changes. It wasn't until the 2.4 line ended in the late oughts that Linux as we know it today came fully into existence.

yencabulator
0 replies
2h2m

Linux 2.6 internals were very different from 2.4 internals which were hugely different from 2.2. Programming for the three was almost like targeting 3 different kernels.

dredmorbius
0 replies
38m

What were some of those changes / developments?

ofrzeta
1 replies
17h56m

FWIW Linux got support for kernel modules in January 1995.

IgorPartola
0 replies
15h20m

Yeah this was one thing I spotted as well. The author seems to confuse the fact that the norm for Unix/Linux is that the OS should ship the drivers, whereas MS assumes the manufacturer should provide them.

It also entirely overlooks how a system that allows a user with no specialized knowledge to authorize random code to run in a privileged environment led to vulnerabilities, which in turn had their own vulnerabilities.

runjake
0 replies
19h44m

That was in response to the beginning of the article:

"I’ve repeatedly heard that Windows NT is a very advanced operating system"

It's very advanced for decades ago. It's not meant as an insult.

About 20 years ago, despite being a Linux/UNIX/BSD diehard, I went through the entire Inside Windows NT book word by word and took up low-level NT programming and gained a deep respect for it and Dave Cutler. Also a h/t to Russinovich who despite having better things to do running Winternals Software[1], would always patiently answer all my questions.

1. https://en.wikipedia.org/wiki/Sysinternals

fortran77
12 replies
21h53m

The NT kernel is pretty nifty, albeit an aging design.

Unix is an Apollo-era technology! Also an aging design.

UniverseHacker
11 replies
20h30m

Except unix nowadays is just a set of concepts and conventions incorporated into modern OSs

phendrenad2
4 replies
20h23m

What percent of Unix users are using a "modern OS" and what percentage are using Linux, which hasn't significantly changed since it was released in 1994?

phendrenad2
0 replies
3h42m

Of course, I meant the design hasn't changed. Linux has had a lot of refactoring, and probably Windows has also.

UniverseHacker
0 replies
19h28m

My point was that most people are using things like Linux, MacOS, etc. nowadays, which are all also pretty old by now but not nearly as old as ATT Unix

SoothingSorbet
0 replies
10h30m

Linux has changed dramatically since its first release. It has major parts rewritten every decade or so, even. It just doesn't break its ABI with userspace.

SSLy
0 replies
19h43m

let's be charitable, removal of global lock was fairly big change.

jiggawatts
3 replies
20h19m

How “modern” are they when they’re just a bunch of shell scripts on top of POSIX? SystemD caught up to NT4 and the original MacOS.

xattt
2 replies
18h40m

SystemD caught up to NT4 and the original MacOS.

The transition happened to the huffing and puffing/kicking and screaming of many sysadmins.

ruthmarx
1 replies
10h13m

Still a minority of sysadmins though. Most seem to have embraced it to an extent that's honestly a little sad to see. I liked to think of the linux community as generally being a more technical community, and that was true for a long time when you needed more grit to get everything running, but nowadays many just want Linux to be 'free windows'.

RiverCrochet
0 replies
4h20m

nowadays many just want Linux to be 'free windows'

This means Linux has "made it."

I liked to think of the linux community as generally being a more technical community, and that was true for a long time when you needed more grit to get everything running

I guess that grit was a gateway to a basic Linux experience for a long time - it did take a lot of effort to get a normal desktop running in the early to mid 90's. But that was never going to last - technical people tend to solve problems and open source means they're going to be available to anyone. There are new frontiers to apply the grit.

leftyspook
0 replies
4h44m

Set of concepts derived from "whatever the hell Ken Thompson had in his environment circa 1972".

ElectricalUnion
0 replies
20h7m

If by "modern" you mean stuff between 1930 and 1970, sure, most contemporany OSes can trace roots from that era.

nox101
7 replies
20h25m

Seems to me they should pull an Apple. Run everything old in some "rosetta" like system and then make something 100% new and try to get people to switch, like say no new updates except security updates for the old system so that apps are incentivized to use the new one.

wvenable
2 replies
18h40m

Nobody wants something 100% new. Users don't want it. Developers don't want it. You can make a new OS but then you'll have zero developers and zero users.

Yet this fantasy exists.

And as soon as you make something new, it'll be old, and people will call for its replacement.

wannacboatmovie
1 replies
3h47m

People live under the delusion that OS X was "100% new" when in fact it was warmed-over NeXTSTEP from 1989. Most of them probably have never seen or heard of a NeXT workstation.

To reinforce how much people hate "100% new" how long has Microsoft been working on ReFS? 20 years? The safest most boring job in the world must be NTFS developer.

wvenable
0 replies
3h1m

The OS X story is even worse than that. When Apple first released OS X to developers, like Adobe, they balked. They weren't going to port their Mac applications to this "new" operating system. Apple had to take another year to develop the Carbon APIs so Mac developers could more easily port their apps over.

mepian
2 replies
20h14m

Carbon is a better Apple-related comparison since it's basically a cleaned-up version of the classic Mac OS API as a library that ran on both Mac OS X and classic.

nullindividual
1 replies
18h17m

Win32 runs on Linux by way of SQL Server.

wannacboatmovie
0 replies
18h48m

You left out the important part: abandon the Rosetta-like system a mere few years later once you've lured them in, then fuck everyone over by breaking backwards compatibility every OS release. Apple really has the "extinguish" part nailed down.

marcodiego
3 replies
21h20m

I created a file named aux.docx on a pendrive with Linux. Tried to open it on Windows 7. It crashed Word with a strange error. Don't know what would happen on 8+.

uncanneyvalley
1 replies
20h38m

It would fail, too. ‘CON’ has been a reserved name since the days of DOS (actually CP/M, though that doesn’t have direct lineage to Windows) where it acted as a device name for the console. You can still use it that way. In a CMD window:

`type CON > file.txt`, then type some stuff and press CTRL+Z.

https://learn.microsoft.com/en-us/windows/win32/fileio/namin...

nullindividual
0 replies
19h56m

This is a Win32-ism rather than an NT-ism. This will work:

    mkdir \\.\C:\COM1

ninetyninenine
2 replies
11h29m

Didn't the article say that unix's were even more full of cruft?

ruthmarx
1 replies
10h36m

Did it?

ninetyninenine
0 replies
2h55m

It did. The whole article was about how the NT kernel is better designed.

anthk
2 replies
22h2m

Win32 is not the issue, MS could just create shims for these in a secure way. It's 2024, not 1997.

Ditto with GNU/Linux and old SVGAlib games. There should already have been a wrapper against SDL2.

runjake
1 replies
20h25m

I'm not sure what you're trying to say here, but those "shims" exist. Apps generally do not talk directly to the Executive (kernel). Instead, the OS has protected subsystems that publish APIs.

Apps talk to the subsystem, and the subsystem talks to the Executive (kernel).

Traditionally, Windows apps talk to the Win32 subsystem[1]. This subsystem, as currently designed, is an issue as described in my previous comment.

1. https://en.wikipedia.org/wiki/Architecture_of_Windows_NT#Win...

Caveat: Details of this may have changed in the last couple major Windows versions. I've been out of the NT game for a bit. Someone correct me, if so.

anthk
0 replies
11h27m

Yes, I will correct you. DirectDraw games will run dog slow under Windows 8 and up. You can run them at full speed with WineD3D, as it has libraries to map both GDI and DDraw to OpenGL.

pjmlp
0 replies
7h30m

That was the whole point of WinRT and UWP, and the large majority of the Windows developer community rebelled against it, unfortunately.

Didn't help that trying to cater to those developers, management kept rebooting the whole development approach, making adoption even worse.

pasc1878
0 replies
8h51m

Does any common OS have a modern design.

Unix is around the same age.

giancarlostoro
0 replies
5h12m

I'd rather we get rid of all the marketing crap first.

flohofwoe
0 replies
9h8m

My main impression of Windows is that all the 'old' NT kernel stuff is very solid and still holds up well, that there's a 'middle layer' (Direct3D, DXGI, window system compositing) where there's still solid progress in new Windows versions (although most of those 'good parts' are probably ported over from the Xbox), while most of the top-level user-facing code (UI frameworks, Explorer, etc..) is deteriorating at a dramatic pace, which makes all the actual progress that still happens under the hood kinda pointless unfortunately.

Andrex
0 replies
13h12m

Imagine the alternate reality where we got Longhorn with WinFS instead of Vista.

nullindividual
72 replies
1d1h

There's a large debate whether a 'hybrid' kernel is an actual thing, and/or whether NT is just a monolithic kernel.

The processes section should be expanded upon. The NT kernel doesn't execute processes, it executes _threads_. Threads can be created in a few milliseconds, whereas, as noted, processes are heavyweight; essentially the opposite of Unices. This is a big distinction.

io_uring would be the first true non-blocking async I/O implementation on Unices.

It should also be noted that while NT as a product is much newer than Unices, its history is rooted in VMS fundamentals thanks to its core team of ex-Digital devs led by David Cutler. This pulls back that 'feature history' by a decade or more. Still not as old as UNIX, but "old enough", one could argue.

[0] https://stackoverflow.com/questions/8768083/difference-betwe...

mananaysiempre
22 replies
1d

io_uring would be the first true non-blocking async I/O implementation on Unices.

I would agree with that statement in isolation, except by that standard (no need for a syscall per operation) the first “true” asynchronous I/O API on NT would be I/O ring (2021), a very close copy of io_uring. (Registered I/O, introduced in Windows 8, does not qualify because it only works on sockets.) The original NT API is absolutely in the same class as the FreeBSD and Solaris ones, it’s just that Linux didn’t have a satisfactory one for a long long time.

nullindividual
19 replies
1d

POSIX AIO is not non-blocking async I/O; it can block other threads requesting the resource. IOCP is a true non-blocking async I/O. IOCP also extends to all forms of I/O (file, TCP socket, network, mail slot, pipes, etc.) instead of a particular type.

POSIX AIO has usability problems also outlined in the previously linked thread.

Remember, all I/O in NT is async at the kernel level. It's not a "bolt-on".

IoRing is limited to file reads, unlike io_uring.

jhallenworld
5 replies
22h46m

This is all fine, but Windows NT file access is still slow compared with Linux; this shows up in shell scripts. The reason is supposedly that it insists on syncing during close, or maybe waiting for all closed files to sync before allowing the process to terminate. Shouldn't close finality be an optional async event or something?

nullindividual
3 replies
22h32m

The reason is due to file system filters, of which Windows Defender is always there. There is a significant delay from Defender when performing CloseFile()[0].

As I was looking at the raw system calls related to I/O, something immediately popped out: CloseFile() operations were frequently taking 1-5 milliseconds whereas other operations like opening, reading, and writing files only took 1-5 microseconds. That's a 1000x difference!

This is why DevDrive was introduced[1]. You can either have Defender operate in async mode (default) or remove it entirely from the volume at your own risk.

The performance issue isn't related to sync or async I/O.

[0] https://gregoryszorc.com/blog/2015/10/22/append-i/o-performa...

[1] https://devblogs.microsoft.com/visualstudio/devdrive/

cyberax
2 replies
17h27m

Windows FS stack is still _way_ slower than Linux. Filesystem operations have to create IRPs and submit them for execution through a generic mechanism. These IRPs can get filtered and modified in-flight, providing quite a bit of overall flexibility.

In Linux, filesystem paths are super-optimized, with all the filtering (e.g. for SELinux) special-cased if needed.

But even still, Windows also had to cheat to avoid completely cratering the performance, there's a shortcut called "FastIO": https://learn.microsoft.com/en-us/windows-hardware/drivers/i...

I wrote a filesystem for Windows around 25 years ago, and I still remember how I implemented all the required prototypes and everything in Explorer worked. But notepad.exe was just showing me empty data. It took me several days to find a note tucked into MSDN that you need to implement FastIO for memory mapped files to work (which Notepad.exe used).

nullindividual
0 replies
4h24m

But it simply isn't slower than Linux.

Robert Collins explains that performance is just as good as Linux and the performance loss on Windows is due to file system filters (Defender)[0].

This is what DevDrive intends (and does) fix.

[0] https://youtu.be/qbKGw8MQ0i8?t=1759

SoothingSorbet
0 replies
10h14m

That's interesting, why would notepad.exe use mmapped files?

torginus
0 replies
5h13m

This probably has a lot to do with the mandatory file locking on Windows - afaik on Windows, the file is the representation of the underlying data of the disk, unlike on Linux, where it's just a reference to the inode, so locking the file on open is necessary. This is why you always get those 'file in use and cannot be deleted' prompts.

This impacts performance particularly when working with a ton of tiny files (like git does).

Sesse__
5 replies
20h41m

Remember, all I/O in NT is async at the kernel level. It's not a "bolt-on".

All I/O in Linux is also async at the kernel level! The problem has always been expressing that asynchronicity to userspace in a sane way.

netbsdusers
4 replies
18h21m

Filesystem io (and probably more) is not async at the kernel level in Linux. (Just imagine trying to express the complexity of it in continuations or some sort of state machine!) As such io_uring takes the form of a kernel thread pool. Disk block io by contrast is much easier to be fundamentally async since it's almost always a case of submitting a request to an HBA and waiting for an interrupt.

wmf
1 replies
16h16m

Just imagine trying to express the complexity of [a filesystem] in continuations or some sort of state machine!

Arguably async/await could help with this; obviously it didn't exist in 1991 when Linux was created but it would be interesting to revisit this topic.

SoothingSorbet
0 replies
10h16m

Arguably async/await could help with this; obviously it didn't exist in 1991 when Linux was created

Wouldn't that just consist of I/O operations returning futures and then having an await() block the calling thread until the future is done (i.e. put it on a waitqueue)?

treyd
0 replies
14h40m

Just imagine trying to express the complexity of it in continuations or some sort of state machine!

With Rust in the kernel this becomes somewhat possible to conceptualize.

mananaysiempre
0 replies
6h39m

Just imagine trying to express the complexity of it in continuations or some sort of state machine!

You’d probably want to use either some sort of code generation to do the requisite CPS transform[1] or the Duff’s-device-like preprocessor trick[2], but it’s definitely doable with some compiler support. Not in an existing codebase, though.

(Brought to you by working on a C codebase that does express stuff like this as explicit callbacks and context structures. Ugh.)

[1] https://dx.doi.org/10.1007/s10990-012-9084-5, https://www.irif.fr/~jch/research/cpc-2012.pdf, https://github.com/kerneis/cpc

[2] https://www.chiark.greenend.org.uk/~sgtatham/coroutines.html

mananaysiempre
4 replies
1d

POSIX AIO + FreeBSD kqueue or Solaris ports are functionally equivalent to IOCP as far as I can tell.

trentnelson
2 replies
23h46m

I should do an updated version of that deck with io_uring and sans the PyParallel element. I still think it’s a good resource for depicting the differences in I/O between NT & UNIX.

And yeah, IOCP has implicit awareness of concurrency, and can schedule optimal threads to service a port automatically. There hasn’t been a way to do that on UNIX until io_uring.

nullindividual
1 replies
23h26m

Yes, please! And if you're interested, RegisteredIO and I assume you'd drop in IoRing.

In a nicely wrapped PDF :-)

trentnelson
0 replies
23h7m

Yeah I’d definitely include RegisteredIO and IoRing. When I was interviewing at Microsoft a few years back, I was actually interviewed by the chap that wrote RegisteredIO! Thought that was neat.

netbsdusers
0 replies
23h41m

POSIX AIO is just an interface. Windows also relies on thread pools for some async io (i.e. when reading files when all the data necessary to generate a disk request isn't in cache; good luck writing that as purely asynchronous)

dataflow
0 replies
4h21m

IOCP is a true non-blocking async I/O.

Unfortunately that's only half-true. You can (and will) still get blockage sometimes with IOCP, it depends on a lot of factors, like how loaded the system is, I think. There is absolutely no guarantee that your I/O will actually occur asynchronously, only that you will be notified of its completion asynchronously.

Also, opening a file is also always synchronous, which is quite annoying if you're trying not to block e.g. a UI thread.

The implication of both of these is you still need dedicated I/O threads. I love IOCP as much as anyone, but it does have these flaws, and was very much designed to be used from multiple threads.

The only workaround I'm aware of was User-Mode Scheduling, which effectively notified you as soon as your thread got blocked, but it still required multiple logical threads, and Microsoft removed support for it in Windows 11.

trentnelson
0 replies
23h2m

None of the UNIXes have the notion of WriteFile with an OVERLAPPED structure; that's the key to NT's asynchronous I/O.

Nor do they have anything like IOCP, where the kernel is aware of the number of threads servicing a completion port, and can make sure you only have as many threads running as there are underlying cores, avoiding context switches. If you write your programs to leverage these facilities (which are very unique to NT), you can get the most out of your hardware very nicely.

a-dub
0 replies
14h50m

notably the NT equivalent of select(): WaitForSingleObject and WaitForMultipleObjects have the benefit that one select/wait-type syscall can be tickled by any of a network socket, a file, or the NT equivalent of a pthread signal.

emily-c
15 replies
1d

Before VMS there was the family of RSX-11 operating systems which also had ASTs (now called APCs in NT parlance), IRPs, etc. Dave Cutler led the RSX-11M variant which significantly influenced VMS. The various concepts and design styles of the DEC family of operating systems that culminated in NT goes back to the 1960s.

It's sad that the article didn't mention VMS or MICA since NT didn't magically appear out of the void two years after Microsoft hired the NT team. MICA was being designed for years at DEC West as part of the PRISM project.

rbanffy
10 replies
22h48m

In many ways NT was a new, ground up implementation of “VMS NT”.

It started elegant, but all the backwards compatibility, technical debt, bad ideas, and dozens of versions of features driven by whoever had the bigger wand at Microsoft at the time of their inception have taken a toll. Windows now is much more complicated than it could be.

It shocks me some apps get Windows NT4 style buttons even on Windows 11.

markus_zhang
5 replies
21h52m

How do you get Windows NT4 style buttons on 11? That's something I want to do with my application!

dspillett
3 replies
21h19m

The GDI libraries/APIs that provide that are all still there; you just need to find a framework that lets you see them, or kick through the abstraction walls of [insert chosen app framework] to access them more manually. Be prepared for a bit of extra work on what more modern UI libraries make more automatic, and for having to specify everything rather than just what you want to differ from the default.

markus_zhang
2 replies
20h26m

Oh thanks, I always thought what's there now is the native look. I didn't realize the old graphics path is still there. Maybe the Win3.x style is still there too?

saratogacx
1 replies
20h2m

I think you can get back to Win9x/2k style controls by instructing the system not to apply any theming. If you find a panel that is using 3.x controls, they're likely drawn from the resources of the app/dll. Although the 3.x file picker can still be found in a couple of rare corners of the OS.

https://learn.microsoft.com/en-us/windows/win32/api/uxtheme/...

    STAP_ALLOW_NONCLIENT
Specifies that the nonclient areas of application windows will have visual styles applied.

markus_zhang
0 replies
19h11m

Thanks, this is interesting!

abareplace
0 replies
2h59m

If there is no application manifest, you will get Windows NT4 / Windows 9x style buttons. Just tested this on Windows 11.

heraldgeezer
1 replies
20h37m

It shocks me some apps get Windows NT4 style buttons even on Windows 11.

This is good, though. The alternative is that the app won't run at all, right? Windows NT is good because of that background compatibility, both for business apps and games.

rbanffy
0 replies
1h52m

The alternative is that the app won't run at all, right?

The alternative is that the application displays with whatever the current GUI uses for its widgets.

radicalbyte
0 replies
1h53m

Under Windows it's very rare to have trouble running software. When you have trouble it's usually due to some security considerations or because you're using something which has roots in other operating systems.

MacOS & Linux are nothing like this. You can run most software, as most of the basis for modern software on those stacks is available in source form and can be maintained. Software which isn't breaks.

Apple/Google with their mobile OSes take that a step further, most older software is broken on those platforms.

The way they've kept compatibility within Windows is something I really love about the platform, but I keep wondering if there's a way to get the best of both worlds. Can you keep the compatibility layer as an ad hoc thing, running under emulation, so that the core OS can be rationalised?

emily-c
0 replies
22h28m

In many ways NT was a new, ground up implementation of “VMS NT”.

Most definitely. There was a lot of design cleanup from VMS (e.g. fork processes -> DPCs, removing global PTEs and balance slots, etc), optimizations (converging VMS's parallel array structure of the PFN database into one), and simplification (NT's Io subsystem with the "bring your own thread" model, removing P1 space, and much more). SMP was also designed into NT from the beginning. You can start seeing the start of these ideas in the MICA design documents but their implementation in C instead of Pillar (variant of Pascal designed for Mica) in NT was definitely the right thing at the time.

rasz
0 replies
13h15m

Didn't it end brilliantly for MS? The settlement involved MS supporting Alpha while DEC trained its enormous sales/engineering arm to sell and support NT, thus killing any incentive to buy DEC hw in the first place. DEC moved upstream the value chain and Microsoft moved tons of NT to all existing DEC corporate customers.

Taniwha
1 replies
15h51m

VAXes also had hardware support for ASTs in VMS (unlike NT): they were essentially software interrupts that only triggered when the CPU was in a process context and no enabled interrupts were pending, so you could set a bit in a mask in another thread's context that would get loaded automatically on context switch and triggered once the thread was runnable in user mode. Device drivers could trigger a similar mechanism in kernel mode (and the 2 intermediate hardware modes/rings). There were also atomic queue instructions that would dispatch waiting ASTs.

ssrc
0 replies
7m

Months ago I found this presentation on youtube, "Re-architecting SWIS for X86-64"[0], about how VMS was ported from VAX to Alpha to Itanium to x86 that did not have the same AST behaviour.

[0] https://www.youtube.com/watch?v=U8kcfvJ1Iec

delta_p_delta_x
10 replies
1d

The NT kernel doesn't execute processes, it executes _threads_

This is amongst the most important and visible differences between NT and Unix-likes, really. The key idea is that processes manage threads. Pavel Yosifovich in Windows 10 System Internals Part I puts it succinctly:

  A process is a containment and management object that represents a running instance of a program. The term “process runs” which is used fairly often, is inaccurate. Processes don’t run - processes manage. Threads are the ones that execute code and technically run.
NtCreateProcess is extremely expensive and its direct use strongly discouraged (but Cygwin and MSYS2, in their IMO misguided intention to force Unix paradigms onto Windows, wrote fork() anyway), but thread creation and management is extremely straightforward, and the Windows threading API is as a result much nicer than pthreads.

PaulDavisThe1st
8 replies
1d

It is hard to accept that this is written by someone with any idea about how Linux works (as a Unix).

A process (really, a "task") is a containment and management object that represents a running instance of a program. A process ("task") does not run; its threads do.

The significant difference between Windows-related OS kernels and Unix-y ones is that process creation is much more heavyweight on the former. Nevertheless, on both types of systems, it is threads that execute code and technically run.

jjtheblunt
0 replies
17h7m

i was programming on NeXT as a registered developer back then too. Middle aged nerds unite!

immibis
1 replies
23h35m

This was written about Windows kernels.

Linux is the only Unix-like kernel I actually know anything about. In Linux, processes essentially do not exist. You have threads, and thread groups. A thread group is what most of the user-space tooling calls a process. It doesn't do very much by itself. As the name implies, it mostly just groups threads together under one identifier.

Linux threads and "processes" are both created using the "clone" system call, which allows the caller to specify how much state the new thread shares with the old thread. Share almost everything, and you have a "thread". Share almost nothing, and you have a "process". But the kernel treats them the same.

By contrast, processes in NT are real data structures that hold all kinds of attributes, none of which is a running piece of code, since that's still handled by a thread in both designs.
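The thread-group arrangement is observable from user space. A quick sketch in Python, assuming a Linux host (threading.get_native_id() exposes the kernel task id, while getpid() returns the shared thread-group id):

```python
import os
import threading

seen = {}

def worker():
    # getpid() reports the thread-group id; get_native_id() the kernel task id
    seen["thread"] = (os.getpid(), threading.get_native_id())

t = threading.Thread(target=worker)
t.start()
t.join()
seen["main"] = (os.getpid(), threading.get_native_id())

assert seen["main"][0] == seen["thread"][0]   # same "process" (thread group)
assert seen["main"][1] != seen["thread"][1]   # distinct schedulable tasks
```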

ithkuil
0 replies
22h55m

IIRC indeed Linux preserves the time honoured Unix semantics of a process ID by leveraging the thread group ID

delta_p_delta_x
1 replies
23h42m

If you're splitting hairs, you're correct; processes manage threads on all OSs.

However, from the application programmer's perspective, the convention on Unix-likes (which is what really matters) is to fork and pipe between processes as IPC, whereas on Windows this is not the case. Clearly the process start-up time on Unix-likes is considered fast enough that parallelism on Unix until fairly recently was based on spinning up tens to hundreds of processes and IPC-ing between them.

I believe the point stands.
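A minimal sketch of that Unix convention in Python: fork a cheap child process and use a pipe as IPC.

```python
import os

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # child: process creation on Unix is cheap enough to use for parallelism
    os.close(r)
    os.write(w, b"hello from child")
    os._exit(0)

# parent: read the child's message over the pipe, then reap it
os.close(w)
msg = os.read(r, 64)
os.waitpid(pid, 0)
assert msg == b"hello from child"
```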

PaulDavisThe1st
0 replies
19h51m

For a certain kind of application programming, that is and was true, yes.

But not for many other kinds of application programming, where you create threads using pthreads or some similar API, which are mapped 1:1 onto kernel threads that collectively form a "process".

I'm not sure what your definition of "fairly recently" is, but in the mid-90s, when we wanted to test new SMP systems, we would typically write code that used pthreads for parallelism. The fact that there is indeed a story about process-level parallelism (with IPC) in Unix-y systems should not distract from the equally real existence and use of thread-level parallelism for at least 35 years.

torginus
0 replies
6h12m

My knowledge might be very out of date, but I remember a Linux process being a unit of execution as well as isolation. Creating a process without a thread is not possible afaik.

In contrast, Linux threads were implemented essentially as a hack - they were processes that shared memory and resources with their parent process, and were referred to internally as LWPs - lightweight processes.

I also remember a lot of Unix/Linux people not liking the idea of multithreading, preferring multiple processes to one, single-threaded process.

netbsdusers
0 replies
23h29m

All kernels execute threads. It's just that very old unixes had a unity of thread and process (and Linux having emulated that later introduced an unprecedented solution to bring in support for posix threads). The other unixes for their part all have a typical process and threads distinction today and have had for a while.

qsdf38100
8 replies
22h31m

WNT is VMS+1

V->W

N->M

S->T

steve1977
1 replies
22h18m

Initially actually (afaik) it stood for N10, for the Intel i860 CPU. I think “New Technology” came from marketing then.

nullindividual
0 replies
22h7m

If you watch the interview with David Cutler to the time code I linked to, he explains that NT stands for New Technology which marketing did not want.

JeremyNT
0 replies
20h29m

One of my very favorite facts about Windows 2000, as revealed in its boot screen, is that it's based on New Technology Technology.

(I no longer work with Windows very much, but this little bit of trivia has stuck with me over the years)

revskill
1 replies
9h46m

Could you please explain those characters? What do they mean? Thanks.

fredoralive
0 replies
5h55m

VMS[1] is an OS for VAX[2] systems by Digital that Dave Cutler worked on before Windows NT (with the abandoned MICA OS for the equally abandoned PRISM CPU architecture between the two). As people have noted, the NT kernel is rather VMS / MICA like, because it's written by some of the same people, so they're solving problems with things they know work (with some people suggesting directly copied code as well, although VMS and MICA didn't use C as their main programming languages).

Some people point out if you shift characters by one "VMS" becomes "WNT", and give it as an explanation of the name choice of Windows NT, but it's a coincidence. For one thing, nobody ever explains how this gag was going to work back when the project was "NT OS/2"[3].

[1] Virtual Memory System, originally VAX/VMS, later OpenVMS.

[2] Later DEC Alpha, Intel Itanic and now AMD64 systems.

[3] AKA OS/2 3.0 or Portable OS/2.

queuebert
0 replies
4h23m

In true Windows form, you have a memory access error.

pmontra
0 replies
21h29m

Actually

M->N
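i.e., the gag is just a shift of each character code by one, which checks out:

```python
# shift each letter of "VMS" forward by one code point
assert "".join(chr(ord(c) + 1) for c in "VMS") == "WNT"
```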

lr1970
3 replies
21h5m

It should also be noted that while NT as a product is much newer than Unices, its history is rooted in VMS fundamentals thanks to its core team of ex-Digital devs led by David Cutler.

WNT = VMS + 1 (next letter in alphabet for all three)

amatwl
2 replies
20h50m

For the record, NT comes from the codename for the Intel i860 (N10) which was the original target platform for NT.

mannyv
0 replies
5h55m

NT used to mean "New Technology," if I remember correctly. Not sure if that was the internal codename or a marketing creation anymore.

PaulDavisThe1st
3 replies
1d

The processes section should be expanded upon. The NT kernel doesn't execute processes, it executes _threads_. Threads can be created in a few milliseconds where as noted, processes are heavy weight; essentially the opposite of Unicies. This is a big distinction.

I am not sure what point you are attempting to make here. As written, it is more or less completely wrong.

NT and Unix kernels both execute threads. Threads can be created in a few microseconds. Processes are heavy weight on both NT and Unix kernels.

The only thing I can think of is the long-standing idea that Unix tends to encourage creating new processes and Windows-related OS kernels tend to encourage creating new threads. This is not false - process creation on Windows-related OS kernels is an extremely heavyweight process, certainly comparing it with any Unix. But it doesn't make the quote from you above correct.

On a separate note, the state of things at the point of creation of NT is really of very little interest other than than to computer historians. It has been more than 30 years, and every still-available Unix and presumably NT have continued to evolve since then. Linux has dozens to hundreds of design features in it that did not exist in any Unix when NT was released (and did not exist in NT either).

epcoa
2 replies
21h20m

Processes and threads on NT are distinct nominative types of objects (in a system where “object” has a much more precise meaning), and the GP is at least correct that the former are not schedulable entities. This distinction doesn't really exist on Linux, for instance, where to a first approximation there are only processes on the user side (at least to use the verbiage of the clone syscall; look elsewhere and they're threads, in part due to having to support pthreads), and the scheduler schedules “tasks” (task_struct), whereas in NT the “thread” nomenclature carries throughout. FreeBSD may have separate thread and proc internally, but this is more an implementation detail. I guess this is all to say that at a level lower than an API like pthreads, process/thread really isn't easily comparable between NT and most Unixes.

It’s not so much “heavyweight” vs “lightweight” but that NT has been by design more limited in how you can create new virtual memory spaces.

For better or worse NT tied the creation of VM spaces to this relatively expensive object to create which has made emulating Unix like behavior historically a pain in the ass.

PaulDavisThe1st
1 replies
19h54m

pthreads is a user-space API, and has nothing to do with the kernel. It is possible to implement pthreads entirely in user space (though somewhat horrific to do so). Linux does not have kernel threads in order to support pthreads (though they help).

Anyway, I see your point about the bleed between the different semantics of a task_t in the linux kernel.

epcoa
0 replies
19h0m

Linux does not have kernel threads in order to support pthreads

Yes, what I was alluding to somewhat cryptically was things like tgids and the related tkill/tgkill syscalls that as far I am aware were added with the implementation of 1:1 pthread support in mind.

formerly_proven
1 replies
1d

There's a large debate whether a 'hybrid' kernel is an actual thing, and/or whether NT is just a monolithic kernel.

I don't think it's a concept that meaningfully exists. Microkernels are primarily concerned with isolating non-executive functions (e.g. device drivers) for stability and/or security (POLA) reasons. NT achieves virtually none of that (see e.g. Crowdstrike). The fact that Windows ships a thin user-mode syscall shim which largely consists of thin-to-nonexistent wrappers of NtXXX functions is architecturally uninteresting at best. Arguably binfmt_misc would then also make Linux a hybrid kernel.

hernandipietro
0 replies
15h22m

Originally, Windows NT 3.x was more "microkernelithic", as graphics and printer drivers were isolated. NT 4 moved them to kernel mode to speed up the system.

Dwedit
1 replies
23h3m

Threads aren't created in milliseconds, that would be really slow. It's more like microseconds.

nullindividual
0 replies
22h54m

Typo, thanks for the correction. Too late to edit :-)

slt2021
0 replies
18h52m

Process is a way to segregate resources (memory, sockets, file descriptors, etc). You kill a proc - it will release all memory and file descriptors.

Thread is a way to segregate computation. You spawn a thread and it will run some code scheduled by the OS. you kill/stop a thread and it will stop computation, but not the resources.

kev009
44 replies
1d

This is great! It would be interesting to see darwin/macos in the mix.

On the philosophical side, one thing to consider is that NT is in effect a third system and therefore avoided some of the proverbial second system syndrome.. Cutler had been instrumental in building at least two prior operating systems (including the anti-UNIX.. VMS) and Microsoft was keen to divorce itself from OS/2.

With the benefit of hindsight and to clear some misconceptions, OS/2 was actually a nice system but was somewhat doomed both technically and organizationally. Technically, it solved the wrong problem.. it occupies a basically unwanted niche above DOS and below multiuser systems like UNIX and NT.. the same niche that BeOS and classic Mac OS occupied. Organizationally/politically, for a short period it /was/ a "better DOS than DOS and better Windows than Windows" with VM86 and Win API support, but as soon as Microsoft reclaimed their clown car of APIs and apps it would forever be playing second fiddle and IBM management never acknowledged this reality. And that compatibility problem was still a hard one for Microsoft to deal with, remember that NT was not ubiquitous until Windows XP despite being a massive improvement.

nullindividual
15 replies
1d

And that compatibility problem was still a hard one for Microsoft to deal with, remember that NT was not ubiquitous until Windows XP despite being a massive improvement.

I think when it comes to this it is best to remember the home computing landscape of the time, and the most important part: DRAM prices.

They were absurdly high and NT4/2000 required more of it.

My assumption is Microsoft would have made the NT4/2000 jump much quicker if DRAM prices had trended downward sooner.

kev009
8 replies
1d

Definitely impactful for NT4, not for 2000.

p_l
7 replies
23h39m

Was very impactful in early days of 2000. Seeing 64 MiB "used up" by barely loaded NT5.0 beta/RC was honestly a sort of chilling effect. But prices shortly fell and 128MB became accessible option, just in time for Windows XP to nail Windows 9x dead

hnlmorg
5 replies
23h27m

128MB was pretty common by the time Windows 2000 was released. I could afford it and I wasn’t paid well at that time.

Plus Windows ME wasn’t exactly nimble either. People talk about the disaster that was Vista and Windows 8 but ME was a thousand times worse. Thank god Windows 2000 was an option.

somat
3 replies
22h16m

My understanding is that ME from a technological point of view was 98 with NT drivers. It probably was a critical step in getting vendors to make NT drivers for all of their screwball consumer hardware, and this made XP, the "move the consumers to the NT kernel" step a success. The lack of drivers is also what made XP 64 bit edition so fraught with peril, but xp-64/vista was probably critical for win7's success for the same reason.

But yeah, what a turd of a system.

mepian
2 replies
20h3m

98 was the one that introduced NT drivers (WDM).

hypercube33
1 replies
17h52m

Didn't it still use VXD drivers for a lot of stuff though?

cyberax
0 replies
17h22m

Yes, because WDM support was limited. It only allowed synchronous requests and was really only suitable for USB or storage drivers.

p_l
0 replies
5h39m

It got much better once the release date rolled around, but it was something I remember being discussed a lot at the time among those who did use NT.

Also, at the time NT was still largely limited in office settings to rarer, more expensive deployments and engineering systems, with the typical business keeping Windows 98 on the client and NT Server on domain controllers, or completely different vendors, at least outside big corporations with fat chequebooks. Active Directory started changing that much faster; the benefits were great, and 2000 having hotplug et al was a great improvement, but it took until XP for the typical office computer to really run on NT in my experience.

nullindividual
0 replies
23h18m

It was roughly $1/MB in 1999. Or about $250 USD with inflation in 2024 dollars for a 128MB DIMM.

hnlmorg
4 replies
23h29m

DOS compatibility was a far bigger issue.

PC gaming was, back then, still very DOS-centric. It wasn’t until the late 90s that games started to target Windows. And even then, people still had older games they wanted supported.

hypercube33
2 replies
17h53m

NT 4 had NTVDM and it worked well enough. Quake 1, Command & Conquer, sim games for DOS and a bunch of other stuff worked just fine. You'd run into issues with timing on crappier games, or with some games that talked directly to soundcards; I forget the details, but you'd just not have audio.

nikau
0 replies
12h52m

I had a gravis ultrasound at the time and remember having to cut the reset signal line on the card.

I could then initialise the card in DOS and reboot into NT without it being reset and losing settings. Then some sketchy modified driver was able to use it.

cyberax
0 replies
17h24m

I remember NT4 having problems with games that wanted to access SVGA resolutions and SoundBlaster. I kept a volume with Win98 back then specifically for the games.

dspillett
0 replies
21h11m

I'd say both. IIRC the DOS story was better under OS/2 than NT, but the RAM requirements were higher (at least until XP).

To add a third prong: hardware support was a big issue too as it is for any consumer OS, with legacy hardware being an issue just as it can be today if not more so. This hit both NT and OS/2 similarly.

euroderf
0 replies
1d

I recall DRAM prices restricting the uptake of OS/2 also.

hnlmorg
13 replies
23h33m

That “niche” you described was actually a desktop computing norm for more than a decade.

Let’s also not forget RISC OS, Atari TOS, AmigaOS, GEOS, SkyOS and numerous DOS frontends including, but not limited to, Microsoft Windows.

kev009
7 replies
18h42m

I believe we are obliquely agreeing.

To further my thoughts a bit, the distinction I would place on OS/2 over DOS is twofold: 1) first-class memory protection 2) full preemptive multitasking. The distinction I would place against all widely used modern desktop OSes is the lack of first-class multi-user support.

Early Windows gained cooperative multi-tasking like Mac OS classic, but neither fundamentally used memory protection the way later OSes normalized, and that turns out to be a pretty clear dead end. I believe there are extensions for both that retrofit protection in. Both also have some add-on approaches to multi-user but are not first-class designs.

So, even the most robust single user system that implements full memory protection and preemptive multitasking still seems to be stuck in a valley. I.e. whatever the actual cost in terms of implementation and increase in cognitive workload for single user systems (i.e. "enter your administrative password" prompts on current macOS or Windows administrative accept dialogs) seems to be accepted by the masses.

And note that this isn't an implicit a negative judgement, for instance I find BeOS or those in your list can be absolutely fascinating. And OS/2 is lovely even today for certain niche or retrocomputing things. Just pointing out that NT made a better bet for the long term, and some of that undoubtedly related to the difference of a couple years.. keep in mind OS/2 ran on a 286, which NT completely bypassed.

nullindividual
4 replies
18h9m

Windows 2.0 for i386 was the first to introduce protected mode and preemptive multitasking. These features had to wait for Intel but it was available in 1987.

kev009
3 replies
17h39m

In a limited sense, yes, Windows (without NT) increasingly /used/ memory protection hardware over its life but never in a holistic approach as we typically understand today to create a TCB.

I don't believe Windows 2.0 implemented preemptive tasking, can you show a reference so I can learn?

fredoralive
1 replies
5h30m

AIUI Windows/386 2.x is basically a pre-emptive V86-mode DOS multitasker that happens to be running Windows as one of its tasks. So the GUI itself is cooperative, but between its VM and DOS VMs it can pre-empt.

(Windows 3.x 386 mode is similar, with Windows 9x stuff in the Windows VM can pre-emptively multitask, mostly).

kev009
0 replies
1h35m

Thanks that explanation makes sense to me!

jazzypants
0 replies
13h38m

I'm not that guy, but the Wikipedia article[1] is a decent jumping-off point, but I also found this blog article[2] talking about the different versions-- although it seems to get a couple things wrong according to discussion about it on lobste.rs[3] Finally, this long article from Another Boring Topic [4] includes several great sources.

1 - https://en.m.wikipedia.org/wiki/Windows_2.0#Release_versions

2 - https://liam-on-linux.livejournal.com/78006.html

3 - https://lobste.rs/s/4xfswa/what_was_difference_between_windo...

4 - https://anotherboringtopic.substack.com/p/the-rise-of-micros...

devbent
1 replies
3h11m

Windows has had multi-user support for ages.

That is how user switching in XP works, and how RDP works. You can have an arbitrary number of sessions of logged in users at once, only limited by the license for what version of Windows you have installed.

There have also been versions of Windows that allow multiple users to interact with each other at once, but I believe these have all been cancelled and I do not know to what extent these simultaneous users had their own accounts.

kev009
0 replies
1h36m

Well of course; Windows XP is a direct descendant of Windows NT. Maybe you are referring to Citrix or Terminal Server, which are also Windows NT technologies.

torginus
0 replies
4h54m

Yeah, this was a common misconception - due to the fact you could boot into Windows (up to 95) from DOS, people assumed it was just a frontend program.

hnlmorg
0 replies
22h1m

To be honest I was being flippant with that Windows remark but you’re right to call me out for it.

Microsoft get a lot of stick, but the Windows of the 90s did bring a lot to the table. And by 9x, DOS was basically a boot loader.

hypercube33
1 replies
17h55m

How dare you ignore BeOS which isn't Windows or Nix

hnlmorg
0 replies
11h22m

It had already been mentioned and the point of my post was to give other examples

tivert
9 replies
1d

This is great! It would be interesting to see darwin/macos in the mix.

But that's just another UNIX.

kev009
8 replies
1d

Only in the user's perception. The implementation is nothing like UNIX, being a Mach 2.5 derivative with later additions like DriverKit.

steve1977
7 replies
22h11m

macOS is a proper UNIX. As was OSF/1 (aka Digital UNIX aka Tru64 UNIX), which also had Mach 2.5 kernel.

kev009
6 replies
21h48m

So is z/OS. UNIX branding under The Open Group is correctly focused on the user's perception and has little to do with kernel implementation. Mach is no more UNIX than NT is VMS; to call one the other in this context of kernel discussion is reductionist and impedes correct understanding of the historical roots and critical differences.

p_l
5 replies
20h26m

OSF/1 however was a complete Unix system, even if it based its kernel on Mach (at least partially because it offered a fast path to SMP and threading), and formed the BSD side of Unix wars.

And NeXTSTEP didn't diverge too much, so when OSX was released they updated the code base with the last OSFMK release.

icedchai
3 replies
16h39m

I'd say SunOS (4.x and earlier, not Solaris) was the BSD side of the Unix wars. For most of the 90's, SunOS was the gold standard for a Unix workstation. I worked at a couple of early Internet providers and the users demanded Sun systems for shell accounts. Anything else was "too weird" and would often have trouble compiling open source software.

kev009
2 replies
14h40m

Correct, SunOS is a descendant of BSD and was the most widely used and renowned one during that time. And like you said, it was the gold standard for easy builds of most contemporary software and enthusiastic support.

DEC Ultrix is also BSD kin and IBM AOS and HP-BSD were intentionally vanilla BSDs. There were some commercial BSDs like Mt Xinu and BSDi that were episodically relevant.

BSD proper was alive and well especially in the academic and research circles into the 1990s and we get the current derivatives like Net, Free, and Open which are direct kin.

Mach is regularly BSD-affined because BSD was the typically ported server, but Mach is decidedly its own thing (as a simple and drastic counterexample, there were MkLinux and OS/2 for PowerPC, which had little to do with BSD but are very much Mach). NeXTSTEP and eventually Darwin/macOS inherit the BSD affinity.

icedchai
1 replies
8h32m

Ultrix had a “weird” feeling to it. It didn’t even support shared libraries from what I remember.

kev009
0 replies
7h59m

It had some heavy hitters behind it, but my understanding is DEC's effort was mired in organizational problems leading to fractured strategy and commitment. Many vendors were still figuring shared libraries out into the early 1990s so it must have been swept away from Ultrix once OSF/1 became the plan of record.

kev009
0 replies
17h31m

Again, so too is z/OS a complete UNIX system (from the user perspective).

OSF/1 is decidedly not the BSD side of the Unix wars; it is its own alternative strand against its contemporaries BSD and System V. More specifically, it took its initial UNIX personality from IBM AIX and was rapidly developed and redefined to accommodate the standards du jour, which included BSD and System V APIs.

netbsdusers
2 replies
23h26m

The second system for Cutler was really Mica - he discusses its outrageous scope in his recent interview with Dave Plummer.

markus_zhang
0 replies
22h54m

Dave Cutler is really someone I look up to and wish I could be (but could never be due to numerous reasons). I strongly resonate with what he said in "Showstopper":

  What I really wanted to do was work on computers, not apply them to problems.
And he has stuck to it for half a century.

cturner
0 replies
8h17m

Be is much more like NT than OS/2 or classic Mac OS. Like NT: kernel written in C, portable, emphasis on multi-threading, robust against subsystem failures, shipped with a TCP/IP stack in the base OS. Be booted straight into the desktop but you could download software to give it a logon screen like NT4 and different users - the structures were already in place.

The current crop of operating systems may themselves be a temporary niche. The designs of NT and Unix are awash with single-host assumptions, yet most use cases are now networked. Consider the way that Linux filesystem handles are specific to the host they are running on, rather than the grid of computers they run in. Yet we run word processors in browsers and ssh to other hosts to dispatch jobs.

There is a gap for a system which has an API that feels like an operating system API, but which sits on top of a grid of computers, rather than a single host. The kernel/keeper acts as a resource-manager for CPUs and memory in the grid. Such systems exist in sophisticated companies but not in the mainstream. Apache Yarn is an example of a system headed in that direction.

Once such a system becomes mainstream, you don't need the kind of complex operating systems we have now. A viable OS would do far less - coordinate drivers, scheduler, TCP/IP stack.

andrewla
36 replies
23h25m

Practically speaking there are a number of developer-facing concerns that are pretty noticeable. I'm primarily a Linux user but I worked in Windows for a long time and have grown to appreciate some of the differences.

For example, the way that the command line and globbing work is night and day, and in my mind the Windows approach is far superior. The fact that the shell is expected to do the globbing means that you can really only have one parameter that can expand, whereas Win32 offers a FindFirstFile/FindNextFile interface that lets command line parameters be expanded at runtime. A missing parameter in Unix can cause crazy behavior ("cp *" with no destination), but on Windows this can just be an error.

On the other hand, the Win32 insistence on wchar_t is a disaster. UTF-16 is ... just awful. The Win32 approach only works if you assume 64k unicode characters; beyond that things go to shit very quickly.
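The "cp *" hazard above can be sketched in a few lines of Python (the filenames are made up for illustration): because the shell expands the wildcard, cp only ever sees a flat argument list and has no way to know a pattern was typed.

```python
import os
import tempfile

# Sketch of the 'cp *' hazard under shell-side globbing: cp receives
# only the expanded list and cannot tell that a wildcard was typed.
d = tempfile.mkdtemp()
for name in ('a.txt', 'b.txt'):
    open(os.path.join(d, name), 'w').close()
os.mkdir(os.path.join(d, 'sub'))

argv = ['cp'] + sorted(os.listdir(d))   # what the shell hands to cp
print(argv)                             # ['cp', 'a.txt', 'b.txt', 'sub']
# cp sees three operands; the last happens to be a directory, so it
# copies the files into 'sub' -- no error, and nothing cp can check.
```

With runtime expansion a la FindFirstFile, the program would instead receive the literal pattern and could refuse to proceed when the destination is missing.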

rbanffy
8 replies
22h33m

NT was more ambitious from the start, and this might be one of the reasons why it didn’t age so well: the fewer decisions you make, the fewer mistakes you’ll have to carry. GNU/Linux (the kernel, GNU’s libc, and a handful of utilities) is a very simple, very focused OS. It does not concern itself about windows or buttons or mice or touchscreens. Because of that, it’s free to evolve and tend to different needs, some of which we are yet to see. Desktop environments come and go, X came and mostly went, but the core has evolved while keeping itself as lean as technically possible.

andrewla
4 replies
22h24m

This is more of a sweeping generalization than I think would be appropriate.

The command line handling as I note above is a really crufty old Unix thing that doesn't make sense now and is confusing and offputting when you get papercuts from it.

Another notable thing that they talk about to an extent is process creation -- the fork/exec model in Linux is basically completely broken in the presence of threads. The number of footguns involved in this process has now grown beyond the ability of a human to understand. The Windows model, while a bit more cumbersome seeming at first, is fully generalizable and very explicit about the handoff of handles between processes.

The file system model I think is mostly a wash -- on the one hand, Windows' file locking model means that you can't always delete a file that's open, which can be handy. On the other hand, it means that you can't always delete a file that's open, which can be painful. On Linux, it's possible that trying to support POSIX file system semantics can result in unrecoverable sleeps that are nearly impossible to diagnose or fix.
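The "explicit handoff of handles" idea can be sketched portably in Python (a POSIX analogue, not the Win32 CreateProcess API itself): subprocess closes inherited descriptors by default, and each descriptor passed to the child must be opted in explicitly.

```python
import os
import subprocess
import sys

# Explicit descriptor handoff: the child inherits only what we list,
# roughly analogous to CreateProcess's explicit handle-inheritance
# machinery, and unlike fork(), which duplicates every open
# descriptor implicitly.
r, w = os.pipe()
child = subprocess.Popen(
    [sys.executable, '-c',
     'import os, sys; os.write(int(sys.argv[1]), b"hello")',
     str(w)],
    pass_fds=(w,),   # descriptors NOT listed here are closed in the child
)
os.close(w)          # drop the parent's copy so EOF is well-defined
data = os.read(r, 5)
child.wait()
print(data.decode())  # hello
```

Nothing crosses the process boundary by accident, which is the property that makes the Windows model tractable in the presence of threads.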

xolve
1 replies
8h36m

I agree about threads a lot! Process creation and handling APIs e.g. fork, signals, exec etc. are great when working with single threaded processes and command line, but they have so many caveats when working with threads.

A paper by Microsoft on how viral fork is and why its presence prevents a better process model: https://www.cs.bu.edu/~jappavoo/Resources/Papers/fork-hotos1...

rbanffy
0 replies
1h45m

True, starting a process and waiting until all its threads complete is a pain on Linux but I don’t remember it being less painful on Windows (although the last time I tried that on Windows was with Windows 2000).

simoncion
1 replies
15h6m

The command line handling as I note above is a really crufty old Unix thing that doesn't make sense now and is confusing and offputting when you get papercuts from it.

It's a power tool, and one whose regularity and consistent presence is appreciated by many.

If it trips you up, then configure your interactive shell and/or scripts to not glob. Bash has 'set -f', other shells surely have similar switches, and undoubtedly there are shells that do no globbing at all.

If your counterargument to this workaround is that now you definitely can't use "*?[]" and friends, whereas you could maybe do that in some Windows software, my counterargument to that would be that leaving globbing & etc up to the application software not only makes it inconsistent and unreliable, it does nothing to systematically prevent the 'cp *' problem you mentioned above.

andrewla
0 replies
1h5m

I mean, I can't turn globbing off -- applications do not do glob expansion in unix (with rare exceptions like `find`), so they just wouldn't work. This is an intrinsic decision in how linux applications are designed to work with command line arguments. All shells need to comply with this if they expect to be able to use command line tools and allow users to use wildcards. There's simply no other choice.

The `cp *` case is annoying in that it will sometimes fail explicitly, but often will work, except that it will do something entirely unexpected, like copy all the files in the directory to some random subdirectory, or overwrite a file with another file. This is unfixable. Files that start with a dash are a minefield.

The windows approach is not without its flaws (quoting is horrible, for example), but on balance I think a little more reasonable.

okanat
2 replies
18h58m

Sorry, but your argument is baseless. There is nothing in the NT kernel that forces a certain UI toolset, nor does it deal with the UI anymore (it briefly did, when PCs were less powerful, via the GDI API). The Linux kernel and its modesetting facilities are quite a bit invasive, and it is Linux that forces a certain way of implementing GPU drivers.

Windows just requires a set of functions from the driver to implement. Win32 drawing APIs are now completely userspace APIs and Windows actually did implement a couple of different ones.

Browsers switched to using a single graphics API canvas component long ago. Instead of relying on high level OS graphics API, they come with their own direct renderers that's opaque to the OS apart from the buffer requests. This approach can utilize GPUs better. Windows was among the first systems to implement the same as a UI component library. It is called WPF and it is still the backbone of many non-Win32 UI elements on Windows. On the Linux side, I think Qt was the first to implement such a concept with QML / QtQuick which is much later than WPF.

Moreover your argument that Unix evolves better or more freely falls apart when you consider that we had to replace X. On Unix world, the tendency to design "minimal" APIs that are close to hardware is the source of all evil. Almost any new technology requires big refactoring projects from application developers in the Unix world since the system programmers haven't bothered to design future-proof APIs (or they are researchers / hobbyists who don't have a clue about the current business and user needs nor upcoming tech).

Windows didn't need to replace Win32 since it was designed by engineers who understood the needs of businesses and designed an abstract-enough API that can survive things like introduction of HiDPI screens (which is just a "display changed please rerender" event). It is simply a better API.

On the Unix side, X was tightly designed around 96 or 72 DPI screens, and everything had to be bolted on or hacked since the APIs were minimal or tightly coupled with the hardware capabilities at the time. Doing direct rendering on X was a pain in the ass and involved an intertwined web of silly hacks, which is why the DEs in the 2010s kept discovering weird synchronization bugs and why Wayland needed to be invented.

rbanffy
0 replies
2h7m

There is nothing in NT kernel that forces a certain UI toolset

Unfortunately, Windows the OS and the NT kernel are not completely independent - you can’t really run one without the other.

we had to replace X

We did so because we wanted to continue to run apps made for it. The fact it was done without any change to the Linux kernel is precisely because the graphical environment is just another process running on top of the OS, and it’s perfectly fine to run Linux without X.

anthk
0 replies
11h29m

Qt...

Cairo and such predate QML for long...

panzi
7 replies
22h29m

Hard disagree. The way Windows handles command line parameters is bonkers. It is one string and every program has to escape/parse it themselves. Yes, there is CommandLineToArgvW(), but there is no inverse of this. You need to escape the arguments by hand and can't be sure the program will really interpret them the way you've intended. Even different programs written by Microsoft have different interpretations. See the somewhat recent troubles in Rust: https://github.com/rust-lang/rust/security/advisories/GHSA-q...
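Python does ship a best-effort inverse under one set of rules -- subprocess.list2cmdline, which follows the MS C runtime quoting conventions -- and it illustrates the problem: it targets one parser's conventions, and a program using different rules (cmd.exe, notably) can still misparse the result.

```python
from subprocess import list2cmdline

# Quote an argv using the MSVCRT/CommandLineToArgvW conventions.
# These are one parser's rules; cmd.exe and various programs differ,
# which is exactly the ambiguity described above.
line = list2cmdline(['child.exe', 'plain', 'has space', 'quote"inside'])
print(line)
# child.exe plain "has space" quote\"inside
```

The escaping is correct only if the receiving program splits its single command-line string the same way; there is no way to guarantee that in general.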

pcwalton
1 replies
20h19m

It's really strange to me that Microsoft has never added an ArgvToCommandLineW(). This would solve most of the problems with Windows command line parsing.

pjmlp
0 replies
6h59m

It was common in MS-DOS compilers, in Windows largely ignored, because GUI is the main way of using the system, not always being stuck in CLI world.

chasil
1 replies
22h12m

As I understand it, CMD.EXE came from OS/2 and has had many revisions that allow more pervasive evaluation of variables (originally, they were expanded only once, at the beginning of a script).

The .BAT/.CMD to build the Windows kernel must have originally been quite the kludge.

Including pdksh in the original Windows NT might have been a better move.

https://blog.nullspace.io/batch.html

pmontra
0 replies
21h20m

I quote URLs and "file names" with double quotes in Linux bash much like I quote "Program Files" in Windows cmd. It's the same. I quote spaces\ with\ a\ backslash\ sometimes.

andrewla
0 replies
22h9m

Question of where the pain goes, I guess. In unix, having to deal with shell escaping when doing routine tasks is super annoying -- URLs with question marks and ampersands screwing everything up, and simple expansions (like the `cp *` example above) causing confusion.

Yes, Windows glob resolution and expansion can be inconsistent, but they usually aren't, while Unix makes you eat the cruft every time you use it. And you still get tools like ImageMagick that have strange ad hoc syntax for wildcards because they can't use the standard wildcarding, or even ancient tools like find that force you to do all sorts of stupid shit to be compatible with globbing.

SkiFire13
0 replies
20h21m

See the somewhat recent troubles in Rust: https://github.com/rust-lang/rust/security/advisories/GHSA-q...

FYI this started out as a vulnerability in yt-dlp [1]. Later it was found to impact many other languages [2]. Rust, along with other languages, also considered it a vulnerability to fix, while some other languages only updated the documentation or considered it as wontfix.

[1]: https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-42...

[2]: https://flatt.tech/research/posts/batbadbut-you-cant-securel...

torginus
3 replies
4h35m

I can see why MS went with UTF-16. As someone who had experience from before that era, and comes from a non-English culture, before UTF-16, most people used crazy codepages and encodings for their stuff, resulting in gobbledygook once something went wrong - and it always did.

If you run with the assumption that all UTF-16 characters are two bytes, you still get something that's usable for 99% of the Earth's population.

devbent
2 replies
3h4m

UTF-8 wasn't a thing when the decision to go with UTF-16 was made.

UTF-8 became a thing shortly thereafter and everyone started laughing at MS for having followed the best industry standard that was available to them when they had to make a choice.

yencabulator
0 replies
2h17m

Meanwhile a bunch of unix graybeards literally invented UTF-8 on a napkin, and changed the world.

wvenable
0 replies
2h47m

UTF-16 also didn't exist when the decision was made. It was UCS-2.

Microsoft absolutely made the right decision at the time and really the only decision that could have been made. They didn't have the luxury to ignore internationalization until UTF-8 made it viable for Linux.

Dwedit
3 replies
23h5m

The wchar_t thing is made much worse by disagreements on what type that actually is. On Win32, it's a 16-bit type, guaranteed to hold UTF-16 code units (including halves of surrogate pairs). But on some other compilers and operating systems, wchar_t could be a 32-bit type.

Another problem with UTF-16 on Windows is that it does not enforce that surrogate pairs are properly matched. You can have valid filenames or passwords that cannot be encoded in UTF-8. The solution was to create another encoding system called "WTF-8" that allows unmatched surrogate pairs to survive a round trip to and from UTF-16.
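Python's 'surrogatepass' error handler behaves essentially like WTF-8, so the round-trip problem is easy to demonstrate:

```python
# An unpaired high surrogate: legal in a Windows filename, but not
# encodable as strict UTF-8.
lone = '\ud800'
try:
    lone.encode('utf-8')
    raise AssertionError('strict UTF-8 should have rejected this')
except UnicodeEncodeError:
    pass

# 'surrogatepass' encodes it anyway -- effectively WTF-8 -- so the
# value survives a round trip to and from the UTF-16 world.
wtf8 = lone.encode('utf-8', 'surrogatepass')
print(wtf8)                                        # b'\xed\xa0\x80'
assert wtf8.decode('utf-8', 'surrogatepass') == lone
```

This is the same trick WTF-8 formalizes: relax the no-surrogates rule so arbitrary 16-bit filename data can be carried losslessly.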

progmetaldev
1 replies
20h12m

Is this just a really good joke, or something real? I enjoyed it, regardless!

benchloftbrunch
0 replies
19h26m

WTF-8 barely qualifies as "another encoding system" - it's a trivial superset of UTF-8 that omits the rule forbidding surrogate codes.

Imo that artificial restriction in UTF-8 is the problem.

stroupwaffle
1 replies
23h19m

If wchar_t holds the majority of code points for given use, then there are some benefits to having a fixed-width character and certain algorithms.

But it is fairly easy to convert wchar_t to-and-from UTF8 depending on use.

UTF-16 is not awful; it is the same as an 8-bit character set, just twice as wide.

andrewla
0 replies
23h10m

UTF-16 is fine so long as you are in Plane 0. Once you have to deal with surrogate pairs, then it really is awful. Once you have to deal with byte-order-markers you might as well just throw in the towel.

UTF-8 is well-designed and has a consistent mechanism for expanding to the underlying code point; it is easy to resynchronize and for ASCII systems (like most protocols) the parsing can be dead simple.

Dealing with Unicode text and glyph handling is always going to be painful because this problem is intrinsically difficult. But expansion of byte strings to unicode code points should not be as difficult as UTF-16 makes it.

Windows was converted to UCS-2 before higher code planes were designed and they never recovered.
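The resynchronization property is mechanical to check in Python: every UTF-8 byte declares its role, which UTF-16 code units do not.

```python
# Each byte of UTF-8 self-identifies: continuation bytes match
# 10xxxxxx, everything else starts a code point. From any offset you
# can scan back at most three bytes to find a boundary.
s = 'a€𝄞'                          # 1-, 3-, and 4-byte sequences
b = s.encode('utf-8')
roles = ['cont' if byte & 0xC0 == 0x80 else 'lead' for byte in b]
print(roles)
# ['lead', 'lead', 'cont', 'cont', 'lead', 'cont', 'cont', 'cont']

# A UTF-16 code unit carries no such marker: a bare 16-bit value is a
# BMP character, a high surrogate, or a low surrogate, and only the
# surrogate ranges -- plus byte order -- tell you which.
```

This is why truncating or seeking into a UTF-8 stream is recoverable, while doing the same to UTF-16 requires knowing the byte order and surrogate state.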

mixmastamyk
1 replies
21h53m

command line and globbing

Both Unix and NT are suboptimal here. I believe there was an OS lost to time (and my memory) that had globbing done by the program, but using an OS-provided standard library -- probably the best way to do it consistently. That said, having to pick the runner-up, I prefer the Unix way, as the unpredictable results happened to me more often on NT than on Unix. Your cp example, though possible, is something I don't think I've ever done in my career.

The rest of command.com/cmd.exe is so poorly designed as to be laughable, only forgiven for being targeted to the PC 5150, and should have been retired a few years later. Makes sh/bash look like a masterpiece. ;-)

andrewla
0 replies
21h25m

Win32 in theory has globbing done by an OS-provided standard library -- the `FindFirstFile` and `FindNextFile` win32 calls process globbing internally, and they are what you are expected to use.

Some applications choose to handle things differently, though. For example, the RENAME builtin does rough pattern matching, so "REN *.jpg *.jpg.old" will work pretty much the way that intuition demands, but the latter parameter cannot be globbed as there are no such files when this command begins to execute. Generally speaking, this can get pretty messy if commands try to be clever about wildcard expansion against theoretical files.
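The grandparent's wished-for model -- globbing done by the program via a standard library -- looks like Python's glob module: the program receives the raw pattern and expands (or refuses) it itself, much as REN does its own pattern matching. A small sketch with made-up filenames:

```python
import glob
import os
import tempfile

# Program-side expansion: the pattern reaches the program intact, so
# it can expand at the right moment -- or report "no such files"
# instead of misbehaving on an unexpectedly expanded argv.
d = tempfile.mkdtemp()
for name in ('x.jpg', 'y.jpg', 'note.txt'):
    open(os.path.join(d, name), 'w').close()

pattern = os.path.join(d, '*.jpg')       # arrives unexpanded
matches = sorted(glob.glob(pattern))
print([os.path.basename(m) for m in matches])   # ['x.jpg', 'y.jpg']

empty = glob.glob(os.path.join(d, '*.png'))
print(empty)   # [] -- the program can turn this into a clear error
```

Because expansion happens inside the program, a pattern matching nothing is detectable rather than silently passed through or mangled.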

jhallenworld
1 replies
22h51m

Wchar_t is definitely the biggest annoying difference in that it shows up everywhere in your C/C++ source code.

TillE
0 replies
20h55m

It depends on what you're doing, but after many years I've just settled on consistently using UTF-8 internally and converting to UCS-2 at the edges when interacting with Win32.

There's just too much UTF-8 input I also need to take, and converting those to wstring hurts my heart.

dspillett
1 replies
21h4m

Not even UTF16. Just UCS2 for a long time.

deathanatos
0 replies
14h2m

Windows, I believe, is WTF-16. It permits surrogates (e.g., you can stick an emoji in a filename) — thus it cannot be UCS-2. It permits unpaired surrogates — thus it cannot be UTF-16.

tracker1
0 replies
1h31m

For what it's worth, UTF-8 didn't exist when UTF-16/UCS2 was created. I'm sure Windows, JavaScript and a lot of other things would be very different if UTF-8 came first.

Aside: a bit irksome how left out cp-437 is in a lot of internationalization/character tools. PC-DOS / US was a large base of a lot of software/compatibility.

pie_flavor
0 replies
15h44m

wchar_t was advanced for its time, Microsoft was an early adopter of Unicode and the ANSI codepage system it replaced was real hell but what almost everyone else was using. UTF-8's dominance is much more recent than Linux users tend to assume - Linux didn't (and in many places still doesn't) support Unicode at all, but an API that passes through ASCII or locale-based ANSI can have its docs changed to say UTF-8 without really being wrong. Outside of the kernel interface, languages used UTF-16 for their string types, like Python and Java. Even for a UTF-8 protocol like HTTP, UTF-16 was assumed better for JS. Only now that it is obvious that UTF-16 is worse (as opposed to just having an air of "legacy"), is Microsoft transitioning to UTF-8 APIs.

jmmv
0 replies
20h23m

OP here. I did not touch upon CLI argument handling in this article because I wanted to focus on the kernel but this is indeed a big difference. And... I had written about this too :) https://jmmv.dev/2020/11/cmdline-args-unix-vs-windows.html

delta_p_delta_x
29 replies
23h53m

NT is why I like Windows so much and can't stand Unix-likes. It is object-oriented from top to bottom, and I'm glad in the 21st century PowerShell has continued that legacy.

But as someone who's used all versions of Windows since 95, this paragraph strikes me the most:

What I find disappointing is that, even though NT has all these solid design principles in place… bloat in the UI doesn’t let the design shine through. The sluggishness of the OS even on super-powerful machines is painful to witness and might even lead to the demise of this OS.

I couldn't agree more. Windows 11 is irritatingly macOS-like and for some reason has animations that make it appear slow as molasses. What I really want is a Windows 2000-esque UI with dense, desktop-focused UIs (for an example, look at Visual Studio 2022 which is the last bastion of the late 1990s-early 2000s-style fan-out toolbar design that still persists in Microsoft's products).

I want modern technologies from Windows 10 and 11 like UTF-8, SSD management, ClearType and high-quality typefaces, proper HiDPI scaling (something that took desktop Linux until this year to properly handle, and something that macOS doesn't actually do correctly despite appearing to do so), Windows 11's window management, and a deeper integration of .NET with Windows.

I'd like Microsoft to backport all that to the combined UI of Windows 2000 and Windows 7 (so probably Windows 7 with the 'Classic' theme). I couldn't care less about transparent menu bars. I don't want iOS-style 'switches'. I want clear tabs, radio buttons, checkboxes, and a slate-grey design that is so straightforward that it could be software-rasterised at 4K resolution, 144 frames per second without hiccups. I want the Windows 7-style control panel back.

kev009
8 replies
23h40m

It's never been commonplace but can't you still run alternate shells (the Windows term for the GUI, not the command prompt in UNIX parlance)?

kev009
2 replies
21h44m

You didn't understand the parenthetical, busybox has no relation to the Windows shell.

chasil
1 replies
15h2m

It is a Windows shell. A POSIX Windows shell.

Such a shell should have been present at the beginning, not the OS/2 affliction.

kev009
0 replies
14h35m

Unfortunately it is still clear you don't understand my comment nor the linked wikipedia page at all. Windows Shell isn't the same use of the word you are overloading here in this thread and you are out in left field from what everyone else is talking about. Maybe revisit the wikipedia page and read a little more, look at the project descriptions&screencaps, and it will make sense to you.

p_l
0 replies
23h37m

You still can, and it's even exposed specifically for making constrained setups (though not everyone knows to use it)

chasil
0 replies
22h4m

Busybox has a great shell in the Windows port.

It calls itself "bash" but it is really the Almquist shell with a bit of added bash syntactic sugar. It does not support arrays.

https://frippery.org/busybox/index.html

EricE
0 replies
22h12m

Indeed the first thing I do on a new Windows install is load Open Shell.

reisse
2 replies
21h54m

Ah, it's a cycle repeating itself. I remember when Microsoft first released XP it was considered bloated (UI-wise) compared to Windows 2000 and Windows 95/98/ME. Then Vista came and all of a sudden XP was in the limelight for being slim and fast!

aleph_minus_one
0 replies
20h51m

Even when Vista came, people said all the time that they considered Windows 2000 to be much less UI-bloated than Windows XP; it was just that, of the "UI bloat evils", Windows XP was considered the lesser one compared to Windows Vista. I really heard nobody saying that XP was slim and fast.

BTW: Windows 7 is another story: at that time PC magazines wrote deep analyses how some performance issues in the user interface code of Windows Vista that made Vista feel "sluggish" were fixed in Windows 7.

nullindividual
2 replies
23h22m

What I really want is a Windows 2000-esque UI with dense

Engineers like you and I want that. The common end user wants flashy, sleek, stylish (and apparently CandyCrush).

But don't forget that that 2000 UI was flashy, sleek, and stylish with its fancy-pants GDI+ and a mouse pointer with a drop shadow!

EvanAnderson
1 replies
22h56m

The common end user wants flashy, sleek, stylish (and apparently CandyCrush).

Do they, though? I get the impression that nobody is actually testing with users. It seems more like UI developers want "flashy, sleek, stylish" and that's what's getting jammed down all our throats.

mattkevan
0 replies
22h37m

As a UI designer and developer, I would push the blame further along the stack and say that execs and shareholders want “flashy, sleek, stylish”, in the same way everything has to have AI jammed in now, lest the number start going down or not up quite as fast as hoped.

AshamedCaptain
2 replies
22h27m

The sluggishness of the OS even on super-powerful machines is painful to witness and might even lead to the demise of this OS.

NT has been sluggish since _forever_. It is hardly a bloated GUI problem. On machines where 9x would literally fly, NT would fail to boot due to low memory.

nullindividual
1 replies
21h52m

Not sure what NT4 systems you were dealing with, but I've dealt with ones thrashing the page file on spinning rust and the GUI is still responsive.

NT4 had a higher base RAM requirement than 9x. Significantly so.

AshamedCaptain
0 replies
1h58m

The point being, 3.x/9x and NT using the same GUI, yet NT consistently requiring up to 4 times more RAM. NT itself was ridiculously bloated, not the GUI.

wvenable
1 replies
13h49m

Windows 11 is irritatingly macOS-like and for some reason

I bet dollars to donuts that all the designers who come up with new Windows designs are using Macs.

The old Windows UI was designed out of painstaking end user testing which was famously responsible for the Start button.

SoothingSorbet
0 replies
9h45m

Indeed. And importantly, you could tell exactly which UI elements were which. It's sometimes genuinely difficult to tell if an element is text, a button, or a button disguised as a link on Windows 10/11.

markus_zhang
1 replies
21h45m

Win 2000 was the pinnacle. I stuck to it until WinXP was almost out of breath and reluctantly moved to it. Everything afterwards is pretty meh.

ruthmarx
0 replies
9h37m

Windows 7 was pretty great.

Melatonic
1 replies
20h1m

Agreed on you with everything here

That being said if you run something like Win10 LTSC (basically super stripped down win10 with no tracking and crapware) and turn off all window animations / shadows / etc you might be very surprised - it is snappy as hell. With a modern SSD stuff launches instantly and it is a totally different experience.

wkat4242
0 replies
17h11m

I run LTSC. You still get the tracking and some crapware sadly.

whoknowsidont
0 replies
13h13m

It is object-oriented from top to bottom,

On what planet is this a good thing? What does this realistically and practically mean outside of some high-level layer that provides syntax sugar for B2B devs? Lord knows you better not be talking about COM.

I honestly only see these types of comments from people who do NOT do systems programming.

ruthmarx
0 replies
9h39m

NT is why I like Windows so much and can't stand Unix-likes. It is object-oriented from top to bottom,

This sounds like you are talking from a design perspective and the rest of your post seems to be from a usability perspective. Is this correct?

Windows 11 is irritatingly macOS-like

MacOS is such an objectively inferior design paradigm, very frustrating to use. It's Apple thinking 'different' for the sake of being different, not because it's good UI.

I only keep a W10 image around because it's still supported and W11 seems like a lot more work to beat into shape. OpenShell at least makes things much better.

pcwalton
0 replies
20h17m

slate-grey design that is so straightforward that it could be software-rasterised at 4K resolution, 144 frames per second without hiccups

This is not possible (measure software blitting performance and you'll see), and for power reasons you wouldn't want to even if it were.

jiripospisil
0 replies
4h36m

Windows 11 is irritatingly macOS-like and for some reason has animations that make it appear slow

Not sure about Windows but on macOS you can disable most of these animations - look for "Reduce motion" in Accessibility, the same setting is available on iOS/iPadOS. The result seems snappier.

EvanAnderson
0 replies
22h50m

I love that NT was actually designed. I don't necessarily like all of the design but I like that people actually thought about it.

rbanffy
27 replies
22h42m

I don’t think the registry is a good idea. I don’t mind every program having its own dialect of a configuration language, all under the /etc tree. If you think about it, the /etc tree is just a hierarchical key-value store implemented as files where the persistent format of the leaf node is left to its implementer to decide.

aseipp
16 replies
19h24m

If you think about it, the /etc tree is just a hierarchical key-value store

Well, you're in luck, I have good news for you -- Windows also has its own version of this concept: it's called "The Registry". You might have heard of it?

rkagerer
10 replies
11h56m

The registry would have been better if there were a stronger concept of "ownership" of the data it contains, tying each key to the responsible app / subsystem. I've tracked hundreds of software uninstalls and I would bet only about 1% of them actually remove all the cruft they originally stick in (or populated during use). The result is bloat, a larger surface area for corruption, and system slowdown.

Ironically in this respect it was a step backward... When settings lived in INI files, convention typically kept them in the same place as the program, so they were easy to find and were naturally extinguished when you deleted the software.

If you look at more modern OS's like Android and iOS they tend to enforce more explicit ties between apps and their data.

ruthmarx
3 replies
9h44m

only about 1% of them actually remove all the cruft they originally stick in (or populated during use). The result is bloat, a larger surface area for corruption, and system slowdown.

I think this is a myth partly spread by commercial offerings that want to 'clean and optimize' a windows install.

Most of the cruft left in the registry is the equivalent of config files in /etc not removed after uninstalling an app. That stuff isn't affecting performance.

vkazanov
2 replies
8h44m

15 something years ago I had this unpleasant job where I had to install a major vendor's database on Windows server machines. I remember I also had a lengthy list of things to check and clean in the registry to make sure things work.

Yes, these are configs. And no, we cannot just let applications do whatever they want in the shared config space without a way to trace things back to the original app.

At least in the Linux world I was able to just check what the distro scripts installed.

ruthmarx
1 replies
4h54m

And no, we cannot just let applications do whatever they want in the shared config space without a way to trace things back to the original app.

We don't have to let them, but we do for the most part. We could use sandboxing technology to isolate and/or log, but mostly OSs don't do anything to restrict what an executable can do by default, at least as far as installing.

At least in the Linux world I was able to just check what the distro scripts installed.

You can do this in Windows too sometimes, but it doesn't matter if it's a badly behaving app. There are linux installers that are just binary blobs and it would be a lot more work to monitor what they do also.

whoknowsidont
0 replies
2h36m

but mostly OSs don't do anything to restrict what an executable can do by default, at least as far as installing.

There is a very mature and very powerful system for this called Jails.

There are linux installers that are just binary blobs and it would be a lot more work to monitor what they do also.

This is simply not true. If I want to monitor an app in its entirety I can easily do so on most unixy systems.

Past the default tools that require some amount of systems knowledge to use correctly, you can easily just use Stow or Checkinstall (works on most linux systems).

There is no mechanism for doing this on Windows as even the OS loses track of it sometimes. And if you think I'm being dramatic, trust me, I am not. There is a reason the tools don't exist for Windows, at least meeting feature parity.

nullindividual
3 replies
4h28m

This is the responsibility of the installer.

Using Windows Installer, this is easily accomplished. The Msi database _does_ track individual files and registry entries. If you're using another installer, or the developer allows their app to write something somewhere that isn't tracked by their installer, you're going to get files left behind.

macOS is especially bad in this respect. Bundles are great, until you have a ~/Library full of files that you never knew about after running an application.

rbanffy
2 replies
2h14m

This is the responsibility of the installer

On any Unix I can grep my way into the /etc tree and find files belonging to uninstalled applications and get rid of them myself. The whole point is that I can manage the “configuration database” with the same tools I manage a filesystem. That if the brilliant tools like apt and dnf fail to clean up after a program is uninstalled.

nullindividual
1 replies
1h43m

Windows is no different. Next time you're in front of Terminal, try:

    cd HKCU:\SOFTWARE
You're now browsing the registry and can use terminal commands.

rbanffy
0 replies
46m

When was this introduced? This is surprisingly enlightened.

Can I edit keys with a text editor?

alt227
1 replies
9h52m

system slowdown

This is often touted as a downside for the registry, and indeed a whole ecosystem of apps have evolved around this concept to 'clean' the registry and 'speed it up'.

In my experience of 35 years of using Windows, I have never noticed a bloated registry slowing down a computer. I have also never noticed a system speed up from removing some unused keys. The whole point of addressable key/value pairs is that individual bits of data can be written or read without loading the whole hive.

I wonder where this idea of a bloated slow registry came from?

rbanffy
0 replies
2h18m

Since the registry is a database, I would expect adding and removing branches and leaves to create fragmentation that, in the age of spinning metal and memory pressure, could create performance issues. A file system is easily defragmented with tools available in the operating system itself, but not the registry. I’m not even sure how much of it can be optimised (by doing garbage collection and defragmenting the underlying files) with the computer running.

If it makes use of indexes, changes will lead to the indexes themselves being fragmented, making performance even worse.

rbanffy
2 replies
2h10m

What exactly do I gain from the registry that compensates for the fact I can use any tool that works on files to manage the /etc tree?

Can you manage registry folders with Git and keep track of the changes you make? Can you grep for content? On a Mac it’s indexed by the systemwide text search tooling.

Using the file system to store that data is extremely powerful.
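As a sketch of that, with plain git on a scratch config tree (the etckeeper tool automates exactly this for /etc; the paths and file names here are illustrative):

```shell
# Put a config tree under version control like any other directory of files
mkdir -p /tmp/etc-git && cd /tmp/etc-git
git init -q .
git config user.email demo@example.com && git config user.name demo
echo "timeout 30" > app.conf
git add -A && git commit -qm "initial config"

echo "timeout 60" > app.conf       # edit the config like any other file
git diff                           # see exactly what changed, line by line
```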

craigmoliver
1 replies
1h9m

You can create/export .reg files to your GIT repo/file system...but that may be one extra step and doesn't stay in sync automatically.

rbanffy
0 replies
41m

Another idea would be to do periodic snapshots of the /etc folder. That, sadly, excludes ext4, but any flavour of Solaris can easily do it.

magicalhippo
1 replies
18h8m

And since it supports variable-length binary values, it fully supports that "the persistent format of the leaf node is left to its implementer to decide".

rbanffy
0 replies
1h59m

Except that now it’s an opaque blob and not a text file I can use grep to find and vi to edit.

phendrenad2
7 replies
20h19m

So what's wrong with the registry if a file-based registry is okay? The registry could be an abstraction layer over a Unix /etc filesystem, after all.

ruthmarx
3 replies
9h43m

I have no issue with the concept, but in practice the Windows registry is a lot more obfuscated than it needs to be. There can be trees and trees of UUIDs or similar, and there is no need for it to be so user unfriendly.

Part of this might be mixing internal stuff that people never need to see with stuff that people will need to access.

nullindividual
2 replies
4h21m

That's a developer choice, not the registry in and of itself. You could just as easily have /etc filled with files named by GUIDs.

Generally, 3rd party developers don't use a bunch of GUIDs for keys. Microsoft does for Windows components to map the internal object ID of whatever they're dealing with; my assumption is for ease of recognition/documentation on their side (and generally the assumption that the end user shouldn't be playing around in there).

ruthmarx
1 replies
3h23m

That's a developer choice, not the registry in and of itself. You could just as easily have /etc filled with files named by GUIDs.

For sure, that's why I said I have no issue with the concept but rather how it's used in practice.

Microsoft does for Windows components to map the internal object ID of whatever they're dealing with; my assumption is for ease of recognition/documentation on their side (and generally the assumption that the end user shouldn't be playing around in there).

That's maybe fair, but most of that stuff isn't stuff the user even needs to access most of the time. Maybe separating it out from all the HKLM and HKCU software trees would have made sense.

nullindividual
0 replies
3h12m

HKLM and HKCU have specific ACLs. It wouldn't make sense to have _two_ user-modifiable registry hives and ditto for machine.

JackSlateur
1 replies
10h56m

"rm -rf /etc/nginx" versus "try to remember where are the miriads of random keys spread everywhere"

alt227
0 replies
9h46m

Then it seems like the issue you have is how vendors are storing keys in the registry, not the registry itself. In your example, if nginx made a single node in the registry with all its keys under that then it would be just as easy to remove that single node as it would be to remove the single directory.

However, in the real world it is never as simple as you suggest. Linux apps often litter the filesystem with data: an app might have files in /etc and /opt, shortcut scripts in /root/bin and /usr/sbin, and config files in /home and /usr directories. Linux file systems are just as littered as Windows registries in my experience, if not worse, because they differ across distros.

nmz
0 replies
10h54m

Labeling; fstab is supposed to be tab-separated. What happens if I mistype something? Where's the safety? Does this need to be a DSL?

mixmastamyk
0 replies
21h47m

I like what the elektra project was trying to do, but it didn't catch on. Basically put config into the filesystem with a standard schema, etc. Could use basic tools/permissions on it, rsync it, etc. Benefits of the registry but less prone to failure and no tools needed to be reinvented.

delta_p_delta_x
0 replies
18h39m

I can't quite decide if this comment is sarcasm or not.

Const-me
19 replies
18h46m

I would add that on modern Windows NT, Direct3D is an essential part of the kernel; see dxgkrnl.sys. This means D3D11 is guaranteed to be available, even without any GPU: Windows ships with a software fallback called WARP.

This allows user-space processes to easily manipulate GPU resources, share them between processes if they want, and powers higher level technologies like Direct2D and Media Foundation.

Linux doesn’t have a good equivalent for these. Technically Linux has the dma-buf subsystem, which allows sharing buffers between processes. Unfortunately, that is much harder to use than D3D, and very specific to the particular drivers that export these buffers.

movedx
18 replies
17h56m

But why should Linux have an equivalent of such features?

lelandbatey
14 replies
17h47m

Why should a kernel have anything? Because it's useful and convenient, as the OP mentioned.

movedx
13 replies
17h16m

It _can be_ useful. It can also _not_ be useful to others. It sounds like it's not a choice in this case, but a forced feature, and that's fine for some and not for others.

So again, why _must_ Linux have an equivalent?

Dalewyn
8 replies
16h9m

This line of thought is precisely why Linux continues to falter in mainstream acceptance.

Windows exists to enable the user to do whatever he wants. If the user wants to play a game or watch a video, Direct3D is there to let him do that. If he doesn't, Direct3D doesn't get in the way.

This is far better than Linux's (neckbeards'?) philosophy of Thou Shalt Not Divert From The One True Path which will inevitably inconvenience many people and lead to, you guessed it, failure in the mainstream market.

Contrast Android, which took Linux and re-packaged it in a more Windows-like way so people could actually use the bloody thing.

movedx
3 replies
14h18m

If the user wants to play a game or watch a video, Direct3D is there to let him do that. If he doesn't, Direct3D doesn't get in the way.

I _just_ moved from Windows 11 to Kubuntu. None of that stuff is missing. In fact, unlike Windows 10/11, I didn't even have to install a graphics driver. My AMD 7700 XTX literally just worked right out of the box. Instantly. Ironically that’s not the case for Windows 10/11. This isn’t a “My OS is better than your OS” debate — we’re talking about why D3D being integrated into the kernel is a good idea. I’m playing devil’s advocate.

And thus, you missed my point: "Why should Linux have an equivalent to Direct3D" isn't me arguing that Windows having it is bad, it's me asking people to think about design choices and consider whether they're good or bad.

This is far better than Linux's (neckbeards'?) philosophy of Thou Shalt Not Divert From The One True Path which will inevitably inconvenience many people and lead to, you guessed it, failure in the mainstream market.

If you think Windows having Direct3D "built in" is why it has mainstream dominance, then you potentially have a very narrow view of history, market timing, economics, politics, and a whole range of other topics that actually led to the dominance of Windows.

Hikikomori
2 replies
10h30m

I _just_ moved from Windows 11 to Kubuntu. None of that stuff is missing. In fact, unlike Windows 10/11, I didn't even have to install a graphics driver. My AMD 7700 XTX literally just worked right out of the box. Instantly. Ironically that’s not the case for Windows 10/11.

How did you install a driver on windows if your gpu didn't work out of the box?

SoothingSorbet
1 replies
10h0m

I'm sure Windows is perfectly capable of driving a GOP framebuffer. That doesn't mean the kernel has an actual GPU driver.

Hikikomori
0 replies
8h47m

It will also install a proper driver with windows update, can also do that during the installation.

SoothingSorbet
1 replies
9h57m

Windows exists to enable the user to do whatever he wants

It's very bad at that, then, considering it insists on getting in my way any time I want to do something (_especially_ something off of the beaten path).

If the user wants to play a game or watch a video, Direct3D is there to let him do that. If he doesn't, Direct3D doesn't get in the way.

I don't see what the point you are trying to make is, this is no different on Linux. What does D3D being in the kernel have to do with _anything_? You can have a software rasterizer on Linux too. You can play games and watch videos. Your message is incoherent.

Dalewyn
0 replies
9h21m

I don't see what the point you are trying to make is

Parent commenter said Linux shouldn't have <X> if it's not useful for everyone, though more likely he means for himself. Either way, he is arguing Linux shouldn't have a feature for dogmatic reasons. Violating the Unix ethos of doing only one thing, or something.

Meanwhile, Windows (and Android) have features so people can actually get some bloody work done rather than proselytize about their glorious beardcode.

severino
0 replies
9h15m

I don't get it. You mean people can't watch videos or play games in Linux?

agumonkey
0 replies
14h30m

Not to contradict but it seems to me that *nixes have always split user interaction and 'compute'. To them running a headless toaster is probably more valuable than a desktop UI.

scoodah
3 replies
16h46m

No one used the word must until you, just now. The OP comment was pointing out a valid thing that Windows has that Linux does not. It’s fine if Linux doesn’t have it, but I don’t understand where you’re coming from in presenting this as though someone said Linux must have this.

movedx
2 replies
14h14m

Linux doesn’t have a good equivalent for these.

That implies Linux must or should have an equivalent to those features found in Windows -- you can choose any word you like, friend. There is no other reason to make that statement but to challenge the fact Linux doesn't have those options.

Fun fact: I switched to Kubuntu recently and I didn't even have to install a graphics driver. It was just there, just worked, and my AMD 7700 XTX is working fine and playing Windows "only" games via Proton just fine as well as Linux native games just fine.

I'm simply trying to get people to think about design choices and questioning or stating why one thing is better than another.

lelandbatey
0 replies
2h52m

Linux doesn’t have a good equivalent for these.

That literally does not imply a need for those features. It points out a thing that Linux lacks, which is true. And that's where it stops. You are projecting an implication that "Linux does need x, y, or z because Windows has X, Y, or Z."

We're not sitting in a thread talking about what makes Linux/Windows better than the other, we're in a thread talking about just factual differences between the two. You can talk about two things, compare them, and even state your own preference for one or the other without stating that each should do everything that the other can do.

E.g. snowmobiles are easier to handle while driving in the snow than a Boeing 737. I like driving my snowmobile in the snow more than I like taxiing a Boeing 737 in the snow.

We can talk about things without implying changes that need to happen.

alt227
0 replies
9h58m

Dont read into the text too much, this doesnt imply what you are saying at all.

The reason to make that statement is to point out that there are differences in functionality.

Nobody in the thread said one situation was better than the other, until you did.

Const-me
2 replies
9h2m

Because the modern world is full of computers with slow mobile CPUs and high-resolution, high-refresh-rate displays. On such computers you need a 3D GPU to render anything, even scrolling text, at the refresh rate of the display.

A 3D GPU is a shared hardware resource just like a disk. GPU resources are well-shaped slices of VRAM which can store different stuff, just like files are backed by the blocks of the underlying physical disks. User-space processes need to create and manipulate GPU resources and pass them across processes, just like they do with files on disks.

An OS kernel needs to manage 3D GPU resources for the same reason they include disk drivers and file system drivers, and expose consistent usermode APIs to manipulate files.

It seems Linux kernel designers mostly ignored 3D GPUs. The OS does not generally have a GPU API: some systems have OpenGL, some have OpenGL ES, some have Vulkan, some have none of them.

marcosdumay
0 replies
4h18m

Or a modern OS does what Linux does: expose DRM and let a userspace driver manage the GPU.

AnimalMuppet
0 replies
4h18m

Because modern world is full of computers with slow mobile CPUs and high-resolution high refresh rate displays.

And Linux does run on such computers. But it also runs on mainframes, and on embedded systems with no graphics whatsoever. And it runs on a much wider variety of CPUs than Windows does.

So for Linux, it's much more of a stretch to assume that the device looks something like a PC. And if it's not going to be there in a wide variety of situations, then should Linux really have 3D graphics as part of the kernel? (At a minimum it should be removable, and have everything run fine without it.)

phendrenad2
12 replies
20h10m

One thing I don't see mentioned in this article, and I consider to be the #1 difference between NT and Unix, is the approach to drivers.

It seems like NT was designed to fix a lot of the problems with drivers in Windows 3.x/95/98. Drivers in those OSes came from 3rd-party vendors and couldn't be trusted not to crash the system. So ample facilities were created to help the user out, such as "Safe Mode", fallback drivers, and a graphics driver interface that disables itself if it crashes too many times (yes, really).

Compare that to any Unix. Historic, AT&T Unix, Solaris, Linux, BSD 4.x, Net/Free/OpenBSD, any research Unix being taught at universities, or any of the new crop of Unix-likes such as Redox. Universally, the philosophy there is that drivers are high-reliability components vetted and likely written by the kernel devs.

(Windows also has a stable driver API and I have yet to see a Unix with that, but that's another tangent)

hernandipietro
4 replies
15h26m

Windows NT 3.x had graphics and printer drivers in user mode for stability reasons. Windows NT 4 moved them to Ring 0 to speed up graphics applications.

IntelMiner
3 replies
9h13m

Then almost immediately took them back out after realizing this was a bad idea

chrisfinazzo
1 replies
4h7m

I presume this reversal happened during NT's main support window?

deaddodo
0 replies
46m

Well, by "immediate" they mean: "was user space through Win2k, went to kernel space in XP to match 9x performance, reversed in Vista".

So...one generation, and about 7 years later.

jonathanyc
2 replies
16h19m

and a graphics driver interface that disables itself if it crashes too many times (yes really)

I actually ran into this while upgrading an AMD driver and was very impressed! On Linux and macOS I was used to just getting kernel panics.

It’s too bad whatever system Crowdstrike hooked into was not similarly isolated.

baq
1 replies
13h2m

APIs used by crowdstrike et al are also what made WSL1 unworkable performance-wise. Can’t have security without slowness nowadays it seems.

szundi
0 replies
3h25m

The cost is all kinds of inconvenience.

sedatk
1 replies
12h56m

and a graphics driver interface that disables itself if it crashes too many times

That feature is one of the great ones that came with Windows Vista.

ruthmarx
0 replies
10h7m

It really was nice to be able to at least still use the system if the display driver is crashing. 800x600 at 16 bit or whatever it was is still better than nothing.

Dalewyn
1 replies
16h14m

So ample facilities were created to help the user out, such as "Safe Mode" , fallback drivers, and a graphics driver interface that disables itself if it crashes too many times (yes really).

Pretty sure most of this was already in place with Windows 95; I know Safe Mode definitely was, along with a very basic VGA driver that could drive any video card in the world.

deaddodo
0 replies
43m

Safe Mode existed in Win9x.

User space drivers that could restart without kernel panicking didn't exist until Windows Vista (well, on the home user side of Windows).

Fallback drivers were never a thing on Win9x, you would have to go into safe mode to uninstall broken drivers that wouldn't allow a boot (typically graphics drivers); or manually uninstall/replace them otherwise.

teleforce
7 replies
18h49m

As a result, NT did support both Internet protocols and the traditional LAN protocols used in pre-existing Microsoft environments, which put it ahead of Unix in corporate environments.

Fun fact: to accelerate the networking and Internet capability of Windows NT, given the complexity of coding a compatible TCP/IP implementation, the developers just took the entire TCP/IP stack from BSD and called it a day, since the BSD license allows for that.

They could not do that with Linux since its code is GPL-licensed, and this is probably when Microsoft's FUD attacks on Linux started, culminating in the "Linux is cancer" statement by then-CEO Ballmer.

The irony is that Linux is now the most used OS on the Azure cloud platform (where Cutler was the chief designer), not the BSDs [2].

[1] History of the Berkeley Software Distribution:

https://en.wikipedia.org/wiki/History_of_the_Berkeley_Softwa...

[2] Microsoft: Linux Is the Top Operating System on Azure Today:

https://news.ycombinator.com/item?id=41037380

teleforce
3 replies
11h20m

Are you pretty sure about that? These are some quotes, among others, on the topic from a forum discussion back in 2005:

"While Microsoft did technically 'buy' their TCP/IP stack from Spider Systems, they did not "own" it. Spider used the code available and written for BSD, so it doesn't appear that Microsoft directly copied BSD code (which, again, it is perfectly legal and legitimate to copy), they got it from a third party. Also ftp, rcp and rsh seems to have come with the bundle. I have heard that ftp was, but have never used rcp and rsh on Windows, so don't know what version(s) those were or were not included in any particular Windows version. Anyone can look through the .exes for those files and look for "The Regents of the University of California" copyright notice, if they want to see for themselves (rather than take the word of some anonymous geeks on a forum) ;)"

[1] Windows TCPIP Stack Based on Unix ?

https://www.neowin.net/forum/topic/381190-windows-tcpip-stac...

nullindividual
2 replies
4h6m

Like I said, the userland applications (ping, tracert, etc.) were ports from BSD, probably nearly 1:1 copies.

The TCP & IP stacks were written by Microsoft in NT 3.5.

What Spider Software used (again, used by Microsoft in NT 3.1 due to time pressures) may have originated from BSD, but we don't know.

You can browse the NT4 source code TCP/IP stack. Just search GitHub.

teleforce
1 replies
1h59m

What Spider Software used (again, used by Microsoft in NT 3.1 due to time pressures) may have originated from BSD

In all likelihood it's from BSD don't you think?

netbsdusers
1 replies
2h56m

the developers just took the entire TCP/IP stacks from BSD, and call it a day since BSD license does allows for that.

They didn't, but I don't know why you're putting a sinister spin on this either way. Of course the licence allows for that. It was the express intention of the authors of the BSD networking stack that it be used far and wide.

They went to considerable effort to extract the networking code into an independent distribution (BSD Net/1) which could be released under the BSD licence independently of the rest of the code (parts of which were encumbered at the time). They wanted it to be used.

teleforce
0 replies
1h47m

They didn't

Didn't they?

I am not questioning the fact that BSD is a commercial- and Microsoft-friendly license, and that Microsoft and Unix vendors hired many BSD developers; it was a win-win situation for them.

What is so sinister about saying Microsoft at the time didn't particularly like Linux's GPL license, and that their ex-CEO called it a cancer (their words, not mine), since it wasn't compatible with their commercial endeavours at that particular time? Perhaps you have forgotten, or are too young to remember, the hostility of Microsoft towards Linux with their well-documented FUD initiatives [1].

[1] Halloween documents:

https://en.wikipedia.org/wiki/Halloween_documents

nyrikki
7 replies
1d

There are a number of issues, like ignoring the role of VMS, the fact that Windows 3.1 had a registry, the performance problems of early NT that led to the hybrid model, the hype around microkernels at the time, the influence of Plan 9 on both, etc.

Cutler knew about microkernels from his work at Digital, OS/2 was a hybrid kernel, and NT was really a rewrite after that failed partnership.

The directory support was targeting Netware etc...

netbsdusers
2 replies
23h43m

What exactly was "hybrid" about the OS/2 kernel? "Hybrid" has always been basically a made up concept, but in OS/2 it seems especially bizarre to apply it to what's obviously a monolithic kernel, even one that bears a lot of similarity with older unix.

nyrikki
0 replies
22h51m

Real systems can rarely live up to idealistic academic ideals.

Balancing benefits and costs of microkernel and monolithic kernels is common.

It looks like Google SEO gaming by removal of old content is making it hard to find good sources, but look at how OS/2 used ring 2 if you want to know.

Message passing and context switching between kernel and user mode are expensive, and if you ever used NT 3.51 that was clearly visible, as were the BSoDs when MS shifted to more of a 'hybrid' model.

AshamedCaptain
0 replies
22h31m

You can even call Windows/386 or 3.x "hybrid", and in my opinion it would be more accurate to call Windows/386 a hybrid kernel than calling NT one. There's a microkernel that manages VMs, and there is a traditional, larger kernel inside each VM (either Windows itself, or DOS). The microkernel also arbitrates hardware between each of the VMs, but it is the VMs themselves that contain most of the drivers, which are running in "user space"!

In comparison Windows NT is basically a monolithic kernel. Everything runs in the same address space, so there's 0 protection. Or at least, in any definition where you call NT a hybrid kernel then practically any modular kernel would be hybrid. In later versions of NT the separations between kernel-mode components this post is praising have almost completely disappeared and even the GUI is running in kernel mode...

steve1977
1 replies
22h15m

Also Richard Rashid - the project lead for Mach at CMU - joined Microsoft in 1991.

Which is kinda interesting - Rashid went to Microsoft, Avie Tevanian went to NeXT/Apple.

__d
0 replies
19h56m

Rashid went to Microsoft _Research_, which is quite different.

nullindividual
0 replies
1d

And don't forget Alternate Data Streams for NTFS! Made specifically for Mac OS.

immibis
0 replies
23h40m

AFAIK, Windows 3.1's registry was only to store COM class information. It was just another type of single-purpose configuration file.

userbinator
5 replies
16h38m

Controversial opinion: the DOS-based Windows/386 family (including 3.11 enhanced mode, 95, 98, up to the ill-fated ME) is even more advanced than NT. While Unix and NT, despite how different they are in the details, are still "traditional" OSes, the lineage that started with Windows/386 consists of hypervisors that run VMs under hardware-assisted virtualisation. IMHO not enough has been written about the details of this architecture compared to Unix and NT. It's a hypervisor that passes through most accesses to the hardware by default, which gave it a bad reputation for stability and security, but also a great reputation for performance and efficiency.

wmf
2 replies
16h19m

Is that architecture actually good or is it just complex? If it's more advanced, why did MS replace it with NT? It has long been known that you can trade off performance and protection; in retrospect 95/98 just wasn't reliable enough.

userbinator
0 replies
4h34m

I think it's because that architecture was less understood than the traditional OS model at the time; and they could've easily virtualised more of the hardware and gradually made passthrough not the default, eventually arriving at something like Xen and other bare-metal hypervisors that later became common.

...and as the sibling comment alludes to, MS eventually adopted that architecture somewhat with Hyper-V and the VBS feature, but now running NT inside of the hypervisor instead of protected-mode DOS.

tracker1
0 replies
1h16m

I remember using NT4 for web dev work in the later 90s... it was kind of funny how many times browsers (Netscape especially) would crash from various memory errors that never happened in Win9x; I wonder how many of those were early exploits/exploitable issues. That, and dealing with Flash around that time and finding out I could access the filesystem.

I pretty much ran NT/Win2K at home for all internet stuff from then on, without flash at all. I do miss the shell replacement I used back in those days, I don't remember the name, but also remember Windowblinds.

omnibrain
0 replies
16h24m

Isn’t it similar when you use Hyper-V?

nullindividual
0 replies
3h2m

Windows/386 are hypervisors that run VMs under hardware-assisted virtualisation.

Not really. There was no Ring -1 (hypervisor), no hardware-assisted virtualization as we use the term today. On Windows/386, it ran in Ring 0.

Virtual 8086 mode was leveraged via the NTVDM, shipping with the first release of NT.

bawolff
5 replies
21h59m

Internationalization: Microsoft, being the large company that was already shipping Windows 3.x across the world, understood that localization was important and made NT support such feature from the very beginning. Contrast this to Unix where UTF support didn’t start to show up until the late 1990s

I feel like this is a point for Unix. Unix being late to the Unicode party meant UTF-8 was adopted, whereas Windows was saddled with UTF-16.

---

The NT kernel does seem to have some elegance. Its too bad it is not open source; windows with a different userspace and desktop environment would be interesting.

chungy
4 replies
21h52m

Windows would be so much better if it were actually UTF-16. It's worse than that: it's from a world where Unicode thought "16 bits ought to be enough for anybody," and Windows NT baked that assumption deep into the system; it wasn't until 1996 that Unicode had to course-correct, and UTF-16 was carved out to be mostly compatible with the older standard (now known as UCS-2). As long as you don't use surrogate sequences in strings, you happen to be UTF-16 compatible; if you use the sequences appropriately, you are also UTF-16 compatible; but if you use them in ways that are invalid UTF-16, now you've got a mess that is nonetheless a valid name on the operating system.

I can't really blame NT for this, it's unfortunate timing and it remains for backwards compatibility purposes. Java and JavaScript suffer similar issues for similar reasons.

chungy
3 replies
21h42m

I'll throw this out there too: UTF-8 isn't necessarily better than UTF-16; they both support the entirety of the Unicode character space.

UTF-8 is convenient on Unix systems since it fits into 8-bit character slots that were already in place; file systems have traditionally only forbidden the NULL byte and forward-slash, and all other characters are valid. From this fact, you can use UTF-8 in file names with ease on legacy systems, you don't need any operating system support for it.

UTF-8 is "space optimized" for ASCII text, while most extra-ASCII Latin, Cyrillic, Greek, Arabic characters need two bytes each (same as UTF-16); most of Chinese/Japanese/Korean script in the BMP requires three bytes in UTF-8, whereas you still only need two bytes in UTF-16. To go further beyond, all SMP characters (eg, most emoji) require four bytes each in both systems.

Essentially, UTF-8 is good space-wise for mostly-ASCII text. It remains on-par with UTF-16 for most western languages, and only becomes more inefficient than UTF-16 for east-Asian languages (in such regions, UTF-16 is already dominant).
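A quick way to check those byte counts (assumes an iconv with UTF-16LE support is available; the sample characters are arbitrary):

```shell
# Bytes needed for one character in UTF-8 vs UTF-16
for ch in 'A' 'é' '日' '😀'; do
    u8=$(printf '%s' "$ch" | wc -c | tr -d ' ')
    u16=$(printf '%s' "$ch" | iconv -f UTF-8 -t UTF-16LE | wc -c | tr -d ' ')
    echo "$ch: utf8=$u8 utf16=$u16"
done
# e.g. 日: utf8=3 utf16=2, while 😀 needs 4 bytes in both
```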

bawolff
1 replies
20h16m

Space savings are irrelevant. Text is small, and after gzip it's going to be the same anyway.

Seriously, when was the last time you both cared about saving a single kB and compression was not an option? I'm pretty sure the answer is never. It was never in the 90s, and it is extra never now that we have hard drives hundreds of GB big.

UTF-8 is better because bytes are a natural unit. You don't have to worry about surrogates. You don't have to worry about de-synchronization issues. You don't have to worry about byte order.

Backwards compatibility with ASCII and basically every interface ever also helps (well, not 7-bit SMTP...). The fact is, ASCII is everywhere. Being able to treat it as just a subset of UTF-8 makes a lot of things easier.

(eg, most emoji) require four bytes each in both systems.

This is misleading because most emoji are not just a single astral character but a combination of many.

nmz
0 replies
11h12m

Saving disk space and synchronization are only important for network transmission. At the local level, you will need to convert to something where positioning is known; UTF-8 does not allow for this given its variability, which means a lot of operations are more expensive and you will have to convert to UTF-16 anyway.

mmoskal
0 replies
16h18m

Interestingly in a Javascript or similar runtime most of text that hits the caches where the size actually matters is still ASCII even in far east because of identifiers. Utf8 for the win!

ThinkBeat
5 replies
18h42m

Architecturally WindowsNT was a much better designed system than Linux when it came out.

I wish it had branched into one platform as a workstation OS (NTNext), and one that gets dumber and worse in order to make it work well for gaming: Windows 7/8/10/11, etc.

Technically one would hope that NTNext would be Windows Server, but sadly no.

I remember installing WindowsNT on my PC in awe how much better it was than Dos/Windows3 and later 95.

And compatibility back then was not great. There was a lot that didn't work, and I was more than fine with that.

It could run win32, os/2, and posix and it could be extended to run other systems in the same way.

POSIX was added as a necessity to bid for huge software contracts from the US government; MS lost a huge contract and lost interest in the POSIX subsystem, and in the OS/2 subsystem.

Did away with it, until they re-invented a worse system for WSL.

Note that "subsystem" in Windows NT means something very different from "subsystem" in the Windows Subsystem for Linux.

kbolino
3 replies
18h29m

WSL was a real subsystem. It worked in similar ways to the old subsystems. However, the Linux kernel is different enough from Windows, and evolves so much faster, that Microsoft wasn't able to keep up. WSL2 is mostly just fancy virtualization, but this approach has better compatibility and, thanks to modern hardware features, better performance than WSL ever did.

Const-me
2 replies
18h21m

thanks to modern hardware, very little performance penalty

I would not call 1 order of magnitude performance penalty for accessing a local file system “little”: https://github.com/microsoft/WSL/issues/4197

kbolino
1 replies
17h54m

Yeah, that's pretty bad. The fact that mounting the volume as a network share gets better performance is surprising and somewhat concerning.

However, what I was talking about performance-wise was the overhead of every system call. That overhead is gone under WSL2. Maybe it wasn't worth it for that reason alone, but original WSL could never keep up with Linux kernel and ecosystem development.

Being able to run nearly all Linux programs with only some operations being slow is probably still better than being able to run only some Linux programs with all operations being slow.

wvenable
0 replies
13h56m

The problem with WSL1 was the very different file system semantics between Windows and Linux. On Linux files are dumb and cheap. On Windows files are smarter and more expensive. Mapping Linux file system calls on Windows worked fine but you couldn't avoid paying for that difference when it came up.

You can't resolve that issue while mapping everything to Windows system calls. If you're not going to map to Windows system calls then you might as well virtualize the whole thing.

kev009
0 replies
13h58m

Linux is one of the prime examples of the Worse is Better motif in UNIX history. And to all your points I don't think it's outgrown this, it is not a particularly elegant kernel even by UNIX standards.

Linus and other early Linux people had a real good nose for performance.. not the gamesmanship kind just stacking often minuscule individual wins and avoiding too many layered architectural penalties. When it came time to worry about scalability Linux got lucky with the IBM investment who had also just purchased Sequent and knew a thing or two about SMP and NUMA.

Most commercial systems are antithetical to performance (there are some occasional exceptions). The infamous interaction of a Sun performance engineer trying to ridicule David Miller who was showing real world performance gains..

I think that keen performance really helped with adoption. Since the early days you could install Linux on hardware rescued from the garbage and do meaningful things with it, whereas the contemporary commercial systems had rendered it obsolete.

PaulHoule
5 replies
23h55m

(1) There has been some convergence: FUSE in Linux lets you implement file systems in user space, Proton emulates NT very well, and

(2) Win NT’s approach to file systems makes many file system operations very slow, which makes npm and other dev tools designed for Unix filesystems terribly slow on NT. Which is why Microsoft gave up on the otherwise excellent WSL1. If you were writing this kind of thing natively for Windows you would stuff blobs into SQLite (e.g. a true “user space filesystem”) or ZIP files or some other container instead of stuffing 100,000 files in directories.
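
The blobs-in-SQLite container idea can be sketched like this (the schema and paths are invented for illustration, not any real tool's format):

```python
import sqlite3

# Sketch of using a single SQLite file as a container for many small
# files, sidestepping per-file filesystem overhead on NT.
db = sqlite3.connect(":memory:")  # a real tool would use an on-disk .db
db.execute("CREATE TABLE files (path TEXT PRIMARY KEY, data BLOB)")
db.executemany(
    "INSERT INTO files VALUES (?, ?)",
    ((f"node_modules/pkg/{i}.js", b"module.exports = %d;" % i)
     for i in range(1000)),
)
db.commit()

# One indexed lookup replaces a directory walk plus an open() per file.
(data,) = db.execute(
    "SELECT data FROM files WHERE path = ?", ("node_modules/pkg/42.js",)
).fetchone()
print(data)  # b'module.exports = 42;'
```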

nullindividual
2 replies
23h45m

NT’s approach to file systems makes many file system operations very slow

This is due to the file system filters. It shows when using DevDrive where there are significant performance improvements.

PaulHoule
1 replies
23h22m

Great! I’m thinking of adding a new SSD to my machine and would love to try this out.

immibis
1 replies
23h34m

I wonder how much of Linux's performance is attributable to it not having such grand architectural visions (e.g. unified object manager) and therefore being able to optimize each specific code path more.

chasil
0 replies
22h8m

Many Linux systems on TPC.org are running on XFS (everything rhel-based). It's not simple, but it does appear to help SQL Server.

rkagerer
4 replies
12h8m

Can we talk about NT picoprocesses?

Up until this feature was added, processes in NT were quite heavyweight: new processes would get a bunch of the NT runtime libraries mapped in their address space at startup time. In a picoprocess, the process has minimal ties to the Windows architecture, and this is used to implement Linux-compatible processes in WSL 1.

They sound like an extremely useful construct.

Also, WSL 2 always felt like a "cheat" to me... is anyone else disappointed they went full-on VM and abandoned the original approach? Did small file performance ever get adequately addressed?

tjoff
3 replies
12h3m

I'm surprised they didn't go the WSL2 route from the start. Seems much easier to do.

But WSL is so cool and since I mostly run windows in VMs without nested virtualization support I've pretty much only used that one and am super thankful for it.

torginus
1 replies
4h32m

Honestly, before WSL 1 was a thing, Cygwin already existed, and it was good enough for most people.

tjoff
0 replies
4h13m

Cygwin is capable but I feel WSL is a total game changer.

netbsdusers
0 replies
3h13m

I think (but am not sure) that WSL was a consolation prize of the cancelled Project Astoria, the initiative from the dying days of Windows Phone to support running Android apps on Windows. Implementing this with virtualisation would have been more painful and less practical on the smartphones of the day.

walki
3 replies
10h30m

I feel like the NT kernel is in maintenance only mode and will eventually be replaced by the Linux kernel. I submitted a Windows kernel bug to Microsoft a few years ago and even though they acknowledged the bug the issue was closed as a "won't fix" because fixing the bug would require making backwards incompatible changes.

Windows currently has a significant scaling issue because of its Processor Groups design, it is actually more of an ugly hack that was added to Windows 7 to support more than 64 threads. Everyone makes bad decisions when developing a kernel, the difference between the Windows NT kernel and the Linux kernel is that fundamental design flaws tend to get eventually fixed in the Linux kernel while they rarely get fixed in the Windows NT kernel.

nullindividual
0 replies
4h14m

NT kernel still gets improvements. Think IoRing (copy of io_uring, but for file reads only) which is a new feature.

I think things like Credential Guard, various virtualization (security-related, not VM-related) are relatively new kernel-integrated features, etc.

Kernel bugs that need to exist because of backwards compat are going to continue to exist since backwards compat is a design goal of Windows.

netbsdusers
0 replies
3h7m

I think rumours of NT's terminal illness have been greatly exaggerated. There are numerous new developments I am hearing about from it, like the adoption of RCU and the memory partitions.

It's not clear to me how processor groups inhibit scaling. It's even sensible to constrict the movement of threads willy-nilly between cores in a lot of cases (because of NUMA locality, caches, etc.) And it looks like there's an option to not confine your program to a single processor group, too.

JackSlateur
0 replies
4h10m

I have the same feeling.

Windows is more and more based on virtualization.

On the other hand, more and more Microsoft stuff is Linux-native.

It would not surprise me if Linux runs somewhere deep beneath every Windows in the next decades.

More hybridizations are probably coming, but where will it stop? And why?

ExoticPearTree
2 replies
5h49m

I have been using Windows and Linux for about 20+ years. Still remember the Linux Kernel 2.0 days and Windows NT 4. And I have to admit that I am more familiar with the Linux kernel than the Windows one.

Now, that being said, I think the Windows kernel sounded better on paper, but in reality Windows was never as stable as Linux (even in its early days) doing everyday tasks (file sharing, web/mail/dns server etc.).

Even to this day, the Unix philosophy of doing one thing and doing it well stands: maybe the Linux kernel wasn't as fancy as the Windows one, but it did what it was supposed to do very well.

torginus
0 replies
4h46m

Windows is way more stable than Linux when you consider the entire desktop stack. The number of crashes, and data loss bugs I've experienced when using Linux over the years constantly puts me off from using it as a daily driver

1970-01-01
0 replies
3h27m

I remember when 'Linux doesn't get viruses' was a true statement. It did things well because it wasn't nearly as popular as WinNT (because it wasn't nearly as user-friendly as WinNT), and you needed an experienced administrator to get anything important running.

stevekemp
1 replies
1d

I wonder how many of the features which other operating systems got much later, such as the unified buffer cache, were due to worries of software patents?

jhallenworld
1 replies
23h1m

NT has its object manager.. the problem with it is visibility. Yes, object manager type functionality was bolted-on to UNIX, but at least it's all visible in the filesystem. In NT, you need a special utility WinObj to browse it.

chriscappuccio
1 replies
11h42m

Windows was great at running word processors. BSD and Linux were great as internet-scale servers. It wasn't until Microsoft tried running Hotmail on NT that they had any idea there was a problem. Microsoft used this experience to fix problems that ultimately made Windows better for all users across many use cases.

All the talk here about how Windows had a better architecture in the beginning conveniently avoids the fact that Windows was well known for being over-designed while delivering much less than its counterparts in the networking arena for a long time.

It's not wrong to admire what Windows got right, but Unix got so much right by putting attention where it was needed.

MaxGripe
1 replies
8h14m

The sluggishness of the system on new hardware is an accurate observation, but I think the author should also take a look at macOS or popular Linux distros, where it's significantly worse

MaxGripe
0 replies
2h51m

„I think Apple is AmAzInG so I will downvote you”

Ericson2314
1 replies
34m

I am 1/3 of the way through this, and I'm afraid it doesn't seem like a very good breakdown.

For example.

- Portability. Who cares? Even in the 1990s, NetBSD was a thing. We've since learned that portability across conventional-ish hardware doesn't actually affect OS design very much.

Overall, the author is taking jargon / marketing terms too much at face value. In many cases, Windows and Unix will use different terms to describe the same things, or we have terms like "object-oriented kernel" that, by default, I assume don't actually mean anything: "object oriented" was too popular an adjective in the 1990s to be trusted.

- "NT doesn’t have signals in the traditional Unix sense. What it does have, however, are alerts, and these can be kernel mode and user mode."

  A topic sentence should not start with the names of things; it is extraneous.

My prior understanding is that NT is actually better in many ways, but I don't feel like I am any closer to learning why.

pavlov
0 replies
16m

> ‘we have terms like "object-oriented kernel" that, by default, I assume don't actually mean anything’

The author did explain what it specifically means in the context of the NT kernel: centralized access control, common identity, and unified event handling.

Circlecrypto2
1 replies
20h52m

A great article that taught me a lot of history. As a long time Linux user and advocate for that history, I learned there is actually a lot to appreciate from the work that went into NT.

RachelF
0 replies
20h45m

The original NT was a great design, built by people who knew what they were doing and had done it before (for VMS). It was superior to the Unix design when it came out, benefiting from the knowledge of 15 years.

I worked on kernel drivers starting in with NT 3.5. However, over the years, the kernel has become bloated. The bloat is both in code and in architecture.

I guess this is inevitable as the original team has long gone, and it is now too large for anyone to understand the whole thing.

AdeptusAquinas
1 replies
15h50m

Could someone explain the 'sluggish UI responsiveness' talked about in the conclusion? I've never experienced it in 11, 10, 8 or 7 etc. - but maybe that's because my Windows machines are always gaming machines with a contemporary powerful graphics card. I've used a Mac Pro for work a couple of times and never noticed it being snappier than my home machine.

IgorPartola
0 replies
13h59m

I think this is talking about a much older version of Windows, such as XP or Vista. Vista was particularly bad.

virgulino
0 replies
15h22m

Inside Windows NT, the 1st edition, by Helen Custer, is one of my all-time favorite books! A forgotten gem. It's good to see it being praised.

ssrc
0 replies
6h12m

I remember reading those books (ok, it was the 4.3 BSD edition instead of 4.4) alongside Bach's "The Design of the Unix Operating System" and Uresh Vahalia's "UNIX internals: the new frontiers" (1996). I recommend "UNIX internals". It's very good and not as well known as the others.

sebstefan
0 replies
6h55m

The C language: One thing Unix systems like FreeBSD and NetBSD have fantasized about for a while is coming up with their own dialect of C to implement the kernel in a safer manner. This has never gone anywhere except, maybe, for Linux relying on GCC-only extensions. Microsoft, on the other hand, had the privilege of owning a C compiler, so they did do this with NT, which is written in Microsoft C. As an example, NT relies on Structured Exception Handling (SEH), a feature that adds try/except clauses to handle software and hardware exceptions. I wouldn’t say this is a big plus, but it’s indeed a difference.

Welp, that's an unfortunate use of that capability given what we see today in language development when it comes to secondary control flows.

pseudosavant
0 replies
1h31m

Great article that is largely on point. I find it funny that it ends with a bit about how the Windows UI might kill off the Windows OS.

Predicting the imminent demise of Windows is as common, and accurate, of a take as saying this is the year of Linux on the desktop or that Linux takes over PC gaming.

phibz
0 replies
15h9m

The article hit on some great high points of difference. But I feel like it misses Cutler and team's history with OpenVMS and MICA. The hallmarks of their design are all over NT. With that context it reads less like NT was avoiding UNIX's mistakes and more like it was built on years of learning from the various DEC offerings.

parl_match
0 replies
20h47m

Unix’s history is long—much longer than NT’s

Fun fact: NT is a spiritual (and in some cases, literal) successor of VMS, which itself is a direct descendant of the RSX family of operating systems, which are themselves a descendant of a process control family of task runners from 1960. Unix goes back to 1964 - Multics.

Although yeah, Unix definitely has a much longer unbroken chain.

p0seidon
0 replies
19h56m

This is such a well-written read, just so insightful and broad in its knowledge. I learned a lot, thank you (loved NT at that time - now I know why).

nmz
0 replies
11h4m

Linux’s io_uring is a relatively recent addition that improves asynchronous I/O, but it has been a significant source of security vulnerabilities and is not in widespread use.

Funny, opening the manpage of aio on freebsd you get this on the second paragraph

Asynchronous I/O operations on some file descriptor types may block an AIO daemon indefinitely resulting in process and/or system hangs. Operations on these file descriptor types are considered “unsafe” and disabled by default. They can be enabled by setting the vfs.aio.enable_unsafe sysctl node to a non-zero value.

So nothing is safe.

lukeh
0 replies
15h58m

No one was using X.500 for user accounts on Solaris, until LDAP and RFC 2307 came along. And at that point hardly anyone was using X.500. A bit more research would have mentioned NIS.

jonathanyc
0 replies
16h17m

Great article, especially loved the focus on history! I’ve subscribed.

Lastly, as much as we like to bash Windows for security problems, NT started with an advanced security design for early Internet standards given that the system works, basically, as a capability-based system.

I’m curious as to why the NT kernel’s security guarantees don’t seem to result in Windows itself being more secure. I’ve heard lots of opinions but none from a comparative perspective looking at the NT vs. UNIX kernels.

jart
0 replies
9h6m

Unified event handling: All object types have a signaled state, whose semantics are specific to each object type. For example, a process object enters the signaled state when the process exits, and a file handle object enters the signaled state when an I/O request completes. This makes it trivial to write event-driven code (ehem, async code) in userspace, as a single wait-style system call can await for a group of objects to change their state—no matter what type they are. Try to wait for I/O and process completion on a Unix system at once; it’s painful.

Hahaha. Try polling stdin, a pipe, an ipv4 socket, and an ipv6 socket at the same time.

fargle
0 replies
12h40m

this is a lovely and well written article, but i have to quibble with the conclusion. i agree that "it’s not clear to me that NT is truly more advanced". i also agree with the statement "It is true that NT had more solid design principles at the onset and more features than its contemporary operating systems"

but what i don't agree with is that it was ever more advanced or "better" (in some hypothetical single-dimensional metric). the problem is that all that high minded architectural art gets in the way of practical things:

    - performance, project: (m$ shipping product, maintenance, adding features, designs, agility, fixing bugs)

    - performance, execution (anyone's code running fast)

    - performance, market (users adopting it, building new unpredictable things)

it's like minix vs. linux again. sure minix was at the time in all theoretical ways superior to the massive hack of linux. except that, of course, in practice theory is not the same as practice.

in the mid 2000-2010s my workplace had a source license for the entire Windows codebase (view only). when the API docs and the KB articles didn't explain it, we could dive deeper. i have to say i was blown away and very surprised by "NT" - given its abysmal reliability i was expecting MS-DOS/Win 3.x level hackery everywhere. instead i got a good idea of Dave Cutler and VMS - it was positively uniformly solid, pedestrian, and explicit. to a highly disgusting degree: 20-30 lines of code to call a function to create something that would be 1-2 lines of code in a UNIX (sure we cheat and overload the return with error codes and status and successful object id being returned - i mean they shouldn't overlap, right? probably? yolo!).

in NT you create a structure containing the options, maybe call a helper function to default that option structure, call the actual function, if it fails because of limits, it reports how much you need then you go back and re-allocate what you need and call it again. if you need the new API, you call someReallyLongFunctionEx, making sure to remember to set the version flag in the options struct to the correct size of the new updated option version. nobody is sure what happens if getSomeMinorObjectEx() takes a getSomeMinorObjectParamEx option structure that is the same size as the original getSomeMinorObjectParam struct but it would probably involve calling setSomeMinorObjectParamExParamVersion() or getObjectParamStructVersionManager()->SelectVersionEx(versionSelectParameterEx). every one is slightly different, but they are all the same vibe.

if NT was actual architecture, it would definitely be "brutalist" [1]

the core of NT is the antithesis of the New Jersey (Berkeley/BSD) [2] style.

the problem is that all companies, both micro$oft and commercial companies trying to use it, have finite resources. the high-architect brutalist style works for VMS and NT, but only at extreme cost. the fact that it's tricky to get signals right doesn't slow most UNIX developers down, most of the time, except for when it does. and when it does, a buggy, but 80%, solution is but a wrong stackoverflow answer away. the fact that creating a single object takes a page of code and doing anything real takes an architecture committee and a half-dozen objects that each take a page of (very boring) code, does slow everyone down, all the time.

it's clear to me, just reading the code, that the MBA's running micro$oft eventually figured that out and decided, outside the really core kernel, not to adopt either the MIT/Stanford or the New Jersey/Berkeley style - instead they would go with "offshore low bidder" style for the rest of whatever else was bolted on since 1995. dave cutler probably now spends the rest of his life really irritated whenever his laptop keeps crashing because of this crap. it's not even good crap code. it's absolutely terrible; the contrast is striking.

then another lesson (pay attention systemd people), is that buggy, over-complicated, user mode stuff and ancillary services like control-panel, gui, update system, etc. can sink even the best most reliable kernel.

then you get to sockets, and realize that the internet was a "BIG DEAL" in the 1990s.

ooof, microsoft. winsock.

then you have the other, other, really giant failure. openness. open to share the actual code with the users is #1. #2 is letting them show the way and contribute. the micro$oft way was violent hatred to both ideas. oh, well. you could still be a commercial company that owns the copyright and not hide the, good or bad, code from your developers. MBAAs (MBA Assholes) strike again.

[1] https://en.wikipedia.org/wiki/Brutalist_architecture [2] https://en.wikipedia.org/wiki/Worse_is_better

desdenova
0 replies
19h1m

Unfortunately NT doesn't have any usable distribution, so it's still a theoretical OS design.

davidczech
0 replies
11h37m

Neo

chasil
0 replies
23h7m

I would imagine that Windows was closer to UNIX than VMS was, and there were several POSIX ports to VMS.

VMS POSIX ports:

https://en.m.wikipedia.org/wiki/OpenVMS#POSIX_compatibility

VMS influence on Windows:

https://en.m.wikipedia.org/wiki/Windows_NT#Development

"Microsoft hired a group of developers from Digital Equipment Corporation led by Dave Cutler to build Windows NT, and many elements of the design reflect earlier DEC experience with Cutler's VMS, VAXELN and RSX-11, but also an unreleased object-based operating system developed by Cutler at Digital codenamed MICA."

Windows POSIX layer:

https://en.m.wikipedia.org/wiki/Microsoft_POSIX_subsystem

Xenix:

https://en.m.wikipedia.org/wiki/Xenix

"Tandy more than doubled the Xenix installed base when it made TRS-Xenix the default operating system for its TRS-80 Model 16 68000-based computer in early 1983, and was the largest Unix vendor in 1984."

EDIT: AT&T first had an SMP-capable UNIX in 1977.

"Any configuration supplied by Sperry, including multiprocessor ones, can run the UNIX system."

https://www.bell-labs.com/usr/dmr/www/otherports/newp.pdf

UNIX did not originally use an MMU:

"Back around 1970-71, Unix on the PDP-11/20 ran on hardware that not only did not support virtual memory, but didn't support any kind of hardware memory mapping or protection, for example against writing over the kernel. This was a pain, because we were using the machine for multiple users. When anyone was working on a program, it was considered a courtesy to yell "A.OUT?" before trying it, to warn others to save whatever they were editing."

https://www.bell-labs.com/usr/dmr/www/odd.html

Shared memory was "bolted on" with Columbus UNIX:

https://en.m.wikipedia.org/wiki/CB_UNIX

...POSIX implements setfacl.

IgorPartola
0 replies
13h56m

What I don’t see in the comments here is the argument I remember hearing in the late 90s and early 2000s: that Unix is simpler than Windows. I certainly feel like it was easier for me to grasp the POSIX API compared to what Windows was doing at the time.