The NT kernel is pretty nifty, albeit an aging design.
My issue with Windows as an OS is that there's so much cruft, often adopted from Microsoft's older OSes, stacked on top of the NT kernel, effectively circumventing its design.
You frequently see examples of this in vulnerability write-ups: "NT has mechanisms in place to secure $thing, but unfortunately, this upper level component effectively bypasses those protections".
I suspect Microsoft would like to, if they considered it "possible"; they really need to move away from the Win32 and MS-DOS paradigms and rethink a more native OS design based solely on NT and its evolving principles.
Backwards compatibility, though, is one of the major features of Windows as an OS. The fact that a company can still load software made 20 years ago, developed by a company that is no longer in business, is pretty cool (and I've worked at such places, running ancient software on some Windows box; sometimes there's no time or money for alternatives).
That, and it's 30+ years of it (NT was released in 1993). Backwards compatibility is certainly one of the greatest pieces of business value Microsoft provides to its customers.
If you include the ability of 32-bit versions of Windows to run 16-bit Windows and DOS applications with NTVDM, it is more like 40+ years.
https://en.wikipedia.org/wiki/Virtual_DOS_machine
(Math on the 40 years: Windows 1.0 was released in 1985, and the last consumer version of Windows 10 (which is the last Windows NT version to support a 32-bit install and thus NTVDM) goes out of support in 2025. DOS was first released in 1981, more than 40 years ago. I don't know when it was released, but I've used a pretty old 16-bit DOS app on Windows 10: a C compiler for the Intel 80186.)
True. I just assumed that 16-bit support got dropped since Windows 11 was 64-bit only.
Microsoft decided not to type "make" for NTVDM on 64-bit versions of Windows (I would argue arbitrarily). It has been unofficially built for 64-bit versions of Windows as a proof-of-concept: https://github.com/leecher1337/ntvdmx64
That’s okay, and if people want to test their specific use case on that and use it then great.
It’s a very different amount of effort from Microsoft having to run a full 16-bit regression suite, make everything work, and then support it for the fewer and fewer customers using it. And you can run a 32-bit Windows in a VM pretty easily if you really want to.
Or you can run 16-bit Windows 3.1 in DOSBox.
Sure, but again that’s on you to test and support.
I recently discovered that Windows 6.2 (more commonly known as Windows 8) added an export to Kernel32.dll called NtVdm64CreateProcessInternalW.
https://www.geoffchappell.com/studies/windows/win32/kernel32...
Not sure exactly what it does (other than obviously being some variation on process creation), but the existence of a function whose name starts with NtVdm64 suggests to me that maybe Microsoft actually did have some plan to offer a 64-bit NTVDM, but only abandoned it after they’d already implemented this function.
It’s amazing that stuff still runs on Windows 10. I’m guessing Windows 10 has a VM layer both for 32-bit and 16-bit Windows + DOS apps?
https://github.com/leecher1337/ntvdmx64
Windows 10 only does 16-bit DOS and Windows apps on the 32-bit version of Windows 10, so it only has a VM layer for those 16-bit apps. (On x86, NTVDM uses the processor's virtual 8086 mode to do its thing; that doesn't exist in 64-bit mode on x86-64 and MS didn't build an emulator for x86-64 like they did for some other architectures back in the NT on Alpha/PowerPC era, so no DOS or 16-bit Windows apps on 64-bit Windows at all.)
People tend to forget that it already is 2024.
Short of driver troubles at the jump from Win 9x to 2k/XP, and the shedding of Win16 compatibility layers at the release of Win XP x64, backwards compatibility has always been baked into Windows. I don’t know if there was any loss of compatibility during the MS-DOS days either.
It’s just expected at this point.
For DOS, if you borrow ReactOS's NTVDM under 32-bit XP/2003 and maybe Vista/7 (I don't know about 64-bit binaries), you can run DOS games much better than with Windows' counterpart.
AFAIK, because POPF doesn't trap, ReactOS implemented NTVDM as a software emulator.
https://web.archive.org/web/20170723164052/http://community.... https://web.archive.org/web/20151216085857/http://community....
At this point you might as well use DOSBOX.
Intel Protected Mode POPF fail:
https://docs.oracle.com/en/virtualization/virtualbox/6.0/adm...
https://devblogs.microsoft.com/oldnewthing/20160411-00/?p=93...
I think they recently improved NTVDM a lot.
Not long ago, a link was posted here to a job advert from the German railway looking for a Win 3.11 specialist.
As I see it, the problem is the laziness/cheapness of companies when it comes to upgrades, and vendors' reluctance to get rid of dead stuff for fear of losing business.
APIs could be deprecated/updated at set intervals, say current minus 2/3 versions back, and be done with it.
Lots of hardware is used for multiple decades, but has software that is built once and doesn't get continuous updates.
That isn't necessarily laziness, it's a mindset thing. Traditional hardware companies are used to a mindset where they design something once, make and sell it for a decade, and the customer will replace it after 20 years of use. They have customer support for those 30 years, but software is treated as part of that design process.
That makes a modern OS that can support the APIs of 30-year-old software (so 40-year-old APIs) valuable to businesses. If you only want to support 3 versions that's valid, but you will lose those customers to a competitor who has better backwards compatibility.
But only to a degree, right? Only the last two decades of software is what the OS ideally needs to support, beyond that you can just use emulators.
Pretty much the only 16-bit software that people commonly encounter is an old setup program.
For a very long time those were all 16-bit because they didn't need the address space and they were typically smaller when compiled. This means that a lot of 32-bit software from the late 90s that would otherwise work fine is locked inside a 16-bit InstallShield box.
I know quite a lot of people who are still quite fond of some old 16-bit Windows games which - for this "bitness reason" - don't work on modern 64 bit versions of Windows anymore. People who grew up with these Windows versions are quite nostalgic about applications/games from "their" time, and still use/play them (similar to how C64/Amiga/Atari fans are about "their" system).
Software is written against APIs, not years, so the problem with this sort of thinking is that software written, say, 10 years ago might still be using APIs from more than 20 years ago. If you decide to break/remove/whatever the more-than-20-year-old APIs, you not only break the more-than-20-year-old software but also the 10-year-old software that used those APIs - as well as any other software, older or newer, that did the same.
(also I'm using "API" for convenience here, replace it with anything that can affect backwards compatibility)
EDIT: a simple example in practice: WinExec was deprecated when Windows switched from 16-bit to 32-bit several decades ago, yet programs are still using it to this day.
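For illustration, a minimal sketch (mine, not from the original comment) of what that looks like - WinExec has been documented as a 16-bit compatibility function for decades, yet it still compiles and runs against a current Windows SDK:

```c
#include <windows.h>

int main(void)
{
    /* WinExec dates back to 16-bit Windows and has long been documented
       as "for compatibility", yet it still works today. */
    UINT result = WinExec("notepad.exe", SW_SHOWNORMAL);

    /* Another 16-bit leftover: return values greater than 31 mean
       success, anything else is an error code. */
    return (result > 31) ? 0 : 1;
}
```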
Maybe, but your app could also be an interface to some super expensive scientific/industrial equipment that does weird IO or something.
If you look at more recent Windows APIs, I'm really thankful that the traditional Win32 APIs still work. On average the older APIs are much nicer to work with.
Nicer to work with?
I can't think of any worse API in the entire world?
There are some higher level COM APIs which are not exactly great, but the core Win32 DLL APIs (kernel32, user32, gdi32) are quite good, also the DirectX APIs after ca 2002 (e.g. since D3D9) - because even though the DirectX APIs are built on top of COM, they are designed in a somewhat sane way (similar to how there are 'sane' and 'messy' C++ APIs).
Especially UWP and its successors (I think it's called WinRT now?) are objectively terrible.
I've had to work with API functions like https://learn.microsoft.com/en-us/windows/win32/api/winuser/... and friends. It was by far the most unpleasant API I've ever worked with.
I think that particular pattern is a perfectly reasonable way to let the user ingest an arbitrarily long list of objects without having to do any preallocations -- or indeed, any allocations at all.
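The truncated link above doesn't show which function is meant, but the pattern being described is the callback-style enumerator. A minimal sketch using EnumWindows (my example, not the commenter's): you hand the API a callback plus an LPARAM of user data, and it calls you once per item, so the caller never sizes or allocates a buffer.

```c
#include <windows.h>
#include <stdio.h>

/* Invoked once per top-level window; return TRUE to keep enumerating. */
static BOOL CALLBACK CountWindowsProc(HWND hwnd, LPARAM lParam)
{
    int *count = (int *)lParam;   /* caller's state, threaded through lParam */
    (void)hwnd;
    (*count)++;
    return TRUE;
}

int main(void)
{
    int count = 0;
    /* No buffer to size, no allocation: the API calls us back per item. */
    if (EnumWindows(CountWindowsProc, (LPARAM)&count))
        printf("%d top-level windows\n", count);
    return 0;
}
```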
Yes, but the inversion of control is unpleasant to deal with—compare Find{First,Next}File which don’t require that.
Which is a pattern that also exists in the Win32 API, for example in the handle = CreateToolhelp32Snapshot(), Thread32First(handle, out), while Thread32Next(handle, out) API for iterating over a process's threads.
I also find EnumChildWindows pretty wacky. It's not too bad to use, but it's a weird pattern and a pattern that Windows has also moved away from since XP.
https://learn.microsoft.com/en-us/windows/win32/toolhelp/tra...
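For comparison, a small sketch (mine) of the snapshot/First/Next style mentioned above, enumerating the current process's threads - here the caller drives the iteration instead of being called back:

```c
#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>

int main(void)
{
    /* Step 1: open a handle (a snapshot of every thread in the system). */
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE)
        return 1;

    THREADENTRY32 te;
    te.dwSize = sizeof(te);   /* the usual "struct carries its own size" convention */

    /* Step 2: drive the iteration with the First/Next helper calls. */
    if (Thread32First(snap, &te)) {
        do {
            if (te.th32OwnerProcessID == GetCurrentProcessId())
                printf("thread %lu\n", te.th32ThreadID);
        } while (Thread32Next(snap, &te));
    }

    /* Step 3: close the handle. */
    CloseHandle(snap);
    return 0;
}
```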
Because allocating well under a hundred handles is the biggest problem we have.
WinAPI is awful to work with for reasons beyond anyone’s comprehension. It’s just legacy riding on legacy, with the initial legacy made by someone 50% following stupid patterns from the previous 8/16-bit decade and 50% high on mushrooms. The first thing you do with WinAPI is abstract it tf away from your face.
Yeah, I like the smell of cbSize in the RegisterClassExA. Smells like… WNDCLASSEXA.lpfnWndProc.
Nothing can beat WinAPI for niceness to work with; just look at this monstrosity:
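The snippet being mocked isn't shown, but judging from the cbSize/lpfnWndProc jab above it's presumably the classic window-class registration boilerplate. A minimal sketch of what that involves (my reconstruction, not the original snippet):

```c
#include <windows.h>

static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
    return DefWindowProcA(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrev, LPSTR lpCmdLine, int nCmdShow)
{
    (void)hPrev; (void)lpCmdLine;

    WNDCLASSEXA wc = {0};
    wc.cbSize        = sizeof(wc);        /* yes, the struct carries its own size */
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInstance;
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = "MyWindowClass";
    if (!RegisterClassExA(&wc))
        return 1;

    HWND hwnd = CreateWindowExA(0, "MyWindowClass", "Hello", WS_OVERLAPPEDWINDOW,
                                CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
                                NULL, NULL, hInstance, NULL);
    ShowWindow(hwnd, nCmdShow);

    MSG msg;
    while (GetMessageA(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
    }
    return 0;
}
```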
The Win32 API calls are decent relative to themselves, if you understand their madness. For every positive call there is usually an anti call and a worker call. Open a handle, use the handle with its helper calls, close the handle. You must understand their 3-step pattern for it all to work. There are a few exceptions, but usually those are the sort of things where the system has given you a handle and you must deal with it through the helper calls. In those cases the system usually handles the open/close part.
Once you get into the COM/.NET/UWP stuff, the API gets a bit more fuzzy on that pattern. The Win32 API is fairly consistent in its madness. So are the other API stacks they have come up with - but only within their own mad world.
Also, as for the documentation, the older Win32 docs are actually usually decently written and self-consistent. The newer stuff, not so much.
If you have the displeasure of mixing APIs you are in for a rough ride as all of their calling semantics are different.
The various WinRT APIs are even worse. At least Win32 is "battle tested"
X Windows and Motif, for example.
So, you don't use the newer Windows APIs?
IMO this is because they are better written, by people who had deeper understanding of the entire OS picture and cared more about writing performant and maintainable code.
Well-illustrated in the article “How Microsoft Lost the API War”[0]:
[0] https://www.joelonsoftware.com/2004/06/13/how-microsoft-lost...
I feel the same way about Spring development for Java.
Also reminds me of:
https://www.infoq.com/presentations/Simple-Made-Easy/
But at what point does that become a liability?
I'm arguing that point was about 15-20 years ago.
There is another very active article on HN today about the launch of the new Apple iPhone 16 models.
The top discussion thread on that post is about “my old iPhone $version is good enough, why would I upgrade”.
It’s funny, if you ask tech people, a lot fall into the “I have to have the latest and greatest” but also a lot fall into the “I’ll upgrade when they pry the rusting hardware from my cold dead hands”.
For Microsoft, the driver for backwards compatibility is economic: Microsoft wants people to buy new Windows, but in order to do that, they have to (1) convince customers that all their existing stuff is going to continue to work, and (2) convince developers that they don’t have to rewrite (or even recompile) all their stuff whenever there’s a new version of Windows.
Objectively, it seems like Microsoft made the right decision, based on revenue over the decades.
Full disclosure: I worked for Microsoft for 17 years, mostly in and around Windows, but left over a decade ago.
This is almost never a technical decision, but a 'showing off' decision IMO.
It's a "fun toys" decision.
Often one and the same.
Not concerning the iPhone, but in general tech people tend to be very vocal about not updating when they feel that the new product introduces some new spying features over the old one, or when they feel that the new product worsens what they love about the existing product (there, their taste is often very different from the "typical customer").
Great related thread from yesterday: https://news.ycombinator.com/item?id=41492251
New frameworks have vulnerabilities. Old OS flavors have vulnerabilities. OpenSSH keeps making the news for vulnerabilities.
I’d argue that software is never finished, only abandoned, and I absolutely did not generate that quote.
Stop. Just stop.
Yes, just stop... with the bullshit. OpenBSD didn't make vulnerabilities. Foreign Linux distros (OpenSSH comes from OpenBSD, and they release a portable tgz too) adding non-core features and libraries did.
It is not a liability because most of what you are talking about is just compatibility not backwards compatibility. What makes an operating system Windows? Fundamentally it is something that runs Windows apps. Windows apps existed 15-20 years ago as much as they exist today. If you make an OS that doesn't run Windows apps then it just isn't Windows anymore.
The little weird things that exist due to backwards compatibility really don't matter. They're not harming anything.
It is a great achievement. But the question is: Is it really relevant? Couldn't they move the compatibility for larger parts to a VM or other independent Subsystem?
Of course even that isn't trivial, as one wants to share filesystem access (while I can imagine some overlay limiting access), might need COM and access to devices ... but I would assume they could push that a lot more actively. If they decided which GUI framework to focus on.
A huge amount of the compatibility stuff is already moved out into separate code that isn't loaded unless needed.
The problem too, though, is users don't want independent subsystems -- they want their OS to operate as a singular environment. Raymond Chen has mentioned this a few times on his blog when this sort of thing comes up.
Backwards compatibility also really isn't the issue that people seem to think it is.
Independent subsystems need not be independent subsystems that the user must manage manually.
The k8s / containers world on Linux ... approaches ... this. Right now that's still somewhat manual, but the idea that a given application might fire off with the environment it needs without layering the rest of the system with those compatibility requirements, and also, incidentally, sandboxing those apps from the rest of the system (specific interactions excepted) would permit both forward advance and backwards compatibility.
A friend working at a virtualisation start-up back in the aughts told of one of the founders who'd worked for the guy who'd created BCPL, the programming language which preceded B, and later C. Turns out that when automotive engineers were starting to look into automated automobile controls, in the 1970s, C was considered too heavy-weight, and the systems were implemented in BCPL. Some forty years later, the systems were still running, in BCPL, over multiple levels of emulation (at least two, possibly more, as I heard it). And, of course, faster than in the original bare-metal implementations.
Emulation/virtualisation is actually a pretty good compatibility solution.
Generally speaking, the waste is only hard disk space. If no one ever loads some old DLL, it just sits there.
Nobody loads it, but the attacker. Either via a specially crafted program or via some COM service invoked from a Word document or something.
By moving the Win32 API onto Windows NT kernel, isn't that essentially what Microsoft did?
I think that VM software like Parallels has shown us that we are just now at the point where VMs can handle it all and feel native. Certainly NT could use a rewrite to eliminate all the legacy stuff… but instead they focus on Copilot and nagging me not to leave Windows Edge Internet Explorer.
My question is: why can't M$ ship the old OS running as a VM, and free themselves from backward compatibility on a newer OS?
Users will want to use applications that require features of the earlier OS version, and newer ones that require newer features. They don't want to have to switch to using a VM because old apps would only run on that VM.
Putting apps from the VM on the primary desktop is something they have already done on WSLg. Launching Linux and X server is all taken care of when you click the app shortcut. Similar to the parent’s ask, WSL2/WSLg is a lightweight VM running Linux.
In many ways the old API layers are sandboxed much like a VM. The main problems are things like device drivers, software that wants direct access to external interfaces, and software that accesses undocumented APIs or implementation details of Windows. MS goes to huge lengths to keep trash like that still working with tricks like application specific shims.
Backwards compatibility isn't their biggest problem to begin with, so that wouldn't be worth it. In effect they already did break it: the new Windows APIs (WinRT/UWP) are very different to Win32 but now people target cross platform runtimes like the browser, JVM, Flutter, etc. So it doesn't really matter that they broke backwards compatibility. The new tech isn't competitive.
They did that with Windows 7. Win 7 had an optional feature called "Windows XP Mode" that was XP running inside of a normal VM.
https://arstechnica.com/information-technology/2010/01/windo...
If 20 years is so ancient, why did they go by so fast....
Bad news. NT wasn't 20 years ago. It was 31 years ago.
It's possible that wine is more "backwards compatible" than the latest version of Windows though.
And while wine doesn't run everything, at least it doesn't circumvent security measures put in place by the OS...
I've had more luck running games from 97-00 under wine than on modern Windows.
My understanding is that the portion of revenue Microsoft makes from Windows these days is nearly negligible (under 10%). Both XBox and Office individually make more money for Microsoft than Windows, which indicates that they don't have a compelling incentive to improve it technically. This would explain their infatuation with value extraction initiatives like ads in Explorer and Recall.
My understanding is that the main thing keeping Windows relevant is the support for legacy software, so they'd be hesitant to jeopardize that with any bold changes to the kernel or system APIs.
That said, given my imagined cost of maintaining a kernel plus my small, idealistic, naive world view, I'd love it if Microsoft simply abandoned NT and threw their weight behind the Linux kernel (or, if the GPL is too restrictive, BSD - or alternatively wrote their own POSIX-compliant kernel like macOS).
Linux would be ideal given its features: containers, support for Android apps without emulation, an abundance of supported devices, helpful system capabilities like UNIX sockets (I know they started to make progress there but abandoned further development), and support for things like ROCm (which only works on Linux right now).
Microsoft could build Windows on top of that POSIX kernel and provide a compatibility layer for NT calls and Win32 APIs. I don't even care if it's open source.
The biggest value for me is development capabilities (my day job is writing a performance sensitive application that needs to run cross-platform and Windows is a constant thorn in my side).
Cygwin, msys2/git-bash are all fantastic but they are no replacement for the kind of development experience you get on Linux & MacOS.
WSL1 was a great start and gave me hope, but is now abandonware.
WSL2 is a joke; if I wanted to run Linux in a VM, I'd run Linux in a VM.
I guess a less extreme option would be for Microsoft to extend NT to be POSIX compliant - If I could target unix syscalls during development and produce binaries that worked on Windows, I supposed I'd be happy with that.
I don't understand why people keep repeating this wish, rather than the arguably better, more competitive option: open-source the NT and Windows codebase, prepare an 'OpenWindows' (nice pun there, really) release, and simultaneously support enterprise customers with paid support licences, as places like Red Hat currently do.
I couldn't disagree more. As someone who comes from a mostly-Windows pedigree, UNIX is... pretty backwards, and I look upon any attempt to shoehorn UNIX-on-Windows with a fair bit of disapproval, even if I concede that their individual developers and maintainers have done a decent job. Visual Studio (not Code) is a massively superior development and debugging tool to anything that the Unix crowd have cooked up (gdb? perf? Tell me when you can get flame graphs in one click).
Hard disagree on the development aspect of VS, which (last time I used it, in 2015) couldn't even keep up with my fairly slow typing speed.
The debugging tools are excellent, but they are certainly not any more excellent than those in Instruments on macOS (which is largely backed by DTrace).
2015 is 9 years ago. We shouldn't keep comparing Windows/Microsoft software from that long ago with modern alternatives on Linux and Mac.
That said, I agree that Visual Studio was extremely slow and clunky in the first half of the 2010s.
I didn’t compare it with a modern alternative. I compared its debugging tools to those of Instruments of the same vintage, and pointed out that the last time I tried VS it couldn’t keep up with basic typing.
NT 10.0 hails from 2015 (Windows 10) and was re-released in 2021 (Windows 11).
VS2022 is actually pretty damn slick. I use it on the daily and it's much more stable than any previous version. It's still not as fast as a text editor (I _do_ miss Sublime's efficiency), but even going back to 2019 is extremely hard.
May be very difficult or impossible if the Windows codebase has third-party IP (e.g. for hardware compatibility) with restrictive licensing
Sun managed it with Solaris (before Oracle undid that work) - indeed they had to create a license which didn't cause problems with the third party components (the CDDL).
The license was less about third-party components (GPLv2 would have worked for that too, even if it's a less-understood area) and more because GPLv3 was late, Sun wanted a patent clause in the license, and AFAIK the engineers rebelled against licensing that would have prevented the BSDs (or others) from using the code.
(For those who still believe "CDDL was designed to be incompatible with GPL", the same issues show up when trying to mix GPLv2 and GPLv3 code if you can't relicense the former to v3)
I have a fever dream vision of a "distribution" of an open source NT running in text mode with a resurrected Interix. Service Control Manager instead of systemd, NTFS (with ACLs and compression and encryption!), the registry, compatibility with scads of hardware drivers. It would be so much fun!
Isn't ReactOS close enough?
I've kept meaning to look at ReactOS and put it off again and again. I felt Windows Server 2003 was "peak Windows" before Windows 7 so I'd imagine I'd probably like ReactOS.
I can imagine the effort of open-sourcing Windows would be prohibitive.
Having to go through every source file to ensure there is nothing to cause offense in there; there may be licensed things they'd have to remove; optionally make it buildable outside of their own environment...
Or there may be just plain embarrassing code in there they don't feel the need to let outsiders see, and they don't want to spend the time to check. But you can be sure a very small group of nerds will be waiting to go through it and shout about some crappy thing they found.
I'd venture that even more nerds would go through it and fix their specific problems.
It's always been quite clear that FOSS projects with sufficient traction are the pinnacle of getting something polished. No matter how architecturally flawed or how bad the design is: many eyes seem to make light work of all edge cases over time.
On the other hand, FOSS projects tend to lack the might of a large business to hit a particular business case or criticality, at least in the short term.
Open sourcing is probably impossible for the same reasons open sourcing Solaris was really difficult. The issues that were affecting solaris affect Windows at least two orders of magnitude harder.
It's the smart play, though they'd lose huge revenues from Servers that are locked in... but otherwise, Windows is a dying operating system, it's not the captive audience it once was as many people are moving to web-apps, games are slowly leaving the platform and it's hanging on mostly due to inertia. The user hostile moves are not helping to slow the decline either.
That's an interesting idea. Some thoughts come to mind:
- The relatively low revenue of Windows for Microsoft means that they have the potential opportunity of increasing Windows profitability by dropping the engineering costs associated with NT (though on the flipside, they'd acquire the engineering cost of developing Linux).
- Open sourcing NT would likely see a majority of it ported into Linux compatibility layers which would enable competitors (not that this is bad for us as consumers, it's just not good for business)
- Adopting the Linux kernel and writing a closed source NT compatibility layer, init system, and closed source desktop environment means that the "desktop" and Microsoft aspects of the OS could be retained as private IP - which is the part that they could charge for. I know I'd certainly pay for a Linux distribution that has a well made DE.
I honestly agree. Many of the APIs show their age and, in the age of high level languages, it's frustrating to read C docs to understand function signatures/semantics. It's certainly not ergonomic - though that's not to say there isn't room to innovate here.
Ultimately, I value sameness. Aside from ergonomics, NT doesn't offer _more_ than POSIX and language bindings take care of the ergonomics issues with unix, so in many ways I'd argue that NT offers less.
Just because the tooling isn't as nice to use now doesn't mean that Microsoft couldn't make it better (and charge for that) if they adopted Linux. This isn't something entirely contingent on the kernel.
I don't see why everything has to be Linux (which I will continue to maintain has neither the better kernel- nor user-mode).
Windows and NT have their own strengths as detailed in the very article that this thread links to. When open-sourced they could develop entirely independently, and it is good to have reasonable competition. Porting NT and the Windows shell to the Linux kernel for porting's sake could easily take years, which is wasted time and effort on satisfying someone's not-invented-here syndrome. It will mean throwing away 30+ years of hardware and software backward compatibility just to satisfy an imperfect and impractical ideal.
For perspective: something like WINE still can't run many Office programs. The vast majority of its development in recent years has been focused on getting video games to work by porting Direct3D to Vulkan (which is comparatively straightforward because most GPUs have only a single device API that both graphics APIs expose, and also given the fact that both D3D and Vulkan shader code compile to SPIR-V). Office programs are the bread and butter of Windows users. The OpenOffice equivalents are barely shadows of MS Office. To be sure, they're admirable efforts, but that only gets the developers pats on the back.
VS is dogshit full of bloat and a UI that takes a PhD to navigate. CLion and QTCreator embed gdb/lldb and do the debugging just fine. perf also gets you more system metrics than Visual Studio does; the click vs CLI workflow is mostly just workflow preference. But if you're going to do a UI, at least don't do it the way VS does.
In Microsoft's perfect world your local machines would just be lightweight terminals to their Azure mainframe.
This is coming. IMO in 20 years this will be how all devices, including phones, work.
Without massive (exponential) battery/efficiency improvements it won't happen. Networking isn't something you can magically wave away. It has a cost.
Yes but by removing pretty much all processing on the end device and making it a thin client, you can extend battery life exponentially.
Is it actually the case that local computation on mobile devices is much more expensive than running the radios? I was under the impression that peripherals like the speakers, the radios, and the display often burn up much more power than local manipulation of bits.
You'd need massive networking improvements too. Telling someone "try next to the stairs, the cellular signal's better there" is an example I saw yesterday (it was a basement level), and that's not uncommon in my experience. You have both obstacles (underground levels, tunnels, urban canyons, extra thick walls, underwater) and distance (large expanses with no signal in the middle of nowhere); satellites help with the latter but not with the former. Local computing with no network dependencies works everywhere, as long as you have power.
Catching up with Google's Chromebook, so worshiped around here.
Microsoft had this and abandoned it. I was building GNU software on NT in 2000 under Interix. It became Services for Unix and then was finally abandoned.
Was finally _replaced_ by WSL.
By WSL1. But WSL2 is a VM running a Linux kernel, not POSIX compatibility for Windows.
There's still all kinds of pain and weirdness surrounding the filesystem boundary with WSL2. And contemporary Windows still has lots of inconsistency when trying to use Unix-style paths (which sometimes work natively and sometimes don't), and Unix-y Windows apps are still either really slow or full of hacks to get semi-decent performance. Often that's about Unix or Linux expectations like stat or fork, but sometimes other stuff (see, for instance, Scoop's shim executable system that it uses to get around Windows taking ages to launch programs when PATH is long).
WSL2 also just isn't a real substitute for many applications. For instance, I'd like to be able to use Nix to easily build and install native Windows software via native toolchains on Windows machines at work. You can't do such a thing with WSL2. For that you need someone who actually knows Windows to do a Windows port, and by all reports that is very different from doing a port to a Unix operating system.
Idk if what people are asking for when they say 'POSIX compliant' with respect to Windows really has much to do with the POSIX standard (and frankly I don't think that matters). But they're definitely asking for something intelligible and real that Windows absolutely lacks.
Interix was what Windows lacks, but it was abandoned. It wasn't a Linux compatibility layer like WSL1 (or just a gussied-up Linux VM like WSL2). It was a freestanding implementation of POSIX and building portable software for it was not unlike building software portable to various *nixes. GNU autotools had a target for it. I built software from source (including upgrading the GCC it shipped with).
It was much more elegant than WSL and was built in the spirit of the architecture of NT.
IIRC Interix was a separate "subsystem" in the Windows API model - psxss.exe, with Win32 covered by csrss.exe - and, believe it or not, there was an OS/2 one too.
What does it say about the practical usefulness of this Windows facility that MS has, it seems, never maintained one of these 'personalities' long-term?
I think it's probably business case and revenue potential, not practical usefulness. I felt like Interix was plenty useful but probably couldn't earn its keep. I think that pluggable personalities even exists in NT speaks to the general Microsoft embrace / extend / extinguish methodology. They were a means to an end to win contracts.
Exactly my thoughts. I really admired the design and how far WSL1 got. It is just sad to see it abandoned.
I couldn't have said it better. If I wanted to run Linux in a VM, I'd run Linux in a VM, why are we pretending something special is going on.
Yes WSL1 was really special. Talking to the NT kernel from a Linux environment.
I think it was mainly the docker crowd that kept asking for compatibility there :(
Microsoft could have implemented the Docker API as part of WSL1 instead of loading up a real Linux kernel for it. That's how LX Zones on Illumos work for running Docker containers on non-Linux without hardware virtualization.
I'm sure it's tricky and hard (just like WSL1 and WINE are tricky and hard), but we know it's at least possible because that kind of thing has been done.
But why was it better, other than aesthetic preference?
The GPL isn't too restrictive. Google has no issue with it on Android (which uses a modified Linux kernel). GPL doesn't mean you have to open-source everything, just the GPL components, which in the case of the Linux kernel, is just the kernel itself. MS already contributes a bunch of drivers (for their hypervisor) to the Linux kernel. They could easily make a Linux-based OS with their own proprietary crap on top if they wanted to.
They wouldn't need CPU-level emulation, but the API would need some kind of compatibility layer, similar to how WINE serves this purpose for Windows applications on Linux.
They don't need to: they can just use WINE. They could improve that, or maybe fork it and add some proprietary parts like CodeWeavers does, or they could even just buy out CodeWeavers.
Oh hell no!
Diversity in operating systems is important, and the NT architecture has several advantages over the Linux approach. I definitely don't want just one kernel reigning supreme, not yet at least - although that is probably inevitable.
This entire article is an elegant argument why this would be a terrible idea. Didn't you RTFA?
Way to expose and highlight your ignorance. :-( NT 3.1, the first release, was POSIX compliant in 1993, and every release since has been.
$20B+ annually is not “nearly negligible.” That’s more revenue than all but 3 other software companies: Oracle $46B, SAP $33B, and Salesforce $30B. It’s more annual revenue than Adobe and every other software company.
"I guess a less extreme option would be for Microsoft to extend NT to be POSIX compliant"
There was a time MS sales pitched NT as more POSIX compliant than the UNIXes.
Without Windows, there would be no platform to sell Office on (macOS aside). That's just a side note.
The important piece you are missing is this: the entirety of Azure runs on an optimized variant of Hyper-V, hence all of Azure runs on Windows. That is SUBSTANTIAL!
Pulling WSL from Windows 10 was particularly nasty.
What does that mean, exactly? Linux is also an "aging design", unless I missed a big announcement where they redesigned it at any point since 1994.
cf Terry Davis saying "Linux wants to be a 1970s mainframe".
Every new system wants to be a mainframe when it grows up. VMS, Unix, Linux, NT...they all started "small" and gradually added the capabilities and approaches of the Bigger Iron that came before them.
Call that the mainframe--though it too has been evolving all along and is a much more moving target than the caricatures suggest. Clustering, partitions, cryptographic offload, new Web and Linux and data analytics execution environments, most recently data streaming and AI--many new use modes have been added since the 60s and 70s inception.
MacOS started on desktop, moved from there to smartphones and from there to smartwatches. Linux also moved ‘down’ quite a bit. NT has an embedded variant, too (https://betawiki.net/wiki/Windows_NT_Embedded_4.0, https://en.wikipedia.org/wiki/Windows_XP_editions#Windows_XP..., https://en.wikipedia.org/wiki/Windows_IoT).
True. Every new system wants to be just about everything when it grows up. Run workstations, process transactions, power factories, drive IoT, analyze data, run AI...
"Down" however is historically a harder direction for a design center to move. Easier to add features--even very large, systemic features like SMP, clustering, and channelized I/O--than to excise, condense, remove, and optimize. Linux and iOS have been more successful than most at "run smaller, run lighter, fit into a smaller shell." Then again, they also have very specific targets and billions of dollars of investment in doing so, not just hopeful aspirations and side-gigs.
TD had some interesting ideas when it came to simplifying the system, but I think the average person wants something in between a mainframe and a microcomputer.
In Linux/Unix there is too much focus on the "multiuser" and "timesharing" aspects of the system, when in the modern day you generally have one user with a ton of daemons, so you're forced to run daemons as their own users and then have some sort of init system to wrangle them all. A lot of the unixisms are not as elegant as they should be (see Plan 9, GoboLinux, etc.).
TempleOS is more like a commodore 64 environment than an OS: there's not really any sort of timesharing going on and the threading is managed manually by userspace programs. One thing I like is that the shell language is the same as the general programming language (HolyC).
Every modern OS wants to be that, even iOS, at least internally.
Linux actually did have some pretty significant redesigns with some notable breaking changes. It wasn't until the 2.4 line ended in the late oughts that Linux as we know it today came fully into existence.
Linux 2.6 internals were very different from 2.4 internals which were hugely different from 2.2. Programming for the three was almost like targeting 3 different kernels.
What were some of those changes / developments?
FWIW Linux got support for kernel modules in January 1995.
Yeah, this was one thing I spotted as well. The author seems to miss that the norm for Unix/Linux is that the OS itself ships the drivers, whereas MS assumes the manufacturer should provide them.
It also entirely overlooks how the system allows a user with no specialized knowledge to authorize random code to run in a privileged environment, which led to vulnerabilities that had their own vulnerabilities.
That was in response to the beginning of the article:
"I’ve repeatedly heard that Windows NT is a very advanced operating system"
It's very advanced for decades ago. It's not meant as an insult.
About 20 years ago, despite being a Linux/UNIX/BSD diehard, I went through the entire Inside Windows NT book word by word and took up low-level NT programming and gained a deep respect for it and Dave Cutler. Also a h/t to Russinovich who despite having better things to do running Winternals Software[1], would always patiently answer all my questions.
1. https://en.wikipedia.org/wiki/Sysinternals
Unix is an Apollo-era technology! Also an aging design.
Except unix nowadays is just a set of concepts and conventions incorporated into modern OSs
What percent of Unix users are using a "modern OS" and what percentage are using Linux, which hasn't significantly changed since it was released in 1994?
Of course, I meant the design hasn't changed. Linux has had a lot of refactoring, and probably Windows has also.
My point was that most people are using things like Linux, macOS, etc. nowadays, which are all also pretty old by now but not nearly as old as AT&T Unix.
Linux has changed dramatically since its first release. It has major parts rewritten every decade or so, even. It just doesn't break its ABI with userspace.
Let's be charitable: removal of the global lock was a fairly big change.
How “modern” are they when they’re just a bunch of shell scripts on top of POSIX? SystemD caught up to NT4 and the original MacOS.
The transition happened to the huffing and puffing/kicking and screaming of many sysadmins.
Still a minority of sysadmins though. Most seem to have embraced it to an extent that's honestly a little sad to see. I liked to think of the linux community as generally being a more technical community, and that was true for a long time when you needed more grit to get everything running, but nowadays many just want Linux to be 'free windows'.
This means Linux has "made it."
I guess that grit was a gateway to a basic Linux experience for a long time - it did take a lot of effort to get a normal desktop running in the early to mid 90's. But that was never going to last - technical people tend to solve problems and open source means they're going to be available to anyone. There are new frontiers to apply the grit.
Set of concepts derived from "whatever the hell Ken Thompson had in his environment circa 1972".
If by "modern" you mean stuff between 1930 and 1970, sure, most contemporany OSes can trace roots from that era.
Seems to me they should pull an Apple. Run everything old in some "rosetta" like system and then make something 100% new and try to get people to switch, like say no new updates except security updates for the old system so that apps are incentivized to use the new one.
Nobody wants something 100% new. Users don't want it. Developers don't want it. You can make a new OS but then you'll have zero developers and zero users.
Yet this fantasy exists.
And as soon as you make something new, it'll be old, and people will call for its replacement.
People live under the delusion that OS X was "100% new" when in fact it was warmed-over NeXTSTEP from 1989. Most of them probably have never seen or heard of a NeXT workstation.
To reinforce how much people hate "100% new" how long has Microsoft been working on ReFS? 20 years? The safest most boring job in the world must be NTFS developer.
The OS X story is even worse than that. When Apple first released OS X to developers, like Adobe, they balked. They weren't going to port their Mac applications to this "new" operating system. Apple had to take another year to develop the Carbon APIs so Mac developers could more easily port their apps over.
Carbon is a better Apple-related comparison since it's basically a cleaned-up version of the classic Mac OS API as a library that ran on both Mac OS X and classic.
Win32 runs on Linux by way of SQL Server.
uhhhhhh yes, but not really? All Win32 calls are intercepted and handled by SQLPAL, instead. https://www.microsoft.com/en-us/sql-server/blog/2016/12/16/s...
You left out the important part: abandon the Rosetta-like system a mere few years later once you've lured them in, then fuck everyone over by breaking backwards compatibility every OS release. Apple really has the "extinguish" part nailed down.
I created a file named aux.docx on a pendrive with Linux, then tried to open it on Windows 7. It crashed Word with a strange error. I don't know what would happen on 8+.
It would fail, too. ‘CON’ has been a reserved name since the days of DOS (actually CP/M, though that doesn’t have direct lineage to Windows) where it acted as a device name for the console. You can still use it that way. In a CMD window:
`type CON > file.txt`, then type some stuff and press CTRL+Z.
https://learn.microsoft.com/en-us/windows/win32/fileio/namin...
This is a Win32-ism rather than an NT-ism. This will work:
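(The example itself seems to be missing from this thread; presumably it refers to the \\?\ extended-path prefix, which tells Win32 to skip its DOS-device name parsing and hand the path to NT more or less verbatim. A sketch under that assumption:)

```c
#include <windows.h>

int main(void)
{
    /* The \\?\ prefix makes Win32 skip its legacy reserved-name parsing
       (CON, AUX, NUL, ...) and pass the path to NT essentially as-is,
       so a file literally named aux.docx can be created.
       Assumes C:\temp already exists. */
    HANDLE h = CreateFileA("\\\\?\\C:\\temp\\aux.docx",
                           GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;
    CloseHandle(h);
    return 0;
}
```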
It's a DOS holdover implemented on the Win32 side of things in user space. I'm pretty sure it still exists on Win11.
https://stackoverflow.com/questions/40794287/cannot-write-to...
Didn't the article say that unix's were even more full of cruft?
Did it?
It did. The whole article was about how the NT kernel is better designed.
Win32 is not the issue, MS could just create shims for these in a secure way. It's 2024, not 1997.
Ditto with GNU/Linux and old SVGAlib games. There should already have been a wrapper against SDL2.
I'm not sure what you're trying to say here, but those "shims" exist. Apps generally do not talk directly to the Executive (kernel). Instead, the OS has protected subsystems that publish APIs.
Apps talk to the subsystem, and the subsystem talks to the Executive (kernel).
Traditionally, Windows apps talk to the Win32 subsystem[1]. This subsystem, as currently designed, is an issue as described in my previous comment.
1. https://en.wikipedia.org/wiki/Architecture_of_Windows_NT#Win...
Caveat: Details of this may have changed in the last couple major Windows versions. I've been out of the NT game for a bit. Someone correct me, if so.
Yes, I will correct you. DirectDraw games will run dog slow under Windows 8 and up. You can run them at full speed with WineD3D, as it has libraries to map both GDI and DDraw to OpenGL.
That was the whole point of WinRT and UWP, and the large majority of the Windows developer community rebelled against it, unfortunately.
It didn't help that, trying to cater to those developers, management kept rebooting the whole development approach, making adoption even worse.
Does any common OS have a modern design?
Unix is around the same age.
I'd rather we get rid of all the marketing crap first.
My main impression of Windows is that all the 'old' NT kernel stuff is very solid and still holds up well, that there's a 'middle layer' (Direct3D, DXGI, window system compositing) where there's still solid progress in new Windows versions (although most of those 'good parts' are probably ported over from the Xbox), while most of the top-level user-facing code (UI frameworks, Explorer, etc.) is deteriorating at a dramatic pace, which makes all the actual progress that still happens under the hood kinda pointless, unfortunately.
Imagine the alternate reality where we got Longhorn with WinFS instead of Vista.