The readme compares it to the cross-architecture Cosmopolitan libc, but Docker is anything but cross-platform. On any other platform besides Linux it requires a Linux VM.
Linux containers are great (and I run Linux as my desktop OS), just pointing out the not-so-efficient nature of considering this cross-platform.
OCI image manifests can specify platforms and architectures. From the end user’s point of view it can be all the same invocation.
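For illustration, a multi-platform image is published as an OCI image index whose entries each name an os/architecture pair, and the client pulls the matching one. A simplified sketch (digests and sizes are placeholders):

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:<linux-digest>",
      "size": 1234,
      "platform": { "os": "linux", "architecture": "amd64" }
    },
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:<windows-digest>",
      "size": 1234,
      "platform": { "os": "windows", "architecture": "amd64" }
    }
  ]
}
```

The same `docker run myimage` then resolves to whichever entry matches the host.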
Docker natively supports Windows, and it is low lift to make native Windows images for many common programming environments.
Does anyone use it? No, not really. It makes a lot of sense if you need Windows-stack things that are superior to their Linux counterparts, like DirectX, but maybe not so much for regular applications.
There is also macOS Containers, a project with a decent proof of concept: a containerd fork that runs macOS container images. In principle there is a shorter path of work for so-called host-process containers, but fully isolated ones do exist for macOS; they could work with e.g. Kubernetes, people want them, it makes sense, and it sort of does exist already.
The difference between cross-platform and “cross-platform” as you’re using it really comes down to some absolutely gigantic company, like Amazon or Google, literally top 10 in the world, pushing this stuff into the social media zeitgeist.
Plenty of Windows shops use Windows containers; on my side alone I can count five projects delivered into production using them.
Many app deployments in Azure also use Windows containers.
Yes, "Windows shops" are stupid, you got that right. Windows is a toy in the server space, always has been, always will be.
Plenty of big boys money goes through such "toy" servers.
Is that because Microsoft is good at selling it or because it is actually a good piece of tech? We recently had to set up some Microsoft platinum partner test automation software, and the money we spent on SQL Server and Windows instances (on Azure of course) alone could've funded a fleet of Linux servers or a junior dev writing Playwright scripts all day.
(Not to mention it produces unactionable output by default, and if I love one thing, it's "this page didn't work one out of 100 times, must be infra problem" incidents)
You would have spent a similar amount of money on Red Hat licenses, or on anyone else worth using with big-boy support contracts.
Linux is only free when our time isn't worth money.
Playwright is developed by Microsoft, by the way.
Systems engineer here: I haven't worked at a company that pays for Linux support in 12 years, and that was at scale (10K+ servers). You don't need IBM or Canonical to get patches or a heads-up about major vulns. There are several ways to go with this, but I get up-to-date patches for free with Debian. And I can count on one hand the number of times any org I've been part of needed a kernel engineer or access to one. Support contracts for the OS, AFAIK, aren't worth the money any more unless you really don't have anyone who can do system support.
Oh, man, not this shit.
Linux saves time. Windows servers are an endless time sink: they cost more in hardware and carry added license costs. And license costs are mostly the time you spend managing licenses; the actual money you send to Microsoft is peanuts.
Windows only costs its price if your time is worthless.
That's funny, because having rotated through all three major cloud providers in the past five years (at different places), Azure support is the most time-wasting of them, not worth it even if it were free. I'd much prefer to waste my time reading documentation that makes sense, but Azure doesn't have that either.
Azure doesn't happen to be an outlier in Microsoft products, right?
And I'm happy the people there get to make things that work outside the eldritch horror that is Windows Server.
https://news.ycombinator.com/newsguidelines.html
Why is Windows Server a toy?
The main problem, I think, with Windows containers is that they are only really supported on Windows Server - which most developers don't have access to.
You can run them through Docker Desktop, but then why not just run the same containers you will be deploying on your server (which is most likely going to be Linux-based)?
I would love for MS to make containers the way to deploy programs to Windows, but that requires making the runtime part of the default install and available on all editions of the OS.
Windows Server 2022 container images work on Windows 11. Docker Desktop uses a shim for Windows containers. A single statically compiled “dockerd” binary for Windows is all you need to run Windows containers with the familiar Docker commands; you could also use PowerShell.
They are supported all the same. IMO the main issue is that this feature is poorly marketed.
It's extremely poorly marketed: I looked up the MS documentation when I wrote that comment, and it still only said Windows Server.
Still, unless it works on Windows 10 Home, it won't become the default way to install software on Windows - which sucks, since it's a better way than the current one.
Software delivered via the Windows Store, especially if packaged with MSIX, already uses containers.
Windows containers are supported in Windows Professional as well.
Maybe it is because I spend most of my time as a Windows developer, but this wasn't hard to find:
https://learn.microsoft.com/en-us/virtualization/windowscont...
It does say, further down, that you need Windows Server even for development purposes.
What I missed was that it only applied to Windows Server images.
Also, the exception only seems to apply to development and testing and, for some reason, only to a physical computer.
Regardless, I was clearly wrong: it is possible, just not well documented.
Isn't this still Windows Server images only? Can I expect everything to run that would run on Windows 10, 11, and/or Server?
Windows containers can be built on Windows 10 Pro and Windows 11 Pro. All you need is the Microsoft hypervisor, enabled under Settings -> Apps and Features -> Additional Windows features.
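For anyone looking for the concrete steps, on Windows 10/11 Pro this roughly amounts to the following (elevated PowerShell, reboot afterwards; a sketch based on Microsoft's documented feature names):

```powershell
# Enable container support and the Microsoft hypervisor (run elevated, then reboot).
Enable-WindowsOptionalFeature -Online -FeatureName Containers -All
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# Afterwards a Windows image can be run, e.g. with Hyper-V isolation:
docker run --isolation=hyperv mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c echo hello
```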
I really like what this script is doing - it's specifying system-level dependencies, a database schema, an interpreter, the code that runs on that interpreter, the data (on disk!) required by that code, and an invocation to execute the code, all in one script. That's amazing, and this is an excellent model for sharing a stand-alone application with non-trivial dependencies!
However, Docker is OS-level virtualization. Docker "natively" supports Windows in the sense that there is a native app. That native app spins up Linux virtual machines, so the container is "native" to my Intel CPU with its virtualization extensions, but it is not native to Windows. I use it, which I say with no animus toward your original message.
edit: I was ignorant of native windows containers. I'm old and my brain still maps docker to lxc I guess. Apologies to OP - the DirectX line should have caught my attention.
No, Docker supports native Windows containers.
Docker Desktop aims to provide the same experience across Mac and Windows, and as such those use Linux VMs, yes. However, Docker most definitely supports Windows containers.
Sorry, that's right. You can probably guess that all of my Windows Docker use is with Linux images. This particular script wouldn't work, as there is no Node image for a native Windows host (unless there is? Again, I'm ignorant of native Windows containers).
The Windows Subsystem for Linux can install an Ubuntu image for ready usage.
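For reference, the modern one-step install, as documented by Microsoft, is:

```powershell
# Installs WSL together with an Ubuntu distribution (elevated prompt, reboot after).
wsl --install -d Ubuntu
```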
Also ignorant - I have WSL/DockerDesktop etc...
I run Ubuntu desktop in a VirtualBox VM.
If I run Ubuntu desktop on Docker, I have to RDP into it.
What type of container will WSL build? A desktop, or headless with a CLI?
Finally, which is lighter-weight: a VirtualBox VM, a Docker container, or whatever WSL makes?
EDIT: NM - I understand the answer now.
Is DirectX superior to Vulkan? Serious question from a graphics noob (who dislikes Windows development).
DirectX the API compared to Vulkan: whatever.
DirectX as a whole product: yes.
For the two middlewares, Unity and Unreal, on real applications, DirectX 11 will have better latency (lower CPU time, mostly) and DirectX 12 will have higher throughput (greater FPS), but neither by very much. For a single application on ordinary hardware, it won't matter. But for the thing I measure, occupancy, you can get something like 3x the efficiency with DirectX on Windows compared to the same application on Vulkan on Linux.
DirectX is more than just a Vulkan equivalent: it also does sound, input, etc.
Vulkan is like Direct3D 12, a low level 3D API. Between the two, most seem to consider Vulkan the better option. However, Vulkan has the reputation of being verbose and very much not noob friendly. It is mostly geared towards advanced engine developers who want full control to make the most of the hardware.
Besides 3D, the rest of the multimedia APIs are a bit of a mess, it seems, on Windows and elsewhere. I haven't looked at them for many years though.
chroot requires disabling SIP on macOS, so any kind of "container" that shares the kernel but has a mostly isolated userspace is never going to happen on macOS. If you want an isolated host environment on macOS, the bespoke approach is to use VZVirtualMachine. But the whole point of containerization is to not require virtualization, so that kind of defeats the purpose.
I really think people who "want" containers on macOS don't understand containers or the problem they solve, and if they think they need them, they should consider why they aren't already running their dev environment in Linux.
Doesn’t Windows use WSL?
Not for Windows containers. But no one really uses those anyway.
We use them.
Many Windows products, e.g. Sitecore, only support Windows containers.
Microsoft Store software relies on Windows containers infrastructure.
Windows containers make use of Windows jobs APIs.
WSL 2 is a Linux VM
WSL 1 is an API shim that gets Linux binaries running on Windows natively. It is more akin to what Wine does on Linux.
Docker Desktop runs either with Hyper-V or with WSL. https://docs.docker.com/desktop/install/windows-install/
I explored the idea of using the scratch image with a Cosmopolitan binary to get something more cross-architecture, but you need a shell to run those binaries. I'd love to see cross-architecture Docker images, if someone else can figure out a trick to make it work.
I think parent was pointing out that you need Linux to run Docker (since it doesn't run natively on any other OS) which is different from what Cosmopolitan provides.
Edit: Ok, apparently it natively supports Windows for Windows containers and for everything else there's a Hyper-V integration. Not sure if you can write a portable Dockerfile script like that though.
You surely can, I have Dockerfiles that do it.
It is a matter of having build parameters for base images and using programming languages that are mostly OS agnostic.
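As a sketch of the build-parameter idea (image names and paths here are placeholders, not from any actual project):

```dockerfile
# The base image is a build argument, so one Dockerfile can produce
# a Linux or a Windows image depending on what is passed at build time.
ARG BASE_IMAGE=mcr.microsoft.com/dotnet/runtime:8.0
FROM ${BASE_IMAGE}
COPY app/ /app
ENTRYPOINT ["dotnet", "/app/app.dll"]
```

Built with e.g. `docker build --build-arg BASE_IMAGE=ubuntu:24.04 .` on a Linux host, or a Windows base image on a Windows host; an OS-agnostic runtime like .NET is what makes the rest of the file shareable.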
Just use redbean and provide an init Lua file. Or use a http://cosmo.zip provided interpreter (like Python, maybe even bash).
Each APE file is also a valid zip file. Add your dependencies as if the APE were an archive:
Also add a `.args` file, with one argument per line; these arguments are applied on start. You can use `/zip/mydependency.anything` to read from files, but if you have an executable dependency you'll need to extract it first (I use the host shell or host PowerShell for this). You can do this with any software you can compile with cosmocc, by adding a call to LoadZipArgs[1] in the main function.
It's easy to get started, your ideas will branch out as soon as you start playing with it.
[1]: https://github.com/jart/cosmopolitan/blob/master/tool/args/a...
Makes me wonder if containerization is even possible without a VM for non-Linux machines.
https://macoscontainers.org
I do believe so, but only for the host OS. E.g. Mac containers work for Mac, etc.
Doesn’t Cosmopolitan rely on QEMU to emulate an x86_64 CPU when running on any other platform?
No, it doesn't. You're probably thinking of binfmt https://docs.kernel.org/admin-guide/binfmt-misc.html.
No
that's not necessarily true
Not on Windows when using Windows containers.
Not to mention the non-standard -S flag to env, which makes the shebang work.
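For anyone curious: a shebang line traditionally passes at most one argument to the interpreter, and `env -S` (GNU coreutils, also on the BSDs) splits its single string into separate arguments, which is what makes multi-argument shebangs like `#!/usr/bin/env -S python3 -B` possible. The splitting can be seen directly:

```shell
# env -S splits the quoted string into words and executes: echo one two
env -S 'echo one two'   # prints "one two"
```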