Show HN: #!/usr/bin/env docker run

pushedx
46 replies
14h12m

The readme compares it to the cross-architecture Cosmopolitan libc, but Docker is anything but cross-platform. On any other platform besides Linux it requires a Linux VM.

Linux containers are great (and I run Linux as my desktop OS), just pointing out the not-so-efficient nature of considering this cross-platform.

doctorpangloss
26 replies
13h44m

OCI image manifests can specify platforms and architectures. From the end user’s point of view it can be all the same invocation.

Docker natively supports Windows, and it is low lift to make native Windows images for many common programming environments.

Does anyone use it? No, not really. It makes a lot of sense if you need Windows-stack stuff that is superior to Linux, like DirectX, but maybe not so much for regular applications.

There is also macOS Containers, a project with a decent proof of concept: a containerd fork that runs macOS container images. In principle there is a shorter path of work for so-called host-process containers, but fully isolated containers exist for macOS too; they could work with e.g. Kubernetes, people want them, it makes sense, and it sort of does exist.

The difference between cross-platform and “cross-platform” as you’re talking about it really comes down to having some absolutely gigantic company, like Amazon or Google, literally top 10 in the world, pushing this stuff into the social media zeitgeist.

pjmlp
9 replies
11h15m

Plenty of Windows shops use Windows containers; from my side, I can count five projects delivered into production using Windows containers.

Many app deployments in Azure also use Windows containers.

opentokix
8 replies
9h6m

Yes, "windows shops" are stupid, you got that right. Windows is a toy in the server space, always as been, always will be.

pjmlp
5 replies
9h4m

Plenty of big-boy money goes through such "toy" servers.

asmor
4 replies
8h56m

Is that because Microsoft is good at selling it or because it is actually a good piece of tech? We recently had to set up some Microsoft platinum partner test automation software, and the money we spent on SQL Server and Windows instances (on Azure of course) alone could've funded a fleet of Linux servers or a junior dev writing Playwright scripts all day.

(Not to mention it produces unactionable output by default, and if I love one thing, it's "this page didn't work one out of 100 times, must be infra problem" incidents)

pjmlp
3 replies
8h17m

You would have spent a similar amount of money on Red Hat licenses, or on anyone else worth using with big-boy support contracts.

Linux is only free when our time isn't worth money.

Playwright is developed by Microsoft, by the way.

vajrabum
0 replies
1h12m

Systems engineer here. I haven't worked at a company that pays for Linux support in 12 years, and that was at scale (10K+ servers). You don't need IBM or Canonical to get patches or a heads-up about major vulns. There are several ways to go with this, but I get up-to-date patches for free with Debian. And I can count on one hand the number of times any org I've been part of needed a kernel engineer or access to one. OS support contracts, AFAIK, aren't worth the money any more unless you really don't have anyone who can do systems support.

marcosdumay
0 replies
1h57m

Linux is only free when our time isn't worth money.

Oh, man, not this shit.

Linux saves time. Windows servers are an endless time sink that cost more in hardware and carry added license costs. And license costs are mostly the time you spend managing your licenses; the actual money you send to Microsoft is peanuts.

Windows only costs its price if your time is worthless.

asmor
0 replies
7h33m

Linux is only free when our time isn't worth money.

That's funny, because having rotated through all 3 major cloud providers in the past 5 years (at different places), Azure support is the most time-wasting of them; it wouldn't be worth it even if it were free. I'd much prefer to waste my time reading documentation that makes sense, but Azure doesn't have that either.

Azure doesn't happen to be an outlier in Microsoft products, right?

Playwright is developed by Microsoft, by the way.

And I'm happy the people there get to make things that work outside the eldritch horror that is Windows Server.

supriyo-biswas
0 replies
7h22m

https://news.ycombinator.com/newsguidelines.html

Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

mardifoufs
0 replies
2h15m

Why is Windows Server a toy?

tomjen3
6 replies
13h26m

The main problem, I think, with Windows containers is that they are only really supported on Windows Server - which most developers don't have access to.

You can run them through Docker Desktop, but then why not just run the same containers you will be deploying on your server (which is most likely going to be Linux-based)?

I would love for MS to make containers the way to deploy programs to Windows, but that requires them to make the runtime part of the default install and to make it available on all editions of the OS.

doctorpangloss
4 replies
13h11m

Windows Server 2022 containers work on Windows 11. Docker Desktop uses a shim for Windows containers. "dockerd", a single statically compiled binary for Windows, is all you need to run Windows containers with the familiar Docker commands; you could also use PowerShell.

They are supported all the same. IMO the main issue is that this feature is poorly marketed.
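
Once the feature is enabled, it's the same CLI as anywhere else; for instance, a sketch on a Windows host (the image tag is just an example):

    docker run mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c "echo hello from a Windows container"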

tomjen3
2 replies
11h27m

It's extremely poorly marketed: I looked up the MS documentation when I wrote that comment and it still only said Windows Server.

Still, unless it works on Win10 Home, it won't be the default way to install software for Windows - which sucks, since it's a better way than the current one.

pjmlp
1 replies
11h8m

Software delivered via the Windows Store, especially if packaged with MSIX, already uses containers.

Windows containers are supported on Windows Professional as well.

Maybe it is because I spend most of my time as a Windows developer, but this wasn't hard to find:

One physical computer system running Windows 10 or 11 Professional or Enterprise with Anniversary Update (version 1607) or later.

https://learn.microsoft.com/en-us/virtualization/windowscont...

tomjen3
0 replies
10h12m

It does say further down that you need Windows Server even for development purposes.

What I missed was that this only applies to Windows Server images.

Also, the exception only seems to apply to development and testing and, for some reason, only to a physical computer.

Regardless, I was clearly wrong: it is possible, just not well documented.

gnatolf
0 replies
11h22m

Isn't this still Windows Server images only? Can I expect everything that would run on Win 10, 11, and/or Server to run?

reactordev
0 replies
9h11m

Windows containers can be built on Windows 10 Pro and Windows 11 Pro. All you need is the hypervisor from Microsoft, installed under Windows Settings -> Apps and Features -> Additional Windows features.

cowboyscott
4 replies
13h2m

I really like what this script is doing - it's specifying system-level dependencies, a database schema, an interpreter, the code that runs on that interpreter, the data (on disk!) required by that code, and an invocation to execute the code, all in one script. That's amazing, and this is an excellent model for sharing standalone applications with non-trivial dependencies!

However, Docker is OS-level virtualization. Docker natively supports Windows in the sense that there is a native app. That native app spins up Linux virtual machines, so the container is "native" to my Intel CPU with its virtualization extensions, but it is not native to Windows. I use it, which I say with no animus toward your original message.

edit: I was ignorant of native windows containers. I'm old and my brain still maps docker to lxc I guess. Apologies to OP - the DirectX line should have caught my attention.

cpuguy83
3 replies
12h46m

No, Docker supports native Windows containers.

Docker Desktop aims to provide the same experience across Mac and Windows, and as such those use Linux VMs, yes. However, Docker most definitely supports Windows containers.

cowboyscott
2 replies
12h25m

Sorry, that's right. You can probably guess that all of my Windows Docker use is with Linux images. This particular script wouldn't work, as there is no Node image for a native Windows host (unless there is? Again, I'm ignorant of native Windows containers).

smitty1e
1 replies
6h0m

Windows Subsystem for Linux can install an Ubuntu image for ready usage.

samstave
0 replies
1h53m

Also ignorant - I have WSL/DockerDesktop etc...

I run ubuntu desktop in a vbox VM.

If I run ubuntu desktop on docker, I have to RDP into it.

What type of container will WSL build? A desktop - or headless with CLI?

Finally - which is lighter-weight, Vbox VM, or a Docker container, or whatever WSL makes?

EDIT: NM - I understand the answer now.

bionhoward
2 replies
8h12m

Is DirectX superior to Vulkan? Serious question from a graphics noob (who dislikes Windows development).

doctorpangloss
0 replies
1h21m

DirectX the API compared to Vulkan: whatever.

DirectX as a whole product: yes.

For the two middlewares Unity and Unreal, on real applications, DirectX 11 will have better latency (lower CPU time mostly), DirectX 12 performance will be higher throughput (greater FPS), but neither will be by very much. Like a single application on ordinary hardware, it won’t matter. But for the thing I measure, occupancy, you can get something like 3x as much efficiency with DirectX on Windows compared to the same application on Vulkan on Linux.

GuB-42
0 replies
5h7m

DirectX is more than just Vulkan. It does sound, input, etc...

Vulkan is like Direct3D 12, a low level 3D API. Between the two, most seem to consider Vulkan the better option. However, Vulkan has the reputation of being verbose and very much not noob friendly. It is mostly geared towards advanced engine developers who want full control to make the most of the hardware.

Besides 3D, the rest of the multimedia APIs seem to be a bit of a mess, on Windows and elsewhere. I haven't looked at them for many years though.

duped
0 replies
1h43m

chroot requires disabling SIP on macOS, so any kind of "container" that shares the kernel but has a mostly isolated userspace is never going to happen on macOS. If you want an isolated host environment on macOS, the bespoke approach is to use VZVirtualMachine. The whole point of containerization is to not require virtualization, so that kind of defeats the purpose.

I really think people who "want" containers on MacOS don't understand containers or what problem they solve, and if they think they need them should consider why they aren't already running their dev environment in Linux.

adastra22
5 replies
11h39m

Doesn’t windows use WSL?

voxic11
1 replies
11h35m

Not for Windows containers. But no one really uses those anyways.

pjmlp
0 replies
11h6m

We use them.

Many Windows products, e.g. Sitecore, only support Windows containers.

Microsoft Store software relies on Windows containers infrastructure.

Windows containers make use of Windows jobs APIs.

ric2b
1 replies
8h6m

WSL is a Linux VM

adastra22
0 replies
1h25m

WSL1 is an API shim that gets Linux binaries running natively on Windows. It is more akin to what Wine does on Linux.

chx
0 replies
9h22m

Docker Desktop runs either with Hyper-V or with WSL. https://docs.docker.com/desktop/install/windows-install/

8organicbits
3 replies
13h53m

I explored the idea of using the scratch image with a cosmopolitan binary to get something more cross-architecture, but you need a shell to run those binaries. I'd love to see cross architecture Docker images, if someone else can figure out a trick to make it work.

t0astbread
1 replies
13h44m

I think parent was pointing out that you need Linux to run Docker (since it doesn't run natively on any other OS) which is different from what Cosmopolitan provides.

Edit: Ok, apparently it natively supports Windows for Windows containers and for everything else there's a Hyper-V integration. Not sure if you can write a portable Dockerfile script like that though.

pjmlp
0 replies
11h7m

You surely can, I have Dockerfiles that do it.

It is a matter of having build parameters for base images and using programming languages that are mostly OS agnostic.
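
A minimal sketch of the shape (image names and paths are placeholders):

    # pick the base at build time, e.g.:
    #   docker build --build-arg BASE=mcr.microsoft.com/windows/nanoserver:ltsc2022 .
    ARG BASE=debian:bookworm
    FROM ${BASE}
    COPY app/ /app/
    CMD ["/app/run"]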

alganet
0 replies
13h34m

Just use redbean and provide an init Lua file. Or use a http://cosmo.zip provided interpreter (like Python, maybe even bash).

Each ape file is also a valid zip file. Add your dependencies as if the ape was an archive:

    zip -ur myape.com mydependency.anything
Also add a `.args` file:

    zip -ur myape.com .args
For this .args file, put one argument per line. These will be passed on start. You can use `/zip/mydependency.anything` to read files, but if you have an executable dependency you'll need to extract it first (I use the host shell or host PowerShell for this).
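
For instance, hypothetically, with a cosmo.zip python build and a bundled script:

    echo '/zip/main.py' > .args
    zip -ur python.com .args main.py
    ./python.com    # now runs the bundled main.py on start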

You can do this with any software you can compile with cosmocc, by adding a call to LoadZipArgs[1] in the main function.

It's easy to get started, your ideas will branch out as soon as you start playing with it.

[1]: https://github.com/jart/cosmopolitan/blob/master/tool/args/a...

willio58
2 replies
2h52m

Makes me wonder if containerization is even possible without a VM for non-Linux machines.

shepherdjerred
0 replies
59m

edgyquant
0 replies
1h46m

I do believe so, but only for the host OS. E.g. Mac containers work for Mac, etc.

erik_seaberg
2 replies
13h48m

Doesn’t Cosmopolitan rely on QEMU to emulate an x86_64 CPU when running on any other platform?

leonheld
0 replies
6h10m

No, it doesn't. You're probably thinking of binfmt https://docs.kernel.org/admin-guide/binfmt-misc.html.

HumanOstrich
0 replies
12h6m

No

riffic
0 replies
13h34m

that's not necessarily true

pjmlp
0 replies
11h17m

Not on Windows when using Windows containers.

Arch-TK
0 replies
8h38m

Not to mention the non-standard -S flag to env which makes the shebang work.

chubot
16 replies
13h17m

Something like this should definitely exist, just not with Docker!

Podman is better but it's also a bit coupled to a distro - https://news.ycombinator.com/item?id=38981844

The problem is that the Linux kernel container primitives are a bit of a mess.

bubblewrap is a lot closer, although last I heard it's not in some distros for security reasons - https://news.ycombinator.com/item?id=30823164
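
For a taste of how close it is, a minimal bwrap invocation (a sketch; see bwrap's man page for the flags):

    # read-only view of the host rootfs, fresh /dev, /proc and /tmp, own PID namespace
    bwrap --ro-bind / / --dev /dev --proc /proc --tmpfs /tmp --unshare-pid bash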

nickstinemates
11 replies
12h35m

another docker post filled with podman propaganda. despite it all, still no one uses it.

rtpg
5 replies
12h34m

I mean, I know several people who run their infra with podman. But it's for personal things; I don't know if there is any usage at the enterprise level.

viraptor
1 replies
12h18m

Not enterprise level, but I made a choice of deploying podman at $job rather than docker for a few reasons.

KAMSPioneer
0 replies
9h33m

I'll chime in to say that I have started deploying podman over Docker where it's frictionless at $job as well. I'd say half (or more) of my new container deploys are podman.

At home I use only podman because my tinkering doesn't affect anyone but me.

oso2k
1 replies
11h32m

DISCLAIMER: I work for Red Hat. I'm formerly an OpenShift Consultant and SA.

podman has underpinned our Kubernetes distribution, OpenShift, since 4.0 was released in 2019. OpenShift is a $1B+ USD business for us (https://www.newsobserver.com/news/business/article271678707....). You can search and see a sample of who uses it for Enterprise level business.

gbraad
0 replies
4h8m

OpenShift Container Platform uses CRI-O as the container engine and runC or crun as the container runtime. Podman is only directly used for the openshift-installer, but as a container management tool uses the same underlying runtimes. This means they share the same long tenure in production when it comes to using runc. Is that what you meant?

https://docs.openshift.com/container-platform/4.14/nodes/con....

https://docs.podman.io/en/latest/#:~:text=Podman%20relies%20....

The answer is a little bit more nuanced, as the defaults differ. Podman uses crun by default: https://podman.io/docs/installation#:~:text=crun%20%2F%20run.... For OpenShift the use of crun is available as a Technology Preview: https://www.redhat.com/en/blog/whats-new-in-red-hat-openshif... since 4.12. The default for 4.14 is still runC.
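
To check what a given install actually uses (a sketch relying on Podman's Go-template output):

    podman info --format '{{.Host.OCIRuntime.Name}}'    # typically prints "crun" or "runc"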

Notwithstanding, Podman is gaining a lot of momentum, especially now with Podman Desktop. Disclaimer: I work for Red Hat on the Podman Machine and OpenShift Local/CRC teams to provide integration with Podman Desktop, aiming at developer use cases.

gbraad
0 replies
2h25m

Podman Desktop has seen a lot of increased use in the last few months, and the PM has spoken with many of our 'customers' about the future and how they are using podman.

Disclaimer: working at Red Hat as a (tech) manager of the OpenShift Local team, involved on the virtualization targets for Podman Machine and the integration of some of our extensions.

supriyo-biswas
0 replies
7h23m

https://news.ycombinator.com/newsguidelines.html

Please don't fulminate. Please don't sneer, including at the rest of the community.

ekianjo
0 replies
12h32m

Still a useful alternative to docker and can be packaged in distros

christophilus
0 replies
6h9m

I use it and love it. YMMV.

c0balt
0 replies
8h49m

Can attest from $Job that there are podman users. Podman is awesome for some of our RHEL-based systems and we will continue to use it. You're just not gonna hear about it a lot, because it's just a runtime.

65a
0 replies
10h53m

I haven't used docker since ~2017. My clusters run on cri-o, builds are with kaniko, and some of my systems just call runc with OCI container definitions. Docker (especially its API) is a giant mess, and the sooner it's replaced by smaller tools and clear standards the better.

orhmeh09
1 replies
9h48m

No love for Apptainer/Singularity?

chubot
0 replies
3h51m

What's that? What's good about it? :)

a-dub
1 replies
12h28m

i think the kernel primitives are fine, unshare and namespaces make perfect sense to me. docker, podman, buildah, buildx, whatever... all these things with cutesy names and fatal flaws seem like the mess to me.
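
for instance, a rootless sketch with util-linux's unshare (assuming unprivileged user namespaces are enabled):

    unshare --user --map-root-user --mount --pid --net --fork bash
    # inside: bash is PID 1, the mount table is private, networking is gone
    mount -t tmpfs tmpfs /tmp    # invisible to the host
    ip link                      # shows only a down loopback device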

ksjskskskkk
0 replies
11h17m

the feature IS the fatal flaw. after unsharing a namespace you still want your network to "just work". the "quality" of the solution is directly proportional to how bad the security is.

the scale runs from non-virtualized qemu all the way to docker, which will even screw with your iptables rules for your convenience. the hn crowd falls in the middle, being the Goldilocks we all are.

bionhoward
11 replies
8h14m

This is genius and I love how this is a whole app meta-seed in a single file! I think I have docker trauma, why did we reach a point where we need computers inside our computers just for normal stuff to work?

Container packing is cool, but is it just a security thing preventing us from using our normal hardware? Or versioning (NixOS)? Is wasm capable of doing this, and is wasm still alive? I just feel like needing to run tests inception-style inside and outside docker gets complicated and annoying, and these days I always try to just use Linux directly.

petercooper
7 replies
7h37m

There are many reasons, but the simple idea of "containing" is a big part of it. You could run several versions of Python, database systems, etc. on a single machine, but it rapidly becomes confusing in most cases with dependency clashes, losing track of where everything is, etc. Anyone who worked on multiple projects ~20 years ago and didn't use VMs might remember how it felt.

It's like if you have a workshop and you diligently organize all of the different parts into different trays in different units so it's easier to do all the types of work you need to do. You could just have a giant box in the corner where you chuck absolutely everything... far less complex, but it'd make your day-to-day work a nightmare.

zarzavat
2 replies
7h27m

I'd argue that running all of those things inside docker containers also rapidly becomes confusing. The confusion is inherent to the complexity of the things you are running.

I don't hate docker, but I find that it's just not that useful until you reach a certain scale. I stopped using it for personal projects and am much happier for it.

drsh0
1 replies
7h16m

What do you use for your personal projects now out of curiosity?

vrighter
0 replies
1h38m

the computer i would run the container on. I just run my software on that. If it's complex to configure, it'll still be complex to configure in docker. But then I also need to configure docker.

throwaway290
1 replies
7h26m

I know people who use Nix for this... May or may not be another level of confusing though. Also, I heard it's a bad choice for JS ecosystem.

zopa
0 replies
56m

It’s a great choice for the JS ecosystem for the same reason it’s a terrible choice for the JS ecosystem: JS dependencies are a lot, and they sometimes want to do strange things at install-time that Nix frowns upon. There’s definitely an upfront cost, and a maintenance burden as well. But the flexibility and the control over what code you’re actually running could still be worth it.

layer8
0 replies
2h7m

Executable files (and OS processes) used to be that. Then came shared libraries, configuration files, multi-executable applications, and whatnot. It would have been nicer to extend the executable formats and OS process sandboxing, IMO.

Next thing we’ll define a new format and runtime to package and run a collection of docker images with associated configuration.

d0mine
0 replies
5h30m

Docker containers (in practice) can be considered to be an extreme form of distributing static binaries (snaps, flat-packs, nix, fat go binaries, pyinstaller, etc).

It is less about security and more about having several applications on the same hardware without full blown VMs.

teknopaul
1 replies
5h59m

Many people share your concern. Hence users' dislike of snap.

bornfreddy
0 replies
4h16m

Well snap has other problems too. For me a big one is that it is pushed heavily by a single company which may or may not still exist in 10 years. Or which might decide to capitalize on its investment once enough people are locked into its ecosystem.

photonthug
0 replies
5h50m

The single-file aspect is cool for distribution but of course not for editing... a similar thing that is still maniacal/clever but somewhat easier to scale could use e.g. makeself.

a_t48
8 replies
14h20m

That’s cute - though typically the docker images I build need supporting infra around them anyhow - I’d have to forward to the build script.

adtac
7 replies
14h13m

I'm probably getting banned for committing this war crime but have you considered a

    #!/usr/bin/env -S bash -c "docker run --privileged ..."

    FROM docker
    
    RUN <<EOF cat >/other.Dockerfile
      #!/usr/bin/env -S bash -c "docker run ..."
      FROM debian:buster
    EOF
    RUN chmod +x /other.Dockerfile
    
    CMD bash -c "/other.Dockerfile & ./main"

asn1parse
2 replies
14h11m

heck, why not toss in --net=host

yonatan8070
0 replies
14h1m

    -v /:/sysroot

bravetraveler
0 replies
14h3m

... and privileged, then make the entrypoint 'nsenter' for PID 1

a_t48
2 replies
13h57m

No, but that’s a fun idea. Most of my docker crimes involve working around the lack of REBASE or similar to transplant a layer from another stage. Instead I’m forced to abuse rsync.

efrecon
0 replies
9h27m

I've got that for rebasing: https://github.com/efrecon/docker-rebase

8organicbits
0 replies
13h46m

I use `COPY --from=image` to move data between images. Do you need some more advanced features of rsync?
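
A minimal sketch (assuming a Go project; the second stage pulls files both from a named stage and from a published image):

    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/app .

    FROM scratch
    # from a named stage
    COPY --from=build /out/app /app
    # from any image reference
    COPY --from=alpine:3.19 /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
    ENTRYPOINT ["/app"]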

ta988
0 replies
14h3m

could you make it curl a script as well

tzury
4 replies
14h20m

As a rule, if you can write your code in a file, and run it as a script, it is better than writing scrolls between

    <<EOF 
    …
    EOF
The why is obvious.

mattigames
1 replies
14h4m

Some people like their personal full programs inside a single file. I think the appeal is that after opening it you only have to keep scrolling to continue reading the other "files", or that if you need to attach it to an email or something similar you are sure it has no dependency on other files. But yeah, the trade-off is not worth it.

devoutsalsa
0 replies
13h28m

And keeping it one file means you're reducing risk of a breaking change in the external script.

jonahx
1 replies
14h8m

It's not obvious to me that the benefits outweigh those of a single, easily readable file.

d-z-m
0 replies
13h3m

Some would disagree that heredoc'ing your scripts makes them easily readable.

pedrovhb
4 replies
7h13m

For an actually intentional, non-cursed version of this, see the nix-shell shebang [0]:

#! /usr/bin/env nix-shell > #! nix-shell -i python3 -p python3Packages.pillow python3Packages.ansicolor > > # scale image by 50% > import sys, PIL.Image, ansicolor > path = sys.argv[1] > image = PIL.Image.open(path) > factor = 0.5 > image = image.resize((round(image.width * factor), round(image.height * factor))) > path = path + ".s50.jpg" > image.save(path) > print(ansicolor.green(f"done {path}"))

Just `chmod +x` and you have an executable with all dependencies you specify!

[0] https://nixos.wiki/wiki/Nix-shell_shebang

nmz
1 replies
2h59m

There's a 256-byte limit for #! so this shouldn't work at all.

EDIT: Now I see it's badly formatted. Either way, be careful with #! size limits.

pronoiac
0 replies
33m

Ah. I think two leading spaces fix this? I'll try:

  #! /usr/bin/env nix-shell
  #! nix-shell -i python3 -p python3Packages.pillow python3Packages.ansicolor
  
  # scale image by 50%
  import sys, PIL.Image, ansicolor
  path = sys.argv[1]
  image = PIL.Image.open(path)
  factor = 0.5
  image = image.resize((round(image.width * factor), round(image.height * factor)))
  path = path + ".s50.jpg"
  image.save(path)
  print(ansicolor.green(f"done {path}"))

miduil
0 replies
6h23m

Totally, some practical use of that here as well:

https://dpc.pw/posts/nix-users-you-can-start-using-rust-scri...

d0mine
0 replies
5h43m

There are pip-run, pipx run, etc for Python-specific use-cases.

phone8675309
3 replies
13h23m

Is this webscale?

scrps
1 replies
9h45m

Is hyperscale.

jbverschoor
0 replies
7h51m

hyperscale^3-8

mirekrusin
0 replies
10h58m

It has web scales on it for sure.

forrestthewoods
3 replies
10h28m

Can someone explain what this is and what it does? I have no idea. I use Windows and have never needed to use Docker for anything.

BossingAround
1 replies
10h23m

It turns a Dockerfile into an executable script, so that by executing the Dockerfile, the shebang invokes docker to build and run the file.

Pretty neat if you're using Dockerfiles, but also highly non-standard so you wouldn't use it in your company repo (unless you want to increase the "what-the-fuck" level of your repo).

It's more of a "look, this is cool" kind of a thing if you're a Linux and container user.

amne
0 replies
7h8m

I can see this being used to install dependencies (PHP Composer?) to inspect code with references that resolve, instead of having to spin up a whole toolchain just for that.

maronato
0 replies
10h10m

It’s a Docker shebang. Normally shebangs are used to define what shell to use when running a script, or in the case of a python script, to run it with “./myscript” instead of “python myscript.py”.

Here OP created a little hack for building and running a docker container by adding a shebang to a Dockerfile.

Usually it’s a two step process. You first use “docker build” to build the image and then “docker run” to create a container from it. With this little hack you just run “./Dockerfile” and it does both.

It’s cool, but not really useful for most people.

WhyNotHugo
3 replies
10h6m

Using spaces in a shebang is not a standard thing and doesn’t work in most shells.

flakes
1 replies
9h48m

Curious for what systems this does not work? I start a lot of my shebangs with `#!/usr/bin/env <app>` such that I can rely on PATH for resolving application locations.

chuckadams
0 replies
2h10m

Just plain #!app also works. Probably less portable, but it does work on Linux and macOS. Not sure if POSIX has anything to say about shebangs.

jstanley
0 replies
9h54m

The spaces are being handled by `env`:

    $ env "-S echo hello world"
    hello world
https://www.gnu.org/software/coreutils/manual/html_node/env-...

Kab1r
3 replies
13h20m

This isn't POSIX compliant, is it? I feel like I tried to do something similar, putting arguments in a shebang, and ran into trouble there a year or two ago.

error9348
0 replies
10h47m

Should be fine; you can even compile and run a C file using a shebang.
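
e.g. a sketch, assuming GNU env's -S and a cc that accepts source on stdin:

    #!/usr/bin/env -S bash -c "t=\$(mktemp); tail -n +2 \$0 | cc -x c -o \$t - && exec \$t"
    #include <stdio.h>

    int main(void) {
        puts("compiled and executed straight from a shebang");
        return 0;
    }

tail strips the shebang line so cc sees plain C, and exec hands control to the freshly built binary.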

cpuguy83
0 replies
12h43m

It depends on the version of /usr/bin/env.

cerved
0 replies
6h56m

I believe it's compliant, but only in the sense that the end result is unspecified by POSIX.

I.e. you can't rely on this working on a POSIX-compliant system.

throwaway892238
2 replies
13h24m

Cute trick, but it's not actually what the title claims.

Since this is actually env calling bash first, not docker, this should just be a Bash script. You can still feed the Dockerfile to docker build via STDIN. But you'd gain the ability to shellcheck the Bash, the code would be easier to read, write, maintain, add comments to, etc. You could keep the filename the same, run it the same way, etc. The way they've done it here is just unnecessarily difficult.

notso411
0 replies
11h15m

You can say it is wrong without being insufferably condescending

chii
0 replies
12h49m

You can still feed the Dockerfile to docker build via STDIN.

but you'd then have to work out how to "filter out" the bash commands inside this bash script to make it a valid docker file.

Unless of course, you entirely store the docker file contents inside heredocs. That works fine, but it's not as "cool" as "executing" dockerfiles as a script.

jcul
2 replies
9h28m

Reminds me of the "self-consuming script pattern", seen in this Super User answer:

https://superuser.com/a/440059

It embeds an awk (or any interpreter) script, and uses sed to cut out the script between tags in $0.

I agree with other comments that this kind of thing can get messy, but sometimes it makes a lot of sense and lets you share a single file.
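
Roughly this shape (a sketch; the markers and paths are made up):

    #!/bin/sh
    # extract the awk program embedded below and hand it to awk; because of
    # exec, the shell never reads past this line
    sed -n '/^#AWK_START$/,/^#AWK_END$/p' "$0" | sed '1d;$d' > "${TMPDIR:-/tmp}/embedded.awk"
    exec awk -f "${TMPDIR:-/tmp}/embedded.awk" "$@"
    #AWK_START
    BEGIN { print "hello from the embedded awk script" }
    #AWK_END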

jordemort
1 replies
4h24m

The upgrade files for a product I used to work on were (and perhaps still are) .tar.gz files with a shell script prepended, to make a self-extracting/self-executing archive. The archive wasn't even base64 encoded or anything; just binary data with some text in front that can find the beginning of the binary.

chriswarbo
0 replies
2h44m

For those wanting to go down the self-extracting executable route, I recommend arx (it generates that sort of tarball-prepended-with-shell-script you describe) https://github.com/solidsnack/arx

The `nix bundle` command can generate an arx file, which includes all of an application's dependencies. As an example, we started getting issues with an EC2 server whose image was an accumulation of changes over several years; whilst we worked on migrating to a saner setup (containers defined using Nix), as a stop-gap we got the server working again by using `nix bundle` to create an arx executable containing working versions of all the application's dependencies, which we could copy to the existing server as a drop-in replacement of the existing (broken) command.

benatkin
2 replies
14h8m

I feel comfortable with fenced code blocks. Using heredocs all the time, not so much.

    ```js title="/root/server.js"
    console.log('test')
    ```
or

    `/root/server.js`

    ```js
    console.log('test')
    ```
vs

    RUN <<EOF cat >/root/server.js
    console.log('test')
    EOF
However the Markdown one is better if the syntax highlighting theme makes the code fence a color that doesn't stick out - either monochrome or closer to the background color.

pushedx
0 replies
13h34m

The file is a Dockerfile with a shebang line that ignores the comments with a regex. Code fences would not be valid.

The point of this isn't to share this code, it's a demo of the clever shebang line.

bionhoward
0 replies
8h5m

Brilliant idea. Single markdown file for a whole app stack?

noname120
1 replies
8h2m

The -S / --split-string option[1] of /usr/bin/env is a relatively recent addition to GNU Coreutils. It's available starting from GNU Coreutils 8.30[2], released on 2018-07-01.

Beware of portability: it relies on a non-standard behavior from some operating systems. It only works on OSs that treat all the text after the first space as argument(s) to the shebanged executable; rather than just treating the whole string as an executable path (that can happen to contain spaces).

Fortunately this non-standard behavior is more the norm than the exception: it works at least on modern GNU/Linux, BSDs, and macOS.

[1] https://www.gnu.org/software/coreutils/manual/html_node/env-...

[2] https://github.com/coreutils/coreutils/blob/b09dc6306e7affaf...

riedel
0 replies
6h39m

There are some ways of doing this more portably on Unix-like systems [0]

[0] https://unix.stackexchange.com/questions/399690/multiple-arg...

mgaunard
1 replies
2h28m

why not use docker build -q instead of that silly sha parsing?

adtac
0 replies
38m

As you can imagine, it wasn't a fun developer experience building this incrementally without build logs. This was the only way I could find to have the cake (logs) and eat it too (sha).

kevincox
1 replies
7h10m

This is cool hacking, but I really don't get this obsession with "single file". Directories exist and can contain self-contained applications without the need to pack everything into some ugly script. They aren't the slightest bit more difficult to ship around to different machines.

da39a3ee
0 replies
4h16m

I think maybe it helps to think from the point of view of a developer for whom these single-file things are tools in their workshop.

- Easier to grep a collection of single files

- Easier to see what you've got in your collection in a directory listing (whether via a shell or in a web UI such as GitHub)

- Easier to view the contents quickly (`cat`)

- General philosophy that flat is better than nested

teknopaul
0 replies
6h4m

Simple file-based solution. That's not abuse, that's Unix.

Well, that's Unix before Poettering and Microsoft.

renewiltord
0 replies
12h59m

Haha very clever. I like it.

rekado
0 replies
3h38m

There's also `guix shell` which can be used in shebang position. Example from the Guix manual:

    #!/usr/bin/env -S guix shell python python-numpy -- python3
    import numpy
    print("This is numpy", numpy.version.version)

It also works with manifest files specifying more complex environments.

mike-cardwell
0 replies
41m

I did this in Nov 2021 - https://www.grepular.com/Self_Building_and_Executing_Dockerf...

    #!/usr/bin/env -S bash -c "podman run --rm -w /x -v "\$PWD:/x" \$(podman build -q - < \$0) \${@:1}"

kitd
0 replies
9h53m

For added excitement, you could go the whole hog, and generate, build and run via docker compose. Apart from anything else, you wouldn't need the 2-step build&run.

I mean, you could. Whether you should, well ...
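
Something like this sketch, with a compose.yaml next to the Dockerfile:

    services:
      app:
        build: .

Then a single `docker compose up --build` replaces the separate build and run steps.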

keepamovin
0 replies
14h1m

Can we figure out a way to throw an exec into the shebang so the Docker process replaces the bash one?

WhackyIdeas
0 replies
5h5m

I had no idea a shebang could be used like this! After all of these years…

Nice hack. Love it.

Igor_Wiwi
0 replies
2h33m

what is the practical use of it?

IggleSniggle
0 replies
13h21m

I like how the last code line reads

   ctx.stroke()