
Nix is a better Docker image builder than Docker's image builder

kstenerud
45 replies
12h26m

I've tried again and again to like Nix, but at this point I have to throw in the towel.

I have 2 systems running Nix, and I'm afraid to touch them. I've already broken both of them enough that I had to reinstall from scratch in the past (yes yes - it's supposed to be impossible I know), and now I've forgotten most of it. In theory, Nix is idempotent and deterministic, but the problem is "deterministic in what way?" Unless you intimately understand what every dependent part is doing, you're going to get strange results and absolutely bizarre and unhelpful errors (or far more likely: nothing at all, with no feedback). Nix feels more like alchemy than science. Like trying to get random Lisp packages to play nice together.

Documentation is just plain AWFUL (as in: complete and technically accurate, but maddeningly obtuse), and tutorials only get you part of the way. The moment you step off the 80% path, you're in for a world of hurt, because the underlying components are just not built to support anything else. Sure, you can always "build your own", but this requires years of experiential knowledge and layers upon layers of frustration that I just don't want to deal with anymore (which is also why I left Gentoo all those years ago). And woe unto you if you want to use a more modern version than the distribution supports!

The strength of Docker is the chaos itself. You can easily build pretty much anything, without needing much more than a cursory understanding of the shell and your distro's package manager. Or you can mix and match whatever the hell you want! When things break, it's MUCH easier to diagnose and fix the problems because all of the tooling has been around for decades, which makes it mature enough to handle edge cases (and breakage is almost ALWAYS about edge cases).

Nix is more like Emacs: It can do absolutely anything if you have the patience for it and the deep, arcane knowledge to keep it from exploding in a brilliant flash of octarine. You either go full-in and drink the kool aid, or you keep it at arm's length - smiling and nodding as you back slowly towards the door whenever an enthusiast speaks.

janjongboom
22 replies
11h25m

I've gone down the same path. I love deterministic builds, and I think Docker's biggest fault is that to the average developer a Dockerfile _looks_ deterministic - and it even is for a while (build a container twice in a row on the same machine => same output), but then packages get updated in the package manager, base images get updated w/ the same tag, and when you rebuild a month later you get something completely different. Do that times 40 (the number of containers my team manages) and now fixing containers is a significant part of your job.

So in theory Nix would be perfect. But it's not, because it's so different. Get a tool from a vendor => won't work on Nix. Get an error => impossible to quickly find a solution on the web.

Anyway, out of that frustration I've funded https://www.stablebuild.com. Deterministic builds w/ Docker, but with containers built on Ubuntu, Debian or Alpine. Currently consists of an immutable Docker Hub pull-through cache, full daily copies of the Ubuntu/Debian/Alpine package registries, full daily copies of most popular PPAs, daily copies of the PyPI index (we do a lot of ML), and arbitrary immutable file/URL cache.

So far it's been the best of both worlds in my day job: easy to write, easy to debug, wide software compatibility, and we have seen 0 issues due to non-determinism in the containers that we moved over to StableBuild.

ktosobcy
6 replies
7h53m

I don't have any experience with Nix, but regarding stable builds with Docker: we provide a Java application and have all dependencies at fixed versions, so when doing a release, as long as nobody does anything fishy (like re-releasing a particular version, which is bad-bad-bad), you get exactly the same binaries on top of the same image (again, assuming you are not using `:latest` or somesuch)...

janjongboom
5 replies
7h39m

Until someone overwrites or deletes the Docker base image (regularly happens), or when you depend on some packages installed through apt - as you'll get the latest version (impossible to pin those).

ktosobcy
3 replies
6h42m

> Until someone overwrites or deletes the Docker base image (regularly happens)

Any source of that claim?

> or when you depend on some packages installed through apt - as you'll get the latest version (impossible to pin those).

Well... please re-read my previous comment - we do a Java thing, so we use any JDK base image and then slap our distribution on top of it (which is mostly fixed-version jars).

Of course, if you are after perfection and require additional packages, then you can install them via dpkg or somesuch, but... do you really need that? What about the security implications?

lolinder
0 replies
4h36m

Do you have an example that isn't Nvidia? They're infamous for terrible Linux support, so an egregious disregard for tag etiquette is entirely unsurprising.

ktosobcy
0 replies
3h42m

You gave an example of nvidia and not ubuntu itself. What's more, you are referring to a devel(opment) version, i.e. "1.0-devel-ubuntu20.04", which seems like a nightly, so it's expected to be overridden (akin to "-SNAPSHOT" for java/maven)?

Besides, if you really need utmost stability you can use an image digest instead of a tag and you will always get exactly the same image...
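For example, something roughly like this (the digest value is a placeholder you would copy from your registry or from the inspect command):

    # resolve the digest for a tag once
    docker buildx imagetools inspect eclipse-temurin:17-jre | grep Digest

    # then pin it in the Dockerfile (the tag is kept only for readability; the digest wins)
    FROM eclipse-temurin:17-jre@sha256:<digest-copied-from-above>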

theamk
0 replies
2h22m

I am convinced that any sort of free public service is fundamentally incompatible with long-term reproducible builds. It is simply unfair to expect a free service to maintain archives forever and never clean them up, rename itself, or go out of business.

If you want reproducibility, the first step is to copy everything to storage you control. Luckily, this is pretty cheap nowadays.

stefanha
4 replies
8h8m

Another option for reproducible container images is https://github.com/reproducible-containers although you may need to cache package downloads yourself, depending on the distro you choose.

stefanha
2 replies
5h59m

For Debian, Ubuntu, and Arch Linux there are official snapshots available so you don't need to cache package downloads yourself. For example, https://snapshot.debian.org/.
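For illustration, pointing apt at a snapshot in a Dockerfile looks roughly like this (the timestamp and package are placeholders; Check-Valid-Until is disabled because snapshot Release files are old):

    FROM debian:bookworm
    # (the base image itself would ideally be pinned by digest as well)
    # replace the default (moving) mirrors with a fixed snapshot of the archive
    RUN rm -f /etc/apt/sources.list.d/* && \
        echo 'deb https://snapshot.debian.org/archive/debian/20240101T000000Z bookworm main' \
          > /etc/apt/sources.list && \
        apt-get -o Acquire::Check-Valid-Until=false update && \
        apt-get install -y --no-install-recommends curl && \
        rm -rf /var/lib/apt/lists/*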

janjongboom
1 replies
4h48m

Yes, fantastic work. Downside is that snapshot.debian.org is extremely slow, times out / errors out regularly - very annoying. See also e.g. https://github.com/spesmilo/electrum/issues/8496 for complaints (but it's pretty apparent once you integrate this in your builds).

keybits
3 replies
9h43m

The pricing page for StableBuild says

Free …

Number of Users 1

Number of Users 15GB

Is that a mistake or if not can you explain please?

https://www.stablebuild.com/pricing

janjongboom
2 replies
9h0m

Ah, yes, on mobile it shows the wrong pricing table... Copying here while I get it fixed:

Free => Access to all functionality, 1 user, 15GB traffic/month, 1GB of storage for files/URLs. $0

Pro => Unlimited users, 500GB traffic included (overage fees apply), 1TB of storage included. $199/mo

Enterprise => Unlimited users, 2,000GB traffic included (overage fees apply), 3TB of storage included, SAML/SSO. $499/mo

ethanwillis
1 replies
2h26m

Are you associated with the project?

janjongboom
0 replies
1h30m

I’m an investor in StableBuild.

TeeWEE
2 replies
10h1m

Just pin the dependencies and you're mostly fine, right?

janjongboom
1 replies
8h51m

Yeah, but it's impossible to properly pin w/o running your own mirrors. Anything you install via apt is unpinnable, as old versions get removed when a new version is released; pinning multi-arch Docker base images is impossible because you can only pin on a tag which is not immutable (pinning on hashes is architecture dependent); Docker base images might get deleted (e.g. nvidia-cuda base images); pinning Python dependencies, even with a tool like Poetry is impossible, because people delete packages / versions from PyPI (e.g. jaxlib 0.4.1 this week); GitHub repos get deleted; the list goes on. So you need to mirror every dependency.

codethief
0 replies
3h32m

> Anything you install via apt is unpinnable, as old versions get removed when a new version is released

Huh, I have never had this issue with apt (Debian/Ubuntu) but frequently with apk/Alpine: The package's latest version this week gets deleted next week.

korijn
0 replies
3h6m

What is an efficient process for avoiding versions with known vulnerabilities for long periods when using a tool like stablebuild?

codethief
0 replies
3h56m

> Anyway, out of that frustration I've funded https://www.stablebuild.com. Deterministic builds w/ Docker, but with containers built on Ubuntu, Debian or Alpine.

Very nice project!

IshKebab
0 replies
9h59m

But Nix also solves more problems than Docker. For example, if you need to use different versions of software for different projects: Nix lets you pick and choose the software that is visible in your current environment without having to build a new Docker image for every combination, which would lead to a combinatorial explosion of images and is not practical.
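As a rough sketch of what that looks like per project (package names are just examples and vary by nixpkgs version):

    # shell.nix - `nix-shell` drops you into a shell with exactly these tools on PATH
    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      packages = [ pkgs.python311 pkgs.nodejs_20 pkgs.go ];
    }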

But I also agree with all the flaws of Nix people are pointing out here.

orbital-decay
3 replies
11h6m

>Documentation is just plain AWFUL (as in: complete and technically accurate, but maddeningly obtuse)

Documentation is often just plain erroneous, especially for the new CLI and flakes, and not even for edge cases. I remember spending some time trying to understand why nix develop doesn't work as described and how to make it work like it should. I feel like nobody ever actually used it for its intended purpose. Turns out that by default it doesn't just drop you into the build-time environment like the docs claim (hermetically sealed, with stdenv scripts available): it's not sealed by default, the command-line options have confusing naming, and you need to fish the knowledge out of the sources to make it work. Plenty of little things like this.

>In theory, Nix is idempotent and deterministic

I surely wish they talked more about the edge cases that break reproducibility. Things like floating point code being sensitive to the order of operations, with state potentially leaking from OS preemption, and all that. Which might be obvious, but not saying obvious things explicitly is how you get people to shoot themselves in the foot.

mananaysiempre
2 replies
8h5m

> Things like floating point code being sensitive to the order of operations, with state potentially leaking from OS preemption, and all that.

That's profoundly cursed and also something that doesn't happen, to my knowledge. Unless the kernel programmer screwed up, an x86-64 FPU is perfectly virtualizable (and I expect an AArch64 FPU is too, I just haven't tried). So it doesn't matter where preemption happens.

(What did happen with x87 is that it likes to compute things in more precision than you requested, depending on how it’s configured—normally determined by the OS ABI. Yet variable spills usually happened in the declared precision, so you got different results depending on the particulars of the compiler’s register allocator. But that’s still a far cry from depending on preemption of all things, and anyway don’t use x87.

Floating-point computation does depend on associativity, in that nearestfp(nearestfp(a+b)+c) is not the same as nearestfp(a+nearestfp(b+c)), but the sane default state is that the compiler will reproduce the source code as written, without reassociating things behind your back.)

orbital-decay
1 replies
5h54m

That doesn't happen in a single thread, but e.g. asynchronous multithreaded code can spit out values in arbitrary order, and depending on what you do you can end up with a different result (floating point is just an example). Generally, you can't guarantee 100% reproducibility for uncooperative code because there's too much hardware state that can't be isolated even in a VM. Sure, 99% of software doesn't depend on it or do cursed stuff like microarchitecture probing during building, and you won't care until you try to package some automated tests for a game physics engine or something like that. What can happen, inevitably happens.

We don't need to be looking for such contrived examples actually; nixpkgs tracks the packages that fail to reproduce for much more trivial reasons. There aren't many of them, but they exist:

https://github.com/NixOS/nixpkgs/issues?q=is%3Aopen+is%3Aiss...

Foxboron
0 replies
1h18m

> We don't need to be looking for such contrived examples actually; nixpkgs tracks the packages that fail to reproduce for much more trivial reasons. There aren't many of them, but they exist

Less than a couple of thousand packages are reproduced. Nobody has even attempted to rebuild the entirety of the nixpkgs repository and I'd make a decent wager on it being close to impossible.

fuzzy2
3 replies
10h6m

It's really not that bad. However, with a standard NixOS setup, you still have a tremendous amount of non-reproducible state, both inside user accounts and in the system. I'm running an "Erase your darlings" setup, which mostly gets rid of non-reproducible state outside my user account. It's a bit of a pain, but then what isn't on NixOS.

https://grahamc.com/blog/erase-your-darlings/

Inside my user account, I don’t bother. I don’t like Home Manager.

SkyMarshal
2 replies
9h8m

A nice upgrade to that is to put root in a tmpfs RAM filesystem instead of ZFS:

https://elis.nu/blog/2020/05/nixos-tmpfs-as-root/

That way it doesn't even need to bother with resetting to ZFS snapshots, instead it just wipes root on shutdown and reconstructs it in RAM on reboot.
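The heart of it is only a few lines of NixOS config, roughly like this (sizes and the persistent device label are placeholders):

    # configuration.nix: root is a tmpfs, so it starts empty on every boot
    fileSystems."/" = {
      device = "none";
      fsType = "tmpfs";
      options = [ "defaults" "size=2G" "mode=755" ];
    };

    # anything that must survive reboots gets an explicit mount on real storage
    fileSystems."/nix" = {
      device = "/dev/disk/by-label/nix";  # placeholder label
      fsType = "ext4";
    };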

Then, optionally, with some extra work you can put /home in tmpfs too:

https://elis.nu/blog/2020/06/nixos-tmpfs-as-home/

That setup uses Home Manager, so maybe it's not for you, but it's worth mentioning if we're talking about making all state declarative and reproducible. You have to use the Impermanence module and set up some soft links to permanent home folders on a different drive or partition. But for making all state on the system reproducible and declarative, this is the best way afaik.

fuzzy2
1 replies
3h28m

Thanks, that's interesting. It allows one to stick to "regular Linux filesystems", which is probably a good thing.

SkyMarshal
0 replies
2h48m

True, I think it's a more elegant setup than the ZFS version. Why actively roll back to a snapshot when ephemeral memory will do that automatically on reboot?

That said I'll just mention that ZFS support on NixOS is like nothing else I've seen in Linux. ZFS is like a first-class citizen on NixOS, painless to configure and usually just works like any other filesystem.

https://old.reddit.com/r/NixOS/comments/ops0n0/big_shoutout_...

paulddraper
1 replies
10h33m

> The strength of Docker is the chaos itself.

That depends on whether you are okay with chaos.

It appears that you are, so it is a suitable tool for you. Choose the right tool for the right job.

---

Docker is a poor choice for people who are interested in deterministic/reproducible builds.

devjab
0 replies
9h46m

I’m not sure exactly why this is being downvoted. It seems pretty fair to want your container builds to not fail because of the “chaos” with docker images and how they change quite a lot. This isn’t about the freedom to build how you want, it’s about securing your build pipelines so that they don’t break at 4am because docker only builds 99% of the time.

I’ll use docker, I like docker, but I can see the point of how it’s not necessarily advantageous if stability is your main goal.

mtmk
1 replies
5h19m

I recently faced a similar hurdle with Nix, particularly when trying to run a .NET 8 AOT application. What initially seemed like it would be a simple setup spiraled into a plethora of issues, ultimately forcing me to back down. I found myself having to abandon the AOT method in favor of a more straightforward solution. To give credit where it's due, .NET AOT is relatively new and, as far as I know, still has several kinks that need ironing out. Nonetheless, I agree that, at least based on my experience, you need a solid understanding of the ins and outs before you can be reasonably productive using Nix.

Smaug123
0 replies
4h41m

.NET AOT really is not designed for deployment, in my experience - for example, the compilation is very hard to do in Nix-land, because a critical part of the compilation is to download a compiler from NuGet at build-time. It's archetypical of the thousand ways that .NET drives me nuts in general.

laerus
1 replies
9h45m

Give Fedora Atomic (immutable) a try. At this point I have pretty much played around with and used every distro package manager there is, and I have broken all of them in one way or another, even without doing anything exotic (pacman, I am looking at you). My Fedora Kinoite is still going strong even with adding/removing different layers, daily updates, and a rebase from Silverblue. Imho rpm-ostree will obsolete Nix.

plagiarist
0 replies
6h29m

How do you alter layering without a restart? Just have an immutable base and do other rpm-ostrees in containers? Is that what flatpak is up to?

weatherlight
0 replies
8h26m

I use both Docker and NixOS at work. I've never had any of the problems you seem to have had above. Docker is fine, though performance-wise it's not great on Macs. I love nix because it's trivial to get something to install and behave the same across different machines.

The Nix docs are horrible, but I've found that ChatGPT-4 is awesome at troubleshooting Nix issues.

I feel like 90% of the time I run into Nix issues, it's because I decided to do something "Not the Nix way."

underdeserver
0 replies
2h0m

I'm just here to give you points for the Discworld reference.

intelVISA
0 replies
2h28m

It has a bit of a learning curve that is worth it - it's an incredible tool.

fransje26
0 replies
6h46m

> Documentation is just plain AWFUL (as in: complete and technically accurate, but maddeningly obtuse)

That has been the case for as long as I can remember. I gave up on Nix about 5 years ago because of it, and apparently not much has changed on that front since then...

aredox
0 replies
8h42m

Maybe it won't be your cup of tea given your reference to Emacs, but there's guix if you want to try a saner alternative to nix.

amelius
0 replies
2h7m

Yes at this point I hope someone builds a friendlier version on top of Nix, so we can cleanly migrate completely away from it.

RGamma
0 replies
1h0m

Just out of curiosity. What were you trying to do that didn't work?

7speter
0 replies
4h56m

You complain about the documentation, and the first thing I wonder is if you’ve tried using one of the prominent chatbots like chatgpt or claude to help fill in the gaps of said documentation? Maybe an obvious thing to do around here, but I’ve found they help fill in documentation gaps really well. At the same time Nix is so niche there might not have been enough information out there to feed into even chatgpt’s model…

tuananh
41 replies
17h39m

as platform engineer, i want to like nix. but it's not easy for everyone else.

and the dx is still pretty bad IMO.

for example, i prefer devbox DX just because i can add pkg like this `devbox add python@3.11`.

also, looking at a 120-line flake.nix, it's not exactly "easier"

https://github.com/Xe/douglas-adams-quotes/blob/main/flake.n...

Cyph0n
26 replies
16h54m

That's kind of an unfair comparison. The flake you linked:

1. Defines a Go binary (i.e., how to build it)

2. Defines a Docker image that uses said Go binary as an entry point

3. Defines a NixOS module that creates a systemd service that runs the Go binary (only relevant on NixOS)

4. Defines a NixOS test for the module that ensures that the NixOS module actually creates a systemd service that runs the Go binary as expected. The NixOS test framework is actually quite impressive - tests run in a QEMU VM that also runs NixOS :)

Note that only (1) and (2) are relevant to the linked article (+ some of the surrounding boilerplate).

mkleczek
14 replies
13h46m

Does Nix make it difficult to properly modularise nix files (flakes or not)?

Because it certainly looks like the things you listed are separate/orthogonal and should be in separate modules/files.

Having many years of Java experience, this is the reason I stick to Maven (not moving to Gradle) - it is opinionated and strongly encourages fine-grained modularisation.

yjftsjthsd-h
10 replies
13h25m

> Because it certainly looks like the things you listed are separate/orthogonal and should be in separate modules/files.

Nix absolutely allows that, to the point where I'm surprised that the linked example doesn't separate them. Most of my flakes have a flake.nix that's just a thin wrapper around a block that looks like

    devShell = import ./shell.nix { inherit pkgs; };
    defaultPackage = import ./default.nix { inherit pkgs; };
    ...

mkleczek
9 replies
13h14m

And what's the story with generic libraries that can later be used in your nix files to produce desired output?

I am aware of flake-parts that are supposed to offer that but the ecosystem of flake-parts is on the smaller side of things...

lkjdsklf
8 replies
10h43m

Maybe I'm not understanding your question, but this is what the inputs of flakes are for.

You can pull in arbitrary code from pretty much anywhere as an input

mkleczek
7 replies
9h56m

The question is if there actually _is_ a rich ecosystem of such libraries - similar to the rich ecosystem of Maven plugins.

georgyo
6 replies
9h35m

I think something got lost along the way. Nix does not replace maven, you call maven from nix.

https://ryantm.github.io/nixpkgs/languages-frameworks/maven/

nix is a build tool in a similar way that docker is a build tool. You define build scripts that call the tools you already use. The major difference is that a Dockerfile gives you no way to be sure you can reproduce that build in the future. A flake, on the other hand, gives you a high degree of trust.
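Roughly, per that manual section, wrapping a maven build looks something like this (project name and paths are placeholders; the dependency hash gets filled in after nix reports the real value on the first run):

    { lib, maven }:

    maven.buildMavenPackage rec {
      pname = "my-app";        # placeholder
      version = "1.0.0";
      src = ./.;               # or a fetcher such as fetchFromGitHub

      # fixed-output hash of the dependencies maven downloads
      mvnHash = lib.fakeHash;  # replace with the real hash nix prints on mismatch

      installPhase = ''
        mkdir -p $out/share/my-app
        cp target/*.jar $out/share/my-app/
      '';
    }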

mkleczek
5 replies
8h35m

Nothing got lost: both Nix and Maven are dependency management tools. Both are also build tools. The difference is that Maven was created as a Java build tool (and it stayed that way in general).

What we have today is a multitude of dependency managers and - what's worse - all of them are _also_ build tools targeted at a specific language.

Nix has a unique position because it is not language specific. Where it is lacking is missing standards and reusable libraries that would simplify common tasks.

I am comparing to Maven because I am looking for multi-platform Maven alternative. There is Bazel but its dependency management is non-existent. There is Buck2 which is great in theory but the lack of ecosystem makes it a non-starter.

Nix is the only contender in the space that offers almost everything and has a chance to become a de-facto standard of software delivery thanks to this.

What's missing though is... easy to use canned solutions similar to maven plugins.

EDIT: grammar

georgyo
4 replies
8h7m

In the link I referenced, it shows you how to use maven plugins in nix.

Nix has composable, reusable, and shareable functions. Nix is a full, if awkward, programming language. You'll find functions and flakes for nearly everything you might want to do. An example of one that is more plugin-like is sops-nix.

Though I have never used maven or maven plugins, so I may be missing your overall point.

mkleczek
3 replies
7h42m

In the Java/Maven ecosystem a lot of things are simple because there is a huge ecosystem of easy-to-integrate libraries/plugins.

Want a packaged spring boot application? There is a plugin for that. Want to package it in a container image? Just add a plugin dependency. Want to build an RPM or deb package? Add another two plugin dependencies. All various artifacts are going to be uploaded to a repository and made available as dependencies.

Missing a specific plugin? You can easily implement it in Java as a module and use it inside your project (and expose it as a standalone artifact as well).

I can’t find anything similar in Nix ecosystem.

Having a language allowing for this is not the same as having solutions already available.

elbear
2 replies
7h27m

I was reading this thread and now I finally understood what you mean by plugins. Plugins kind of exist in Nix, except they're not called that. They are functions available in some specific module.

For example, `dockerTools` provides different functions like creating an image and other things. There is a module of fetchers, functions that retrieve source files from GitHub and other sources.

But I don't think there are many language-specific functions, like the ones you are describing. I can't think of any, except for the ones that build a Nix package from a language-specific package.

elbear
0 replies
4m

Yes, I know about flake.parts (didn't know about the other one). But I'm not aware of the kind of libraries you mentioned. There's FlakeHub[0], which is like a package index for flakes, so maybe we'll start to see reusable stuff there.

[0]: https://flakehub.com/flakes

rfoo
1 replies
8h21m

The thing I hate most about a Java codebase is 100 separate .java files with 30-50 lines each; somehow it indeed works, but I have no idea where to look if I want to find out how something is implemented.

mkleczek
0 replies
6h53m

Indeed - that’s very often the case - the right balance between too many and too big is not easy to find.

Having said that - Java is in general an IDE-targeted language, and once you have an IDE, many small files are not an issue anymore.

Cyph0n
0 replies
13h18m

Yes, to the point where it can become more confusing than helpful. The fact that Nix handles merging data structures for you makes it easy to fall into over modularization.

hamandcheese
10 replies
16h14m

I agree that it's not a fair comparison, but I will add that this is a big barrier to newcomers. Everyone experienced with nix builds their own ivory tower of a nix flake (myself included), so it's hard to find actually good examples of how to do basic things without wading through a bunch of other bullshit.

rapnie
6 replies
15h28m

Newcomer here. Could anyone tell me if std [0] is a good way to bring more sanity into flake design, esp. in avoiding ivory-towery custom approaches? Using devenv.sh is another option, but I liked the emphasis on creating a common mental picture of the architecture and the focus on the SDLC that std provides.

[0] https://std.divnix.com

OJFord
3 replies
4h6m

What on Earth is the background reading supposed to be for https://std.divnix.com/explain/why-std.html, a supposedly motivating page for newcomers?

I've even used Nix a little bit (though early days, before flakes) and it makes absolutely no sense to me.

rapnie
2 replies
2h44m

Well, I think that becomes clearer when reading comments on this HN thread. I found std after spending significant time to find out 1) wth is Nix/NixOS? 2) what would I use it for exactly? and 3) How to get going? Then on 3) I found reams of outdated or confusing docs making me doubt 1) and 2) again, as well as the frustrations of others on this.

Then this std background of "We bring clarity, clear mental picture, manageable Nix projects throughout the lifecycle." appealed a lot.

OJFord
1 replies
2h12m

Does cell/cell block/target/actions terminology come from Nix then? I assumed that was std's solution (since it is under the heading Solution, and I haven't heard of it) so found the 'explanation' of it baffling. But if it comes from Nix it can be read sort of like a style guide for how to do what you're already doing with Nix?

If the target audience is limited to those already highly experienced (and yet frustrated) with Nix then I guess it's fine.

rapnie
0 replies
44m

Ah, sorry, I misinterpreted you before. Yes, these are std's abstractions to organize your Nix code and gradually 'grow' your solution. Rationale and explainers on these concepts are spread about in the docs. The 'sales pitch' is in a different high-level text than the one you linked on why to use std.

(PS. This cell breakdown reminded me a bit of Atomic Design for front-end UI to make that easier.)

hamandcheese
1 replies
15h15m

I haven't used std, but I would like to point you at what I think is the ideal way to organize a lot of nix: readTree from the virus lounge[0].

It doesn't add kitschy terms like "Cell" and "growOn", it's just a way to standardize attribute names according to where things live in the filesystem.

So in their repo, /foo/bar/baz.nix has the attribute path depot.foo.bar.baz

I will say that to understand how it works you need to have a solid grasp of the nix language. But once established I think it's a pattern that noobs could learn in a very quick and superficial way.

[0]: https://cs.tvl.fyi/depot/-/blob/nix/readTree/README.md

rapnie
0 replies
15h6m

Thank you! I already bumped into the virus lounge, with their TVIX project [0] that I found quite interesting.

[0] https://code.tvl.fyi/about/tvix

Cyph0n
1 replies
15h52m

I'm a beginner myself and have been trying my best to keep things simple. But I do agree that the complexity creep is quite tempting with Nix. Not sure why though..

hamandcheese
0 replies
15h11m

I think nix has a fair amount of gravity - once you've started, assuming you like it, you will quickly want to use it for everything.

I don't think most people's flakes are more complex than the alternative (which would be, I don't know, a bunch of different scripts, maybe some ansible playbooks?) but it is a bit daunting when all that complexity is wrangled into a single abstraction.

MadnessASAP
0 replies
14h22m

I thought I was weird for my bespoke ivory tower monorepo flake.nix, glad to hear I'm not the only one. It has been a tremendous help in managing my homelab.

viraptor
11 replies
16h51m

Those 120 lines are not exactly representative. For example if I was writing this for a single service, I wouldn't bother making a new module and would inline it instead. Then, you've got many lines which would map 1:1 to an inlined systemd service description, so you're not getting rid of those whatever the system you choose. There's also a fancy way to declare multiple systems with an override.

This example is a "let's do a trivial thing the way you'd do a big serious thing". If you wanted to treat it as a one-off, I'm sure you could cut it down to 40 lines or so.

whydoineedthis
10 replies
15h54m

So I can write a Dockerfile using 17 verbs (13, actually), in a language understandable to a 4th grader... or I can write 120 lines of completely abstract nix code that means nothing to someone who's been doing software for 20 years.

Hrmmmm...this is such a TOUGH decision.

m463
6 replies
15h28m

I would like to mention one pet peeve of mine wrt docker...

cramming everything into one RUN line to save space in a layer.

I really wish instead of:

  RUN foo && \
      bar && \
      bletch
You could do:

  LAYER
  RUN foo
  RUN bar
  RUN bletch
  LAYER
or something similar.

maybe even during development you could do:

  docker build --ignore-layer .
then at the end:

  docker build .

viraptor
4 replies
14h43m

Cramming things into one layer can save gigabytes in size. Even if it only saves 100MB, that adds up across multiple CI runs and in deployment latencies.

Docker should address it one day, but until then, we just have to do it in real world scenarios.

yjftsjthsd-h
1 replies
13h22m

An alternative solution... for some definition of the term... is to write a "naive" Dockerfile like

    RUN foo
    RUN bar
    RUN baz
and then just build it with something like... I think kaniko did this last I looked?... that smashes the whole thing into a single layer. Obviously that has other tradeoffs (no reuse of layers) but depending on your usecase it can be a good trade to make (why yes, I did cut my teeth in an environment where very few images shared a base layer, why do you ask?).

IshKebab
0 replies
9h55m

Docker itself can do that now too.

a_t48
1 replies
14h22m

The parent comment agrees with you; it's just asking for more ergonomic syntax.

viraptor
0 replies
14h19m

I get it, just providing more context. Should've phrased it nicer.

xena
0 replies
15h34m

I've used this app for like four different talks over the years. I could clean it up, but then I'd break the code samples in my talks.

viraptor
0 replies
14h46m

I addressed the 120 lines already and they're not completely abstract. You seem uncomfortable with an alternative approach and that's fine. But this is not a good intentions argument.

kaba0
0 replies
7h0m

Dockerfiles are absolutely not something a 4th grader would understand. They look familiar to you because you have already learnt them. They are definitely not trivial to understand before that, and the same is also true for Nix.

zer00eyz
0 replies
15h1m

It's 120 lines of code to deal with a binary and its systemd setup.

That binary + config file are effectively as close as we're going to get to a "flat pack" on linux.

I'm not sure what is the forest and where are the trees, but this example shows exactly what we have lost sight of.

klntsky
0 replies
17h33m

These 120 lines do quite a lot more, don't they?

Thaxll
31 replies
15h5m

I keep reading about Nix and I still don't understand what it does better than Docker; all the examples in the post are trivial to do in a Dockerfile, so where is the added value?

Docker builds are deterministic and easily reproducible: you use a tagged image, that's it, it's set in stone.

The 0.01% of Dockerfiles that don't work: what does that even mean, what doesn't work?

The other thing is that buildGoModule module: now you somehow need a third-party tool to use or build Go in a Docker image, whereas with a Dockerfile you just use regular Go commands such as go build, and you know exactly what is going on and what args you used to build the binary.

As for the thing about using Ubuntu 18, which is out of date, and not finding it: most orgs have a docker image cache, especially since docker hub closed access to large downloads. But more importantly, there is a reason it's not there anymore: it's not secure to use. It's like wanting to use JVM 6; you should not use something that is out of date security-wise.

ok_dad
13 replies
14h50m

Aren’t Nix builds actually deterministic in that they’ll build the same each time? Docker doesn’t have that, you’re just using prebuilt images everywhere. Determinism has a computer science definition, it’s not “build once run anywhere,” it’s more like “builds the exact same binary each time.”

cpuguy83
9 replies
13h10m

Don't conflate using "apt-get" in a Dockerfile with what "docker build" does.

clhodapp
3 replies
12h58m

Docker doesn't give you the proper tooling to not have to use e.g. apt-get in your Dockerfiles. For that reason, one might as well conflate them.

vergessenmir
1 replies
9h55m

I'm not sure that this is a Docker problem, but you do have a point. I've used docker from the very beginning and it always surprised me that users opted for package managers over downloading the dependencies and then using ADD in the Dockerfile.

Using this approach you get something reproducible. Using apt-get in a Dockerfile is an antipattern.

zmgsabst
0 replies
9h5m

Why? — I agree that it’s not reproducible, but so what?

We have 2-3 service updates a day from a dozen engineers working asynchronously — and we allow non-critical packages to float their versions. I’d say that successfully applies a fix/security patch/etc far, far more often than it breaks things.

Presumably we’re trying to minimize developer effort or maximize system reliability — and in my experience, having fresh packages does both.

So what’s the harm, precisely?

cpuguy83
0 replies
12h3m

Docker doesn't give you the tooling to build a package and opts for you to bring the toolchain of your choice. Docker executes your toolchain, and does not prescribe one to you except for how it is executed.

Nix is the toolchain, which of course has its advantages.

takeda
2 replies
12h34m

You can absolutely build a reproducible image from a Dockerfile if you have discipline and follow specific patterns.

But you can achieve the same result if you use similar techniques with a bash script.

georgyo
1 replies
9h24m

You _can_ if you have _discipline_. That sounds like a footgun the longer a project goes on and as more people touch the code.

Just create a snapshot of the OS repo, so apt/dnf/opkg etc. will all reproduce the same results.

Make sure _any_ scripts you call don't make web requests. If they do, you have to validate the checksums of everything downloaded.

And you have no way to be sure that npm/pip/cargo's package build scripts are not actually pulling down arbitrary content at build time.

cpuguy83
0 replies
1h6m

So, outside of the fact that a nix build disables networking (which you can actually do in a docker build, btw) how would you check all those build scripts in nix?

You seem to be comparing 2 different things.

ok_dad
1 replies
12h49m

Uh, I never even mentioned apt. Docker and nix are, likewise, very different. I'm not super familiar with either, but I do know docker isn't reproducible by design whereas nix is. I'm not sure nix is always deterministic, though I know docker (and apt) certainly aren't, nor are they reproducible by design.

cpuguy83
0 replies
1h15m

So the thing here is docker provides the tooling to produce reproducible artifacts with graphs of content addressable inputs and outputs.

nix provides the toolchain of reproducible artifacts... and then uses that toolchain to build a graph of content addressable inputs in order to produce a content addressable output.

So yes they are very different, but not in the way you are describing. Using nix, just like using docker, cannot guarantee a reproducible output. Reproducible outputs are dependent on inputs. If your inputs change (and inputs can even be a build timestamp you inject into a binary) then so does your output.

jhanoncomm
2 replies
14h17m

Is this about timestamps or is there more to it?

takeda
0 replies
12h29m

The Nix idea is to start building from a known state of the system and to list every dependency explicitly (nothing is implicit or downloaded over the net during the build).

This is achieved by building inside of a chroot, with blocked network access etc. Only the dependencies that are explicitly listed in the derivation are available.
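A tiny sketch of what "explicitly listed" means (URLs and hashes here are placeholders):

    { pkgs, lib }:

    pkgs.stdenv.mkDerivation {
      pname = "hello";
      version = "1.0";

      # the source is pinned by hash; the build itself runs with no network access
      src = pkgs.fetchurl {
        url = "https://example.org/hello-1.0.tar.gz";  # placeholder URL
        sha256 = lib.fakeSha256;                       # replace with the real hash
      };

      # only things listed here are visible inside the build sandbox
      buildInputs = [ pkgs.zlib ];
    }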

MadnessASAP
0 replies
14h8m

The timestamps thing is part of ensuring that archives will have the correct hash. Nix ensures that the inputs to a build - the compiler, environment, dependencies, filesystem - are exactly the same. The idea then being that the compiler will produce an identical output. Hashes are used throughout the process to ensure this is actually the case; they are also used to identify specific outputs.

TeeMassive
13 replies
14h52m

Docker builds are not deterministic; I don't get where you got that idea. I can't count the hours lost because the last guy, who left one year ago, built the image using duct tape and sed commands everywhere. The image is set in stone, but so is a zip file; there's nothing special here.

Building an image using nix solves many problems regarding not only reproducible environments that can be tested outside a container but also fully horizontal dependency management where each dependency gets a layer that's not stacked on one another like a typical apt/npm/cargo/pip command. And I don't have to reverse engineer the world just to see what files changed in the filesystem since everything has its place and has a systematic BOM.

jhanoncomm
12 replies
14h13m

So is it right that, to make docker reproducible, it needs to either build dependencies from source (from, say, a git hash), use other package managers that are reproducible, or rely on base images that are reproducible?

And that all relies on discipline. Just like using a dynamically typed programming language can in theory have no type errors at run time, if you are careful enough.

yjftsjthsd-h
9 replies
13h12m

Right; you could write a Dockerfile that went something like

    FROM base-image@sha256:e70197813aa3b7c86586e6ecbbf0e18d2643dfc8a788aac79e8c906b9e2b0785
    RUN pkg install foo=1.2.3 bar=2.3.4
    RUN git clone https://some/source.git && cd source && git checkout f8b02f5809843d97553a1df02997a5896ba3c1c6
    RUN gcc --reproducible-flags source/foo.c -o foo
but that's (IME) really rare; you're more likely to find `FROM debian:10` (which isn't too likely to change but is not pinned) and `RUN git clone -b v1.2.3 repo.git` (which is probably fixed but could change)...

And then there's the Dockerfiles that just `RUN git clone repo.git` and run with whatever happened to be in the latest commit at the moment...

cpuguy83
6 replies
13h7m

It is likely just as rare for someone to use nix for this, though.

clhodapp
2 replies
12h55m

It's actually how nix works by default. When you pull in a dependency, you are actually pulling in a full description of how to build it. And it pulls in full descriptions of how to build its dependencies and so on.

The only reason nix isn't dog slow is that it has really strong caching so it doesn't have to build everything from source.

cpuguy83
1 replies
2h20m

This is literally how docker works as well. The difference is docker doesn't bring a toolchain for those artifacts.

clhodapp
0 replies
20m

Docker can resolve dependencies in a very similar manner to nix, via multi-stage builds. Each FROM makes one dependency available. However, you can only have direct access to the content of one of the dependencies resolved this way; for the other ones, you have to COPY the relevant content over (--from) at build time.
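A minimal sketch of that pattern (image names are just examples):

    # stage 1: one "dependency", here a Go toolchain used to build the artifact
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN go build -o /out/app .

    # stage 2: only the built artifact is copied over from the first stage
    FROM gcr.io/distroless/static-debian12
    COPY --from=build /out/app /app
    ENTRYPOINT ["/app"]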

MadnessASAP
1 replies
12h47m

If you're using Nix, that is what you are ultimately producing; it's just buried under significant amounts of boilerplate and sensible defaults. Ultimately the output of Nix (called a derivation) reads a lot like a pile of references, build instructions, and checksums.

isbvhodnvemrwvn
0 replies
9h47m

I think their point was that the number of people who use nix is a rounding error, perhaps due to poor user experience.

yjftsjthsd-h
0 replies
13h4m

Possible; I don't have a feel for the relative likelihoods. I think the thing nix has going for it is that you can write a nix package definition without having to actually hardcode anything in, and nix itself will give you the defaults to make e.g. compilers deterministic/reproducible, and automate handling flake.lock so you don't have to actually pay attention to the pins yourself. Or put differently: you can make either one reproducible, but nix is designed to help you do that while docker really doesn't care.

nullify88
0 replies
11h2m

Maintaining something like that is a pain unless you have tooling like Renovate to inform and update the digests and versions.

janjongboom
0 replies
11h18m

And that assumes that `foo` and `bar` are not overwritten or deleted in your package repository, and that the git repository remains available.

MadnessASAP
1 replies
12h52m

You can also use a hammer to put a screw in the wall.

Dockerfiles, being at their core a set of instructions for producing a container image, could of course be used to make a reproducible image, although you'd have to be painfully verbose to ensure that you got the exact same output. You would actually likely need 2 files, the first being the build environment that the second actually gets built in.

Or you could use Nix that is actually intended to do this and provides the necessary framework for reproducibility.

otabdeveloper4
0 replies
9h34m

> I still don't understand what it does better than Docker

It doesn't break as you scale. If you don't need that, then keep using Docker. (Personally, for me "scale" starts at "3 PC's in the home", so I eventually switched all of them to NixOS. I don't have time to babysit these computers.)

> Docker builds are deterministic and easily reproducible

No, they definitely aren't. You don't really want to go down this rabbit hole, because at the end you realize Nix is still the simplest and most mature solution.

janjongboom
0 replies
11h20m

That's the interesting bit about Dockerfiles. They _look_ deterministic, and they even are for a while, while you're looking at them as a developer. I've done a detailed writeup of how they're not deterministic in https://docs.stablebuild.com/why-stablebuild

clhodapp
0 replies
14h49m

Most Docker builds are not remotely deterministic or reproducible, as most of them pull in floating versions of their dependencies. This means that the same Dockerfile is likely to produce different results today than it did yesterday.

jossephus01
21 replies
14h51m

My experience with building Docker images for Java applications using Nix wasn't very pleasant though. After the deprecation of gradle2nix, there doesn't seem to be a clear alternative method for building Docker images for Gradle-based Java applications. I challenged a friend to create the smallest possible Docker image for a simple Spring Boot application some time ago. While I was using Nix, the resulting image was twice the size of the image built without Nix. You can check out the code for yourself here: https://github.com/jossephus/Docker_challenge/blob/main/flak... .

takeda
7 replies
13h12m

I haven't used java in over a decade so I won't be able to help much with that, but for example I was able to get my application to fit in just a 70MB container, including python and all dependencies + busybox and tini.

It looked something like this: https://gist.github.com/takeda/17b6b645ad4758d5aaf472b84447b...

So what I did was:

- link everything with musl

- compile python and disable all packages that I didn't use in my application

- trim boto3/botocore to remove all the stuff I did not use; that sucker on its own is over 100MB

The thing you need to understand is that the packages are primarily targeting the NixOS operating system, where in a normal situation you have plenty of disk space and you'd rather have all features available (because why not?). So you end up with a bunch of dependencies that you don't need. The Alpine image, for example, was designed for docker, so the goal with all its packages is to disable extra bells and whistles.

This is why your result is bigger.

To build a small image you will need to use override and disable all that unnecessary shit. Look at zulu for example:

https://github.com/NixOS/nixpkgs/blob/master/pkgs/developmen...

it adds alsa, fontconfig (which probably pulls in the entire X11 stack), freetype, xorg (oh, nvm fontconfig, it's added explicitly), cups, gtk, cairo and ffmpeg.

Notice how your friend carefully extracts and places only needed files in the container, while you just bundle the entire zulu package with all of its dependencies in your project.

Edit: tadfisher seems to be more familiar with it than me, so I would start with that advice and modify code so it only includes a single jdk. Then things that I mentioned could cut the size of jdk further.

Edit2: noticed another comment from tadfisher about openjdk_headless, so things might be even simpler than I thought.

chrisandchris
4 replies
7h54m

I've never used Nix, but this looks like a hell of an unreadable config file (compared to docker)? How do you manage these files?

whateveracct
2 replies
5h0m

It's not actually unreadable - you just have to learn convention on top of the Nix language. For instance, what mkDerivation does. Actually, the Nix language usage here is somewhat minimal. Mostly let bindings (aka lambda calculus).

I wouldn't expect a layman to be able to grok that file. That's fine though - it's not for laymen.

guitarbill
1 replies
4h4m

> It's not actually unreadable - you just have to learn convention on top of the Nix language. For instance, what mkDerivation does. Actually, the Nix language usage here is somewhat minimal. Mostly let bindings (aka lambda calculus).

> I wouldn't expect a layman to be able to grok that file. That's fine though - it's not for laymen.

This is the kind of comment that makes me want to stay far, far away from Nix and the Nix "community".

whateveracct
0 replies
1h29m

Why? Saying that Nix is complicated and isn't trivial to use or read without learning prerequisite knowledge is bad now?

I actually pointed out that mkDerivation is something helpful to learn - that's one thing I wish someone made me sit and learn when I first got exposed to Nix. It unlocks a lot.

elbear
0 replies
7h36m

What do you mean by manage?

I agree with your assertion regarding the language though. I think nix-lang makes it harder to get into Nix.

jossephus01
1 replies
13h3m

You are correct. I haven't done any trimming. Thanks for the suggestions and the gist.

tadfisher
3 replies
13h1m

Oh, and openjdk_headless skips the GTK and X dependencies that you won't need for Spring.

okr
2 replies
12h30m

That's interesting. We have some applications that produce PDFs, which use fonts, which usually requires a non-headless (headfull?) JDK. At AWS, I wonder what the default alpine JDK contains, and how much space could be saved if people were more aware that they can use a headless one.

hawk_
1 replies
9h49m

> headfull?

Wonder if there is a good term for this. I have been jokingly referring to this as 'headed' and headless as 'beheaded'.

okr
0 replies
6h40m

Axed! :)

tadfisher
2 replies
13h12m

That's because you're including two JDKs, zulu and the one that gradle includes via its jdk argument. Look for gradleGen in nixpkgs to see what I mean.

And sorry for gradle2nix, I'm working on an improvement that's less of a hack.

rapnie
0 replies
11h30m

> And sorry for gradle2nix, I'm working on an improvement that's less of a hack.

Don't be. Thanks for your work. Excited to learn about the improvement. Can you tell more about what you have in mind?

jossephus01
0 replies
12h54m

Thanks tadfisher, I will check it out. This is by no means meant to be a dunk on gradle2nix. Love your work on android-nixpkgs and I will be looking for the alternative. Thanks.

yjftsjthsd-h
1 replies
13h28m

> While I was using Nix, the resulting image was twice the size of the image built without Nix.

I would be very interested to know where the difference is; is nix including things it doesn't need to? Is the non-nix build not including things it should?

jossephus01
0 replies
13h1m

I have included the result of running dive on the resulting image. You can check it out on https://github.com/jossephus/Docker_challenge/wiki.

As stated above, I haven't done any trimming on the resulting image, so there's too much stuff in it.

max-privatevoid
1 replies
5h15m

I decided to participate in your challenge and cleaned up your Nix code a little bit. It seems like the main task of the challenge is building a really small JRE.

I've switched to using a headless OpenJDK build from Nixpkgs as a baseline instead of Zulu, to remove all the unnecessary dependencies on GUI libraries. Then I've used pkgs.jre_minimal to produce a custom minimal JRE with jlink.

The image size now comes out to 161MB, which is slightly larger than the demo_jlink image. This is because it actually includes all the modules required to run the application, resulting in a ~90MB JRE. The jdeps invocation in Dockerfile_jlink fails to detect all the modules, so that JRE is only built with java.base. Building my minimal JRE with only java.base brings the JRE size down to about 50MB, the resulting (broken) container image is 117MB according to Podman.

I've also removed the erroneous copyToRoot from your call to dockerTools.buildImage, which resulted in copying the app into the image a second time while the use of string context in config.Cmd would have already sufficed.

I've also switched to dockerTools.buildLayeredImage, which puts each individual store path into its own image layer, which is great for space scalability due to dependency sharing between multiple container images, but won't have an impact for this single-image experiment.

This is mostly a JRE size optimization challenge. The full list of dependencies and their respective size is as follows:

  /nix/store/v27dxnsw0cb7f4l1i3s44knc7y9sw688-zlib-1.3                            125.6K
  /nix/store/j6n6ky7pidajcc3aaisd5qpni1w1rmya-xgcc-12.3.0-libgcc                  139.1K
  /nix/store/l0ydz31lwa97zickpsxj2vmprcigh1m4-gcc-12.3.0-libgcc                   139.1K
  /nix/store/a3n1vq6fxkpk5jv4wmqa1kpd3jzqhml9-libidn2-2.3.4                       350.4K
  /nix/store/s5ka5vdlp4izan3nfny194yzqw3y4d1z-lcms2-2.15                          445.3K
  /nix/store/a5l3w6hiprvsz7c46jv938iij41v57k6-libjpeg-turbo-2.1.5.1                 1.6M
  /nix/store/r9h133c9m8f6jnlsqzwf89zg9w0w78s8-bash-5.2-p15                          1.6M
  /nix/store/3dfyf6lyg6rvlslvik5116pnjbv57sn0-libunistring-1.1                      1.8M
  /nix/store/a3zlvnswi1p8cg7i9w4lpnvaankc7dxx-gcc-12.3.0-lib                        7.5M
  /nix/store/657b81mfpbdz09m4sk4r9i1c86pm0i8f-app-1.0.0                            19.0M
  /nix/store/1zy01hjzwvvia6h9dq5xar88v77fgh9x-glibc-2.38-44                        28.8M
  /nix/store/b1fhkmscb0vff63xl8ypp4nsc7sd96np-openjdk-headless-minimal-jre-21+35   91.4M
There's not much else that can be done here. glibc is the next largest dependency at ~30MB. This large size seems to be because Nixpkgs configures glibc to be built with support for many locales and character encodings. I don't know if it would be possible or practical to split these files out into separate derivations or outputs and make them optional that way. If you're using multiple images built by dockerTools.buildLayeredImage, glibc (and everything else) will be shared across all of them anyway (given you're using roughly the same Nixpkgs commit).

https://github.com/max-privatevoid/hackernews-docker-challen...

jossephus01
0 replies
3h40m

These changes are all great. Learnt a lot from the optimizations. Thanks.

wbl
0 replies
13h7m

Don't you just stick the JAR in?

xlii
16 replies
19h44m

I spent the last 2-3 days trying to get Docker images built on Darwin, and I feel like this article is the universe making fun of me.

Nix is absolutely the best tool for what I want to achieve but it has those dark forsaken corners that just suck your soul out dry.

I love it but sometimes it feels like being a Morty on Rick’s adventure to the compilerland.

renewiltord
6 replies
17h41m

I use Orbstack and it works flawlessly to do this. Really good tool. I use Docker to cross-compile for {aarch64,amd64} x {linux,darwin} since not all the cross-compiling is super robust across our stacks (I'm using a specific glibc for one Linux part, etc.). Just a bunch of docker on my Darwin aarch64 and it compiles everything. Good experience.

e40
4 replies
17h34m

I installed Orbstack and found that I didn't really need it, so I removed the directory in /Applications. Wow, for weeks and weeks I found remnants of it in a lot of places. Very disappointing that it left so much cruft around. They should have an uninstaller. It left a really bad taste and I'm unlikely to try it again.

Before someone asks. I've been using macOS for a long time. I've never seen remnants like this from a program. Sure, there are often directories left in ~/Library/Application Support/, but this was more than that. Unfortunately, I didn't write down the details, but I ran across the bits in at least 3-4 places.

kdrag0n
2 replies
9h55m

Dev here — I've been meaning to update the Homebrew cask to be more complete on zap, but there's a good reason that all of these are needed:

- ~/.orbstack

- Docker context that points to OrbStack (for CLI)

- "source ~/.orbstack/shell/init.zsh" in .zprofile/bash_profile (to add CLI tools to PATH)

- ~/.ssh/config (for convenient SSH to OrbStack's Linux machines)

- Symlinks to CLI tools in ~/.local/bin, ~/bin, or /usr/local/bin depending on what's available (to add CLI tools to existing shells on first install — only one of these is used, not all)

- Standard macOS paths (~/Library/{Application Support, Preferences, Caches, HTTPStorages, Saved Application State, WebKit})

- Keychain items (for secure storage)

- ~/OrbStack (empty dir for mounting shared files)

- /Library/PrivilegedHelperTools (to create symlinks for compatibility)

Not sure what the best solution is for people who don't use Homebrew to uninstall it. I've never liked separate uninstaller apps, and it's not possible to detect removal from /Applications when the app isn't running.

xlii
0 replies
8h57m

IMO documenting this (and uninstall section in GUI with link) would be enough for me. Used that and never felt neglected by devs.

And *cough* since we're at it - did you consider a Nixpkgs distribution?

I'm slowly moving deeper and deeper into the ecosystem and use Home Manager for utilities that I use often (and nix shell/nix run for one-offs). Some packages are strictly GUI, and while they aren't handled flawlessly (self-updaters), it's nice to have them on a single list.

Yet based on your list it's definitely a nix-venture…

mdaniel
0 replies
23m

I've never liked separate uninstaller apps

And yet, you are the only(?) one with that knowledge, so the alternative seems to be replying to HN threads with a curated list of things that a user must now open iTerm2 and clean up by hand. Something, unless I'm mistaken, that computers are really good at doing (Gatekeeper and privilege-elevation nonsense aside).

Even just linking to the zap portion of your brew cask could go a long way since it would be the most succinct manifest if I correctly understand what it does

cqqxo4zV46cp
0 replies
17h22m

I’ve found this to be the norm for ‘Docker Desktop alternatives’. Not to say that Orbstack isn’t uniquely messy.

xlii
0 replies
17h12m

I’m also on Orbstack mostly for performance.

But unfortunately cross compiling quickly broke when I started doing mild customization (and one of the reasons I'm doing this is a complex setup that's very sensitive to version changes).

In the end the solution was to "simply" get darwin.linux-builder up, but that pulled a lot of weight behind it.

It works, but it’s not the first time I spent my time on nix-ventures.

takeda
2 replies
14h23m

The big problem is how docker was designed. It is essentially a jail that is supposed to contain a Linux binary.

Things are straightforward on Linux. You build your binary, place it in a docker container, and you are done. The nix code will also be straightforward: if you can build your code, then creating a container is just one more operation away.

Unfortunately docker requires a Linux binary and you are on a Mac, so Docker Desktop actually runs a Linux VM and performs all operations in it, abstracting this away from you.

Nix doesn't do that and you have two options:

1. Do cross compilation. The problem is that for this to work you need to be able to cross compile everything down to glibc, and while this will work for most commonly used dependencies, you might hit a package whose author didn't put in the effort to make sure it cross compiles. To make things worse, the Hydra that populates the standard caches Nix uses doesn't do cross-compile builds, so you will run into lengthy local builds that might end in failure.

2. You can have a Linux builder that you add to your Mac and configure so that build jobs for x86_64-linux are sent to that builder. This could be a physical box, a VM, or even a NixOS docker container (after all, docker on Mac runs inside a VM anyway).

#1 seems like the proper way, while #2 is the more practical one.

I think you are running into issues because you're likely trying #1, and that requires a lot of experience not only with Nix but also with cross compiling. I wish Nix's Hydra would also build Darwin-to-Linux cross compilations, as that would both provide caches and help ensure cross compilation doesn't break, but it would also increase costs for them.

I think you should try the #2 solution.

Edit: looks like there might have been an official solution to this problem: https://ryantm.github.io/nixpkgs/builders/special/darwin-bui... I haven't used it yet.
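
For what it's worth, the manual version of #2 is just a remote-builder entry in nix.conf. A minimal sketch, where the hostname, user, and key path are placeholders (nix-darwin also ships a linux-builder module that wraps this, if I recall correctly):

    # /etc/nix/nix.conf -- send x86_64-linux build jobs to a Linux machine
    # over ssh (fields: URI, platform, ssh key, max parallel jobs)
    builders = ssh-ng://builder@linux-builder.example x86_64-linux /etc/nix/builder_ed25519 4
    # let the builder fetch from binary caches itself instead of via your Mac
    builders-use-substitutes = true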

indiv0
1 replies
8h47m

Hydra not populating with cross compile builds is the bane of my existence.

I'm using `clang` from `pkgs.pkgsCross.musl64.llvmPackages_latest.stdenv` to cross-compile Rust binaries from ARM macos to `x86_64-unknown-linux-musl`. It _works_, but every time I update my `flake.nix` it rebuilds *the entire LLVM toolchain*. On an M2 air, that takes something like 4 hours. It's incredibly frustrating and makes me wary of updating my dependencies or my flake file.

The alternative is to switch to dockerized builds but:

1) That adds a fairly heavyweight requirement to the build process

2) All the headache of writing dockerfiles with careful cache layering

3) Most importantly, feels like admitting defeat.

cloudripper
0 replies
4h13m

Not sure if this applies to your situation, but I believe you can avoid a full rebuild by modularizing the flake.nix derivations into stages (calling a separate *.nix for each stage, in my case). That is how it appears to be working for me on a project (I am building a cc toolchain without pkgsCross).

I pass the output of each stage of the toolchain as a dependency to the next stage. By chaining the stages, changes made to a single stage only require a rebuild of each succeeding stage. The final stage is the default of the flake, so you can easily get the complete package.

In addition, I can debug along the toolchain by entering a single stage env with nix develop <stage>

Not sure if this is the most optimal way, but it appears to work for modularizing the rebuild (using 23.11).
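
A minimal sketch of that shape (the stage*.nix files and their contents are made up for illustration):

    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.11";
      outputs = { self, nixpkgs }:
        let
          pkgs = nixpkgs.legacyPackages.x86_64-linux;
        in
        {
          packages.x86_64-linux = rec {
            # each stage takes the previous stage's output as an argument,
            # so editing stage2.nix rebuilds only stage2 and stage3
            stage1 = pkgs.callPackage ./stage1.nix { };
            stage2 = pkgs.callPackage ./stage2.nix { inherit stage1; };
            stage3 = pkgs.callPackage ./stage3.nix { inherit stage2; };
            default = stage3;
          };
        };
    }

`nix build .#stage2` or `nix develop .#stage2` then targets a single stage.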

bsder
2 replies
17h6m

In Docker, the dark corners have dust. In Nix, the dark corners have a grue.

xedrac
1 replies
16h49m

100% this. Nix may seem better, until something goes wrong and you have to waste your weekend digging into its depths.

MadnessASAP
0 replies
14h14m

On the flip side, once you have fixed the problem it has a very strong tendency to stay fixed. More importantly, the fix typically doesn't require me to remember it months later.

If something does break, rollbacks are free and an integral part of Nix.

miduil
0 replies
2h50m

The way I set this up for our macOS devs at work was a script that runs nix builds inside docker-for-desktop using the official upstream nixos docker image (plus some tricks to get ssh forwarding, filesystem mounts, etc. working). Works quite alright. The benefit is you don't need some weird Linux remote-builder VM with ssh running.

deathanatos
0 replies
14h48m

macOS is definitely rougher. I use colima there and it does alright. There are one or two bugs with it, but I think those are primarily around volumes; it does fine with building Docker images.

The rougher part is the speed of it; it's a one-two punch between the hardware & the fact that Docker has to emulate a Linux VM.

whazor
16 replies
19h40m

This blog post is missing the reasoning on why shared docker layers are useful. It is because of caching. The more images share the same layers, the better, as it allows you to cache more stuff. Better caching means faster startup of containers.

Why is docker bad at this? In order to enjoy the caching benefit, each time you build a docker image you want it to output as many existing layers as possible. So running apt-get install python3 today should result in the exact same layer as yesterday, if there are no new updates. But this requires all the files to be exactly the same, including metadata like creation time, since docker layers are cached by hashing the files.

Now, Nix already stores dependencies by hash, so the layers will always be the same given the same version and the same configuration.

hamandcheese
10 replies
16h8m

I would rephrase this as:

The Dockerfile format imposes a hierarchical relationship between layers. This quickly becomes very annoying, since dependencies usually form dependency graphs, not dependency trees.

Alternative tools, like nix (probably bazel too), are not bound in the same way. They can achieve fine grained caching by mapping their dependency graph to docker layers, which is something that can not be expressed with a Dockerfile.

cpuguy83
9 replies
13h4m

Steps in a stage are hierarchical.

The final result need not be. You can build a bunch of things then merge the results in a final stage without any hierarchy (this is "COPY --link" in a Dockerfile).
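
A rough sketch of that pattern (stage names, paths, and the Go base image are made up for illustration):

    # syntax=docker/dockerfile:1.4
    # two independent build stages; neither depends on the other
    FROM golang:1.22 AS server
    WORKDIR /src/server
    COPY server/ .
    RUN CGO_ENABLED=0 go build -o /out/server .

    FROM golang:1.22 AS worker
    WORKDIR /src/worker
    COPY worker/ .
    RUN CGO_ENABLED=0 go build -o /out/worker .

    # final stage: merge the results with no RUN chain between them;
    # --link keeps each COPY an independent, rebasable layer
    FROM scratch
    COPY --link --from=server /out/server /bin/server
    COPY --link --from=worker /out/worker /bin/worker
    ENTRYPOINT ["/bin/server"]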

georgyo
8 replies
9h20m

That requires some very explicit and non-obvious effort to do. It's quite painful to do this properly in Docker.

cpuguy83
7 replies
2h21m

And the consensus seems to be that nix is not straightforward?

georgyo
6 replies
1h56m

And the snake eats its tail.

With nix, reusability is very high. It's something that is baked in at very low levels of its design. This comes with up-front complexity, but getting to these reusable layers is basically forced.

Docker is very simple and often touts reusable layers, but in practice they are not reusable unless you tackle that complexity yourself.

Making reproducible and reusable content takes effort. Other tools are not designed for that. As a result, getting to the same state requires a similar amount of complexity. Worse, with docker you can never be sure that you actually succeeded in your goal of reproducibility.

An analogy could be rust. Rust has up-front complexity, but tackling that complexity gives confidence that memory safety and concurrency primitives are done correctly. It's not that C _can't_ achieve the same runtime safety, it just requires a lot more skill to do correctly; and even then memory exploits are reported on a near-daily basis for very popular and widely used libraries.

Complex problems are complex. And sooner or later you'll need to face that complexity.

cpuguy83
5 replies
1h22m

This is not how docker works. Docker, exactly like nix, is based on a graph of content addressable dependencies.

What you are describing is chaining a bunch of commands together. Yes, this forms a dependency chain stored in separate layers and is part of the cache chain.

Nix suffers the exact same problems with reproducibility. The thing it provides is the toolchain of dependencies that are reproducible. Docker does not provide your dependencies.

If the inputs change then so does the output. If the output itself is not reproducible (like, say an artifact with a build-time embedded in it) then you have something that is inherently not reproducible and two people trying to build the same exact nix package will have different results.

EDIT: Fixed a sentence I apparently got distracted while writing and didn't complete (about layer caching).

Foxboron
4 replies
1h5m

Nix is not content addressable though; the hashes are based on the derivation files, which are equivalent to the lock files you would find in other package managers.

The thing it provides is the toolchain of dependencies that are reproducible. [...] If the inputs change then so does the output. If the output itself is not reproducible (like, say an artifact with a build-time embedded in it) then you have something that is inherently not reproducible and two people trying to build the same exact nix package will have different results.

There are no guarantees they are reproducible. The only guarantee Nix gives you is that the build environment is the same, which allows you to make some claims about the system behaving the same way. But there are certainly no guarantees about artifacts being bit-for-bit identical.

cpuguy83
3 replies
59m

It's content addressable, just what are you addressing?

The content address of a docker image is a json blob (referencing other objects).

The content address of a Dockerfile "RUN" command is the content address of what came before it and the command being run.

Foxboron
2 replies
56m

It's content addressable, just what are you addressing?

In the case of Nix it's addressed by the input. Not the content of the build. It's an important distinction and one Nix also makes.

https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3...

But doing this is going to give you a slight headache, as most of the package repository in Nix is not checked for reproducible builds and there is no way to guarantee the hashes are actually static.

cpuguy83
1 replies
34m

Right, all builds are dependent on their inputs. Your inputs determine your outputs. If your input(s) change, then so does your output.

We are saying the same thing here; I'm just trying to point out that this is exactly how docker build works too, and that it is more about what you are willing to put into your docker build.

Foxboron
0 replies
1m

I think we are talking past each other. I'm just trying to clear up a misconception on how nix works, not anything about the docker portion of what you have written.

m1keil
4 replies
18h34m

In Docker, if the layers are cached, the layer with apt-get won't be automatically invalidated (unless you pass --no-cache or change any of the preceding layers).

raffraffraff
2 replies
18h0m

But that's what I would expect to happen. I don't see a problem.

m1keil
0 replies
17h18m

I didn't claim there is a problem. The original comment made it sound as if docker will expire a cached layer because the (potential) result of apt-get is different, which isn't the case.

eichin
0 replies
17h40m

Won't get invalidated even if what "apt-get install python3" does changes - the cache is only based on the syntax of the RUN string plus the previous layer hash, IIRC. (COPY actually invalidates if the file being copied changes, so maybe there's a way to fetch a hash of the repo and stash it where copy will notice, or something, but then it seems you need external tooling to do that bit?)

whazor
0 replies
12h37m

I am thinking more about pipelines that run daily. Also, this 'docker cache' effectively means not running the step, so you might miss important security updates. Via Nix you can ensure that your dependencies are updated, and no updates means the same hash.

When I said caching, I meant on the nodes that run the containers. With Nix you can also update only one layer, while keeping the other layers the same.

tetris11
7 replies
21h15m

Guix is also pretty good at this, only lacking the up-to-date packages that one would want to build an image with.

Zambyte
5 replies
17h6m

It is very easy to override package versions locally for your needs, and it's quite easy to push that upstream to Guix so that others may benefit as well :)

djaouen
2 replies
13h38m

Can you explain (or point to an explanation) of exactly how to do that? The Guix versions of Erlang and Elixir are way out of date, and I would like to push a fix.

Zambyte
1 replies
13h11m

See here[0] for pushing a fix, here[1] for the anatomy of a package definition (often you only need to bump the version number and update the hash, but compilers may be a bit more involved). It may be useful to define package variants[2], which is what I do for some packages locally. You can also see this page[3] for using ad-hoc package variants using command line flags. Hope this helps :)

[0] https://guix.gnu.org/manual/devel/en/html_node/Contributing....

[1] https://guix.gnu.org/manual/devel/en/html_node/Defining-Pack...

[2] https://guix.gnu.org/manual/devel/en/html_node/Defining-Pack...

[3] https://guix.gnu.org/manual/devel/en/html_node/Package-Trans...

djaouen
0 replies
12h30m

Thanks for the info, I will look into these links and see if I can push an update to Erlang and Elixir in the coming days. :)

tetris11
1 replies
9h16m

I have patches on Guix over a year old :-)

The problem isn't contributions, it's reviews

Zambyte
0 replies
4h40m

Hm, simple version bumps tend to be outstanding for about a week for me before they are merged, and in the meantime I just add my package definition to my profile. Is there more to it than that?

djaouen
0 replies
21h9m

This is my one gripe with Guix. Alas, we (apparently) can't have it all!

debuggerpk
6 replies
21h11m

horrible, horrible font on the website!

eichin
2 replies
17h33m

It's actually a smooth readable (but tall-and-skinny) font on my personal laptop, and a pixelated mess on my work laptop; I've never figured out why... Oh, huh, they're using a custom font: https://xeiaso.net/blog/iaso-fonts/ so that should actually be consistent. (After the weekend I'll poke at it on the other laptop and see what's up.)

What I'm getting at is that they might (unintentionally) be complaining about a rendering bug and not the actual font...

xena
1 replies
16h4m

What browser?

eichin
0 replies
9h27m

google chrome in both cases.

mrd3v0
0 replies
16h59m

What a neat reply. I am going to use it whenever valid criticism about the readability or accessibility of my websites arises.

"Just use an extension that changes the website bro!"

cstrahan
0 replies
16h35m

The font is a custom build of Iosevka, which is almost certainly inspired by the commercial font Pragmata Pro (https://fsd.it/shop/fonts/pragmatapro/). When Pragmata Pro was first released a little over 10 years ago, it sold for around $400 (I know this because I and many, many others bought a copy back then).

As another commenter points out, you may have some rendering issue. Alternatively, you may just not like the font. Can't please everyone.

denysvitali
5 replies
15h14m

Unfortunately the result of a Nix Docker image is an image that is 100+ MB for no particular reason :(

denysvitali
1 replies
11h28m

From experience - but I'm more than happy to be proven wrong as I would love to build all the Docker images with Nix.

What you linked is the equivalent of:

    FROM scratch
    COPY hello-world /hello-world

Of course that's small (hello-world is statically linked). Try to add coreutils (or any other small package) and you'll see what I mean. In my experience the size of a Docker image built with some nix packages is greater than its Debian counterpart. I don't know why though.

georgyo
0 replies
9h13m

No, that dockerfile is not equivalent because the hello-world is not statically built in the nix version.

However, I'll give you that it could be smaller in more complex examples. For example, glibcLocales covers all locales, which is quite chunky, but your application only needs one locale.

TeeMassive
1 replies
14h50m

There are ways to properly build a nix container image so this kind of thing doesn't happen. You'll find plenty of projects on GitHub dedicated to just that.

denysvitali
0 replies
3h5m

Can you please provide an example? Everything I've tried ended up being way bigger than I think it should

paulddraper
4 replies
13h36m

I really like Bazel (rules_oci) for building containers.

It's the way building containers ""should"" be.

Here's my base image, here are my files, here's my command; write them into an image.

takeda
1 replies
12h4m

That's pretty much how building on nix works, except you don't need a base image or your application file; you specify which command to run from which package and it will be placed in the container with all runtime dependencies automatically.

Of course you can customize the container further if needed.

paulddraper
0 replies
10h37m

Yes, similar idea. With the objective of reproducible software.

(Also, you don't need a base image for rules_oci either; most people choose to start with one.)

kmarc
1 replies
11h5m

Like many bazel rules, rules_oci's predecessor (rules_docker) was an unmaintained spaghetti of hell; now we are pushed to rules_oci and its recommended rpmtree way of installing rpms, which in turn doesn't support post-install scripts...

All this bazel-is-our-savior complex burns down when we want to build even a slightly complicated thing with it. And we unfortunately do try to do that (our devbox image), which with full caching takes long minutes to an hour, and it's a freaking thousand-line mess instead of a lean dockerfile with pinned versions.

I absolutely hate bazel and its broken unmaintained Google-abandoned rulesets, and I wish we used either Docker or Buck2 for everything.

paulddraper
0 replies
10h39m

unmaintained spaghetti of hell,

rules_docker was fundamentally flawed -- and overreaching in scope -- in ways that rules_oci is not.

broken unmaintained Google-abandoned rulesets

rules_oci last commit was 12 hours ago, and it's been actively maintained for years.

(This criticism is even weirder from someone pushing Buck2. Like, it's a great tool. But, apparently it warranted a complete rewrite too, eh?)

instead of using a lean dockerfile with pinned versions

You'll pin every transitive version?

All this bazel-is-our-savior complex burns down when we want to build a tiny bit complicated thing with it.

Let me rephrase. I would not use Bazel to build images from rpm or deb packages.

But....would you use Nix for installing rpm or deb packages????? I believe you've lost the thread.

madjam002
4 replies
21h30m

Does anyone here have any experience using https://github.com/pdtpartners/nix-snapshotter ?

I build a lot of Docker images using Nix, and while yes it’s generally more pleasant than using Dockerfiles, the 128 layer limit is really annoying and easy to hit when you start building images with Nix. The workaround of grouping store paths makes poor use of storage and bandwidth.

hinshun
2 replies
19h20m

Author of nix-snapshotter here.

Yes, one of the main downsides of Docker images using Nix is the 128 layer limit. It means we have to use a heuristic to combine packages into the same layer, losing Nix's package granularity. When building containers from Nix packages already on a Nix binary cache, you also have to transform the Nix packages into layer tarballs, effectively doubling the storage requirements.

Nix-snapshotter brings native understanding of Nix packages to the container ecosystem so the runtime prepares the container root filesystem directly from the Nix store. This means docker pull == Nix substitution and also at Nix package granularity. It goes a bit further with Kubernetes integration that you can read about in the repo.

Let me know if you have any other questions!

k8svet
1 replies
10h44m

What's the state of deployment for something like nix-snapshotter nowadays (with the realization that the answer depends on which of N k8s install methods might be in use)?

I assume it's mostly in the field of ... "you're making a semi-large investment on this enough that you're doing semi-custom kubernetes deployments with custom containerd?"

Or maybe the thought is that nix-snapshotter users are running k8s/kubelet with nixos anyway, so it's not a big deal to swap out or add containerd config?

hinshun
0 replies
3h53m

Yes it’s going to depend on which k8s distribution you’re using. We have work in-progress for k3s to natively support nix-snapshotter: https://github.com/k3s-io/k3s/pull/9319

For other distributions, nix-snapshotter works with official containerd releases so it’s just a matter of toml configuration and a systemd unit for nix-snapshotter.

We run Kubernetes outside of NixOS, but yes the NixOS modules provided by the nix-snapshotter certainly make it simple.

mikepurvis
0 replies
21h27m

I haven't tried it yet as I need to produce containers that can work on public cloud k8s, but it definitely looks like the way to go. All the existing methods for grouping store paths into layers are finicky, brittle, and non-optimal.

jrockway
4 replies
16h42m

I spent a half a day or so relatively recently trying to build our CI base image with Nix (at the recommendation of our infra team), but it was huge, and some stuff didn't work because of linking issues.

One issue that really bugged me was that to build multi-arch images, it actually wants to execute stuff as the other architecture, and only supports using qemu with hardware virtualization for that. My build machine (and workstation) is a VM, so I don't have that. I do have binfmt-misc, though, so if it had just forked and exec'd the arm64 "mkdir" to run "mkdir /tmp", it would have worked. Of course, this implementation is a travesty when docker layers are just tar files, and you can make the directory like this:

    echo "tmp uid=0 gid=0 time=0 mode=0755 type=dir" | bsdtar -cf - @-
(As an aside, I'm sure this exact layer already exists somewhere. So users probably don't even have to download it.)

Every time I try nix, I feel like it's just a few months away from being something I'd use regularly. nixpkgs has a lot of packages, everything you could ever want. They all install OK onto my workstation. But "I need bash, python, build-essential, and Bazel" doesn't seem like something they're targeting the docker image builder at. I guess people just want to put their go binary in a docker image and ... you don't need nix for that. Pull distroless, stick your application in a tar file, and there's your container. (I personally use `rules_oci` with Bazel... but that's all it does behind the scenes. It just has some smarts about knowing how to build different binaries for different architectures and assembling an image index yaml file to push to your registry.)

viraptor
0 replies
16h17m

to build multi-arch images, it actually wants to execute stuff as the other architecture

You should be able to cross compile binaries for other architectures without actually running them. As long as the package's build files support it of course.

and only supports using qemu with hardware virtualization for that

That doesn't sound right. You can use qemu for architectures that are only software emulated too.

The minimal example is discussed here:

https://discourse.nixos.org/t/how-do-i-get-a-shell-nix-with-...

I don't want to say it should be as simple as using pkgsCross (https://nix.dev/tutorials/cross-compilation.html), but... are there some specific issues with the usual process that you're running into?

operator-name
0 replies
9h36m

What did your final Nix and Docker file look like, and did you have to use `buildFHSEnv` at all to support the odd 3rd party binaries?

I think Nix really needs some articles outlining how to play well and smoothly transition from an existing system piece by piece.

l0b0
0 replies
14h38m

  >  I spent a half a day or so relatively recently trying to build our CI base image with Nix (at the recommendation of our infra team), but it was huge, and some stuff didn't work because of linking issues.
You must be talking about the official Nix Docker image[1], which indeed is huge. I've been using it for years for a handful of projects, but if the size is an issue you can use the method mentioned in the article and build a very minimal image with only the stuff you specify.

[1] https://hub.docker.com/r/nixos/nix/tags

bfrog
0 replies
16h3m

Hmm? Cross compiling to docker images is exactly what I used nix for. I even had musl being used; it was the smallest image I could build with any tool, and it built the images quickly and consistently in CI with caching working well.

I never saw qemu being used, so I'm a bit confused where that came into play for you.

Nonoyesnoyes
4 replies
20h53m

The article lost me somewhere. Long intro and then just assuming too much.

This is just possible because nix is more granular? Is that right?

Can I really build nix from 5 years ago? No src gone? No cache server gone? Nothing?

I mean yeah Ubuntu as a base is shitty.

xena
1 replies
20h1m

The text is written to be spoken, it works better when I present it. I'll have the video edited next week, that may flow better for you.

Nonoyesnoyes
0 replies
8h52m

I will check it out then.

Would you answer my assumption? Is this easier with nix because it's more granular?

Like if I have a cacerts alone I can select it?

And can I really build stuff from 5 years ago?

chaxor
1 replies
17h5m

I don't like Ubuntu or Debian as a base image for docker, but it's typically my go-to if I need glibc stuff or browser emulation.

Is there a better alternative like Alpine but with glibc that isn't Debian?

imp0cat
0 replies
7h39m

I think Debian is a solid choice, mostly everything is either present or available. Image sizes can get out of hand tho.

operator-name
3 replies
21h14m

This is great if you've already adopted Nix, and I'd love nothing more than for declarative package management solutions like Nix or Guix to take off.

If you're already using Docker but want to gradually adopt Nix, there is an alternative approach outlined by this talk: https://youtu.be/l17oRkhgqHE. Instead of migrating both the configuration AND container building to Nix straight away, you can keep the Dockerfile to build the nix configuration.

The biggest downside is that you don't take advantage of layers at all, but the upside is that you can gradually adapt your Dockerfiles, and reuse any Docker infrastructure or automation you already use.

AtlasBarfed
1 replies
14h12m

So one of the pillars of the article is that docker builds aren't reproducible, but Nix is.

But... is a lot of that irreproducibility (apologies for that word) because there's no guarantee one of the docker layers will be available?

And... does Nix have some guarantee to the end of the universe that package versions will stay in the repository?

operator-name
0 replies
9h48m

I'd give this article a read, as it can explain it more clearly than I can: https://serokell.io/blog/what-is-nix

But to briefly answer your specific questions: Docker files are commonly not reproducible because they contain arbitrary stateful commands like `apt-get update`, `curl`, etc. For a layer with these kinds of commands to be reproducible you would need a mechanism to version and verify the result.

Nix provides such a mechanism, and a community package repository with versioned dependencies between packages. These are defined in a domain-specific language called Nix (text files) and kept in a git repository. This should be familiar if you've used a package manager with lock files before.

You can guarantee the package version stays the same by pinning your build to an exact commit hash of that repository.
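
For example, a pin might look roughly like this (the commit hash and sha256 below are placeholders, not real values):

    let
      # pin nixpkgs to one exact commit; the sha256 verifies the download
      # (both values below are placeholders)
      pinnedNixpkgs = fetchTarball {
        url = "https://github.com/NixOS/nixpkgs/archive/<commit-hash>.tar.gz";
        sha256 = "<sha256-of-that-tarball>";
      };
      pkgs = import pinnedNixpkgs { };
    in
    pkgs.python3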

benreesman
3 replies
8h54m

Nix and NixOS are in something like the state git was in before GitHub: the fundamental idea is based on more serious computer science than the status quo (SVN, Docker), the plumbing still has some issues but isn’t worse, and the porcelain and docs are just not there for mainstream adoption.

I think that might have changed with the release of flox: https://flox.dev, it’s basically seamless (and that’s not surprising coming from DE Shaw).

Nix doesn't really make sense without flakes and nix-command, yet those things are documented as experimental and defaulted off. The documentation story is getting better, but it's not there. nixlang is pretty cool once you learn it well, but it's never going to be an acceptable barrier to entry for the mainstream. It's not really the package manager it's advertised as; nix-env -iA foo is basically never what you want. It's completely unsurprising that it's still a secret weapon of companies with an appetite for bleeding-edge shit that requires in-house expertise.

flox addresses all of this for the “try it out and immediately have a better time” barrier.

Nix/NixOS or something like it is going to send Docker to the same dustbin Subversion is in now, but that’s not going to happen until it has the GitHub moment, and then it’ll happen all at once.

Most of the complaints about Nix in this thread are technically false, but eminently understandable, and more importantly (repeat after me, Nix folks): it's never the user's fault.

I’m aware that I’m part of a shrinking cohort who ever knew a world without git/GitHub, so I know this probably sounds crazy to a large part of the readership, but listen to Linus explaining to a room full of people who passed the early Google hiring bar why they should care about a tool they feel is too complicated for them:

https://youtu.be/MjIPv8a0hU8?si=QC0UnHXRdMpp2tI4

rfoo
1 replies
8h17m

I believe that for a developer tool to succeed, there have to be at least three ways for an engineer to misuse it on the most common tasks and still get things "done" (by leaving non-obvious tech debt behind).

This is true for git, but not so true yet for Nix, so I'm not sure a GitHub-like moment helps.

benreesman
0 replies
7h13m

In a full-metal-jacket NixOS setting it’s bloody hard to bash (no pun intended) your way through to the next screen by leaving behind tech debt (Python comes to mind, I made the mistake of trying to use Nix to manage Python once, never again).

But anywhere else you just brew/apt/rpm install whatever and nix develop --impure, which is easier than most intermediate git stuff and plenty of beginner stuff. git and Nix are almost the same data structure if you start poking around in .git or /nix/store, so I might not understand what you mean without examples.

But all my guesses about what you might mean are addressed well by flox.

ronef
0 replies
3m

Ron from flox.dev here, the note brought a lot of smiles across the team. We've been working on this for a while now and would love to hear if there is anything we can prioritize or do to keep making it better.

mikepurvis
2 replies
21h29m

No discussion about Nix-built containers is complete without mentioning nix2container:

https://github.com/nlewo/nix2container

It is truly magical for handling large, multi-layered containers. Instead of building the container archives themselves and storing them in the nix store, it builds a JSON manifest that is consumed by a lightly patched version of skopeo that streams the layers directly to either your local container engine or the registry.

This means you never rebuild or reupload a container layer that is unchanged.

Disclosure: I contributed a change in nix2container that allows cheaply pulling non-Nix layers into the build, just using the content hashes from their registry manifests.

Nullabillity
1 replies
17h28m

Nixpkgs' streamDockerImage does something similar: instead of storing a multi-layer tarball it produces a script that cats them all together on demand, ready to feed into `docker load` or whatever.

takeda
0 replies
14h3m

In that case streamDockerImage produces a script that you run, and then you pipe the output to skopeo or docker.

nix2container wraps all of that and automatically runs it behind the scenes when you call nix run.

The whole image generation is much more efficient as well. In the standard streamDockerImage you get a script that generates the docker layers and the image. In nix2container all layers are stored in the nix cache, so subsequent runs don't regenerate them. I believe that was the main goal behind this solution.

Another benefit (and I think it is even better than the caching) is that it also allows you to manually specify which dependencies go into each layer.

The automatic layering that the code offers is nice in theory, but because docker has a limit of 128 layers, in practice it starts out nicely and then the last layer becomes a clump of remaining dependencies that didn't fit in the previous layers.

With nix2container I managed, for example, to create a first layer that contains just Python and all of its dependencies, then a next layer with my application's dependencies (Python packages), and a last layer with my application itself.

With this approach a simple bugfix in the application only replaces the last layer, which is a few kB in size. Only a change of Python version will rebuild the whole thing.
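
That explicit layering looks something like the following with nix2container (attribute names from memory, and myPythonEnv/myApp are placeholders, so double-check against the nix2container README):

    # lower layers change rarely; the app layer changes on every bugfix
    nix2container.buildImage {
      name = "myapp";
      layers = [
        # layer 1: the Python interpreter and its closure
        (nix2container.buildLayer { deps = [ pkgs.python3 ]; })
        # layer 2: the application's Python dependencies
        (nix2container.buildLayer { deps = [ myPythonEnv ]; })
      ];
      # anything not covered by the layers above (the app itself) ends up
      # in the final layer
      config.entrypoint = [ "${myApp}/bin/myapp" ];
    }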

ildjarn
2 replies
9h58m

What the post misses is that lots of packages are not available on Nix, but everything is available on Docker automatically.

If what you need is available, however, then it can be so much better.

georgyo
1 replies
9h45m

What is automatic about docker? Do you mean other people have already put in the work? Or do you mean that it's more trivial to pip/npm/cargo install stuff?

pluto_modadic
0 replies
4h22m

Nix as package management, yes - you're waiting on someone else to make upstream work.

Where Debian/PIP/NPM/Cargo already do work.

Nix as an instruction set, no, you still have to declare what you want, same as a dockerfile.

it's just... Docker will do what a shell script will.

Nix is like hoping someone's done the upstream finagling to get your particular dependencies happy.

febed
2 replies
20h17m

I didn't fully grok how this works - what is the base image for the generated image? Also, wouldn't the image size be large if glibc is copied over again?

aidenn0
1 replies
20h5m

what is the base image for the generated image?

Default is none (i.e. like "FROM scratch" in a Dockerfile); you can specify a baseImage if needed, but I haven't had to yet. It works by copying parts of the nix store into the image as needed, but see also below.

wouldn’t the image size be large if the glibc is copied over again

The original Nix docker-tools buildImage did suffer from poor reuse of common dependencies. Docker already has a way to reuse parts of images (e.g. if you build 7 images where the first N lines of a Dockerfile are the same, the 7 images will use a shared store for the results of running the first N lines). There are several backends for Docker storage that accomplish this in various ways (e.g. FS overlays, tricks with ZFS/btrfs snapshots).

Nix docker-tools now has a "buildLayeredImage" that uses this ability of Docker to share much of the storage for the dependencies, so if you build several images that all rely on glibc, you only pay the cost of storing glibc in docker once.
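
A minimal example of that (the package choice is arbitrary):

    # default.nix -- run `nix-build`, then `docker load < result`
    let
      pkgs = import <nixpkgs> { };
    in
    pkgs.dockerTools.buildLayeredImage {
      name = "hello";
      tag = "latest";
      # each store path in the closure (glibc, hello, ...) becomes its own
      # layer, so other images built the same way share those layers
      contents = [ pkgs.hello ];
      config.Cmd = [ "${pkgs.hello}/bin/hello" ];
    }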

febed
0 replies
19h43m

Thanks, that made the article clearer for me

verdverm
1 replies
6h35m

I've been using Dagger, it's awesome. It's the second take by the creators of Docker. It accomplishes most of what nix does for this problem

Write your pipelines and more in languages you already use. It unlocks the advanced features of BuildKit, like the layer graph and advanced caching.

nit: this post makes a point about deterministic builds and then uses "latest" for source image tags, which is not deterministic. I've always appreciated Kelsey's comment that "latest" is not a version

j-bos
1 replies
20h26m

I like the article but had a hard time following the specifics of the configs and commands. Feels like it's more meant for people already familiar with nix, or sufficiently interested to study up while reading.

earthling8118
0 replies
16h38m

That's very interesting, because as someone familiar with nix, my take was that this information was aimed at people who weren't familiar.

woile
0 replies
9h13m

I've been using nix to build docker containers (from a Mac). I would like to skip docker as well, but I wouldn't know how. On the server, I use docker swarm, with traefik as load balancer, on a very small machine, which I can later grow. It works pretty well for me. Nix on the CI has never failed for anything but mistakes of my own.

torcete
0 replies
7h33m

Coincidentally, two days ago I was trying to adapt a flake to include a docker derivation. I came across xelaso's page and, inspired by the example provided (and after a few tries), I managed to compose a docker image. That was very cool! BTW: Thanks Xelaso.

solatic
0 replies
13h1m

Having the author do this for a service written in Go is a mistake. Your first address for containerizing Go services should be ko: https://ko.build/ , and similar solutions like Jib in the Java ecosystem: https://github.com/GoogleContainerTools/jib . No need to require everyone to install something heavy like Nix, no need for privileged containers in CI to connect to a Docker daemon so that actual commands can be executed to determine filesystem contents, just the absolute bare minimum of a manifest defining a base layer + the compiled artifacts copied into the tarball at the correct positions. More languages should support this kind of model - when you see that pnpm's recipe (https://pnpm.io/docker), ultimately, is to pick a pre-existing node base image, then copy artifacts in and set some manifest settings, there's really no technical reason why something like "pnpm build-container-image", without a dependency on a Docker daemon, hasn't been implemented yet.

Using nix, or a Dockerfile, or similar systems is, today, fundamentally an additional complication to support building containerized systems that are not pure Go or pure Java, etc. So we should stop recommending them as the default.
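
For context, the ko workflow for a Go service is basically a single command (the registry below is a placeholder):

    # builds the Go binary, layers it onto a minimal base image, and pushes
    # the result -- no Dockerfile and no Docker daemon involved
    KO_DOCKER_REPO=registry.example.com/myteam ko build ./cmd/server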

qxxx
0 replies
1h58m

I tried nix (for spinning up dev environments). It is very slow compared to docker.

djaouen
0 replies
21h11m

I just wanted to chime in here and say that Guix also has a nice and easy-to-use Docker option with "guix pack -f docker" [1]. Guix also has the advantage of using an established language (Guile/Scheme) rather than its own bespoke one. :)

[1] https://guix.gnu.org/manual/en/html_node/Invoking-guix-pack....

conradludgate
0 replies
10h32m

I wanted to love nix. It seems like something I would like. I tried to compile rust using nix on my mac. Didn't work, known bug. I reinstalled my desktop to use nixos. I got lost between flakes, nixpkgs, homemanager. I managed to get vscode installed but when I added the nix extension (declared in nix) it would refuse to run vscode... It's just not a good experience so I reinstalled arch

asmor
0 replies
11h53m

Little known (possibly unintended) feature, but you can put the `toplevel` attribute of a nixosSystem into docker image `contents`, which lets you use NixOS modules to set things up. Just be sure to import the minimal preset, because those images get large.

Unfortunately booting the entire system with /init is largely broken, especially without --privileged. This would be an amazing feature if it didn't require so much extra tinkering.
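
If I'm reading this right, the shape is roughly the following (a sketch only; it assumes nixpkgs is a flake input and ./configuration.nix is your own module):

    # a sketch: build a NixOS system closure and drop it into the image contents
    pkgs.dockerTools.buildLayeredImage {
      name = "nixos-in-docker";
      contents = [
        (nixpkgs.lib.nixosSystem {
          system = "x86_64-linux";
          modules = [
            # the minimal profile keeps the closure (somewhat) in check
            "${nixpkgs}/nixos/modules/profiles/minimal.nix"
            ./configuration.nix
          ];
        }).config.system.build.toplevel
      ];
    }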

andrewstuart
0 replies
11h27m

I like nspawn over docker because it doesn't use the layered file system thing.

Instead, it's just a simple root directory placed somewhere and you run the container on that. Much more straightforward.