
Cold-blooded software

imran-iq
61 replies
23h3m

Python is a really bad example of cold-blooded software. There are constant breaking changes with it (both runtime and tooling), so much so that the author still has to use Python 2, which has been EOL'd for quite a while.

A much better example would be something like Go or Java, where 10-year-old code still runs fine with modern tooling. Or an even better example, Perl, where 30-year-old code still runs fine to this day.

pdonis
17 replies
17h13m

> There are constant breaking changes with it (both runtime and tooling).

I'm not sure what you mean. Python 2 to 3 was a breaking change, but that was just one change, not "constant breaking changes".

If you stick with one major version, no old code breaks with a new minor version (e.g., you can run old 2.x code under 2.7 just fine, and you can run old 3.x code under 3.12 just fine). The minor version changes can add new features that your old code won't make use of (for example, old 3.x code won't use the "async" keyword or type annotations), but that doesn't make the old code break.

davedx
7 replies
9h44m

The problem is `requirements.txt` doesn't do anything with downstream dependencies. There's nothing like a shrinkwrap/lockfile in Python. Even if you pin dependencies to exact versions, if you check your project out in a new environment and run pip install -r requirements.txt, you can end up with different, broken downstream dependencies.

This happens a lot for me.

lf-non
2 replies
7h12m

There's nothing like a shrinkwrap/lockfile in Python

Use Poetry? I don't program in Python regularly, but looking at the GitHub repo it seems actively maintained and quite popular.

https://python-poetry.org

paleface
1 replies
6h45m

> Use Poetry?

Why not simply use something stable?!

I personally don't understand why people think such glib, throwaway comments are helpful. They always strike me as lacking any foresight.

How many abstractions on top of the core tool are required to force its stability over time? What happens if Poetry introduces breaking changes?

lf-non
0 replies
4h19m

Why would you bluntly assume my comment lacks any foresight? I was simply recommending a tool that I used, albeit briefly, that solves the exact same problem for which you are claiming no solution exists.

Nobody is denying that it would be ideal if there were one best solution to every problem in the ecosystem. But at the end of the day all software, including core and third-party libs, is just code written by people, and it is too much to expect that any person (or group of them) gets everything right the first time. Change, breaking or otherwise, is inevitable as people learn from their mistakes - it's not like the core is guaranteed to never have any breaking changes either.

Just like you can pin the version of libraries, you can pin the versions of your tools too, as long as they are not depending on external services with no versioning. The point of the post is not absolute avoidance of change. It is to opt into a workflow and tooling setup so you can deal with the upstream changes at your own time and convenience.

And BTW, looking at their versioning, Poetry hasn't yet had any breaking changes in its 4+ years of existence.

toasted-subs
0 replies
5h51m

Try Poetry; I like it a lot more than conda.

The basic virtual environments work excellently too.

baq
0 replies
4h44m

That's an awareness problem. requirements.txt was invented... a long time ago, I think before the much more sane (but still not perfect) dependencies/lockfile split got popular. requirements.txt tries to be both - and it can be both, just not at the same time.

In short, you want your deployed software to use `pip freeze > requirements.txt`, and libraries to specify only minimal version constraints on their dependencies.

adamckay
0 replies
4h31m

If you want to stick with using `pip` over any of the newer tools that build on top of it (Poetry - my favourite, pdm, pipenv, rye, ...) the simplest way I used in the past was to use a `requirements.human.txt` to set my dependencies, then install them in a venv and do `pip freeze > requirements.txt` to lock all of the transitive dependencies.
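Roughly, that workflow looks like this (just a sketch; the file names follow the convention above):

    # requirements.human.txt lists only the direct dependencies you care about,
    # e.g. "requests>=2.28" and "flask", one per line.
    python -m venv .venv
    . .venv/bin/activate
    pip install -r requirements.human.txt

    # Freeze the whole resolved tree, transitive dependencies included.
    pip freeze > requirements.txt

    # On any other machine, installing from the frozen file reproduces the same set.
    pip install -r requirements.txt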

Too
0 replies
5h32m

Completely false. Use pip freeze and pip install -c.

It’s one command more than npm install but that doesn’t mean it’s not there.
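Roughly (file names here are just illustrative):

    # Capture the known-good environment once.
    pip freeze > constraints.txt

    # Later installs still read your loose requirements, but every package,
    # transitive dependencies included, is held to the pinned versions.
    pip install -r requirements.txt -c constraints.txt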

lmm
4 replies
16h17m

you can run old 2.x code under 2.7 just fine

No you can't lol. There were major breaking changes from 2.4 -> 2.5, and smaller but still breaking ones for 2.5 -> 2.6.

pdonis
3 replies
16h9m

> There were major breaking changes from 2.4 -> 2.5, and smaller but still breaking ones for 2.5 -> 2.6.

Such as?

lmm
2 replies
15h16m

I don't remember the specifics, but I spent at least a few weeks early in my career fixing a Python 2.3 codebase to run on 2.4.

pdonis
1 replies
14h40m

In the GP you said 2.5 and 2.6, not 2.4.

That said, I remember all three of those transitions (2.3 to 2.4, 2.4 to 2.5, and 2.5 to 2.6), and I remember changing Python code to make use of new features introduced in those transitions (for example, using with statements and context managers in 2.5), but those aren't breaking changes; the old code still worked, it just wasn't as robust as using the new features.
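For instance, a minimal sketch of that kind of additive change (the explicit acquire/release form keeps working on every later 2.x and 3.x release; the with form was simply a new option from 2.5 on, behind a __future__ import in 2.5 itself):

    import threading

    lock = threading.Lock()
    balance = 0

    # Pre-2.5 style: still valid today.
    lock.acquire()
    try:
        balance += 1
    finally:
        lock.release()

    # 2.5+ style: shorter and harder to get wrong, but adopting it was optional.
    with lock:
        balance += 1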

lmm
0 replies
14h15m

In the GP you said 2.5 and 2.6, not 2.4.

Sorry, yes, fixing a 2.4 codebase to run on 2.5. It was a while ago.

bryancoxwell
3 replies
16h38m

The Python 3.11 release notes have a pretty lengthy list of removed or changed Python and C APIs followed by guidance on porting to 3.11, which implies potentially breaking changes to me.

pdonis
2 replies
16h11m

It's a fair point that Python minor version changes can and do involve removal of previously deprecated APIs, which would break old code that used those APIs.

That said, when I look through the 3.11 release notes you refer to, I see basically three categories of such changes:

- Items that were deprecated very early in Python 3 development (3.2, for example). Since 3.3 was the first basically usable Python 3 version, I doubt there is much legacy Python 3 code that will be broken by these changes.

- Items related to early versions of new APIs introduced in Python 3 (for example, deprecating early versions of the async APIs now that async development has settled on later ones that were found to work better). These sorts of breaking changes can be avoided by not using APIs that are basically experimental and are declared to be so (as the early async APIs were).

- Items related to supporting old OSs or data formats that nobody really uses any more.

So, while these are, strictly speaking, breaking changes, I still don't think that "constant breaking changes" is a good description of the general state of Python development.

jujube3
0 replies
13h59m

Python's changes between releases are not limited to removing deprecated APIs. Sometimes semantics change in breaking ways, or new reserved words crop up, etc. It is certainly Russian roulette trying to run Python code on any version other than the one it was written for.

InfamousRece
0 replies
5h12m

For me, switching to Python 3.11 was really tough because of various legacy removals (like coroutine decorators, etc.). While my code did not use these, the dependencies did. For some dependencies I had to switch to different libraries altogether - and that required rewriting my code to work with them.

There was also a time in the past when async became a keyword. It turned out many packages had variables named async, and that caused quite a bit of pain too.
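A hypothetical sketch of that kind of breakage (the names are made up):

    # Before Python 3.7, "async" was an ordinary identifier, so APIs like
    #     def send(request, async=False): ...
    # were common. On 3.7+ that line is a SyntaxError at import time, because
    # "async" became a reserved keyword. The usual fix was a rename:
    def send(request, async_=False):
        print("sending", request, "in the background" if async_ else "inline")

    send({"url": "https://example.com"}, async_=True)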

Aeolun
9 replies
19h41m

java where 10 year old code still runs fine with their modern tooling

I don't know about you, but even when I try to run 3-year-old Java code with a new SDK it's always broken somehow.

lmm
3 replies
16h15m

Any examples? Actual Java code works all the way back to 1996 IME, the only thing that breaks is reflection nonsense, usually Spring.

pjmlp
0 replies
2h50m

All the stuff that got removed since Java 9, as the new policy is to actually remove deprecations after a couple of warning releases instead of supporting them forever.

Additionally, the JDK became more strict about internal API access, no longer allowing naughty 3rd-party libraries to reach into JVM internals.

bombcar
0 replies
14h31m

You can run Minecraft 1.7.10 and mods on Java 21 with surprisingly few changes needed.

https://github.com/GTNewHorizons/lwjgl3ify

8n4vidtmkvmk
0 replies
14h1m

I had major issues running Bitcoin Armory after a few years, which was rather problematic.

vbezhenar
1 replies
14h1m

I've been writing Java for 15 years and I've yet to encounter a single JVM bug or incompatibility. It's always the application that's to blame.

The only exception is Java 9 which removed some Java EE dependencies from JDK, but that's easily solved.

Just an example: I'm running an application on Java 21 which uses an Oracle 9i driver that was compiled for Java 1.4, and it works fine.

pjmlp
0 replies
2h52m

There are plenty of other removals and deprecations: stuff like Thread.stop(), security providers, JDBC interface changes; finalizers no longer do anything (better not depend on them ever running) and might be removed in some future version.

hmottestad
0 replies
19h24m

I maintain Eclipse RDF4J and I noticed this too between Java 9 and 11, after that there haven’t been any breaking things except for having to bump a maven plugin dependency. We make sure to stay compatible with both Java 11 and whatever is the newest version by running our CI on both Java versions.

baq
0 replies
4h37m

I've had JRE minor version revisions break things. The compatibility is there, but it's below legendary.

ahoka
0 replies
19h29m

Years? After one year, something amongst the hundreds of deps will have a horrible security vulnerability and updating means breaking changes.

wmorgan
7 replies
14h49m

Very true!

As an author of software, sometimes you make mistakes, and those mistakes are often of the form, "I permitted the user to do something which I didn't intend." How do you correct something like that? In the Java world, the answer is "add newer & safer & more intentional capabilities, and encourage the user to migrate."

In the Python world, this answer is the same, but it also goes further to add, "... and turn off the old capability, SOON," which is something that Java doesn't do. In the Java world, you continue to support the old thing, basically forever, or until you have no other choice. See, for example, the equals method of java.net.URL: known to be broken, strongly discouraged, but still supported after 20+ years.

Here's an example of the difference which I'm talking about: Python Airflow has an operator which does nothing -- an empty operator. Up through a certain version, they supported calling this the DummyOperator, after an ordinary definition for "dummy." But also -- the word "dummy" has been used, historically & in certain cultures & situations, as a slur. So the Airflow maintainers said, "that's it! No more users of our software are permitted to call their operators DummyOperator -- they now must call it EmptyOperator instead!" So if you tried to upgrade, you would get an error at the time your code was loaded, until you renamed your references.
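(For anyone who hasn't hit it, the change amounts to roughly this in a DAG file; module paths are from memory and vary a bit between Airflow releases.)

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.empty import EmptyOperator
    # On older releases the same operator was imported as:
    #     from airflow.operators.dummy import DummyOperator

    with DAG(dag_id="example", start_date=datetime(2023, 1, 1)) as dag:
        # Behaves identically to the old DummyOperator(task_id="noop"); only the name changed.
        noop = EmptyOperator(task_id="noop")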

This decision has its own internal consistency, I suppose. (I personally would not break my users in this way). But in the Java world it wouldn't even be a question -- you'd support the thing until you couldn't. If the breakage in question is basically just a renaming, well, the one thing computers are good at is text substitution.

So overall & in my opinion anyway, yes, it's very much true that you can upgrade Java & Java library dependencies much more freely than you can do the same sorts of things with Python.

xvector
2 replies
8h3m

So the Airflow maintainers said, "that's it! No more users of our software are permitted to call their operators DummyOperator -- they now must call it EmptyOperator instead!"

Man, some companies and people have far too much time to waste.

WesolyKubeczek
1 replies
6h55m

What a bunch of pompous empties.

withinboredom
0 replies
6h23m

Let's totally forget that words have MULTIPLE meanings and NUANCE where context is important.

subtra3t
1 replies
5h47m

certain cultures & situations, as a slur

English isn't my first language, but I haven't seen "dummy" being used as a slur in any conversations I've engaged in or any books I've read. For me its connotation is more of a playful nature. When I think of a slur I don't think of "dummy", I think of the r word and the like.

At least I can get the reasons for GitHub's change to "main" for the default branch in a git repo. Maybe I don't agree with it, but I can at least see how some people would interpret the word "master" in a negative way. I can't say the same for the word "dummy" though.

wmorgan
0 replies
3h32m

Yes, your understanding is how almost everyone treats the word today. The slur is from a different meaning & a different context. You'd never come across it in regular life, unless you were, like, into the history of baseball:

https://en.wikipedia.org/wiki/Dummy_(nickname)

pjmlp
0 replies
2h55m

Note that Java no longer keeps deprecated stuff around forever.

matt123456789
0 replies
14h24m

Not to detract from your point (which I agree with), but rather as a side note, Airflow's developers publish top-notch migration and upgrade documentation and tools which hold your hand through the process of updating your DAGs when upgrading Airflow. Which IMO is the next best thing to do when you break backwards compatibility.

JohnFen
7 replies
21h3m

Agreed. This is one of the reasons why I avoid using Python whenever possible. Python code I write today is unlikely to be functional years from now, and I consider that a pretty huge problem.

heurist
2 replies
20h33m

This really depends on your environment. I've been running legacy Python servers continuously for 4+ years without breaking them or extensively modifying them because I invested in the environment and tooling around it (which I would do for any app I deploy). I can't say I want to bring all of them entirely up to date with dependencies, but they're still perfectly functional. Python is pretty great, honestly. I rarely need to venture into anything else (for the kind of work I do).

JohnFen
1 replies
20h27m

I've been running legacy Python servers continuously for 4+ years

That seems like a large amount of effort to make up for a large language deficiency. My (heartfelt) kudos to you!

I might have been willing to do the same if I used Python heavily (I don't because there are a number of other things that makes it very much "not for me") -- but it would still represent effort that shouldn't need to be engaged in.

aragilar
0 replies
16h16m

I think it depends on which bits of the Python ecosystem you're interacting with. The numerical/scientific parts have been quite stable for at least the past 10 years (new features have been added, but only small amounts of removal), compared with the more "AI" focused parts where I wouldn't trust the code to be working in 6 months. Similarly, some web frameworks are more stable than others. I think also over the last 5 or so years, there's been a change in maintainers of some larger projects, and the new maintainers have introduced more breaking changes than their predecessors.

None of this is implied by the language, I think it's much more driven by culture (though I think the final dropping of support for Python 2 did give some maintainers an excuse to do more breaking changes than was maybe required).

toasted-subs
1 replies
18h41m

Any problem you have with Python could be used to describe any Node.js application.

You're just trying to get people to use languages which are less useful in practice (other than Java).

toasted-subs
0 replies
6h15m

I love how people just downvote comments instead of providing adequate feedback.

Like a chain of tiny dicked mean people who want other people to use broken languages.

pdonis
1 replies
17h12m

> Python code I write today is unlikely to be functional years from now

I don't see why not. I have been writing Python for close to 20 years now, and I still have code from my early days with it that runs just fine today.

xvector
0 replies
7h59m

Poor hygiene in the community about breaking changes, and a very clunky and frustrating mechanism to vendor dependencies.

frizlab
3 replies
20h9m

Go is not a good example either. Some time ago we tried compiling code a few years after it was written, and it did not work. Someone who actually knew the language and tooling tried and said there was a migration to be done and it was complicated. I have not followed the subject up close, but in the end they just abandoned it, IIRC.

from-nibly
2 replies
14h6m

Yeah I had go code that didn't last a year before it couldn't be compiled.

gwd
0 replies
8h32m

I don't think I've ever had that problem -- particularly once they introduced Go modules, which specified a specific version of a library dependency. My experience is like the author's: Even old random packages I wrote 5 years ago continue to Just Work when used by new code.

There are a handful of functions that they've deprecated, which will produce warnings from `go vet`; but that doesn't stop `go build` from producing a usable binary.

fbdab103
0 replies
12h49m

Curious to me as backwards compatibility has been one of the strengths I hear Go proponents cheer.

Any idea what were the APIs that were likely to cause problems?

fliggertiggy
2 replies
10h15m

I've had good luck with SvelteKit (a framework for JS sites). They'll break something with a new version but provide you with very helpful compile errors pointing to a migration script to rewrite any old code.

C# has been pretty good as well.

But at some point you're going to need data for your app and that's where you'll get surprised. That Yahoo currency data you used to get for free or Wikipedia's mobile API? Gone ten years later.

zx8080
1 replies
10h2m

C# with AWS Lambda can easily become a trap. Once a .NET version goes EOL on Lambda, good luck with your DB schema (specifically, EF).

neonsunset
0 replies
9h37m

What? You can just update EF Core without ever having to do a migration of the schema. It just works. Also, the versions that are EoL today are a really poor choice for Lambda anyway because you really do want to be using Native AOT + Dapper AOT with it instead.

blakesley
2 replies
19h52m

Regarding Python: Really? Obviously v2-to-v3 was an absolute fiasco, but since then, it's been great in my personal experience.

Don't get me wrong: Python hasn't overcome its tooling problem, so there's still that barrier. But once your team agrees on a standardized tool set, you should be able to coast A-OK.

toasted-subs
0 replies
18h40m

Yeah but that's expected given it went from a fancy scripting language to one that supports modern features of other programming languages.

I had a similar problem moving apps from earlier versions of Java a decade ago.

behnamoh
0 replies
2h4m

Every time there's a 0.1 Python version increase, it takes months for other libraries to catch up. I still have to install conda envs with python=3.9 because they are warm-blooded software.

hmottestad
1 replies
19h26m

Maven is fantastic. As long as you stick to an LTS Java version and pick good dependencies you can always get things up and running. With Python I remember a ML class I took where one of the dependencies had introduced breaking API changes overnight and the lecturer hadn’t noticed because he was just using whatever version was available a few weeks ago when he first started prepping for the class.

toasted-subs
0 replies
5h52m

Maven's definitely nicer than Gradle, but when you have to shade JARs it becomes almost impossible to use.

In Python you can specify the version. The popular ML libs in Java are actually still broken and have been for years.

tsss
0 replies
4h47m

The worst example, perhaps. I have the unfortunate honor of working on our Python projects from time to time, but rarely, and every time I do, something is broken. No other software is as unreliable. Only Ruby comes close, and probably for the same reason.

toasted-subs
0 replies
18h54m

I have yet to have a Python versioning issue, but with Java I've had tons.

Worst of all, the advice is always a confident "use the latest version and it will work". With Python, using the latest version almost always works, and you can import the previous functions if you really want to use the new interpreter on old code.

Maybe this is because most of the time with Python you barely have external libraries. Similar to Java, but in Node.js it's like asking for trouble.

pjmlp
0 replies
2h56m

Note that since Java 9 this isn't exactly true: there are modules, removal of private APIs that were being misused by packages, and actual removal of deprecated APIs instead of keeping them on the platform forever.

Still way better than Python, though.

TheNewAndy
0 replies
19h51m

Depends on whether you mean Python the interpreter or Python the language. E.g. PyPy still supports Python 2 and has "indefinite support" or something along those lines.

Even though the CPython 2 interpreter is no longer supported by the original authors, that doesn't stop someone else from supporting it.

082349872349872
0 replies
22h55m

2 and 3 don't really differ that much; true cold-blooded software doesn't care which it's being run with.

vrnvu
41 replies
1d1h

One thing I've noticed is that many engineers, when they're looking for a library on GitHub, check the last commit time. They think that the more recent the last commit is, the better supported the library is.

But what about an archived project that does exactly what you need it to do, has 0 bugs, and has been stable for years? That’s like finding a hidden gem in a thrift store!

Most engineers I see nowadays will automatically discard a library that is not "constantly" updated... Implying it's a good thing :)

fabian2k
14 replies
1d1h

A library can only stay static if the environment it's used in is also static. And many of the environments in which modern software is developed are anything but static, web frontends are one example where things change quite often.

A library that can stand entirely on its own might be fine if it's never updated. But e.g. a library that depends on a web frontend framework will cause trouble if it is not updated to adapt to changes in the ecosystem.

tedunangst
5 replies
21h24m

This is a very strange example. Browsers have fantastic backwards compatibility. You can use the same libraries and framework you used ten years ago to make a site and, with very few exceptions, it will work perfectly fine in a modern browser.

pcthrowaway
0 replies
6h15m

Yeah, but there are still thousands (hundreds of thousands?) of games on Newgrounds that you can no longer play without running a separate Flash player.

kazinator
0 replies
21h12m

The problem arises when you're not using old libraries and frameworks. You're using new stuff, and come across an old, unmaintained library you'd like to use.

Hey, it uses the same frameworks you're using --- except, oh, ten years ago.

Before you can use it, you have to get it working with the versions of those frameworks you're using today.

Someone did that already before you. They sent their patch to the dead project, but didn't get a reply, so nobody knows about it.

hiatus
0 replies
21h5m

You absolutely can do that, but it is likely the final output will have numerous exploitable vulnerabilities.

ferbivore
0 replies
18h17m

Browsers have decent backwards compatibility for regular webpages, but there’s a steady stream of breakage when it comes to more complex content, like games. The autoplay policy changes from 2017-2018, the SharedArrayBuffer trainwreck, gating more and more stuff behind secure contexts, COOP/COEP or other arbitrary nonsense... all this stuff broke actual games out in the wild. If you made one with tools from 10 years ago you would run into at least a couple of these.

crabmusket
0 replies
20h20m

Browsers themselves aren't usually the problem. While sometimes they make changes, like what APIs are available without HTTPS, I think you're right about their solid backwards compatibility.

What people really mean when they talk about the frontend is the build system that gets your (modern, TypeScript) source code into (potentially Safari) browsers.

Chrome is highly backwards compatible. Webpack, not so much.

This build system churn goes hand-in-hand with framework churn (e.g. going from Vue 2 to 3, while the team have put heaps of effort into backwards compatibility, is not issue-free), and more recently, the rise of TypeScript and the way the CJS to ESM transition has been handled by tools (especially Node).

adonovan
4 replies
1d

Also, even a very stable project that is "done" will receive a trickle of minor tweak PRs (often docs, tests, and cleanups) proportional to the number of its users, so the rate of change never falls to zero until the code stops being useful.

josephg
1 replies
22h12m

I disagree. Tiny libraries can be fine indefinitely. For example this little library which inverts a promise in JavaScript.

I haven’t touched this in years and it still works fine. I could come in and update the version of the dependencies but I don’t need to, and that’s a good thing.

https://github.com/josephg/resolvable

xmprt
0 replies
22h7m

I think total number of commits is probably a good metric too. If the project only has 7 commits to begin with then it's unlikely to get any more updates after it's "done". But a 10 year old project with 1000 commits where the last commit was 3 years ago is a little more worrying.

diggan
0 replies
1d

so the rate of change never falls to zero until the code stops being useful

Non-useful software changes all the time ;) Also, useful software stands still all the time, without any proposed changes.

derefr
0 replies
22h33m

I think this is also in inverse proportion to the arcane-ness of the intended use of the code, though.

Your average MVC web framework gets tons of these minor contributors, because it's easy to understand MVC well enough to write docs or tests for it, or to clean up the code in a way that doesn't break it.

Your average piece of system software gets some. The Linux kernel gets a few.

But ain't nobody's submitting docs/tests/cleanups for an encryption or hashing algorithm implementation. (In fact, AFAICT, these are often implemented exactly once, as a reference implementation that does things in the same weird way — using procedural abstract assembler-like code, or transpiled functional code, or whatever — that the journal paper describing the algorithm did; and then not a hair of that code is ever touched again. Not to introduce comments; not to make the code more testable; definitely not to refactor things. Nobody ever reads the paper except the original implementor, so nobody ever truly understands what parts of the code are critical to its functioning / hardening against various attacks, so nobody can make real improvements. So it just sits there.)

zer00eyz
0 replies
23h14m

> web frontends are one example where things change quite often.

There is a world of difference between linux adding USB support and how web front ends have evolved. One of them feels like they are chasing the latest shiny object...

xmprt
0 replies
22h9m

As someone who migrated a somewhat old project to one which uses a newer framework, I agree with this. The amount of time I spent trying to figure out why an old module was broken before realizing that one of its dependencies was using ESM even though it was still using CJS... I don't even want to think about it. Better to just make sure that a module was written or updated within the last 3 years, because that will almost certainly work.

LeifCarrotson
0 replies
20h3m

Even if the environment it's used in is static, the world it lives in is not.

I work in industrial automation, which is a slow-moving behemoth full of $20M equipment that gets commissioned once and then runs for decades. There's a lot of it still controlled with Windows 98 PCs and VB6 messes and PXI cards from the 90s, and even more that uses SLC500 PLCs.

But when retrofitting these machines or building new ones, I'll still consider the newness of a tool or library. Modern technology is often lots more performant, and manufacturers typically support products for date-on-market plus 10 years.

There's definitely something to be said for sticking with known good products, but even in static environments you may want something new-ish.

duped
5 replies
22h52m

But what about an archived project that does exactly what you need it to do, has 0 bugs, and has been stable for years? That’s like finding a hidden gem in a thrift store!

Either the library is so trivial to implement myself that I just do that anyway, which doesn't have issues w.r.t maintenance or licensing, or it's unmaintained and there are bugs that won't be fixed because it's unmaintained and now I need to fork and fix it, taking on a legal burden with licensing in addition to maintenance.

Bugs happen all the time for mundane reasons. A transitive dependency updated and now an API has a breaking change but the upstream has security fixes. Compilers updated and now a weird combination of preprocessor flags causes a build failure. And so on.

The idea that a piece of software that works today will work tomorrow is a myth for anything non-trivial, which is why checking the history is a useful smell test.

derefr
2 replies
22h29m

Consider an at-the-time novel hashing algorithm, e.g. Keccak.

• It's decidedly non-trivial — you'd have to 1. be a mathematician/cryptographer, and then 2. read the paper describing the algorithm and really understand it, before you could implement it.

• But also, it's usually just one file with a few hundred lines of C that just manipulates stack variables to turn a block of memory into another block of memory. Nothing that changes with new versions of the language. Nothing that rots. Uses so few language features it would have compiled the same 40 years ago.

Someone writes such code once; nobody ever modifies it again. No bugs, unless they're bugs in the algorithm described by the paper. Almost all libraries in HLLs are FFI wrappers for the same one core low-level reference implementation.

tedunangst
0 replies
21h21m

Keccak is perhaps not the best example to pick. https://mouha.be/sha-3-buffer-overflow/

duped
0 replies
22h25m

In practice, this code will use a variety of target-specific optimizations or compiler intrinsics blocked behind #ifdefs that need to be periodically updated or added for new targets and toolchains. If it refers to any kind of OS-specific APIs (like RNG) then it will also need to be updated from time to time as those APIs change.

That's not to say that code can't change slowly, just the idea that it never changes is extremely rare in practice.

spc476
0 replies
17h2m

I'm checking the zlib changes file [1] and there are regular gaps of years between versions (but there are times where there are a few months between versions). zlib is a very stable library and I doubt the API has changed all that much in 30 years.

[1] https://www.zlib.net/ChangeLog.txt

QuadmasterXLII
0 replies
22h35m

I submit math.js and numeric.js. math.js has an incredibly active community and all sorts of commits; numeric.js is one file of JavaScript and hasn't had an update in eight years. If you want to multiply two 30-by-30 matrices, numeric.js works just fine in 2023 and is literally 20 times faster.

hiAndrewQuinn
1 replies
1d

The Haskell community has a lot of these kinds of libraries. It comes with the territory to some extent.

samus
0 replies
22h52m

The GHC project churns out changes at quite a high rate, though. The changes are quite small by themselves, but they add up, and an abandoned Haskell project is unlikely to be compilable years later.

bratbag
1 replies
1d

Most engineers have probably been bitten in the ass by versioned dependencies conflicting with each other.

wccrawford
0 replies
22h36m

And the other way, too, with the underlying language's changes making the library stop working.

It's just really unlikely that a project stays working without somewhat-frequent updates.

Uehreka
1 replies
1d

By zero bugs do you mean zero GitHub issues? Because zero GitHub issues could mean that there are security vulnerabilities but no one is reporting them because the project is marked as abandoned.

diggan
0 replies
1d

By zero bugs do you mean zero GitHub issues?

Or the library just has zero bugs. It's possible, although probably pretty uncommon :)

NanoYohaneTSU
1 replies
22h31m

I'm sort of confused about where your comment is coming from. In the modern world (2023, in case your calendar is stuck in the 90s) we have a massive system of APIs and services that get changed all the time internally.

If a library is not constantly updated, there is a high likelihood (99%) that it just won't work. Many issues raised on GitHub are that something changed and now the package is broken. That's reality, sis.

bee_rider
0 replies
21h24m

Are you suggesting that all we need to do is use 30 year old languages to free ourselves from this treadmill? That seems like an easy choice!

troupe
0 replies
23h50m

If you are asking yourself, "will this do what it says it will do?" and you are comparing a project that hasn't had any updates in the last 3 years vs one that has seen a constant stream of updates over the last 3 years, which one do you think has a greater probability of doing what it needs to do?

Now I do get your point. There is probably a better metric to use. Like for example, how many people are adding this library to their project and not removing it. But if you don't have that, the number of recent updates to a project that has been around for a long time is probably going to steer you in the right direction more often than not.

scruple
0 replies
1d1h

I'm generally doing that to check for version compatibility across a much broader spectrum than the level of a single library.

pmichaud
0 replies
1d1h

Even though it’s not strictly true, checking for recent updates is an excellent heuristic. I don’t know the real numbers, but I feel confident that in the overwhelming majority of cases, no recent activity means “abandoned”, not “complete and bug free”.

pizzafeelsright
0 replies
23h17m

Good point. I have also seen Great Endeavor 0.7.1 stay there because the author gave up or graduated or got hired and the repo sits incomplete, lacking love and explanation for dismissal.

matheusmoreira
0 replies
14h39m

Last commit time is a pretty good indicator that the project has someone who still cares enough to regularly maintain it.

I have some projects I consider finished because they already do what I need them to do. If I really cared I'm sure I could find lots of things to improve. Last commit time being years ago is a pretty good indicator that I stopped caring and moved on. That's exactly what happened: my itch's already been scratched and I decided to work on something else because time is short.

I was once surprised to discover a laptop keyboard LED driver I published on GitHub years ago somehow acquired users. Another developer even built a GUI around it which is awesome. The truth is I just wanted to turn the lights off because when I turn the laptop on they default to extremely bright blue. I reverse engineered everything I could but as far as I'm concerned the project's finished. Last commit 4 years ago speaks volumes.

hw
0 replies
9h58m

It's extremely rare for projects to be considered stable for years without any updates. Unless there are no external dependencies and the code uses only very primitive or core language constructs, there are always updates to be had - security updates and EOLs are common examples. What works in Python 2 might not work in Python 3.

Software needs to be maintained. It is ever evolving. I am one of those that will not use a library that has not been updated in the last year, as I do not want to be stuck upgrading it to be compatible with Node 20 when Node 18 EOLs.

dr_kiszonka
0 replies
16h31m

That's good insight.

One disadvantage of archived repos is that you can't submit issues. For this reason it is hard to assess how bug free the package is. My favorite assessment metric is how long it takes the maintainer(s) to address issues and PRs (or at least post a reply). Sure, it is not perfect and we shouldn't expect all maintainers to be super responsive, but it usually works for me.

diggan
0 replies
1d

I remember seeing a bunch of graphs which showed how programming languages have changed over time, and how much of the original code is still there.

It showed that some languages were basically nothing like the 1.0 versions, while others had retained most of the code written and only stuff on top.

In the end, it seems to also be reflected in the community and ecosystem. I remember Clojure being close to/at the top of the list, as the language hardly does breaking changes anymore, so libraries that last changed 5 years ago still run perfectly well in the current version of the language.

I guess it helps that it's lisp-like as you can extend the core of the language without changing it upstream, which of course also comes with its own warts.

But one great change it made for me is that I stopped thinking that "freshness" equals "greatness". It's probably more common that I use libraries today that basically stopped changing some years ago than libraries created in the last year. And without major issues.

derefr
0 replies
22h40m

Depends on the language.

Some languages have releases every year or two where they will introduce some new, elegant syntax (or maybe a new stdlib ADT, etc) to replace some pattern that was frequent yet clumsy in code written in that language. The developer communities for these languages then usually pretty-much-instantly consider use of the new syntax to be "idiomatic", and any code that still does things the old, clumsy way to need fixing.

The argument for making the change to any particular codebase is often that, relative to the new syntax, the old approach makes things more opaque and harder to maintain / code-review. If the new syntax existed from the start, nobody would think the old approach was good code. So, for the sake of legibility to new developers, and to lower the barrier to entry to code contributions, the code should be updated to use the new syntax.

If a library is implemented in such a language, and yet it hasn't been updated in 3+ years, that's often a bad sign — a sign that the developer isn't "plugged into" the language's community enough to keep the library up-to-date as idiomatic code that other developers (many of whom might have just learned the language in its latest form from a modern resource) can easily read. And therefore that the developer maybe isn't interested in receiving external PRs.

RhodesianHunter
0 replies
20h35m

That's only true for libraries with zero transitive dependencies.

Otherwise you're almost guaranteed to be pulling in un-patched vulnerabilities.

MrDresden
0 replies
8h1m

It depends.

A heavily used library, gauged from download stats as reported by package repositories or GitHub star count, for example, with a low-to-zero open issue count (and even better, a high closed issue count), gives me a better feel for the state of a library than its frequency of updates.

EnigmaFlare
0 replies
10h2m

I chose a .NET library (ZedGraph) about 10 years ago, partly for the opposite reason. It was already known to be "finished", what you might call dead. It reliably does what I want, so I don't care about updates. I'm still using the same version today and never had to even think about updating or breakages or anything. It just keeps on working.

Mind you, it's a desktop application not exposed to the internet, so security is a little lower priority than normal.

jollyllama
19 replies
1d2h

This is why I am trying to switch as many projects I'm on as possible to HTMX. The churn involved with all of the frontend frameworks means that there's far too much update work needed after letting a project sit for N quarters.

mikewarot
18 replies
1d1h

I googled HTMX, all excited that maybe, just maybe, the browser people got their shit together and came up with a framework we can all live with, something native to the browser with a few new tags, and no other batteries required....

and was disappointed to find it's just a pile of other libraries 8(

dartos
8 replies
1d

Everything is a pile of libraries.

It’s a pile of someone else’s code all the way down.

diggan
6 replies
1d

You can also use the web platform straight up without transpilation, build tools, post-css compilation and all that jazz.

Just vanilla JavaScript, CSS, HTML, some sprinkles of WebComponents. And you can be pretty sure that you won't have to update that for a decade or more, as compatibility won't be broken in browsers.

Heck, I have vanilla JS projects I wrote 15 years ago that still render and work exactly like how they rendered/worked when I wrote them.

jollyllama
5 replies
23h36m

Indeed, that baggage is all that I avoid by using HTMX.

diggan
4 replies
22h57m

You do you. It's worth knowing though that using HTMX is not vanilla JS/HTML/CSS, it's literally the opposite of that.

kugelblitz
1 replies
20h28m

It's one small dependency. Worst case, you write the library yourself.

You send a request to the backend; it then sends you HTML back (all rendered on the backend using a templating language such as the Django templating engine, Twig, or Liquid), and you insert it into a div or so.

Htmx used to be Intercooler; worst case, you create your own. But no additional scripts are needed.

I've been able to kick Vue out because Htmx covers my use case.

diggan
0 replies
2h5m

It's one small dependency. Worst case, you write the library yourself.

Every abstraction comes with a cost :) I'm not saying it never makes sense to use React, Vue, Htmx or anything else. But that's beside the point of this conversation.

You send a request to the backend, it then sends you HTML back

You're just trading doing stuff in the frontend for doing stuff in the frontend + backend. Might make sense in a lot of cases, while not making sense in other cases.

dartos
1 replies
16h24m

Have you ever worked with just raw js?

Anything more than a todo list becomes unwieldy almost instantly.

Taking a small dependency to avoid that is well worth it.

Taking a whole “virtual dom” may be overkill though (looking at you, react)

diggan
0 replies
2h7m

Have you ever worked with just raw js?

Yes

Anything more than a todo list becomes unwieldy almost instantly.

That's not a fact, just your personal experience

Taking a small dependency to avoid that is well worth it.

Sometimes, yeah. Sometimes, no.

Taking a whole “virtual dom” may be overkill though (looking at you, react)

In most cases, probably yeah. React was created to solve a specific problem a specific company experienced, then the community took that solution and tried to put it everywhere. Results are bound to be "not optimal".

Tao3300
0 replies
18h51m

https://xkcd.com/2347/

And don't forget the alt-text

replwoacause
7 replies
23h40m

Nothing to be disappointed in here, AFAICT. However, it's shocking that you had to Google HTMX, seeing as it shows up on HN a few times a month at least.

diggan
6 replies
22h51m

I'm guessing the disappointed feeling comes from the parent saying "Pff, I'm so tired of all these libraries that eventually update their APIs in a breaking way, so now I'm using X" while X is just another library exactly like all the rest, and will surely introduce a breaking change or two down the line.

jollyllama
4 replies
22h7m

You're arguing from the abstract point of view, rather than the practical. The point is that it takes an order of magnitude more time to clone, say, a Vue project from three years ago that nobody has touched since then and try to download your dependencies and build on a new machine, as compared to an HTMX project.

uxp8u61q
3 replies
19h23m

As if "npm/yarn install" wouldn't work for the hypothetical Vue project? A charitable interpretation of what you're saying is that you cannot clone a vue project from three years ago, update all dependencies to the latest version, and expect that to work. But then, how is it different for HTMX, other than for the fact that 1. it's younger 2. you don't have the ecosystem around it to update - but that also means you're doing less or redoing everything yourself.

jollyllama
2 replies
1h51m

As if "npm/yarn install" wouldn't work for the hypothetical Vue project?

I'm not talking in hypotheticals. No, if you do this for a Vue project that hasn't been touched in a few years, it doesn't work. Upon cloning the source and running npm install, you'll run into loads of build errors between incompatible versions of npm dependencies, even after you've used nvm to switch back to an old version. A build process, especially one based on npm, intrinsically introduces a great amount of fragility to the project.

Yes, you pay for it by having to invent a lot of things yourself, but limiting the project to HTMX means you've just got one dependency to store and it'll work so long as you do that.

Back to the point of TFA: you can have a cold blooded project with a dependency to HTMX and one or two other JS libs. Once you introduce an npm build, you're squarely out of cold blooded territory due to the constant updates and maintenance required just to keep your build working.

uxp8u61q
1 replies
1h44m

Okay, go ahead. Show me a (serious) project that hasn't been touched in three years and that plain doesn't work if you install packages from the lock file. You made a claim, I said I was skeptical, and your only counterargument was... to reiterate your initial point without adding anything new. So, time for evidence.

jollyllama
0 replies
1h1m

It's a work project, so you'll have to take my word for it. I'm staring at the node-gyp errors right now.

stefs
0 replies
4h55m

HTMX is not _exactly_ like the rest. It's far simpler than the others, e.g. by not requiring a build step, being pure JS and just having a smaller scope overall. Hot/cold isn't binary.

One of the contributors to the project wrote about the issue here: https://htmx.org/essays/no-build-step/

recursivedoubts
0 replies
7h0m

htmx is written in vanilla javascript, has zero dependencies and can be included from source in a browser and just works

it doesn't introduce new tags (there are probably enough of those) but it does introduce new attributes that generalize the concept of hypermedia controls like anchors & forms (element, event, request type & placement of response/transclusion)

what do you mean?

tlhunter
5 replies
1d1h

I love the sentiment of this post. I absolutely hate that my recent mobile apps from only a couple of years ago now require a dozen hours to patch up and submit updates.

The author's final point is interesting, wherein they refer to their own static site generator as being cold-blooded and note that it runs on Python 2. Python 2 is getting harder to install these days, which will eventually make it a warm-blooded project.

ryandrake
4 replies
1d1h

I have a little hobby project (iOS and macOS) that I don't regularly develop anymore, but I use it quite often as a user, and I like to keep it compiling and running on the latest OSes. It's aggravating (and should be totally unacceptable) that every time I upgrade Xcode, I have a few odds and ends that need to be fixed in order for the project to compile cleanly and work. My recent git history comments are all variations of "Get project working on latest Xcode".

I could almost understand if these underlying SDK and OS changes had to be made due to security threats, but that's almost never the case. It's just stupid things like deprecating this API and adding that warning by default and "oh, now you need to use this framework instead of that one". Platforms and frameworks need to stop deliberately being moving targets, especially operating systems that are now very stable and reliable.

I should be able to pull a 10 year old project out of the freezer and have it compile cleanly and run just as it ran 10 years ago. These OS vendors are trillion dollar companies. I don't want to hear excuses about boo hoo how much engineering effort backward compatibility is.

dartos
2 replies
1d

Hardware changes over 10 years.

Macs don’t even run on the same CPU architecture or support OpenGL.

Sometimes things just need to change.

beambot
1 replies
1d

The worst is when your virtualization environments intended to provide long-term support don't even accommodate the "new" mainline hardware. Most frustrating example: VirtualBox doesn't work on Apple M1 or M2 chipsets.

genewitch
0 replies
16h34m

Why would it, though? QEMU (probably) works on "M" Macs. VirtualBox is intimately linked with the underlying hardware; it's a translation layer - even though it can do emulation, it's x86 emulating x86.

I always thought I was one of the few people that used VirtualBox instead of the more popular ones; I tend to forget that there's probably a subset of developers that still use it for the orchestration software that can use it.

angra_mainyu
0 replies
23h36m

Apple is notoriously bad when it comes to this.

I used to work on a cross-platform product and Windows was relatively stable across versions, as was Linux.

Macs on the other hand required a lot of branching for each version.

coreyp_1
5 replies
1d1h

This. This, so very much!

I built my websites on Drupal 7 and have enjoyed a decade of stability. Now, with D7 approaching EOL in 1 year, I'm looking for a solution that will last another decade. There's no reason for the EOL, either, other than people wanting to force everyone to move on to a newer version. It undoubtedly means more business for some people, as they will be able to reach out to their clients and say, "Your website is about to be a security risk, so you have to pay to update it!" Unfortunately, it means more work for me to support my personal projects.

And why? Because someone somewhere has decided that I should move on to something newer and more exciting. But I don't want new and exciting... I want rock solid!

I'm on vacation this week. Am I learning a new hot language like Rust, Zig, Go, etc.?

Nope.

I have no desire to. I don't trust them to be the same in a decade, anyway.

I'm focusing on C. It's far more enjoyable, and it's stable.

dartos
1 replies
1d

Enjoyable is subjective. I can’t think of anything less enjoyable than hunting for segfaults in C.

I'd call Go pretty rock solid at this point. Modern Go vs decade-old Go isn't very different. Maybe just the package tooling had one major change.

You'd get the same thing in C if your hardware significantly changed in those 10 years too.

coreyp_1
0 replies
22h59m

Haha... I agree about it being subjective! I find that I enjoy the process as much as the result. It's like bringing order to a chaotic universe. :)

The thing is, I don't have many segfaults in C, and I find C much easier to debug and hunt down issues in than even C++ (which I also enjoy). Also, because C uses very little "magic" and I know exactly what I'm getting with my code, I find it much easier to reason about.

I heard a quote the other day while watching a presentation "When you're young you want results, when you're old you want control." I think I'm on the old side now.

As for Go, I genuinely don't have anything against it, but I don't see why I need it either. I don't doubt that others have stellar use cases and impressive results with Go, and that's fine, too, but I don't sense any lack which prompts me to investigate further. I would love to learn more about it, but most of what I see online is either over-the-top (and therefore vomit-inducing) fanboyism, or otherwise unspectacular, which makes me ask "why bother?"

pxtail
0 replies
9h52m

why? Because someone somewhere has decided that I should move on to something newer and more exciting. But I don't want new and exciting... I want rock solid!

Well, it also could be because someone else decided to move on to something newer and more exciting instead of dutifully maintaining 10y old free software because someone WANTS to have peace of mind on their vacation.

People and companies don't want to pay for maintenance work. I think that this is actually the main reason for all of these complaints about the perceived short longevity of libraries and languages. Unfortunately, entropy is a bitch: one can put in a colossal amount of work up front, a pyramids-equivalent amount of work, but eventually decay will catch up.

flir
0 replies
20h30m

https://backdropcms.org/ ? D7 fork. If you want to stay there.

edg5000
0 replies
9h50m

How much interaction do your sites have? If you ran a little program locally that took the sitemap and generated a static site, then you would be immune for life from those security and maintenance arguments.

You can probably pin the PHP, SQL, and webserver versions, and compile them from source so that you will always have the binaries at hand. Then it will last another 1000 years.

However, if you need user interaction, then you are stuck in an eternal rat race of security updates and deprecation, leading to major upgrades, leading to more security updates!

languagehacker
4 replies
1d3h

Cold-blooded software seems like a great idea in spaces where the security risk and business impact are low. I can think of a lot of great hobbyist uses for this approach, like a handmade appliance with Arduino or Raspberry Pi.

The ever-evolving threat landscape at both the OS and application level makes this unviable for projects with any amount of money or sensitivity behind them. Imagine needing to handle an OS-level update and learning that you can no longer run Python 2 on the box you're running that project on. Fine for a blog, but calamitous for anything that handles financial transactions.

kardianos
1 replies
1d3h

If you are a bank, a store, or handle PHI, you will have contractual obligations to maintain it. However, I still think that can be "cold-blooded" maintenance. When I update a Go project after running `govulncheck ./...`, it is generally easy. I vendor; builds and runtime only rely on systems I control.

apantel
0 replies
20h14m

Many large companies and business like banks and manufacturers run legacy code in ancient runtimes. The projects can be so frozen in time that nobody has the courage to touch them.

joshuaissac
0 replies
1d2h

Cold-blooded software seems like a great idea in spaces where the security risk and business impact are low. I can think of a lot of great hobbyist uses for this approach, like a handmade appliance with Arduino or Raspberry Pi.

I think it would be the other way around. A low-impact hobby project can use exciting, fast-moving technology because if it breaks, there is not so much damage (move fast and break things). But something with high business impact should use boring, tried-and-tested technologies and no external network dependencies (e.g. a package being available in a third-party repository at compile time or runtime). For something like that, the OS updates (on the LTS branch if Linux) would be planned well ahead, and there would be no surprises like the Python 2 interpreter suddenly breaking.

chuckadams
0 replies
1d1h

Meh, just keep a container around with py2 in it, maybe just containerize the whole app. The ultimate in vendored dependencies, short of a whole VM image.

kugelblitz
4 replies
20h16m

I've been maintaining my own side project. It started 12-13 years ago with vanilla PHP, was later rewritten with Laravel, and later rewritten again with Symfony in 2017-ish. Since then I've had phases of 6-18 months where I made a total of 2-3 tiny commits (I was working full time as a freelancer, so I didn't have the energy to work on my side project). But then when I had time, I would focus on it, add features, upgrade, and just experiment and learn.

This was super valuable to me to learn how to maintain projects long-term: Update dependencies, remove stuff you don't need, check for security updates, find chances to simplify (e.g. from Vagrant to Docker... or from Vue + Axios + Webpack + other stuff to Htmx). And what to avoid... for me it was to avoid freshly developed dependencies, microservices, complexified infrastructure such as Kubernetes.

And now I just developed a bunch of features, upgraded to PHP 8.2 and Symfony 7 (released a month ago), integrated some ChatGPT-based features and can hopefully relax for 1-3 years if I wanted to.

In the last 4-5 years the project has made about the same revenue as an average freelance year's revenue, so it's not some dormant unknown side project.

punkybr3wster
1 replies
18h15m

As a person who’s considered learning more native Symfony, can I ask - what was your reason for moving to it from something like Laravel?

kugelblitz
0 replies
16h45m

Laravel was easier to get into, but once you strayed from "The Laravel Way", it got quite messy.

I got into Symfony by "accident", because a freelance colleague put me on projects that used Symfony. So for a couple of years I used Laravel and Symfony in parallel, but after a few years I decided to go full Symfony.

Some of the things that were better for my use case:

Many of the Laravel components are "Laravel only", whereas in Symfony you can just pick and choose the components you need - it's very modular and extensible without forcing your hand. You don't even need the Symfony framework; you can just use the components you want.

That's how Laravel can depend on Symfony modules; but Symfony can't depend on Laravel modules.

Entities vs. Models (Data Mapper vs. Active Record): The entities in Symfony (equivalent to Models in Laravel) are just simple PHP objects. I can see what properties an entity has and configure them directly there in a simple way. I can add my own functions, edit the constructor, etc. Also: you create the properties, and the migrations are generated based on that. In Laravel, you create the migrations, and the actual model is based on going through the migration steps. This just feels odd to me.
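
To illustrate (just a rough sketch, the class and field names are made up), an entity is a plain PHP class with Doctrine mapping attributes on it:

  use Doctrine\ORM\Mapping as ORM;

  #[ORM\Entity]
  class JobPosting
  {
      #[ORM\Id]
      #[ORM\GeneratedValue]
      #[ORM\Column]
      private ?int $id = null;

      #[ORM\Column(length: 255)]
      private string $title;

      #[ORM\Column]
      private \DateTimeImmutable $publishedAt;

      // Plain object, so domain logic can live right on it
      public function isFresh(): bool
      {
          return $this->publishedAt > new \DateTimeImmutable('-30 days');
      }
  }
You define the properties there, and then something like `php bin/console make:migration` diffs that mapping against the database schema and writes the migration for you, not the other way around.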

In Laravel, the Models extend the Eloquent Model class, and it feels "heavier"; I had more trouble re-configuring some things. Plus, without using an additional "auto-complete" generator, I couldn't just see what the properties / columns of the model / table were.

I also don't like Facades (because they hide too much stuff and I have trouble figuring out the service that it actually represents).

Templating: I also like that Twig is more restrictive; it forces me to think more about separating logic from the view, whereas Blade allows way more. You don't have to use those extra capabilities, but I reckon that since they're allowed, people will.

One thing I still envy from Laravel, though, is the testing suite.

This is pretty neat:

    $response = $this->getJson('/users/1');
 
    $response
        ->assertJson(fn (AssertableJson $json) =>
            $json->where('id', 1)
                 ->where('name', 'Victoria Faith')
                 ->where('email', fn (string $email) => str($email)->is('victoria@gmail.com'))
                 ->whereNot('status', 'pending')
                 ->missing('password')
                 ->etc()
        );
I tried integrating it in Symfony, but it was quite messy and somewhat incompatible. That shows the above point, that it's "Laravel only". It's really nice, but not enough for me to advocate for Laravel over Symfony.

Aeolun
1 replies
19h39m

I think PHP, as horrible as it feels to go back, is one example of something that’s truly backwards compatible even to its own detriment.

Haven’t worked with it for years, went back to find that the horrible image manipulation functions are still the same mess that I left behind 8 years ago.
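
For example (from memory, so treat it as a sketch), the same procedural GD calls from the PHP 4/5 days still run unchanged on PHP 8, warts and all, assuming the gd extension is enabled:

  // Same old resource-style GD API: raw coordinates and integer color identifiers everywhere
  $img = imagecreatetruecolor(200, 100);
  $bg  = imagecolorallocate($img, 255, 255, 255);
  $red = imagecolorallocate($img, 200, 0, 0);
  imagefilledrectangle($img, 0, 0, 199, 99, $bg);
  imagestring($img, 5, 10, 40, 'still works', $red);
  imagepng($img, 'out.png');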

kugelblitz
0 replies
18h22m

Yeah, some things are still a mess, but many things I use constantly have improved so much. Here is an excerpt of a handler class that shows many of the updates I use regularly:

  #[AsMessageHandler]
  readonly class JobEditedHandler
  {
      public function __construct(
          private Environment $twig,
          private EmailService $mailer,
          private string $vatRate,
      ) {}

      public function __invoke(JobEdited $jobEdited): void
      {
          $this->sendNotificationToJobPublisher($jobEdited);
      }
You have attributes, much better type-hinting, constructor property promotion, read-only properties / classes. Additionally you have native Enums, named arguments, and also smaller things such as match expressions (instead of switch/case), the array spread operator, the null coalescing assignment operator, etc.
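
A made-up snippet just to show the shape of some of that newer syntax (the names are hypothetical):

  enum JobStatus: string
  {
      case Draft = 'draft';
      case Published = 'published';
      case Archived = 'archived';
  }

  // Match expression instead of switch/case
  function badgeColor(JobStatus $status): string
  {
      return match ($status) {
          JobStatus::Draft => 'grey',
          JobStatus::Published => 'green',
          JobStatus::Archived => 'red',
      };
  }

  $options = ['locale' => 'de'];
  $options['currency'] ??= 'EUR';                    // null coalescing assignment
  $badge = badgeColor(status: JobStatus::Published); // named argument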

Especially in a CRUD-heavy setting like mine (I run a niche job board), it reduces so much boilerplate and increases type-safety, which makes it way less error-prone. Combined with new static analyzers (phpstan, php-cs-fixer, psalm - take your pick), you find possible errors way earlier now.

I think it gets a lot of inspiration from Java. Symfony gets lots of inspiration from Spring Boot. The Twig templating language is heavily related to the Django templating language. So many of the tools and concepts are somewhat battle-tested.

And this is on top of the huge performance improvements in the last years.

So yeah, there's many things that are still fixable. But the improvements have been staggering.

__natty__
4 replies
20h2m

In the Node & JavaScript ecosystem, there is the web framework Express. The current major version 4.x.x branch is over 10 years old [1]. And yet it powers so many apps in the ecosystem (over 17M downloads every week [2]). It lacks some features and is not the most performant [3]. But my coworkers and I like it because it allows for quick, stable development and long-term planning without worrying about drastic API changes or a lack of security patches for older versions. Even better stability is provided by Go, where we can run programs that are over 10 years old thanks to a mix of a wide stdlib and the compatibility promise. [4]

[1] https://www.npmjs.com/package/express?activeTab=versions

[2] https://www.npmjs.com/package/express

[3] https://fastify.dev/benchmarks/

[4] https://go.dev/doc/go1compat

rareitem
2 replies
14h36m

Express v5 is allegedly coming out in the near future (https://github.com/expressjs/express/issues/4920)

pcthrowaway
0 replies
6h27m

And that's been in production for over 8 years https://github.com/expressjs/express/issues/2755

erikpukinskis
0 replies
3h59m

I find the Express 5 situation hilariously wonderful.

It’s basically done. Any other project would’ve slapped a major release on it and fixed any issues that came up in a patch. Everyone who is using it says it works great.

But the maintainer won’t release it because they don’t feel it’s gotten enough testing. So they’re just waiting. And no one really cares because Express 4 is great and works fine.

It’s a beautiful example of mature software development.

steve_adams_86
0 replies
14h47m

This gave me a pleasant shock. I'd forgotten that Express has been around for 13 years now. It was considered a super shoddy, pretend-programmer piece of junk by many when it first arrived (largely by virtue of being written in JavaScript). Since then I've helped a lot of companies build cool stuff that made real money with it. It's probably serving a crazy number of requests these days.

I write a lot of things with Go now instead, but I'm still totally content to build things with Express. It's good software, generally speaking.

082349872349872
4 replies
1d3h

I suspect our differences in preferences for cold- vs warm-blooded projects may be related to the "Buxton Index" as mentioned in https://www.cs.utexas.edu/users/EWD/transcriptions/EWD11xx/E...

chrisweekly
3 replies
1d1h

Curious, I read the linked transcript to find:

"My third remark introduces you to the Buxton Index, so named after its inventor, Professor John Buxton, at the time at Warwick University. The Buxton Index of an entity, i.e. person or organization, is defined as the length of the period, measured in years, over which the entity makes its plans. For the little grocery shop around the corner it is about 1/2,for the true Christian it is infinity, and for most other entities it is in between: about 4 for the average politician who aims at his re-election, slightly more for most industries, but much less for the managers who have to write quarterly reports. The Buxton Index is an important concept because close co-operation between entities with very different Buxton Indices invariably fails and leads to moral complaints about the partner. The party with the smaller Buxton Index is accused of being superficial and short-sighted, while the party with the larger Buxton Index is accused of neglect of duty, of backing out of its responsibility, of freewheeling, etc.. In addition, each party accuses the other one of being stupid. The great advantage of the Buxton Index is that, as a simple numerical notion, it is morally neutral and lifts the difference above the plane of moral concerns. The Buxton Index is important to bear in mind when considering academic/industrial co-operation."

apantel
1 replies
1d

Great concept, but isn’t it just “time horizon”? Everyone knows “time horizon”.

Jtsummers
0 replies
1d

Not everyone knows it, strangely; many of the (senior or junior) project management types I work with have to be introduced to the term and concept (and if they listen, it can at least resolve confusion, if not conflict, about the different priorities and behaviors of all the parties involved). But yes, they describe the same thing.

salawat
0 replies
1d1h

Holy shit. I am so using this as a communication clarifying tool. Nice concept.

js8
3 replies
19h35m

I work on IBM mainframe (z/OS). Nothing else I know comes as close to maintaining backwards compatibility as IBM. Microsoft (Windows) is 2nd, I think. The Linux (kernel) ABI takes 3rd place, but that's only a small portion of the Linux ecosystem.

Almost everything else, it's just churn. In OSS this is common, I guess nobody wants to spend time on backward compatibility as a hobby. From an economic perspective, it looks like a prisoner's dilemma - everybody externalizes the cost of maintaining compatibility onto others, collectively creating more useless work for everybody.

userbinator
1 replies
18h22m

In OSS this is common, I guess nobody wants to spend time on backward compatibility as a hobby.

There's a lot of chasing new and shiny in OSS but I wouldn't say that applies to everyone... just look at the entire retrocomputing community, for example. Writing drivers for newer hardware to work on older OSes is not unheard of.

js8
0 replies
9h5m

These are amazing people, and I like what they do, but they are still chasing the churn of newer hardware, which also introduces incompatible APIs. The incompatible APIs are often introduced commercially for business and not technical reasons, either out of ignorance, legal worries or in order to gain a market advantage.

matheusmoreira
0 replies
15h10m

I guess nobody wants to spend time on backward compatibility as a hobby.

Getting paid to maintain something certainly goes a long way. Without payment, I suppose it comes down to how much one cares about the platform being built. I deliberately chose to target the Linux kernel directly via system calls because of their proven commitment to ABI stability.

On the other hand, I made my own programming language and I really want to make it as "perfect" as possible, to get it just right... So I put a notice in the README that explains it's in early stages of development and unstable, just in case someone's crazy enough to use it. I have no doubt the people who work on languages like Ruby and Python feel the same way... The languages are probably like a baby to them, they want people to like it, they want it to succeed, they just generally care a lot about it. And that's why mistakes like print being a keyword just have to be fixed.

armchairhacker
2 replies
1d4h

Counterpoint: some types of software aren’t meant to last long. Even if it still builds and can be worked on later, the use case itself may have changed or disappeared, or someone has probably come up with a new, better version, so it’s no longer worth it to continue.

This probably doesn’t apply to many types of software over 6 months, but it does over a couple of years or a couple of decades. Some online services like CI or package managers will almost certainly provide backwards-compatible service until then.

Another possibility is that developer efficiency improves so much that the code written 10 years ago is easier to completely rewrite today, than it is to maintain and extend.

This is why I’m hesitant to think about software lasting decades, because tech changes so fast it’s hard to know what the next decade will look like. My hope is that in a few years, LLMs and/or better developer tools will make code more flexible, so that it’s very easy to upgrade legacy code and fix imperfect code.

iamthepieman
1 replies
1d

"Another possibility is that developer efficiency improves so much that the code written 10 years ago is easier to completely rewrite today, than it is to maintain and extend."

This seems completely false to me, and I'm curious what has caused you to believe it. I'm a fairly imaginative and creative person, yet I cannot imagine a set of circumstances that would lead someone to this conclusion.

In other words, I disagree so very strongly with that statement that I wanted to engage rather than just downvote. (I didn't btw).

I agree with your first statement though and I don't think the op is saying only make cold-blooded projects.

dartos
0 replies
1d

Well, I don’t know if I agree or not, but I felt like playing devil’s advocate.

Take the example of game development. Trying to maintain, say, The Hobbit game from the early 2000s to today would almost certainly take more work than just making a new one from scratch today (GPUs have changed drastically over the past 20 years, and making simple 3D platformers with Unreal is so easy that “asset flips” are a new kind of scam).

Or a tool which lets people visually communicate over vast distances without specialized hardware.

That was a huge lift in the 2000s when Skype was the only major player, but you can find tutorials for it now using webrtc.

sowbug
1 replies
1d1h

A link to this article would be an effective curt reply to the "is this project dead?" GitHub issues that have been known to enrage and discourage cold-blooded project owners.

bee_rider
0 replies
21h18m

I wonder if GitHub is a bad fit for cold blooded projects? It has social media elements, I’d expect lots of extra chatter and “engagement.”

ranger207
1 replies
13h20m

The only software that can go without updates is software that gets it right the first time. If you're building software for yourself, this is relatively easy. Your tastes probably won't change that much even after a decade. You can probably ignore minor problems like using an O(n^2) function where an O(n) one exists, because n is small. If you're writing software that other people will use, that's where the problems come in. Other people don't have the same requirements as you, and may have a large enough n that the O(n) function makes it worth it, for example.

But regardless of if you're writing for yourself or someone else, sometimes you just can't foresee problems. Maybe it crashes processing files larger than a gig, but because you've only ever used files <100KB it's never mattered to you. Then you go in to fix the crash and it turns out you're going to have to rewrite half the thing.

This is, I think, the biggest argument against the idea that software that doesn't change is inherently better than software that changes frequently[0]: it may be that unchanging software was perfect from the first line, or it may be that there are terrors lurking in the deep, and a priori it can be difficult to tell which a particular project is.

[0] This is not to say that rapidly updating software is inherently better than slowly updating software either. There are many factors other than just update speed.

fauigerzigerk
0 replies
9h15m

I don't think the idea is that software should never change. If requirements change then software obviously has to change as well.

But over the course of 10 years a lot of things can change that have nothing to do with changing requirements.

Open source projects are abandoned or change direction. Commercial software gets discontinued. Companies get acquired. App Store / Play Store rules change. APIs go away or change pricing in ways that render projects economically unviable. Toolchains, frameworks and programming languages, paradigms and best practices change.

I think the point is that you don't want external changes that are unrelated to your requirements to force change on you. It's a good principle but as always there are trade-offs.

There is stable and then there is obsolete. The difference is often security.

And what if an important new requirement is easy to meet, but only if you bump a vendored library by seven major versions causing all sorts of unrelated breakage?

What if there aren't enough people left who are familiar with your frozen-in-time toolset and nobody wants to learn it any longer?

I think careful and even conservative selection of dependencies is a good idea, but not keeping up with changes in that hopefully small set of dependencies is one step too far for me.

layer8
1 replies
1d1h

I’m glad this isn’t about ruthless software.

jujube3
0 replies
13h57m

Look at that subtle off-white colouring of the project landing page.

The tasteful thickness of it.

Oh my God, it even has a watermark!

kirillrogovoy
1 replies
6h51m

One interesting thing about learning Elixir and its (+Erlang) ecosystem after 5+ years of JS/TS is that half of the most popular libraries seemed abandoned.

However, when you look closer, it turns out that most of them just finished the work within their scope, fixed all reproducible bugs, and there's just not much left to do.

If there's a JS dependency with the last commit in 2022, it probably won't even build. (I'm half-joking of course, but only half)

sergiomattei
0 replies
6h41m

I noticed this too and I’m still getting used to it.

jes5199
1 replies
13h35m

I’ve got a similar one, yet to be written, about “cold computing”. How do you compute if you’re on a limited solar+battery installation? what if your CPU wakes up and you have only a couple of hours of runtime? What if you only can turn on wifi for 20 minutes a day?

g8oz
0 replies
12h50m

This sounds very interesting, please do write it up.

d_burfoot
1 replies
19h43m

This essay showcased an excellent writing technique: at the outset, I had no idea what the title meant. But at the conclusion, it made perfect sense.

lioeters
0 replies
3h51m

I also liked that it started with a little story about the frozen turtle slowly coming back to life.

blastbking
1 replies
22h12m

I had this experience making an iOS game. After a few years of making the game, I went back to it, and found that I was unable to get it to compile. I guess iOS games are very warm blooded. Perhaps if I had stuck with a desktop platform or web it would have remained fine? Not entirely sure.

yellow_lead
0 replies
22h1m

Mobile in general is this way. For instance, on Android, if your app isn't targeting a high enough SDK version, Google will remove it after some time. If you have to upgrade your target SDK, you may find many libraries are broken (or not supported), and it can also lead to other cascades of upgrades, like having to upgrade Gradle or the NDK if you use it.

aeternum
1 replies
21h17m

What a terrible name for this. Cold-blooded animals are highly dependent on their environment, whereas warm-blooded animals' bodies eliminate the dependency on external temperature via metabolism.

In any case, it's unnecessarily ambiguous. Why not simply say 'software without external dependencies' and eliminate the paragraphs of meandering explanation?

littleroot
0 replies
11h21m

This is literally the only reply that hits the core of the article's problem, and of course no one on this site upvoted it, lol. The only thing I dislike more than software development posts that use an inappropriate analogy from nature to shallowly jump to a conclusion is software development posts that use an inappropriate analogy from nature to shallowly jump to a conclusion with an absolutely flawed understanding of the supposedly analogous natural phenomenon.

And of course, painted turtles (among a few other species) can survive being frozen not because of their cold-bloodedness, but thanks to special antifreeze proteins they have. Other reptiles (and other cold-blooded animals, for that matter) would just rupture their own tissues upon thawing.

zubairq
0 replies
11h30m

I had to read this article a couple of times before I got it. I guess dependencies can make an app warm-blooded, but Docker or containerization can also paper over some of these issues. However, whenever I choose libraries for a project I do a lot of research to make sure that the libraries themselves are "cold-blooded" too, as even one badly chosen library can cause your project to fail in 10 years' time.

userbinator
0 replies
18h17m

I have some Windows binaries from the mid 90s that I still use today. Mainly small utilities for various calculations/conversions, filesystem organisation, and the like.

talentedcoin
0 replies
15h27m

Using Python 2 in 2023 for a new project is a crime

synack
0 replies
16h4m

If you limit your dependencies to what’s available in your distro’s LTS or stable release, breaking changes are much less common. Living on the bleeding edge has a cost.

slaymaker1907
0 replies
23h34m

Besides what is stated in the article, it is also important to have an inherently secure threat model. For example, full websites are inherently warm-blooded since you are constantly dealing with attackers, spam bots, etc. However, static pages like Tiddlywiki are a lot better since you can avoid putting it on the web at all and browsers are incredibly stable platforms.

secwang
0 replies
13h15m

C, Common Lisp, APL - all of these have a solid history of backward compatibility.

ryanar
0 replies
1d2h

I really appreciate this idea after rewriting my blog engine three times because the frameworks I was using (Next, Remix) had fundamental changes after a year and I was multiple major versions behind. Though it depends on what you are after. If the goal is to be able to blog, time spent upgrading and rewriting code because the framework is evolving is wasted time unless you want to stay up to date with that framework. Think about how we view physical goods today, they aren’t built to last. In certain situations, like a personal blog, you want reliable software that works for years without the need to change. It also helps to have software that uses common data formats that are exportable to another system, like a blog based in markdown files, rather than JSX.

pqwEfkvjs
0 replies
4h49m

hot bloodedness ~= num dependencies

petercooper
0 replies
22h0m

I think Go's backward compatibility promise – https://go.dev/blog/compat – would make much Go software 'cold blooded' by this definition (so long as you vendor dependencies!)

oooyay
0 replies
20h5m

This got me thinking about whether any of my side projects or work projects that are in maintenance mode could qualify as "cold-blooded". Conceptually, they can - I have many projects written in Go, TypeScript, and Python where I could cache my dependencies (or at least their SHAs) and do what this is implying.

The problem is that it stops being useful beyond proving the concept. In reality, all my projects have a slow churn that usually has to do with vulnerability updates. Maybe more aptly put: "Can I take this Go repository off the shelf, rebuild the binary, and let it run?" The answer is of course - assuming HTML and web standards haven't changed too much. The problem is that then some old vulnerability could be immediately used against it. The assumption I also made, that HTML and web standards haven't changed too much, will almost assuredly be false. They may not have changed enough to be breaking, but they'll certainly have changed to some degree; the same can be said for anyone who has developed desktop applications for any OS. The one constant is change. Either side of that coin seems to be a losing proposition.

kageiit
0 replies
1d1h

For many use cases, cold-blooded software is not viable. We need better tools to automate and remove the tedium involved in upgrading dependencies or modernizing codebases to protect against ever evolving threats and adapt to changes in the ecosystem

johnnyworker
0 replies
16h46m

I kept making CMSes as a hobby, starting with flat files and PHP, moving to MySQL.. simple things. I did it precisely because I figured that if I modified and wrote plugins for WordPress, I would have to keep updating them on their schedule. Especially since even back then I really liked removing things I don't want, and while carrying over my additions to new versions might be easy enough, maintaining a stripped-down version of something like WordPress (even 20 years ago..) would have been impossible.

I felt like a stubborn dumb ass in the early 2000s (and there was also this constant mockery of "NIH syndrome" in the air) but by now, I'm so glad I basically disregarded a lot of stuff and just made my own things out of the basics. And coincidentally, the last one I made has also lasted me over 12 years by now. I still love it actually, it's just the code that is terrible. So I started a new one, to fix all the mistakes of the previous one, which mostly is cutting less corners because now I know that I'll use this for way longer than I can reasonably estimate right now, so I try to be kind(er) to future me.

(But I'll also make fascinating new mistakes, because I decided to de-duplicate more or less everything at the DB level, on a whim, without prior experience or reading up on it. And then I'll write some monstrosity to pipe 12 years of content from the old CMS into the new one, and I will not break a single link even though nobody would really care. Just because I can.)

iqandjoke
0 replies
23h3m

How about security?

imglorp
0 replies
19h25m

Worth mentioning the Hare language, designed to be stable for 100 years. After they release 1.0 they don't plan to change it beyond fixes. It's Drew DeVault's project.

https://harelang.org/roadmap/

hiAndrewQuinn
0 replies
1d

Most of the software I write is at least somewhat cold-blooded by this definition. My program to find the dictionary forms of Finnish words is an okay example:

https://github.com/hiAndrewQuinn/finstem

I wrote the initial draft in an afternoon almost a year ago, and from then on endeavored to only make changes which I know play nicely with my local software ecology. I usually have `fzf` installed, so an interactive mode comes as a shell script. I usually have `csvkit`, `jq`, and if all else fails `awk` installed, so my last major update was to include flags for CSV, JSON, and TSV output respectively. Etc, etc.

The build instructions intentionally eschew anything like Poetry and just give you the shell commands I would run on a fresh Ubuntu VirtualBox VM. I hand-test it every couple of months in this environment. If the need to Dockerize it ever arose, I'm sure it would be straightforward, in part because the shell commands themselves are straightforward.

I don't call it a great example because the CLI library I use could potentially change. Still, I've endeavored to stick to only relatively mature offerings.

ganzuul
0 replies
22h54m

https://en.wikipedia.org/wiki/Unix_philosophy

Seems related. Tools built like this which still need constant updating must have a foundation of sand.

csdvrx
0 replies
23h59m

I follow a similar approach but maybe more extreme : whenever possible, I use "YESTERDAY'S TECHNOLOGY TOMORROW"

It's nicely presented on http://itre.cis.upenn.edu/~myl/languagelog/archives/000606.h...

I want yesterday's technology tomorrow. I want old things that have stood the test of time and are designed to last so that I will still be able to use them tomorrow. I don't want tomorrow's untested and bug-ridden ideas for fancy new junk made available today because although they're not ready for prime time the company has to hustle them out because it's been six months since the last big new product announcement. Call me old-fashioned, but I want stuff that works.

The same thing is true with free software: I prefer to use the terminal. In the terminal, I prefer to run bash and vim, not zsh and neovim.

When I write code, I've found C (and perl!) to be preferable, because "You can freeze it for a year and then pick it back up right where you left off."

There are rare exceptions, when what's new is so much better than the previous solution (ex: Wayland) that it makes sense to move.

However, that should be rare, and you should be very sure. If you think you made the wrong choice, you can always move back to your previous choice: after playing with ZFS for a few years, I'm moving some volumes back to NTFS.

Someone mentioned how the author's choice (Python 2) is getting harder to install. Cold-blooded software works best when done with multiplatform standards, so I'd suggest the author do the bare minimum amount of fixes necessary to run with https://cosmo.zip/pub/cosmos/bin/python and call it a day.

With self-contained APEs and the eventual emulator when, say, 20 years from now we move to RISC-V, you don't have to bother about dependencies, updates or other forms of breakage: compile once in APE form (statically linked for Windows/Linux/BSD/MacOS) and it will run forever by piggybacking on the popularity of the once-popular platform.

Wine lets you run Windows 95 binaries about 30 years later: I'd bet that Wine + the Windows part of the APE will keep running long after the kernel breaks the ABI.

aranchelk
0 replies
22h37m

In my mind this is a lot more about tooling and platform than language, library, architecture, etc.

I have a project that’s quite complicated and built on fast-moving tech, but with every element of the build locked down and committed in SCM: Dockerfiles, package sets, etc.

Alternatively, one of my older projects uses very stable slow-moving tech. I never took the time to containerize and codify the dependencies. It runs as an appliance and is such a mess that it’s cheaper to buy duplicates of the original machine that it ran on and clone the old hard drive rather than do fresh installs.

DeathArrow
0 replies
9h55m

Cold blooded: random utility written in C 40 years ago.

Warm blooded: random app using random Javascript framework, written 6 months ago.

ChrisMarshallNY
0 replies
19h46m

I wrote an SDK, in 1994-95, that was still in use, when I left the company, in 2017.

It was a device control interface layer, and was written in vanilla ANSI C. Back when I wrote it, there wasn't a common linker, so the only way to have a binary interface was to use simple C.

I have written stuff in PHP (5), that still works great, in PHP 8.2. Some of that stuff is actually fairly ambitious.

But it's boring, and has a low buzzword index.