Worth noting that there are only two active "core" devs, Maxim Dounin (the OP) and Roman Arutyunyan. Maxim is the biggest contributor that is still active. Maxim and Roman account for basically 99% of current development.
So this is a pretty impactful fork. It's not like one of 8 core devs or something. This is 50% of the team.
Edit: Just noticed Sergey Kandaurov isn't listed on GitHub "contributors" because he doesn't have a GitHub account (my bad). So it's more like 33% of the team. Previous releases have been tagged by Maxim, but the latest (today's 1.25.4) was tagged by Sergey.
It is scary to think about how much of the web relies on projects maintained by 1 or 2 people.
Not that scary when you remember there are some systems that haven't been significantly updated for decades (e.g. the Linux TTY interface). A lot of stuff can just coast indefinitely; you'll get quirks, but people will find workarounds. Also, this is kind of why everything is ever so slightly broken, IMHO.
OTOH, things that update too often seem to be more than slightly broken on an ongoing basis, due to ill-advised design changes, new bugs and regressions, etc.
I think that with things that don't update often, we just get used to the broken parts. People learned to save every five minutes in Maya since the app crashes so often, for example. Every now and then, a PuTTY session will fill the screen with "PuTTYPuTTYPuTTYPuTTYPuTTY[...]" but it's been that way for at least 20 years, so it's not that remarkable.
tangent but i haven't seen that happen on any of my putty clients in years and i use it every day, so i think that finally got fixed? or maybe it was a side effect of something stupid
next question: why are people still using putty
Because Windows does not have a good SSH implementation and PuTTY has always worked extremely well for me as a serial and SSH terminal (also, it starts up instantly and never crashed on me).
Are there any better alternatives?
Many people I know just use SSH from the WSL CLI.
That's a very limited terminal in terms of capabilities.
Then there are things like X11-style copy-paste.
You can run SSH from a Windows terminal without even having the WSL installed...
PuTTY is from before WSL, and old habits die hard.
I like having a library of hosts to choose from and maybe multiple tabs in one place, and although there are some slightly less cumbersome PuTTY frontends like KiTTY (please keep your expectations very very low), I'd rather use WinSCP (no quantum leap in usability either). Edit: to those suggesting the W10 command line - yes, it's there and works, but it's just that, a command line, not much help when you have dozens of servers.
Windows 10 natively supports SSH as far as I can tell. I don't use it a ton, but I haven't had any issues just typing ssh username@domain
I used to use KiTTY, because it is more versatile.
Doesn't Windows ship OpenSSH these days?
PuTTY met my needs in 2004 and my needs haven't changed. It still works as well in 2024.
I'm not 100% sure when I started using PuTTY, but I definitely used it in 2004. I still need an SSH client and terminal emulator for Windows. I still don't want to install a Unix-like environment just to have a terminal. I still don't want tabs in my terminal; lots of windows works just fine. I still need X11 forwarding so I can run programs on remote systems and display them on Windows (VcXsrv is an easier X server to get going than others I've used on Windows).
I might like to have something that can do whatever magic so I can do gcloud and aws auth on my remote machine without cutting and pasting giant URLs and auth blobs to and fro all the time; but I'm using an auth token that needs to stay connected to the Windows machine. In a more integrated corp environment this would probably be Kerberos/Active Directory/magic?
The difference in 2024 is that Windows ships an OpenSSH client and server as a built-in optional component, and it also ships a workable terminal emulator. No WSL needed in either case.
(But yeah I'm still using putty, too)
Microsoft stopped shipping HyperTerminal, last I checked. It wasn't really worth the effort to make it do SSH.
I'm not really a fan of cmd or powershell, although I guess I could use them in a pinch. Wouldn't look like what I'm used to though. :p
same. if i want a term, it's putty. windows shell and builtin ssh is a backup for when i am working from a foreign system
Why shouldn't people use putty?
I still use PuTTY because it does what I need it to do. No need to change just because MS has their own terminal application, which, besides, I'm far from trusting.
You trust them to run the entire OS and every stack included in it, but not to make an ssh client?
There's trust in the security sense, where, yeah, you're stuck with the whole deal.
But there's also trust in the "rely on" sense, which at least I try to compartmentalize. I can trust Microsoft (or Google) to make an OS I can rely on to run other people's apps. If Microsoft or Google want to provide apps, they'll be evaluated as they are, not with a bias because the OS provider shipped them.
It's great for serial and raw connections on Windows.
They're used to it, tutorials online recommend it, admins install it out of inertia, some places have old Windows versions, etc.
The "PuTTY" string is because a program sent it ^E: https://the.earth.li/~sgtatham/putty/0.67/htmldoc/Chapter4.h...
When I was in Systems/Linux Operations, you wouldn't believe how many tickets from other internal teams we supported that said "Putty is down" in the title. It made me chuckle every single time.
The problem with bug-filled software that updates often is usually that they don't release changes fast enough, ironically.
Apple routinely holds back changes for a .0 release for advertising reasons. This means that they routinely have big releases that break everything at once. Bugs could come from 4 or 5 different sets of changes. But if they spread out changes… bug sources would be much easier to identify.
And bug fix velocity going up could mean people stop treading water on bugs, and actually get to making changes to avoid entire classes of bugs!
Instead, people think the way to avoid bugs is to avoid updates, or do them all at once. This leads to iOS .0 releases being garbage, users of non-rolling-release Linux distros having bugs in their software that were fixed upstream years ago, and ultimately makes it harder to actually fix bugs.
This means they should either push updates quickly on an ongoing basis, or not push them at all and provide service packs at regular intervals like Windows XP and 7 used to do.
As a user, my problem is that I receive functional or design changes that I didn't want and that make the software worse for me. So I tend to avoid updates; e.g. the last time I updated Android was for that WebP CVE. Otherwise I just want it to stay the way it was when I bought it, not how some new product designer wants to make it to show their "impact". Especially when it's things like "we're going to silently uninstall your apps (Google) and/or delete your files (Apple) and add nag screens when you turn off our malware (Google again) or add ads (Microsoft)".
I do regularly install updates on my (Linux) desktop/laptop because guess what? It consistently works exactly the same afterward. Occasionally new formats like JXL images just start working everywhere or something. But otherwise it has just continued to work, unchanging, with no fanfare for the last decade or so. It's amazing to me how much higher quality volunteer software is, in that way, compared to commercial software.
If you want to move fast, you must accept that things break.
If you want things not to break, you must slow down.
It isn’t reasonable to ask for these two things at once:
* lots of change
* stability
That only helps if it stays static. For example, if the Linux TTY interface was unchanged for decades to such a degree that nobody worked on it, but then had a vulnerability, who would be able to fix it quickly?
Perhaps someone with more knowledge can chime in. But, my impression is that there are vulnerabilities with TTY, it's just that we stay educated on what those are. And we build systems around it (e.g. SSH) that are secure enough to mitigate the effects of those issues.
SSH was a replacement for Telnet. But any weaknesses at the TTY level is orthogonal to that, right?
Unless you mean, having thin clients use SSH as opposed to directly running serial cables throughout a building to VT100 style hardware terminals, and therefore being vulnerable to eavesdropping and hijacking?
But I think when we talk about TTY we mostly don’t refer to that kind of situation.
If someone talks about TTY today, I assume they mean the protocol and kernel interfaces being used. Not any kind of physical VT100 style serial communication terminals.
SSH was a replacement for RSH, not telnet.
Where does this idea come from? I see it repeated a lot, but it's not correct.
rsh was common on internal networks, but almost never used on the wider Internet. telnet was everywhere all across the net.
ssh was a revelation and it replaced telnet and authenticated/non-anonymous ftp primarily.
And also sometimes rsh, but less importantly.
How could it be incorrect? rsh was clearly modelled after rlogin, and ssh was clearly modelled after rsh.
The command line options were almost identical for an easy switch. ssh even respected the .rhosts file! Last time I checked, that functionality was still in place.
Both the rlogin-family of commands and the telnet/ftp-family were in use across the Internet, certainly in cases where Kerberos was used. I would think telnet was more common, certainly so outside the UNIX sphere of influence, but things like Kermit also existed.
They all got SSL-encapsulated versions in time, but Kerberos solved authentication for free, and for the simpler use cases ssh had already taken over by then. And in the longer run, simple almost always wins!
This was on HN two(?) days ago: https://news.ycombinator.com/item?id=39313170
Emphasis mine.
Cheers.
From the ssh(1) man page (https://docs.oracle.com/cd/E36784_01/html/E36870/ssh-1.html): "It is intended to replace rlogin and rsh, and to provide secure encrypted communications between two untrusted hosts over an insecure network."
I miss rooms of green and amber screen terminals hooked up via serial cable. As an undergrad I remember figuring out how to escape from some menu to a TTY prompt that I could somehow telnet anywhere from. Later, I would inherit a fleet of 200 of them spread across 12 branch libraries. I can't remember how it worked, except that somehow all the terminals ran into two BSDi boxes in the core room of the central library, and it had been hardened so you could not break out of the menus and telnet to arbitrary places. Over a year I replaced them all with Windows machines that ran a version of Netscape Navigator as the shell, with an interface that was built in signed JavaScript. It was the early days of the web, and we had to support over 300 plug-ins for different subscriptions we had. The department that ran the campus network didn't want to let me on the network until I could prove to them everything was secure.
This already happened with the kernel console: no more scrollback. https://security.snyk.io/vuln/SNYK-UNMANAGED-TORVALDSLINUX-3...
I recognize it fixed a security issue, but nonetheless it's very inconvenient. I don't always have tmux at hand, especially when the system is booting in some degraded mode...
They're open source.
I wonder how many of these things that are just coasting are gonna have issues in 14 years.
Not the web though
HTTP 1.1 isn’t really changing is it?
That and a small collection of other things are standards-based and not going through changes.
Sure, but HTTP3 was proposed in 2022.
Yeah but you can just continue to use HTTP/1.1, which is simpler and works in more scenarios anyway (e.g. doesn't require TLS for browsers to accept it).
You could have stayed with HTTP/1.0 as well. Or Gopher.
Without HTTP/1.1, either the modern web would not have happened, or we would have 100% IPv6 adoption by now. The Host header was such a small but extremely impactful change. I believe that without HTTP/3, nothing much would change for the majority of users.
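To make that concrete, here is a rough sketch of the mechanism (Python; example.com is just an illustration): an HTTP/1.1 server can host many sites on one IP address because it dispatches on the Host header, which HTTP/1.0 requests simply didn't carry.

    # Hand-written HTTP/1.1 request. The Host line is the small change that
    # enabled name-based virtual hosting: many hostnames, one IP address.
    import socket

    def fetch(host: str) -> bytes:
        with socket.create_connection((host, 80), timeout=5) as s:
            request = (
                "GET / HTTP/1.1\r\n"
                f"Host: {host}\r\n"   # HTTP/1.1 requires this; servers reject requests without it
                "Connection: close\r\n"
                "\r\n"
            )
            s.sendall(request.encode("ascii"))
            chunks = []
            while chunk := s.recv(4096):
                chunks.append(chunk)
            return b"".join(chunks)

    print(fetch("example.com")[:200].decode("latin-1"))

Swap the hostname for another site served from the same address and the response changes, even though the TCP connection looks identical.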
But also, the only thing in most of the organizations I've been in that was using anything other than HTTP/1.1 was the internet-facing load balancer or Cloudflare, and even then not always. Oh yeah, we might get a tiny boost from using HTTP/2 or whatever, but it isn't even remotely near top of mind and won't make a meaningful impact for anyone. HTTP/1.1 is fine, and if your software only used that for the next 30 years, you'd probably be fine. And that was the point of the original comment: nginx is software that could be in the "done with minor maintenance" category because it really doesn't need to change to continue being very useful.
Certainly the web can mostly coast indefinitely. There are webpages from decades ago that still function fine, even that use JavaScript. The web is an incredibly stable platform all things considered. In contrast, it's hard to get a program that links to a version of Zlib from 10 years ago running on a modern Linux box.
The web is the calm-looking duck that is paddling frantically. Would you want to be using SSL from the '90s, or have IE vs. Netscape as your choice, etc.? Nostalgia aside!
I'm not sure about that, for anything besides static resources, given the rate at which various vulnerabilities are found and how large automated attacks can be, unless you want an up-to-date WAF in front of everything to be a prerequisite.
Well, either that or using mTLS or other methods of only letting trusted parties access your resources (which I do for a lot of my homelab), but that's not the most scalable approach.
Back end code does tend to rot a lot, as Log4Shell showed, for example. Everything was okay one moment and then BOOM, RCEs all over the place the next. I'm all for proven solutions, but I can't exactly escape needing to do everything from OS updates to language runtime and library updates.
this problem -- great forward compatibility of the web -- has been taken care of with application-layer encryption, deceitfully called "transport layer" security (TLS)
Meanwhile my Anaconda installation died after a casual apt-get update, lol
I now believe that every piece of software should be shipped as a container to avoid any system library dependencies.
That is what Snap is for, but there are… issues
Nginx is still evolving a lot though.
E.g. HTTP/3 support was stabilized with 1.25.1, which came out in June 2023.
This isn't one, though. I think the issue he is talking about is around the CVEs that came out with the HTTP/3 implementation. This is an area of very active and complex development.
It's not that scary. If a project everyone depends on is broken and unmaintained, someone else will manufacture a replacement fairly quickly and people will vote with their feet.
NGINX is the de facto standard today, but I can remember running servers off Apache when I began programming professionally. I remember writing basic cross-browser SPAs with script.aculo.us and Prototype.js in 2005, before bundlers and React and Node.
Everything gets gradually replaced, eventually.
Best memberberries ever
I still deploy Apache httpd, because that’s what I know best, and it works.
You can also probably host without a reverse proxy. Also, there are alternatives like Caddy. IIS!! And I imagine the big cloud providers would swoop in and help, since their expensive CDNs and gateways rely on it, or maybe the Kubernetes maintainers, since most likely they use it.
IME, the best software is written by "1 or 2" people and the worst software is written by salaried teams. As an end user, it's only the encroachment by the latter that scares me.
Yep. IME the only way to make a salaried team of 10 devs work efficiently is to have enough work that you can split it cleanly into 5-10 projects that 1-2 people can own and work on autonomously.
Too bad every team I've ever worked on as a consultant does the opposite. The biggest piles of shit I've ever seen created have all been the product of 10 people doing 2 people's worth of work...
Yes and no. Small 2-person teams are vastly more efficient, but who will take over when they quit/retire/die? Larger teams have more continuity, I think.
On one hand, projects developed by 2 passionate devs; on the other hand, a team of entry- to mid-level devs working on someone else's project for the money.
That team changes every 6 months when another company offers more money. If only one or two people are working on a project, that's a high risk for the company.
If you've got one or two highly skilled people in that team of 10, you are lucky. Managers don't want them to work alone on their project; they want them to help the team grow.
It is also why companies don’t buy SaaS services from single founders or small companies where risk of key people leaving is high impact.
Expand on that comment for me, because it has high impact. I don't doubt the surface logic, but the implication is that to succeed in B2B SaaS, you _must_ be sufficiently well funded to have a decently sized staff team. That is, there are no organic 2-person startups in B2B SaaS. Is that really true?
(Obviously once bigco buys such a startup's offering, that startup needs to hire, fast)
You can probably get your foot in the door with a $500-a-month recurring payment if some dev/employee wants to try stuff out and their manager puts in a credit card.
But that is peanuts, basically no different from B2C for me, and not something you can put on the "customers that trusted us" banner on your landing page.
If you want a big company to rely on your services and have 50-100 users, each seat paid at $500 a month from a single company, that is not just some manager swiping a CC; for that you have to have a team and business continuity.
This is your semi-annual reminder to fork and archive offline copies of everything you use in your stack.
There's plenty of copies of the code. That doesn't help with the actual problems with the setup.
HTTP/1, HTTP/2 and HTTP/3 are huge standards that were developed, considered and separately implemented by hundreds of people. It's built in C which has an even more massive body of support through the standard, the compilers, the standard libraries, and the standard protocols it's all implemented on.
1 or 2 people maintain one particular software implementation of some of these standards.
It's interesting to think of what a large and massive community cheap and reliable computation and networking has created.
I mean at that point you might as well talk about the people building microchips and power plants. You can always abstract down, but you're ignoring the fact that nginx is ~250k very important LOC with huge impact on the world. That is non-trivial in its own right.
For the vast majority of use cases, nginx from 10 years ago would not make a difference. You actually see the nginx version on some HTML pages, and very often it's old.
nginx from 5 years ago has some pretty nasty actively exploited CVEs.
Not scary at all. I think much better of such projects compared to ill-functioning multi-people projects which get worse and worse over time.
This is one reason maintainability is very important for the survival of a project. If it takes an extra person to maintain your build system or manage dependencies or... or... it makes it all the more fragile.
Obligatory XKCD: https://xkcd.com/2347/
Evergreen xkcd is evergreen. https://xkcd.com/2347/
Relevant Xkcd comic: https://xkcd.com/2347/
That's why they work well. Not corrupted by corporate systems or group governance. Individuals have better vision and take responsibility.
I think if 2 people designed most of the world’s water treatment plants, that’s not scary.
If 2 people are operating the plants, that’s terrifying.
I don't worry when it's open source, as if it's that valuable someone will pick it up, or corps would be forced to. I do wish those 1 or 2 devs got more support monetarily from the huge corps benefitting.
IANAL, but I strongly recommend reconsidering the name, as the current one contains a trademark.
They could take the Postgres naming approach.
Ingres was forked; the post-fork version of Ingres was called "Post"gres.
So maybe name this new project "PostX" (for Post + nginx).
Though that might sound too similar to posix.
"Postginx" has a nice ring to it, could be an alcoholic beverage, a name of a generation, or even a web server.
gintonx
Sounds like a character from the Asterix comic books :)
The Postgres name is said to be a reference to the Ingres DB, not a fork of Ingres.
[https://dsf.berkeley.edu/papers/ERL-M85-95.pdf]
Isn't this a bit pedantic?
Fork vs. "hacked up [Ingres] enough ... Consequently, building a new database system" named Postgres.
... and postfix
Go Roman? nginxii?
nginy?
Bump each letter in nginx and we get.... ohjoy!
Dude, please, just create a fork & explain the name. ohjoy sounds perfect and the meaning is brilliant. This must be it.
This might even look like enough a reason to spend the rest of their life maintaining it.
Jesus Christ. That’s incredible.
Wow, that's perfect!
Insane find. Brilliant!!
There was also a time when an "ng" postfix was used to denote "next generation", so they could go with nginxng :)
Not necessary. It’s not like F5 is going to go to Russia and file suit against any of them.
Maybe not today, but one day they might. Better to start with a workable long term name.
How about EngineF?