A 2024 plea for lean software

nox101
40 replies
1d21h

Bloat is most libraries on npm. The authors don't know good design and instead try to make every library do everything. Oh, my library converts strings from one encoding to another, I'll make it load files for you, save files for you, download them over the internet, and give you a command line tool, all in the same repo. The library should just do its one thing. The rest should be left to the user.

I get the impression it's no better in rust land. Go try to edit the rust docs. Watch it install ~1000 crates.

The issue is not the language, it's that anyone can post a library (not complaining) and anyone does. People who "just want to get stuff done" choose the library with the most features and then beg for even more since they can't be bothered to write the 3 lines of code it would have been to solve it outside the library. "Can you add render to pdf?"
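
To make the "3 lines" concrete, here's a sketch with a hypothetical encoding library, where the "load files for you" feature is something the user can trivially write themselves:

  // "convert" stands in for the hypothetical library's one job;
  // one line of stdlib replaces its whole file-loading feature.
  const fs = require('fs');
  const utf8 = convert(fs.readFileSync('input.txt'), { from: 'latin1', to: 'utf8' });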

I don't know how to solve it. My one idea is to start a "Low Dependency" advocacy group with a badge or something and push to get people to want that badge on their libraries and for people to look for that badge on libraries they choose.

PH95VuimJjqBqy
13 replies
1d20h

People who "just want to get stuff done" choose the library with the most features and then beg for even more since they can't be bothered to write the 3 lines of code it would have been to solve it outside the library.

I feel this so much.

I once had a contract for Ruby on Rails work. They were experiencing severe performance issues, so bad that they were developing in production mode (where the server won't detect a file change and reload for you).

One day I get tired of it and start digging into it. I don't remember how many gems they were pulling in but it was A LOT. I came across one particular gem which saved 3 lines of code. Seriously.

I stayed away from the RoR community after that. I recently picked up a contract for more RoR work (after all these years, lol) and ... it's not nearly as bad but it's not good either.

Some communities just do NOT respect the risk that dependencies bring.

moritonal
9 replies
1d16h

Given you've seen the same effect in both Ruby and JS, maybe the takeaway should instead be that there is a group of devs who will always reach for a package first, not that a specific language has a problem.

PH95VuimJjqBqy
8 replies
1d12h

and that group of devs tends to flock to certain ecosystems.

I've never seen a .NET project with 100+ dependencies; I've easily seen that multiple times for RoR and Node.

charlie0
2 replies
1d2h

On the flipside, it takes way longer to get things done on .net. There should be a balance here, but it's not going to happen. There will always be a fair amount of users installing packages for everything.

jonathanlydall
1 replies
1d1h

That’s not my experience with .NET, at least not on any project of non-trivial scope.

What kinds of things do you find take longer in .NET?

PH95VuimJjqBqy
0 replies
1d

If you're talking about initial productivity, they're probably right. If you're talking about long term productivity, they're most definitely wrong.

I say this as someone with extensive experience in .net, ruby, PHP (laravel, et al), and so on.

Even something like ActiveRecord is going to blow .NET out of the water in terms of pure development speed, but long term the typing and the stability of .NET give it the advantage.

moritonal
1 replies
1d3h

Of all the languages, I didn't think you'd hold up .NET as the pillar of lean dependency trees. Maybe it's bad luck, or even just me, but NuGet hell is a real place I've been to, and .NET's packaging system is brutal at resolving once you pass even 50 packages.

PH95VuimJjqBqy
0 replies
22h43m

I can say that's not been my experience, but then I've never seen 50 NuGet packages in a single project. It doesn't necessarily surprise me that it becomes painful after 50, but what in the world were those 50 packages for?

You probably know this already, but just in case:

I would also strongly recommend not using packages.config, or upgrading if you can. PackageReference can deal with transitive dependencies because NuGet resolves them for you: https://learn.microsoft.com/en-us/nuget/consume-packages/pac...

So you don't have to add a dependency to a project just because another dependency has a dependency on it. It might allow you to start removing dependencies.
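
For reference, here's roughly what this looks like in the .csproj (a minimal sketch; the package named is just an illustrative example):

  <ItemGroup>
    <!-- List only your direct dependencies; NuGet restores the transitive ones -->
    <PackageReference Include="Newtonsoft.Json" Version="13.0.3" />
  </ItemGroup>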

I've been told packages.config allows pre/post events that PackageReference doesn't, so it may be that it's not an option for you, but even then I'd try really hard to move away from packages.config.

Here's some migration documentation just in case: https://learn.microsoft.com/en-us/nuget/consume-packages/mig...

majormajor
1 replies
1d12h

In Javaland you can have a huge nested tree of dependencies but you often don't have a "Gemfile.lock" or such to list them all in a file in your repo...

PH95VuimJjqBqy
0 replies
1d4h

I'm not a fan of the Java community either; they love their structure. People jest about the FactoryFactoryFactory but you'll legit see that stuff out in the wild.

Paradoxically, I like the PHP community because they're so practical (yes, the language itself is ugly). RoR built rake for running jobs (it does more things, but that was the primary motivation). PHP community just used cron. Although, having said that, Laravel takes a lot of its cues from RoR and has the same sort of tooling.

But when Laravel first hit the scene a lot of people would criticize it for its use of static, but it was legitimately nice to use and that won out in the end.

I don't make a judgement of Rails as a community. Most of my Ruby work has been Ruby on Rails, so that's the lens I see things through. Rubyists may be the most practical of all, it's just that so much of Ruby code is RoR that it's difficult for me to separate them out.

jonathanlydall
0 replies
1d1h

Speaking as a .NET developer, I suspect this is because the base libraries are somewhat extensive.

sgarland
2 replies
1d17h

My experience from having worked at a mostly Python shop (and loving Python myself), and working at a Node shop, is that the latter is by far the worst.

Python probably has just as many shitty packages as Node, but Python’s stdlib is so vast that you often don’t need a package at all, if you bother to read the docs. Today I was marveling at a function someone at my work wrote to cast strings to Titlecase; it turns out JS has no built-in for that.

That’s a tiny example, but there are far more.

spyke112
1 replies
1d9h

That’s also just a one-line function with template literals in JS, if by Titlecase you mean uppercasing the first character. Maybe a bit overkill to bake into a stdlib.
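
Something like this minimal sketch (assuming Titlecase really does just mean the first character):

  const titleCase = (s) => s ? `${s[0].toUpperCase()}${s.slice(1)}` : s;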

charlie0
0 replies
1d2h

It's not about ease, it's about frequency of use. If it's used often, it should be in a stdlib.

sweeter
5 replies
1d20h

This is why I like Go and the stdlib. It's very much "do it yourself" unless we are talking about monumental libraries for very specific applications. Outside of that, if we are talking about simple libraries that do one simple thing, it's better to just roll your own using the stdlib. Almost all of the building blocks are there.

On the other hand, it's really nice that any git repo can host a Go lib and anyone can use it from that URL.

MrDresden
3 replies
1d19h

Never used Go, but I much prefer a proper stdlib that covers most base cases, so that there isn't a need for rolling your own each time, or worse, using a badly made/maintained bloated third party lib.

Examples of excellent stdlibs in my opinion would be for instance Kotlin and Python.

LamaOfRuin
2 replies
1d19h

I think that's what they were saying (and what I would agree with). The Go std lib is very complete for important things, especially in the context of networked services. So the default is that you don't have to pull in many libraries for common needs.

scubbo
0 replies
21h2m

The Go std lib is very complete for important things

Important things like sets[0], or get-with-default from maps[1], or enums[2], or appending-to[3] or mapping-over[4] slices?

[0] https://stackoverflow.com/questions/34018908/golang-why-dont... [1] https://www.digitalocean.com/community/tutorials/understandi... [2] https://stackoverflow.com/questions/14426366/what-is-an-idio... [3] https://go.dev/tour/moretypes/15 [4] https://stackoverflow.com/questions/71624828/is-there-a-way-...

MrDresden
0 replies
20h41m

This is why I like Go and the stdlib. It's very much "do it yourself" unless we are talking about monumental libraries for very specific applications.

I took this to mean that Go's stdlib required a lot of 'do-it-yourself' implementations.

scubbo
0 replies
21h7m

I too have noticed this tendency for the GoLang community to prefer for everyone to reimplement basic functionality from scratch rather than have it provided.

crooked-v
4 replies
1d20h

On this note, I wish more npm library authors would emulate sindresorhus's approach of making small packages that each do very specific things in a predictable and well-documented manner.

whstl
2 replies
1d19h

Hell no. This person is probably the worst offender in terms of supply-chain problems and bloat in the NPM ecosystem.

A terminal spinner ("ora") with 15 dependencies (including transitive) is not an example of good design.

Inflating the download numbers of your own packages is doing no good to the world of software.

brnt
1 replies
1d9h

What you hope for is shallow dependency trees, not too branchy, but probably also not branchless (an indicator of vendoring).

Larger projects will have, and likely require, deeper trees though, but the branchiness should be relatively independent. I wonder if this has ever been formalized.

whstl
0 replies
1d8h

What I hope for is for code that is actually auditable, and doesn't pull a lot of unused bloat just to pump up someone else's vanity metrics.

This applies for large and small projects. Babel also doesn't need to do what it does: it pulls in a couple hundred non-reusable sub-packages even though it's all written by the same people and maintained in the same monorepo.

Aeolun
0 replies
1d17h

Honestly, I’ve often considered chunking all those libs into a single big one to eliminate 50% of my npm dependency chain.

amelius
2 replies
1d21h

The "3 more lines" problem can be solved by LLMs.

nehal3m
0 replies
1d20h

A black box that turns a few megawatts and 60TB of data into a model that can write three lines is antithetical to lean anything.

mewpmewp2
0 replies
1d20h

Yeah, honestly one key benefit of Copilot has been that I can create a fn definition I need that is very common and it will do the boilerplate for me. It feels dirty not using a library, but in the end I save bytes and have clear control over the fn that otherwise is quite standard.

worble
1 replies
1d19h

Oh, my library converts strings from one encoding to another, I'll make it load files for you, save files for you, download them over the internet, and give you a command line tool, all in the same repo. The library should just do its one thing. The rest should be left to the user.

I don't know how to solve it. My one idea is to start a "Low Dependency" advocacy group with a badge or something and push to get people to want that badge on their libraries and for people to look for that badge on libraries they choose.

It sounds like you're conflating low bloat and low dependency, which is like trying to have your cake and eat it too. If you want low bloat libraries, then it's likely you're going to be pulling a lot in that don't independently do much. If you want low dependency libraries, then you'll be pulling in a few that do do a lot.

From my perspective I'd rather have slightly fatter libraries if they're low dependency and by authors I trust. Lodash for instance: sure it's big, but the ES module version supports tree shaking and it's basically the standard library JS never had. Same for something like date-fns for Date. I pretty much pull these two into any project by default to fill those holes in JS's core library.
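
For illustration, a sketch of what that looks like with the ESM builds (lodash-es and date-fns both ship named exports, so a bundler keeps only what you import):

  // The bundler tree-shakes everything except these two functions
  import { debounce } from 'lodash-es';
  import { addDays, format } from 'date-fns';
  const save = debounce(() => console.log('saved'), 300);
  console.log(format(addDays(new Date(), 7), 'yyyy-MM-dd'));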

trickstra
0 replies
1d1h

What I've seen way too often is a project using a dependency for only one tiny function. That's how we get a calendar app with a 2.5GB node_modules.

whstl
0 replies
1d19h

The problem comes from two sides.

First you have package makers who want to make this into a career, and the only way to generate buzz is by having thousands of them. So they focus on generating packages that depend on others they created, and on trying to get their one or two useful packages into someone else's code.

The second is people who believe that the only solution to problems involves including a new package, without caring about how many dependencies it has, or even whether the actual problem is hard to solve. So instead of learning how to solve the problem, they must learn the API of some wrapper with 4 GitHub stars.

throwawayqqq11
0 replies
1d8h

Libraries as pure functions wherever possible, and a build system that not only enforces this but warns you when an update introduces a new/changed syscall. Yes please!

I know it's not exactly low-dependency, but at least it's a lower attack surface.

mike_ivanov
0 replies
1d19h

Reminds me of that disk defragmenting utility with an embedded mp3 player from the early 2000s.

iambvk
0 replies
1d16h

...since they can't be bothered to write the 3 lines of code...

They actually think saner devs have not-invented-here syndrome :0

freeopinion
0 replies
1d1h

Ironically, solving for your criticism leads to the criticism in the comment next to yours. They complain about a text encoding library that also reads and writes files, makes network connections, etc. This is a form of bloat. But it minimizes my number of dependencies.

So which is the worse approach?

eternityforest
0 replies
20h27m

Libraries that do everything are better than 50 different tiny libraries, any one of which could be the next left-pad. Even if the big library uses a bazillion small libraries, the big library team usually at least does some testing.

Maybe we could move to having big teams maintain collections of small libraries... But with tree shaking I'm not sure why we'd really need to. I'm happy just sticking with things that have lots of GitHub stars.

I think the solution is just don't use stuff posted by random people on npm if there's an alternative that's widely used, with lots of eyeballs on it. Which there usually is, but people probably choose the smaller one because it's simpler.

dogcomplex
0 replies
1d16h

While "add a black box complexity extremely resource intensive AI to sort out the dependency trees" is somewhat of a tragic answer to the problem, I think it's the real one we're gonna get - and we'll still be better off than this current clusterfuck.

Hopefully these will also spend the tedious hours pruning unnecessary packages, minimizing responsibilities of each, and overall organizing the environment in a sensible way, then caching that standalone so there's no LLM dependency - just an update checker/curator to call in when things go to shit.

Honestly this is one of the worst problems of modern software and makes 50+% of projects unusable. It seems like a very tedious yet solvable problem - perfect for the right LLM agent. Would be a godsend.

charlie0
0 replies
1d2h

By definition, the only languages where this isn't true are languages that have a high barrier to entry, and because of that, they would also be unpopular. Perhaps if the government mandated a standard set of programming tools...

begueradj
0 replies
1d2h

"Bloat is most libraries on npm. The authors don't know good design and instead try to make every library do everything."

That is true for all other programming languages.

MagicMoonlight
0 replies
1d4h

And that’s why Java is the only non-meme language.

Need a hashmap? Need a webserver? Need a UI? It’s all built in. It just works on any device with the same code. People who actually know what they’re doing wrote it and you can trust it. No random downloads or weird packages.

If you want to download random things you still can, but realistically you don’t need to.

cptaj
38 replies
1d20h

In Vernor Vinge's book A Deepness in the Sky, humanity is spread out around the stars with only subluminal technology. Interstellar ships are very old and mix many technologies from different systems and civilizations.

They touch on the fact that computer systems have evolved for so long that nobody really knows most of the code anymore. They just use it and build on top of it.

One particular character has been traveling and in stasis so long that he is probably one of the oldest humans alive. A systems engineer of old. It turns out to be a big advantage to know the workings and vulnerabilities of his time, because he can use them in a future where everyone else is building many layers on top of that and has no way of knowing exactly what he's doing.

Vernor had a point, I think.

smt88
8 replies
1d19h

They touch on the fact that computer systems have evolved for so long that nobody really knows most of the code anymore.

This isn't far from the current reality, where critical systems rely on nearly-dead skills like COBOL programming.

JohnFen
6 replies
1d18h

Indeed so. The highest-paid devs I personally know earn their living working with old COBOL code. It's highly paid simply because there aren't that many expert COBOL programmers around anymore.

In a prior job I had, we developed enterprise software aimed at large corporations. We had to support several old mainframes that don't actually exist anymore outside of museums -- but these companies ran emulators of the mainframes solely to continue to use the software they'd been using for decades. Nobody at those companies even has the source for that software, let alone knows how it really works. It just works and they can't justify the expense of reworking their entire system just to get rid of it.

FirmwareBurner
4 replies
1d17h

>It's highly paid simply because there aren't that many expert COBOL programmers around anymore.

They're highly paid not because they're part of a short supply of COBOL devs, but because they have COBOL experience and the battle scars to know how to solve production issues that those new to COBOL might not know about, but which the old timers have seen several times already in their careers and know how to fix.

If you start learning COBOL now to cash in on this market, as a COBOL junior you won't be remotely as valuable as those COBOL graybeards with battle scars, which is why nobody's pivoting to COBOL.

DiggyJohnson
3 replies
1d11h

I don’t think this is as mutually exclusive as you imply.

Good COBOL programmers are expensive because they’re rare, and the only way to become a good COBOL programmer is to spend a decent fraction of your education and/or career working with it. That doesn’t happen organically anymore for any significant fraction of junior devs.

charlie0
2 replies
1d2h

So what you're saying is, unlike the web dev market, which is oversaturated, there exists an unsaturated market in COBOL? Hmmmm

smt88
1 replies
22h43m

Yes, exactly. If you can find a way to get good enough to do it, you're guaranteed a great income.

FirmwareBurner
0 replies
19h45m

>Yes, exactly. If you can find a way to get good enough to do it, you're guaranteed a great income.

You can't replicate years or decades of COBOL project experience out of thin air by doing some side projects at home. No amount of individual self-study can prepare you for industry-specific cruft and issues you've never encountered. If it were that accessible, a lot of people would do it.

mech422
0 replies
1d14h

>Indeed so. The highest-paid devs I personally know earn their living working with old COBOL code. It's highly paid simply because there aren't that many expert COBOL programmers around anymore.

Heh - my backup retirement plan :-) Hell, COBOL paid really well during Y2K; just add 2 characters to the date field :-P

dehrmann
0 replies
1d13h

But that's because taxpayers don't want to foot the bill to modernize systems.

preommr
8 replies
1d16h

This reminds me of Asimov's awful "The Feeling of Power", where in the future mankind has forgotten how to do basic math and someone has rediscovered it, to the astonishment of the people in power, who now intend to use it to their advantage in war.

I hate that short story because of how silly it is. I get the point that it's trying to make, but it's packaged in such an absurdly unrealistic way that it loses all impact.

vonjuice
5 replies
1d14h

I always considered Asimov to have good ideas but he's a pretty bad writer, specially of female characters.

CyberDildonics
2 replies
1d13h

specially

I think you mean 'especially'.

vonjuice
1 replies
16h28m

No.

CyberDildonics
0 replies
16h2m

It's either especially or it doesn't make sense

flenserboy
1 replies
1d3h

Perhaps "human" would be a better substitution. There's a reason his robots are far more memorable than the other characters he crafted.

vonjuice
0 replies
16h32m

Yeah but it's more grating on the stories I read with female protagonists.

klooney
0 replies
1d13h

Hey, I think about that story all the time too, I'm glad I'm not the only one it's haunting.

jodrellblank
0 replies
1d14h

This reminds me of Asimov's story where in the future mankind has forgotten how food tastes, and people are having a computer-designed-flavours competition, and someone has rediscovered that growing plants in soil makes delicious garlic, to the astonishment of the judges, who are later horrified and disgusted when they learn the truth about what he fed them.

https://scifi.stackexchange.com/questions/209233/short-story...

theamk
6 replies
1d12h

There are two kinds of "nobody knows": "Nobody knows how to make room temperature semiconductors" and "Nobody knows why my washing machine failed"

In the former case, it's a genuine mystery that can only be solved by very smart people and modern science. In the latter, it's a lack of interest - sure, for $$$ a knowledgeable engineer will take the washing machine apart and figure out the exact defect, but no one is going to pay for this; they'll just throw the washing machine away and get a new one.

The historical software knowledge is definitely the latter. It is eminently possible to dig into any part of software and eventually get a full understanding of this part. But most of the time, it's way cheaper and more practical to shrug and ignore the problem, or maybe add yet another layer to compensate.

mcdonje
1 replies
1d2h

There's also the classic, "Nobody knows how to make a pencil."

That kind of "nobody knows" is about the complexity of many large interconnected systems, and the deep wells of knowledge, theory, and history in each of the various domains.

I argue it's different from your washing machine type because the domains of computing are vast. Sure, you can dive in and figure some things out when necessary, but you can do that with pencil production too.

freeopinion
0 replies
1d2h

You can figure out _a_ way to make a pencil, but it may not be as efficient or clever as they did it the first time. Or it might be a lot more practical with modern abilities that were impossible at the time of the first pencil.

If you started today, you might not use graphite, or wood. That's ok for pencils. But it might mean we've forgotten a technique they used for making pencils that's also useful for tiny gear shafts. But we don't use it for gear shafts anymore because somebody invented the Swiss lithoscropy process.

valleyer
0 replies
1d10h

I guess you mean "room-temperature superconductors"; if you don't, I submit you and your laptop can come out of the walk-in freezer now. It'll still work. ;)

sunsetonsaturn
0 replies
1d6h

How would you approach this problem https://github.com/pyasn1/pyasn1/issues/55?

There is a library called `pyasn1`, the author passed away and there are some challenges, such as intimidatingly long error messages that are not easy to interpret, or counter-intuitive behaviour of some of the functions in its API.

Do you have any tips for approaching this with the few resources that are available?

sph
0 replies
16h56m

"Nobody knows how to operate this 40 year old kubernetes cluster anymore". Some of you will go billing thousands per hour rewriting legacy Helm manifests for antiquated banking infrastructure.

fxtentacle
0 replies
1d12h

Unless, for example, you invested billions into trains that run on Windows 3.11 and retrofitting all of them with a modern control system is prohibitively expensive. Which means the second type of knowledge becomes as irreplaceable as the first one.

https://web.archive.org/web/20240127140416/https://www.gulp....

peter_l_downs
6 replies
1d19h

Vinge definitely got it right. I love the title of "Programmer Archaeologist", it's an extremely good description of what we actually do every day.

For more discussion, see http://lambda-the-ultimate.org/node/4424

lioeters
2 replies
1d12h

For sure, I imagine "software archaeology" will become a field of its own in the future. It reminds me of GitHub's Arctic Code Vault project, a snapshot of all public repositories as of 2020-02-02, meant to last for at least a thousand years.

https://github.blog/2022-09-20-if-you-dont-make-it-beautiful...

I wonder what future humans will make of it, digging through such a massive amount of good, bad, and terrible code. Much of it probably won't run without serious effort.

otabdeveloper4
1 replies
1d10h

I can't even run a node.js project that was written two years ago.

Khaine
0 replies
1d8h

That's probably for the best

AlotOfReading
2 replies
1d19h

As someone who is separately both a programmer and an archeologist, I love the concept and the books. The real skills aren't always as far apart as you'd think either. I once spent a week deconstructing hardware to figure out what assembly language a project was written in. The assembly was only documented in a TRM in a dusty filing cabinet written before I was born. Once I could read the assembly I could read the source code and work my way up the stack to start answering the actual questions I had.

lloeki
1 replies
1d18h

The nice thing with software is that everyone gets to be an archaeologist in short order; ~6 months is more than enough...

  - Howdy! there's a bug!
  - Okie, lookie!
  *dig dig dig, stares at line 47*
  - Oh my, this horsemanure could not have possibly worked, ever! What kind of damaged wetware wrote this?!
  $ git blame
  - Oh.

jonathanlydall
0 replies
1d9h

I like to say that git blame is a great tool for working out that the problematic code you’re looking at was in fact written by yourself.

rvbissell
1 replies
1d19h

Oh man, 3 more novels to read -- thanks!

pavel_lishin
0 replies
1d19h

For what it's worth, while the first two are set in the same universe, you can read them in either order. The third one is a definite sequel, and imo, the weakest of the three.

oreally
0 replies
1d16h

Unfortunately, knowing humanity and its gatekeeping mechanisms, he'd be near-unemployable. 'Oh you don't have XYZ framework experience? GTFO'

Also it's frustrating working in such conditions where you have to dig through framework code to get to where it matters. It feels like your time is wasted.

javajosh
0 replies
1d11h

Gregory Benford made a similar statement about technology, where humans are in constant war with AI and often don't know how the tech they use works. IIRC there was a spaceship in Great Sky River[0] that could only be operated by humans because they did the tutorials. (Humanity's lack of knowledge was less a permanent regression and more a side-effect of constantly fighting, and losing, the war against the machines. Hard to study and learn when you're fighting all the time.)

0 - https://en.wikipedia.org/wiki/Great_Sky_River_(novel)

geuis
0 replies
1d13h

Reminds me of a character in one of Alastair Reynolds novels who is a software archaeologist. I think they're in Tau Ceti, but anyway. The character has a specialty in digging through code that's hundreds of years old.

I think we're already dealing with this. My uncle is in his 60's and maintains old truck shipping software in COBOL. Btw there are job openings in old tech like this, for those that are interested. Happy to provide introductions.

But the basic problem stands: the left-pad issue.

We still deride this choice: junior engineers without supervision haphazardly installing dependencies. But over the course of decades and generations of developers we still "sum to zero", where most software will rely on some number of unknown dependencies.

Say in 2100 an update needs to be issued. You push it through whatever is managing npm dependencies at the time. Meanwhile there is a solar system of dependent devices that need security updates. There could be trillions of dependent devices and any number of independent intermediary caches that may or may not be recently updated. I can't even imagine what that dependency tree would look like.

austin-cheney
0 replies
1d10h

Yes, Pham was among the best talent, due to the time spent training and diving deeper into the layers of abstractions while others were in stasis drifting between the stars. That would later prove to be a superior advantage.

This is among my most favorite books, and not just because it's about a long-time soldier/software developer like myself.

jurschreuder
31 replies
1d20h

This is what I keep saying about Rust.

Maybe you have 70% fewer vulnerabilities per line of code than C++ if really 70% of (old) C++ vulnerabilities are memory related.

But if you then pull in hundreds of packages in Rust and have 10x as many lines of code...

30% of 100k lines is more in total than 100% of 10k lines of code.

estebank
17 replies
1d20h

Counting crates and comparing that with the number of C++ libraries is making an obtological error. In Rust, a single team usually splits their project into multiple crates. Something like Qt would be hundreds of crates on its own if it were written in Rust, but the amount of code and level of risk taken would be exactly the same.

jsheard
9 replies
1d20h

Plus many crates have zero unsafe code, so they're not much of a liability, at least in terms of the common memory safety problems. The nice thing about unsafe code having to be declared is you can get an idea of a crate's attack surface at a glance just by grepping for it - if only evaluating C/C++ libraries were so easy.

PH95VuimJjqBqy
4 replies
1d20h

No, absolutely not, and I have no idea where this misapprehension came from.

The root cause of a problem in unsafe code can absolutely be in safe code, because the safe code can set state in a way that causes problems.

One can easily see this if you consider safe code that sets an index and then calls into unsafe code, but the index is out of range. The root cause is absolutely in the safe code.

ptx
1 replies
1d19h

Isn't that the idea that unsafe code can do unsafe things internally while exposing a safe interface? So in your example the bug would be in the unsafe code, which shouldn't have allowed the unsafety to leak into the safe code.

BobbyTables2
0 replies
1d18h

Yes, but by that same logic, safe programs can also be written in C.

However, I do believe Rust gets a whole lot of things right.

I sincerely hope that unnecessary use of unsafe code is avoided.

jsheard
1 replies
1d19h

There has to be unsafe code for that to happen though. If you consume a crate which doesn't touch unsafe at all, then it's only going to happen if you write the sloppy unsafe code which makes assumptions that don't actually hold. If you blindly pass indices from code you didn't write and haven't audited into Slice::get_unchecked(), then that's on you; if you're not willing to do your due diligence then stick to the bounds-checked version. That's what it's there for.

PH95VuimJjqBqy
0 replies
1d12h

Unsafe code should be sitting in a module protected by safe code. Which implies safe code can be the root cause of unsafety.

People have got to stop believing that safe code in Rust is automatically not the root cause of problems. It's a misapprehension.

arccy
1 replies
1d18h

you don't need unsafe to sneak in a cryptominer

jsheard
0 replies
1d18h

Hence why I said it gives you an idea of exposure to memory safety issues, not that grepping for unsafe is a substitute for a full audit. Besides, time not spent having to pore over every line of code looking for subtle memory safety bugs is time you can spend looking for bundled cryptominers instead.

robertlagrant
0 replies
18h10m

you can get an idea of a crate's attack surface at a glance just by grepping for it

I hope that the code that streams your data out to an endpoint isn't memory safe, so you can easily find it!

PhilipRoman
0 replies
1d20h

you can get an idea of a crate's attack surface at a glance just by grepping for it

Please don't... I understand your point, but there are hundreds of vulnerabilities introduced every day in memory-safe languages that have nothing to do with Rust's concept of "unsafe".

cperciva
2 replies
1d15h

the amount of code and level of risk taken would be exactly the same.

The amount of code might be the same, but that doesn't guarantee that the level of risk is the same. A lot of bugs are introduced at interfaces -- the provider of an API makes a subtle change without realizing how it affects some of the API consumers -- and that's inherently more likely to occur if the two sides of the API are developed separately.

In the FreeBSD world we've found that it's incredibly useful to develop the kernel and libc and system binaries together; all sorts of problems Linux trips over simply never happen in FreeBSD.

josephg
0 replies
1d14h

This risk is much worse in C or C++ than in Rust, because of the borrow checker. I’ve certainly seen some crimes against performance at the boundary between libraries, but I don’t think I’ve seen any subtle interface bugs.

estebank
0 replies
1d13h

A really common thing in the Rust ecosystem is to write APIs that are impossible or at least hard to misuse. That cultural aspect plus the ownership system does help reduce those kind of problems. I've seen crates where there are "mistakes" in the API, lending a mutable borrow instead of returning an owned value, for example, that make them less useful than they could be otherwise, but I've seen fewer cases of bug prone APIs.

kstrauser
1 replies
1d19h

an obtological error

I don't know if that was a deliberate portmanteau of obtuse+ontological, or if it was a happy typo, but I'm stealing it.

estebank
0 replies
1d18h

It was a typo ^_^'

steveklabnik
0 replies
1d20h

https://wiki.alopex.li/LetsBeRealAboutDependencies is an interesting exploration of these dynamics.

arccy
0 replies
1d18h

Teams might split their own stuff into crates, but Rust has still managed to adopt the npm culture of pulling in lots of small dependencies for trivial things, opening them up to different sorts of supply chain attacks.

Ygg2
8 replies
1d19h

But if you then pull in hundreds of packages in Rust and have 10x as many lines of code...

No one is forcing you to use the libraries. Just write your own software stack yourself.

But the big problem is vulnerabilities. Is it better to fix a hundred libs by fixing a bug in one shared lib, or is it better to fix a hundred libs one by one?

whstl
7 replies
1d9h

False dichotomy.

What is actually better is to have code that is properly auditable for end-users, or even to not introduce the risk for those vulnerabilities in the first place.

Things like Heartbleed and Log4Shell were serious vulnerabilities that were useful only for a minority of people, and were hidden inside bloated code. The same thing is happening now with package supply chain.

Ygg2
6 replies
1d4h

It's not a false dichotomy. You either put your vulnerabilities in a few baskets, or you spread (and multiply) them in many more baskets. Neither is intrinsically better.

Why multiply? If everyone rolls their own libs, you can expect different library makers to repeat similar mistakes (stuff like not sanitizing input, forgetting to cap anything that generates output from outside sources, etc.).

Heartbleed and Log4Shell were serious vulnerabilities that were useful only for a minority of people

Minority of people need SSL and logging?!? Most applications need logging, and most of today's applications need the Internet to function. Could they have been omitted when possible? Perhaps, but you're not making a persuasive argument.

whstl
4 replies
1d4h

I never said people should roll their own. I said code should be auditable for end users. Hundreds (or thousands) of packages isn’t auditable.

I didn’t make a suggestion, but micro-packages aren’t silver bullets.

Deep transitive dependencies are problematic for their lack of visibility, and often a signal of bloat.

Minority of people need SSL and logging?!

Heartbleed and Log4Shell weren’t caused by the core SSL or logging code of those libraries; they were caused by code in niche features (the heartbeat extension and JNDI lookups) that should have been plugins (or at least deactivatable by flags). That would have massively decreased their impact.

Both Heartbleed and Log4Shell were caused by feature bloat and lack of auditability.

Ygg2
3 replies
21h58m

I said code should be auditable for end users. Hundreds (or thousands) of packages isn’t auditable.

And what is your idea to fix this?

Deep transitive dependencies are problematic and...

Gonna stop you right here. Deep transitive dependencies are nothing more than "huh, other libraries find this functionality useful, let's pull it into a library."

By not having deep dependencies, you are in no uncertain terms claiming everyone should write their own. Or, God forbid, copying code into your own project.

Imagine for a moment we write a set of perfect auditable libraries. Fuzzed, written to spec, proven correct & fast in theory and practice.

Guess what? Everyone and their mother will use it, leading to deep transitive dependencies.

The complexity of consumer hardware and software requires that even trivial libraries carry with them many dependencies.

The alternative is basically Ludditism. Where do you draw the line at what is or isn't bloat? Everyone only uses 20% of the features but in aggregate, they use 110% (10% from Hyrum's Law) of the features.

yukkuri
1 replies
13h23m

Yes, even the original link admits their "simple is better" solution is only better in limited use cases.

"Our software will be so much better quality if it won't do the things you're paying me to make it do" is a non-starter in the real world.

whstl
0 replies
6h12m

You two are arguing with a straw man.

Nobody ever said anything of the sort.

whstl
0 replies
6h17m

Fuzzed, proven correct libraries clearly aren't the problem here.

But this is really far away from the bulk of Rust or NPM libraries, and it's very naive to pretend that the situation isn't like that.

I already mentioned two examples of things that could have been separate from the main libraries (or at least required flags or separate compilation) and caused global problems: the heartbeat extension from OpenSSL and LDAP/JNDI from Log4J.

EDIT: I can mention two other examples that I already talked about in this thread. I often come across simple GraphQL-API-consuming apps or libraries that have dependencies on large frontend libraries when a simple fetch would suffice; the library is needed for introspection, or for building autocomplete clients, but for pure consumption it's unnecessary. Another example is libraries from authors who split code only to bump their npm or Cargo metrics, like is-odd, or the ora package that has 15 dependencies, a lot of which are covered by the stdlib.
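
To illustrate the fetch point, a sketch (the endpoint and query here are made up):

  // Consuming a GraphQL API with plain fetch, no client library
  // (run inside an async function or an ES module)
  const res = await fetch('https://api.example.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: '{ viewer { name } }' }),
  });
  const { data } = await res.json();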

yukkuri
0 replies
13h27m

Once upon a time I had to track down a catastrophic performance failure (process that should have taken a few minutes couldn't finish in the THREE DAYS allotted to it as a maximum runtime) that turned out to be because it was using a hand-rolled "simple" JSON parser instead of just pulling in Newtonsoft.

tcoff91
2 replies
1d20h

Memory-related vulnerabilities tend to be the worst kind though, like RCEs and stuff. Remote code execution is so much worse than many other types of vulnerabilities like DoS. How many RCEs have there been in Rust programs compared with C++? I bet C++ has far more than 70% more RCE frequency than Rust.

lionkor
0 replies
1d8h

The worst kind? Probably not. The worst kind is the vulnerability that is a logic error, which no language catches for you, and which can leave unskilled attackers accidentally breaching your whole system. RCE usually takes effort; exploiting a logic error, not so much.

For example, Rust happily lets you access a database before checking that the user's auth token is valid - absolutely nothing prevents you from that.

bsdpufferfish
0 replies
1d20h

Only for software directly connected to the internet. Unix process separation is a thing too.

IshKebab
0 replies
1d4h

Do you have any evidence that Rust programs actually run 10x as much code as C++ programs? That seems very unlikely. Most translations from C++ to Rust or vice versa that I've seen are within like 30% of each other.

The fact that Rust makes it easy to pull in a large number of small dependencies instead of a few enormous dependencies is irrelevant. You aren't using any more code.

For example, are you counting Rust's `regex` crate as a dependency? Well, that's just in the standard library for C++.

Does Boost count as a single dependency in C++? Because that would be like 30 separate crates in Rust.

geodel
16 replies
1d21h

Lately I have seen developers nuking the bloat by converting existing applications to hundreds/thousands of ultra slim micro services. I applaud this approach of taking the issue head on, at least in server-side domains.

namaria
8 replies
1d21h

hundreds/thousands of ultra slim micro services

Sounds to me like trading one type of bloat for another.

7thaccount
7 replies
1d21h

More macro level complexity for sure, but I guess you eliminate some issues from having all that as a monolith.

infogulch
4 replies
1d21h

Yes this is a great move, turning code bloat problems into a distributed systems bloat problem is definitely an improvement.

troupe
1 replies
1d19h

Agreed, and if security is an issue, creating network connections between every single piece of your application seems to exponentially increase the attack surface.

burnished
0 replies
1d15h

Au contraire, an attacker must understand and then navigate the web you have woven, which as we all know is impossible. Security could not be more perfect.

shrimp_emoji
1 replies
1d21h

We'll break the elephant apart into a brainiphant, a heartiphant, a liveriphant, two lungiphants, and they'll all agree over a communication protocol with which to exchange matter and energy.

It's more a way to enforce API boundaries than reducing complexity. I guess when you have some anxiety about your ability to do so otherwise.

thfuran
0 replies
1d20h

If the goal is just to have harder boundaries, it seems like entirely the wrong tool for the job and comes with a ton of unnecessary baggage. Something like Java's module system seems like a much better way to try to enforce API boundaries. Of course, there are languages that lack an equivalent but which can be used to write microservices.

namaria
1 replies
1d21h

And create many others by making networking part of your architecture. Bloat is bloat; changing how it's distributed isn't getting at the problem.

7thaccount
0 replies
1d19h

I agree, and I don't really work in this area anyway, but I work with brilliant developers who didn't go that route just to add something to their resume. There are obviously tradeoffs and it isn't a silver bullet. Trading one kind of complexity for another may make sense depending on the use.

Sohcahtoa82
5 replies
1d20h

In a sane world, this would be satire, but I really can't tell these days.

Have people taken "Function-as-a-Service" too literally and done the equivalent of moving "is_even" into an AWS Lambda? Or maybe have a dedicated "is_even" nano-service with its own Kubernetes cluster?
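
(For the record, the whole "service" would be something like this sketch of a Node handler behind API Gateway:)

  // The entire is_even nano-service
  exports.handler = async (event) => ({
    statusCode: 200,
    body: JSON.stringify({ isEven: Number(event.queryStringParameters.n) % 2 === 0 }),
  });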

geodel
3 replies
1d20h

You are thinking in the right direction. An ultra-small but critical service like "is_even", combined with the reliability of a Kubernetes cluster - that is genius-level stuff.

BobbyTables2
2 replies
1d18h

How many have even set up a Kubernetes cluster manually?

They use a Kubernetes “distribution” that adds another layer of automation on top of that!!!

The day is near when “is_even” will be decomposed into multiple containers, with an added caching layer and an extensible authentication layer.

peteradio
0 replies
1d5h

Caching makes it performant

geodel
0 replies
1d14h

Agree. One can't really run enterprise-grade solutions by haphazardly throwing code into a Kubernetes cluster. It needs to have a fully automated pipeline with authn/authz and other fixings.

triggercut
0 replies
1d9h
meindnoch
0 replies
1d21h

Can't tell if sarcastic or not.

hgs3
15 replies
1d20h

A typical app today is built on Electron JS

Not enough folks seem to realize it, but you can use each platform's native web control rather than bundling Electron. If you do that, your distributed app can be in the kilobytes. This approach also gives you the freedom to use whatever backend programming language / tech stack you want, so long as it can communicate with the web view.

moralestapia
9 replies
1d20h

How?

jsheard
4 replies
1d20h

I think Tauri is the most established "like Electron but with the system webview" framework. It supports wiring up native Rust code behind the webview frontend.

https://tauri.app

The obvious downside compared to Electron, besides maturity, is you have to test the frontend on multiple browser engines rather than just bundling Chromium everywhere and calling it a day. It uses Microsoft's flavor of Chromium on Windows, Apple's flavor of WebKit on their platforms, and WebKitGTK on Linux.

grujicd
2 replies
1d20h

There's a small problem with Windows: there are no guarantees you'll have Chromium-based Edge installed on a system. You'll have to download and install it if not present, and when I looked it was 100+MB. Sure, you don't have to include it in the installer, but you have to account for that case.

jsheard
0 replies
1d20h

I haven't seen it in action but according to the docs their installer will automatically download Chromium Edge if it's not already installed, so the end-user doesn't have to deal with it, and Windows 11 has it out of the box so that case will only trigger on older versions.

duckmysick
0 replies
1d7h

Isn't that how it works with Windows software that requires Microsoft Visual C++ Redistributables? The installer checks if they are there, downloading the missing ones when needed.

josephg
0 replies
1d14h

Even Tauri’s own benchmarks show it takes thousands of syscalls to bring up a “hello world” app. This isn’t lean software. Just because the download size is small doesn’t mean the software is fast or efficient.

hgs3
3 replies
1d20h

You can create the web control using each platform's native GUI toolkit and set up the JS communication yourself, OR you can use a lightweight library that does it for you [1] (search this project's README for language "bindings").

[1] https://github.com/webview/webview

grujicd
2 replies
1d20h

You still need access to native menus, clipboard, dialogs, drag&drop, etc. There are no light multiplatform libraries for that afaik. Either you'll use Electron, which is more or less these services + browser. Or you'll use one of the full multiplatform GUIs, which all have their limitations and drawbacks. Potentially you can use a hybrid - a web control inside one of the multiplatform GUIs (if a web control is available at all), but you'll have to reimplement some IPC stuff for communicating with the web part. That part already exists for Electron. And you'll still have to rely on this multiplatform GUI and deal with its issues. There's also the matter of trust: Electron, if nothing else, has VS Code, so we all know that a production-ready application is possible, even if it's not trivial. It's not that clear cut for Avalonia, MAUI, etc.

hgs3
1 replies
1d

You still need access to native menus, clipboard, dialogs, drag&drop, etc.

You don't need a library for these features. They're easy to access through each platform's native APIs, and if you go the route of creating the webview yourself (no library) it's a moot point, as you're interfacing with the OS anyway.

grujicd
0 replies
19h45m

Is it easy if you have to use 3 separate native APIs for 3 platforms, where presumably you're mostly proficient with just a single one?

To paraphrase Churchill, Electron is the worst multiplatform GUI solution, except for all the others that have been tried.

whartung
1 replies
1d15h

What are the current gaps for PWA?

PWA should be viable for a pretty large percentage of applications nowadays, right?

I don't know, but could Discord be a PWA instead of an Electron app?

The biggest gap is something capable like SQLite vs IndexedDB, but even then I bet most apps wouldn't require the sophistication of a higher-level query language compared to the "b-tree" model of IndexedDB.
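
For reference, a sketch of that model - IndexedDB is key (and key-range) lookups over object stores, with no query language on top:

  // Open a store and read one record by key - no SQL involved
  const open = indexedDB.open('app', 1);
  open.onupgradeneeded = () => open.result.createObjectStore('kv');
  open.onsuccess = () => {
    const store = open.result.transaction('kv', 'readonly').objectStore('kv');
    store.get('user:42').onsuccess = (e) => console.log(e.target.result);
  };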

whstl
0 replies
1d9h

A database is a minor obstacle. Most Electron apps I come across use cloud storage anyway, or even require some form of login for no reason other than spying on me (Postman).

edflsafoiewq
1 replies
1d20h

"Lean software" doesn't just mean download size.

hgs3
0 replies
1d19h

With the native web control approach you can write the bulk of your app in whatever efficient programming language / tech stack you want (Rust, C, C++, Go, whatever). The webview need only be for presentation - the 'V' in MVC. You aren't stuck with Node.js.

petabyt
0 replies
1d20h

And the app takes several seconds to start up, it's slow, and it ends up using more RAM than Electron, because the user isn't using the native webview as their main browser.

lifeisstillgood
11 replies
1d21h

>> Software is now (rightfully) considered so dangerous that we tell everyone not to run it themselves. Instead, you are supposed to leave that to an “X as a service” provider, or perhaps just to “the cloud.” Compare this to a hypothetical situation where cars are so likely to catch fire that the advice is not to drive a car yourself, but to leave that to professionals who are always accompanied by professional firefighters.

I’m stealing that line :-)

bryanlarsen
5 replies
1d21h

If cars had been invented in 2024, there's no way the general public would be permitted to drive them. We freak out over new products that result in a few deaths, let alone 40,000 per year.

lcnPylGDnU4H9OF
1 replies
1d20h

I'm not so convinced. People talk similarly about the issues that smart phones have caused with the rise of social media but only fringe political groups want their use restricted by law (with exceptions, such as while driving). I'd expect the most popular opinion to be that the convenience outweighs the cost.

(Of course, as a sibling comment points out, it's impossible to really know.)

throwaway598
0 replies
1d13h

Like smoking in the 1960s.

supertrope
0 replies
1d19h

Status quo bias is very strong. If you proposed accepting tens of thousands of fatalities per year, bulldozing entire city neighborhoods, kids not being able to cross the street safely, smog, and constant traffic noise people would be horrified. One common justification NIMBYs use is increased traffic aka the burden other drivers impose on a neighborhood by driving through. Ironically instead of identifying the root of the problem (too many cars and the subsidized infrastructure supporting them) they blame one of the solutions (more housing).

samtho
0 replies
1d21h

This is hard to say because an alternate universe where the car didn’t exist would be quite different and may have not led to our contemporary understanding and management of risk in the first place.

mejutoco
0 replies
1d9h

People often say this about bicycles too. It is appealing, but difficult to say. Self-driving cars are being invented and they are allowed (with restrictions).

My guess is that if the invention had a big corporation behind it and enough economic potential, it would get passed.

Karellen
2 replies
1d21h

Software is now (rightfully) considered so dangerous that we tell everyone not to run it themselves. Instead, you are supposed to leave that to an “X as a service” provider, or perhaps just to “the cloud.”

Well, that's what cloud providers tell you. And their employees, whose salaries depend on them not understanding otherwise.

mk89
1 replies
1d20h

In my experience, it's not as simple as you make it sound.

In some cases, especially due to compliance requirements (ISO27001/SOC2-3, FedRAMP, etc.), companies implement proper management of CVEs. It has become a standard practice (typically, this is part of the Enterprise subscription of the SaaS/product of choice).

Having a service/lib/whatever that uses a crappy lib that doesn't get a CVE fixed in a specific timeframe can lead you to miss the SLA you define with your customers (e.g., "we fix high risk CVEs in 90 days and critical in 30 days", that kind of stuff).

For such companies, a service like that, which is probably not even core business, can become a real cost, especially because Enterprise customers will push very hard on you to get those CVEs fixed. I have seen colleagues working over weekends just to fix 1 single CVE because of 1 single company pushing us really hard to fix that stuff. It was a big contract; you don't want to lose it.

So, yes: paying $X to someone who promises you "we take care of CVEs" can be a win. You're not just buying software: you are in a sense buying some accountability, although at the end of the day YOU are what the customer sees, not your SaaS behind the scenes.

supertrope
0 replies
1d19h

All so that these vendors can mis-configure a S3 bucket or use the password "solarwinds123."

vonjuice
0 replies
1d14h

I actually used that exact car argument when someone here was strongly against any use of psychedelics unless medical and aided by a professional.

thimp
0 replies
1d20h

Can I just point out that the vast majority of users I know are basically teetering on the edge of the cliff of losing everything they have ever done? I'm not suggesting that selling your soul to the SaaS vendors (disclaimer: I work for one) is the right solution. In fact, with some of them you're probably better off setting fire to your data than trusting them (disclaimer: I work for one of those!).

For example, my now ex-girlfriend was distrustful of "the cloud", for rational reasons I will add, thanks to a former Eastern Bloc childhood. However, the alternative solution was hoping she wasn't going to lose the HP laptop she'd paid bottom dollar for. Some education later and she had peace of mind, at least on that front.

What we have is a general lack of education and consideration of what the consequences of that lack of education are. The end game is that you either have to accept the risk, and I've seen many a tear there, educate yourself, and I've not seen much of that, or suck up to the SaaS and cloud vendors.

It's a matter of personal responsibility, and no one seems to have it, so leaving that to the professionals (ha!) might be a less bad solution than trusting yourself.

Education is the right answer though but hopeless. I'm not sure if my post has a conclusion but it sure depresses me re-reading it.

liendolucas
6 replies
1d17h

If I had known 20 years ago that software would be like it is today, I would never have chosen to be a programmer. Everything seems gargantuan. Hardware and software, hand in hand, competing in an endless race. And things are not better, nor easier, nor simpler. What a disappointment.

foobarian
2 replies
1d17h

I wish we could do what Linus did and start a lean walled garden ruled with an iron fist. How nice would everything be if it was written like Apollo mission software but ran on today's hardware! Sigh

liendolucas
1 replies
1d16h

I was actually thinking about the Apollo computers. We assisted man in landing on the moon with a micro fraction of what we have today. And flawlessly! That achievement astonishes me so much, when I compare it to where the industry is today, that each day I'm more convinced we have completely lost the skyline.

atq2119
0 replies
1d13h

Not flawlessly. There was a pretty scary bug with the radar(s?) during the moon landing of Apollo 11.

inversetelecine
1 replies
1d16h

Showed a young kid (I think the machine was older than him) an old Gateway G6-300 machine from 1999. Pentium 2 300MHz, 128MB SDRAM, 5GB hdd, Win98 FE, with over 3.5 GB free. I'm more a hardware than software person so the main draw for me was the unknown (to me) Chromatic Research Mpact AGP onboard graphics.

Anyway, his comment made me chuckle: "Wow, looks like everything runs pretty quick!" And he was right, everything loaded instantly. Even IE6, Netscape Communicator, Adobe Acrobat 5, etc

omoikane
0 replies
1d11h

Related, there was this post from a few months back about how everything was more responsive with an older OS:

https://news.ycombinator.com/item?id=36446933 - Windows NT on 600MHz machine opens apps instantly. What happened?

Follow up post by the original author:

https://news.ycombinator.com/item?id=36503983 - Fast machines, slow machines

austin-cheney
0 replies
1d9h

And the people you are working with are progressively more insecure, less confident, less capable, and less literate.

Mistletoe
5 replies
1d21h

Is software bloated because that's the only way for everyone to justify promotions and salary increases? It's hard to get a promotion for slimming your program down to a tiny size that still works perfectly and is exactly what the user wants/needs. But that is exactly how promotions and bonuses should work.

namaria
2 replies
1d21h

I think the huge influx of money into technology has been akin to pumping energy into a closed system. Entropy went through the roof.

amelius
1 replies
1d21h

Since Eternal September it hasn't been a closed system.

namaria
0 replies
1d20h

Closed systems can communicate with other systems/their environment. Closed doesn't mean sealed here, it means well defined boundaries.

edit: ok, so for thermodynamics-analogy purposes, 'closed system' means no matter flows in or out, but energy obviously does. Furthermore, whether 'the tech system' is made of isolated computers and specialized professionals, or networks of them, is immaterial to the analogy: lots of energy flowing in, in the form of huge amounts of cash, created a lot of entropy, in the form of accidental complexity.

zem
0 replies
1d20h

no, it's bloated because that's the path of least resistance. it's easier to write code that builds on frameworks which build on libraries which build on virtual machines etc, and that is arguably even a better use of the programmer's time than doing everything from scratch, but the tooling hasn't kept pace to let us produce a final artefact that strips out all the bloat and collapses the layers into a small binary. (this is theoretically possible with a "sufficiently smart compiler", tree shaking, and a host of related techniques, but is impossibly hard in practice, given the current state of the art)
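to make that concrete, here's a toy sketch of the tree-shaking half of this, assuming an ESM-aware bundler like rollup or esbuild (the module and function names are invented). static imports let the bundler drop the unused export, but a single dynamic access defeats the analysis, which is roughly why "impossibly hard in practice" holds:

    // toolkit.ts
    export function slugify(s: string): string {
      return s.toLowerCase().replace(/[^a-z0-9]+/g, "-");
    }
    export function renderToPdf(html: string): Uint8Array {
      // huge and unused: removable under static analysis
      return new Uint8Array();
    }

    // app.ts
    import * as toolkit from "./toolkit";
    import { slugify } from "./toolkit"; // only slugify needs to ship...

    console.log(slugify("Lean Software!"));
    // ...but one dynamic lookup forces the bundler to keep every export:
    const picked = process.argv[2] as keyof typeof toolkit;
    (toolkit[picked] as (x: string) => unknown)("x");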

chasd00
0 replies
1d20h

i know in the consulting world you always want to maximize the number of logos on any given architecture diagram.

givan
4 replies
1d20h

My approach to avoiding bloat with Vvveb CMS was to skip general-purpose frameworks and libraries that do the 1 thing I want and 100 more that I don't.

Even if it's harder to get started, in the end it's less time spent, because you know the code better, the code is not bent around the library, and it can better follow the application design.

This approach made everything lean and easy to develop and as a bonus everything is rocket fast.

pcardoso
1 replies
1d20h

Congrats, looks very good.

givan
0 replies
1d20h

Thank you.

christophilus
1 replies
1d15h

Your GitHub link is broken, fyi. It goes to “.github.com”

givan
0 replies
1d8h

Thanks for letting me know, fixed it.

AdamH12113
4 replies
1d20h

What amazes me is how many people (even here) will leap to their feet to defend an exponential increase in complexity to provide a minuscule improvement in convenience from putting their garage door/fridge/front door/etc. on the internet. I really hope the garage door opener bit is a joke (they come with radio remote controls!), but I have a bad feeling it's not.

I can almost see how this sort of thing could work -- a secure LAN for the house with appliance controls based on open protocols driven by a local server. Your phone would talk to the server via a LAN (in-house) or a VPN (remotely), decoupling the connectivity from the actual control. Heck, while we're at it, drop IP from the appliances entirely and use some low-bandwidth power line communication system (X10?) -- no need for an OS at all.
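To make that concrete, here's a minimal sketch of the local-server idea, assuming an MQTT broker on the house server and the mqtt.js client; the broker address and topic names are made up:

    import mqtt from "mqtt";

    // Talks only to a broker on the LAN (or over the VPN), never a vendor cloud.
    const client = mqtt.connect("mqtt://homeserver.lan:1883");

    client.on("connect", () => {
      client.subscribe("home/garage/door/state"); // hear state changes
      client.publish("home/garage/door/set", "open"); // issue a command
    });

    client.on("message", (topic, payload) => {
      console.log(`${topic}: ${payload.toString()}`);
    });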

That would require a lot of industry coordination, though, and in an age of walled gardens and digital surveillance I don't see it happening anytime soon.

Semaphor
1 replies
1d20h

I don't really see those comments, everyone usually talks about how people are stupid for getting cloud devices instead of going local with something like home assistant

AdamH12113
0 replies
1d20h

Home Assistant looks like at least 80% of what I was imagining. Glad to see that some serious work is going into this.

ryandrake
0 replies
1d20h

Also, a lot of software developers will defend the exponential increase in complexity by pointing to developer comfort and speed. Building on top of these 40 layers of abstraction is easier and faster for the developer, so therefore it's always worth it. Binary/download size, performance, security issues, user experience... so much can get sacrificed at the altar of developer convenience if we let it.

It gets defended because "developers are expensive" but nobody thinks of all the person-hours of our users' time lost because they are waiting for the code to execute up and down that class hierarchy...

ipsi
0 replies
1d18h

That... pretty much already exists, in the form of Home Assistant + Zigbee and/or Thread? Though that's still wireless, and I haven't seen any focus on trying to connect everything with wires (not something I'd be keen on, personally, I'm quite happy with the wireless protocols).

galleywest200
3 replies
1d20h

That GitHub repo they linked in the article has 1,600 (!!!) dependencies. Out of all of the programs ever written, this is certainly one of them. I am sorry to the programmer and this is not meant to be a slight on them, but holy moly.

https://github.com/SashenJayathilaka/Photo-Sharing-Applicati...

crooked-v
1 replies
1d20h

I think it's fair to note that a huge chunk of those come from react-scripts (and its own dependency Babel), though. I would bet that only a small fraction of all the dependencies are ever actually included in the output of the build.

whstl
0 replies
1d19h

Yes.

Out of the 1600 packages, just looking at package names:

227 are related to Jest, 167 are related to Babel, 93 are related to PostCSS, 66 are related to Webpack, 47 are related to Firebase, 43 are related to Webpack, 24 are related to Workbox.

There are even more packages that are related to those but whose name doesn't include the main product.

These multi-packages, in 90% of cases, are all maintained by the same people and often live in the same GitHub repository.
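For anyone who wants to reproduce that kind of tally, here's a rough sketch that reads an npm v2/v3 package-lock.json; the group names below are just illustrative prefixes:

    import { readFileSync } from "node:fs";

    const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
    // v2/v3 lockfile keys look like "node_modules/@babel/core"
    const names = Object.keys(lock.packages ?? {})
      .filter((key) => key.startsWith("node_modules/"))
      .map((key) => key.split("node_modules/").pop() ?? "");

    const groups = ["jest", "babel", "postcss", "webpack", "firebase", "workbox"];
    for (const group of groups) {
      const count = names.filter((n) => n.toLowerCase().includes(group)).length;
      console.log(`${group}: ${count} packages`);
    }
    console.log(`total: ${names.length}`);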

wahnfrieden
0 replies
1d20h

That’s not high

lelanthran
2 replies
1d9h

Software cannot get leaner, because that takes time, skill and highly paid people, not just people gluing together examples from 12 different tech stacks into a frankensuite.

I'm an independent developer, and the person who learned node.js last year, who will throw node.js, containers, random AWS-hosted DB services, lambda services, object storage, cloudflare, yaml, react, vite and other dependencies together to produce a cookie-cutter, yet still very fragile, webapp in a day will always bid lower than me.

The lean, fast, cheap to run and cheaper to maintain software just can't be written profitably, even if it is cheaper in the long run.

mrd3v0
1 replies
1d8h

Yup, it is not about awareness, it's about how the economy is structured. If people are paid for unsustainable software, they'll do it.

fuzzfactor
0 replies
1d

The world ships too much code, most of it by third parties, sometimes unintended, most of it uninspected. Because of this, there is a huge attack surface full of mediocre code.

And that's just the attack surface that leanness mediocrity creates.

The "defect & deficiency surface" is much larger than the attack surface, and defects are always doing their deeds while attacks are (hopefully) relatively seldom.

kps
2 replies
1d20h

“Have you looked at a modern airplane? Have you followed from year to year the evolution of its lines? Have you ever thought, not only about the airplane but about whatever man builds, that all of man's industrial efforts, all his computations and calculations, all the nights spent over working draughts and blueprints, invariably culminate in the production of a thing whose sole and guiding principle is the ultimate principle of simplicity?

“It is as if there were a natural law which ordained that to achieve this end, to refine the curve of a piece of furniture, or a ship's keel, or the fuselage of an airplane, until gradually it partakes of the elementary purity of the curve of a human breast or shoulder, there must be the experimentation of several generations of craftsmen. It seems that perfection is attained not when there is nothing more to add, but when there is nothing more to remove.”

— Antoine de Saint Exupéry, Terre des Hommes

verisimi
0 replies
1d9h

"It seems that perfection is attained not when there is nothing more to add, but when there is nothing more to remove.”

Perfection/truth

mejutoco
0 replies
1d9h

I never saw the context of the quote. It makes even more sense when you know the author was a pilot.

godzillafarts
2 replies
1d2h

Clicked the link. Immediately greeted with a CTA banner, a Google ad, and a cookie banner. Dismissing the cookie banner immediately reveals another Google ad, but this one is sticky and follows me as I scroll. Reading through the article I came across at least three more advertisements.

Kind of difficult to take this seriously.

mcdonje
0 replies
1d2h

That is ironic, given the topic.

JoachimSchipper
0 replies
1d2h

That is on IEEE, not on the author - the article was originally published on the (fast, ad-free, privacy-preserving) author's site at https://berthub.eu/articles/posts/a-2024-plea-for-lean-softw...

flenserboy
2 replies
1d17h

I remember when the stated dream was to have a standard set of system-provided hooks that led to routines everyone would use for interfaces & more (think: Macintosh Toolbox / QuickDraw) while the main job of the developer would be to code the program logic. There was a great deal of talk about making any changes or additions be transparent — system calls would seamlessly fulfill the same task, even if the code behind them changed, & additions would be supersets of what came before, allowing older code to compile &/or run/function without a hitch, while giving newer software more capabilities. This was supposed to make maintenance easier, regularize interfaces, & at the same time keep code lean since it would rely so much on system calls (external libraries were supposed to be avoided, &c., &c.).

This dream quickly fell apart (remember DLLs?), & it seems that much package management & packaging has to do with making sure the right libraries are available. It makes sense that it would not have worked as was hoped, as mass software development was still essentially in its infancy. Here's my question: now that there has been a great deal of collective experience in these matters, is it the case that it has been learned that this dream is simply impossible to bring about in a sane fashion (codebases may simply not be moveable to such a system without unaffordable effort), or has there been enough experience with the current messy state of affairs to drive people toward a modern attempt at making this actually work? If we're wanting fast, lean, stable, secure software (& all four may not be possible at once), I'm not sure that the current situation is heading toward those ends.

crq-yml
1 replies
1d16h

It's iterative pressure to push features further down the stack. Early Unix did very little! Modern BSDs ship in a very "complete" state. The early Lisps put very little in the language, but Raku shoves every trivial thing that would go into an npm library into the language spec. The C language made you figure out how you wanted to build your code, while every new compiled language comes with some form of build tooling.

There are certain things we are doing in this landscape that are pretty effective, but they are done outside of the "you, the machine, and a greenfield project" context that drove Wirth's endeavors. The problem tends to be that they come with certain monumental thresholds of "big dependency" like a database or browser engine, and if you don't like how the big dependency was made you end up unhappy.

flenserboy
0 replies
1d14h

That makes sense. Thank you for the response.

akprasad
2 replies
1d19h

I'm reminded of Maciej Cegłowski's "The Website Obesity Crisis" (2015):

https://idlewords.com/talks/website_obesity.htm

cylinder714
1 replies
1d15h

Thanks for posting this, as MC removed it from his website.

burkaman
0 replies
1d14h

That is his website.

jauntywundrkind
1 replies
1d17h

In 2025 this whinge can finally die. Die hard.

The size of software is such a rallying point for malcontents & grumblers. Endless belly-aching, back in my day software was all fast and hand coded in assembly, no, binary, by real men/women.

Load times can be a bit bad, and yes, 400MB app sizes do kind of suck, but honestly, it'll swap out fine, and if you have even a passable modern SSD, it'll swap right back in at crazy fast speeds. In principle it's unpleasant; in practice, for most people this is not a real impact. But oh how it attracts aggression and disdain.

The point about security is interesting because the stakes are so high. And oh my, we do see some epic disasters. What was the npm one with cryptocoin wallets being drained recently? But you know? In general the number of occurrences has been fantastically, stunningly low; it's rarely an issue. There's some log4j shit where longstanding legacy unmaintained apps have some horrific vulnerability that no one is around to import the fix for. But generally? Most exploits are dealt with with amazing haste these days, at much shorter intervals than most devs do updates, such that most teams likely won't see impact. There's incredibly good tooling available to report dependencies & automatically offer patches. The stakes are high but our defenses are astoundingly capable & fast-acting. We are vulnerable, absolutely, but in practice where we are is amazing.

The security footing I think is hella legit as a plea for lean software. Far more so than so many other persistent endless grumbles. I'm so weary that so many folks have a common place to get together and bellyache & moan, endlessly, again and again, about how bad software is, and how awful things are, when it's not. Shit is amazing. Our ability to stitch great shit together is amazing and having some extra code or parallel utility functions coded in just isn't really a great offense.

Software has plenty of bad architectures underneath the surface that make it slow and bad. There are real, hard problems around doing the minimal amount of work when things change, figuring out what work to do, and understanding which data structures and algorithms are helping and hurting. I want software dissenters, I want anger, but these feel like such low-grade feedstock complaints, focused on such minor & mostly-irrelevant factors, that people can easily gather around them & breed disdain.

So, here we are in 2024, and this security stance seems like a super legit if improbable thing to be terrified about, an actual forcing function to care about the growth of software under the layers. If only we had some kind of secure, maybe cross-language, hopefully capabilities-based sandboxed system for running code that we could use to run libraries safely in... Ah wait, we do, it's called wasm, and WASI just shipped preview 2 with the component model, so we can make this happen. I'm excited that we can progress past this reason for conservative Fear, Uncertainty and Doubt by encapsulating the problem and containing it, without having to undergo some radical revolution where we completely remake software development in some undefined safe, secure, mysterious way where everyone starts using the same code, which we all agree is exactly the right code & nothing else is possible.

People love to get together & feel smug that they have been in touch with the better halcyon universe where everything is great & only as it should have been, but no one has offered even the remotest suggestion of how we get there. MS tried for years with .NET, but even they had to constantly reinvent what batteries-included meant. Who else will do better, make only perfect decisions, such that no one else ever need touch or address the topic again?

This longing for some higher authority to make code as perfect & simple as can be flies in the face of what should be exciting & compelling & powerful about the coding world to me; its messiness brings amazing glory & possibility, & we thrash towards better slowly & chaotically. What happens is amazing, & I'm so tired of the positives of this world having no outlet, no rallying points, while the people who want to trash software can band together so regularly, so easily & so frequently. It's unfair that software conservatism has as much appeal as it does, and wasm stands to cut this reality-bias off at the kneecaps, thank the stars.

yukkuri
0 replies
13h11m

"Who else will do better, make only perfect decisions, such that no one else ever need touch or address the topic again?"

Nobody, of course, but complaining about "kids these days" is never going to stop being popular.

austin-cheney
1 replies
1d10h

As a former JavaScript developer, I think the problem is poor training and poor candidate selection. Many people, perhaps most, writing JavaScript professionally cannot do their jobs without considerable assistance, fear (as in a phobia) writing original software, and cannot communicate at a professional level.

What would happen if things like Angular or React suddenly went away? Would these developers be able to write new applications? Likely not. What if a developer could no longer use jQuery or equivalent helper libraries? Could they continue to deliver work? Almost certainly not.

When most of the work force is interested only in its own narcissism, the industry has failed on multiple levels. You see this when the first, second, and third priorities serve only the developer's own self-interest, such as needing things to be easy. That is why software is bloated: when the self-interest of the developer is all that matters, performance and efficiency become absurdly irrelevant.

The solution to this problem is to set high standards. Not high compared to other professions, but astonishingly high compared to software today. If current developers cannot achieve such lofty professional standards, do not hire them as senior developers. It may be in employers' best interest to train people off the street who are not yet corrupted by poor habits. Eventually the solution will become economic: what is cheaper for the employer? Will it be cheaper to employ large staffs that deliver bad products slowly in their own self-interest, or to start anew with well-trained teams that deliver excellence rapidly, at great training or certification expense?

Until the industry figures this out I will never write JavaScript professionally again. Currently, children run the daycare.

whstl
0 replies
1d8h

You've been downvoted, but I share the same experience.

It has become very difficult to have some discussions with less experienced developers, because they have a problem accepting that there are alternatives they're not familiar with.

Just this week I had a discussion with a recent hire who claimed it was impossible to use GraphQL without a dedicated client library. The claim that our team built our current product with just fetch was met with "there must be some security issue, or you had to rebuild from scratch something that is as complex as Apollo".
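For the record, the entire "client" can be a sketch like this (the endpoint path and query are made up; GraphQL over HTTP is just a POST with { query, variables }):

    async function gql<T>(
      query: string,
      variables?: Record<string, unknown>,
    ): Promise<T> {
      const res = await fetch("/graphql", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ query, variables }),
      });
      const { data, errors } = await res.json();
      if (errors?.length) throw new Error(errors[0].message);
      return data as T;
    }

    // const { user } = await gql(
    //   `query ($id: ID!) { user(id: $id) { name } }`,
    //   { id: "42" },
    // );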

MrDresden
1 replies
1d21h

'Do less, but do better' is a mantra that could do with being applied more often.

surfpel
0 replies
1d21h

Dieter Rams' mantra is “Less, but better”

His 10 principles guide everything I create.

MagicMoonlight
1 replies
1d4h

The first iPad children are growing up. Wait until the software developers don’t even know what an executable is.

It’s going to be interesting when the people who made things like Linux die and there’s nobody to replace them.

pritambarhate
0 replies
1d4h

By then AI will know how to write operating systems most probably.

sharas-
0 replies
1d20h

"Writing has been called the process by which you find out you don’t know what you are talking about.

Actually doing stuff, meanwhile, is the process by which you find out you also did not know what you were writing about."

Why architects code: https://bitslap.it/blog/posts/why-architects-code.html

raid2000
0 replies
1d8h

Even as early as 15 years ago, MIT switched from Scheme to Python. The reasons are detailed in an interview[1] (see HN discussion[2]): the educators believed the future of programming would be snapping pre-made libraries together instead of engineering from scratch.

1: https://www.wisdomandwonder.com/link/2110/why-mit-switched-f...

2: https://news.ycombinator.com/item?id=14167453

nonrandomstring
0 replies
1d21h

One more cheer for suckless [0] philosophy. Hoorah!

[0] https://suckless.org/

mouse_
0 replies
1d21h

See OpenVPN (~70k lines of code) being replaced with WireGuard (~4k lines of code), resulting in far fewer vulnerabilities and far better performance on top of that.

lawgimenez
0 replies
1d17h

Speaking of bloat software, I just reformat my Mac to cleanse it from all the Electron bullshit.

kazinator
0 replies
1d14h

If you want leaner software, the first thing you must do is eliminate the vast throngs of users who are generating the demand for the bloated software, such that making that kind of software becomes the economically preferable activity.

This issue has analogies elsewhere. Why anything sucks at scale in any way has to do with large numbers of other people.

Why can't you find a good quality Widget at any price, only cheap ones? There aren't enough of you who want the better Widgets, so it's not worth it. And when you say any price, you don't actually mean it anyway.

ho4
0 replies
1d18h

As an enterprise web developer by day, I find my salvation in my personal open source projects. It's a topic that has bothered me for about a decade and had actually made me quit programming as a hobby a few times.

As someone else had once said, limitation sparks creativity. Without limitations (such as for instance a low number of dependencies, or deliberately simpler code) the activity becomes mundane and overly goal-focused, which ultimately results in poor quality of the outcome.

I personally found myself greatly improve the quality of all my activities in life by imposing "challenging" limitations on them. Though surely it's not an approach for everyone.

hilbert42
0 replies
21h30m

It's now nearly 30 years since this article titled Software's Chronic Crisis by W Wayt Gibbs appeared in the September 1994 issue of Scientific American: https://www.researchgate.net/publication/247573088_Software'....

What's truly shocking is that the chronic problems with software that were identified in the 1994 SciAm article are still essentially same as those outlined in this current IEEE article.

Back then, Wayt Gibbs correctly concluded that software development couldn't be classed as a professional engineering discipline, so why, if anything, has software development sunk even further into the mire over the past 30 years?

It is truly incredible that so many have allowed this unmitigated shambles to perpetuate for so long.

Why the fuck can't the software industry and computer science finally get their act together?

deathanatos
0 replies
1d18h

I think I kinda disagree? This is hard (and likely the reason it's not happened), but I think we need, in some ways, more code. We need the right code. (And therein lies the rub.)

For example, today I was trying to … put a link into Slack. And it is very clear, as a user of Slack, that Slack utterly lacks a parser and is almost without a doubt simply tossing some regexes at the problem. A parser might be "more code" … but it would simply work, whereas I often find valid markdown getting mangled by … whatever is attempting to "parse" it, but probably isn't.

We have a script that renews certs with certbot. If I get enough free time, I'm going to be replacing it with a Rust program, as certbot is just too difficult to automate. It wants to prompt users with questions (do you want to upgrade to EC? No, no, I just want to renew) and regularly fails (it attempts to wait for DNS propagation, but does so too simplistically; this is made doubly hard by the fact that Cloudflare will gladly ack a write in their API … but their DNS is apparently not read-your-writes.)

Even the Rust replacement, which is in a prototype, has to re-implement part of ACME, as the ACME library doesn't expose the part I need. I'd rather not … but missing code.

I want to parse .deb files to walk their dependency tree, but .deb files use an artisanal format. They're old, so I sort of get it, but it's nonetheless annoying.

Read-your-writes is the single biggest reason I have to do inane amounts of retry logic around APIs that ack writes and then gaslight me on subsequent requests dependent on that write.
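The dance always ends up looking something like this sketch, where readBack and isApplied are hypothetical stand-ins for whatever API just acked the write:

    async function waitForReadYourWrite<T>(
      readBack: () => Promise<T>,
      isApplied: (value: T) => boolean,
      attempts = 8,
    ): Promise<T> {
      let delayMs = 500;
      for (let i = 0; i < attempts; i++) {
        const value = await readBack();
        if (isApplied(value)) return value; // the write finally showed up
        await new Promise((resolve) => setTimeout(resolve, delayMs));
        delayMs = Math.min(delayMs * 2, 10_000); // capped exponential backoff
      }
      throw new Error("write was acked but never became visible");
    }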

I've hacked around Cython failing to accurately transpile Python. I've hacked around Azure APIs just … not doing what they're paid to do. I've written scripts to make Docker API calls that … could have been exposed in a CLI tool, but just weren't, because tough cookies.

The WASM people were fighting about whether strings should be forced to be valid, or not. Think about it: every future WASM program might have to deal with SomeNotQuiteAStringType, for the rest of time, all just to make interop with JavaScript "easier". Hopefully, in Rust, we'll get a WasmString with a `.to_string() -> Result<String, Ffs>`, but that's still a fallible path that didn't need to exist.

People still think CSV is a decent format.

left-pad, widely derided as "one of those dependencies that is so simple you don't need it", failed to correctly implement the function! Then, when the ragefest about it broke out, the broken function was standardized.

Go for the longest time didn't have a round function, and the bug report where it was requested got a "it's so simple just implement it yourself" and it took commenters like 6 or 7 attempts to get it right. round()! (Thankfully, Go saw the light and eventually added it to the stdlib.)
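To be fair to those commenters, the obvious one-liner really is a trap. A toy demonstration of two classic failure modes (TypeScript here, but the float behavior is the same anywhere IEEE doubles are used):

    function naiveRound(x: number): number {
      return Math.floor(x + 0.5); // the "obvious" implementation
    }

    naiveRound(-0.5);
    // 0 -- but round-half-away-from-zero (what Go's math.Round does) gives -1

    naiveRound(4503599627370497); // 2^52 + 1
    // 4503599627370498 -- x + 0.5 is not representable here, so the addition
    // itself rounds up before floor() ever runs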

In many ways, quality engineers who know what they're doing and write good, high quality systems would go a long ways. But SWE companies are, AFAICT, penny wise and pound foolish in that department, and have traded paying devs for quality with devs dealing with low-quality systems for all of eternity.

d-lisp
0 replies
1d20h

A few years ago I thought "how smart I am" when I built a smart TV out of a dumb TV, an old computer, a small server and a light apk on my phone to control the newly created dumb-smart-tv, but in fact even this toy project mediately depends on the Linux kernel, drivers, Android and a lot of code I cannot even hope to have the possibility to read in my lifetime. The truly smart solution would be some kind of system-on-chip that would send the proper signal via hdmi from a source fetched by a sort of curl, but with streaming. But then I would expect the source itself to rely on lean software; something like netflix, but instead of a web app you would just have some catalog (a printed one?) of the available routes or codes you can send (like a telephone number) to ask for the emission of bytes to your hardware. But then I would ditch software entirely: you would just compose a number on an old analogue phone and plug the video cable into the enclosure to receive your movie, while listening to the audio via the phone speaker.

crossroadsguy
0 replies
1d17h

Maybe something like this can help

- Product Managers being educated, in depth, in the causes and the short- and long-term side-effects of bloat. PMs and their ever increasing lust for making a piece of software do moreeeee!

- Coders should be penalised for frivolous use of libraries (especially in that glorious NPM world). Sometimes I wonder whether they’d just include a library instead of typing the word “include” if they could.

- Dependency registries like NPM (actually, especially NPM) should have a strict IIEN check before any library is published there (IIEN: Is It Even Needed). Sometimes it's okay, actually better, to just write your own quick little wheel; see the sketch after this list.

- package.json (there’s yarn as well?) should be declared a health hazard especially for anyone who is not directly in the field (they might cause panic attacks) - and that’s just the first level listing.
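As a hypothetical example of such a wheel: a debounce helper is about ten lines of your own code, versus a package plus whatever it drags in:

    function debounce<Args extends unknown[]>(
      fn: (...args: Args) => void,
      waitMs: number,
    ): (...args: Args) => void {
      let timer: ReturnType<typeof setTimeout> | undefined;
      return (...args: Args) => {
        clearTimeout(timer); // restart the countdown on every call
        timer = setTimeout(() => fn(...args), waitMs);
      };
    }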

christophilus
0 replies
1d21h

We are likely looking at over 50 million active lines of code to open a garage door, running several operating-system images on multiple servers.

It is kind of insane when you see it written out that way. It is staggering how much code I'm running right now on my machine as I type this-- code that I never reviewed and that very likely hasn't had much rigorous review at all. :/

Well, back to installing npm dependencies.

barnabee
0 replies
1d20h

Over time I’m coming to two conclusions:

1. Standard libraries are the new operating systems

2. The only way to design reasonable (lean, secure, responsive, interoperable, reliable, long lasting, etc.) software is for rich and carefully thought out abstractions to be incorporated into operating system and/or standard library APIs.

We have to remove complexity by building better operating systems, programming languages, and core standards/abstractions. A great example is web components: they should have destroyed the case for React and its ilk; instead, a completely wasted opportunity.
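For what it's worth, the platform primitive really is small. A toy element (the tag and attribute names are invented), with no framework runtime shipped:

    class StatusBadge extends HTMLElement {
      // runs when the element is attached to the document
      connectedCallback() {
        const level = this.getAttribute("level") ?? "info";
        this.textContent = `[${level}]`;
      }
    }
    customElements.define("status-badge", StatusBadge);

    // In plain HTML: <status-badge level="error"></status-badge>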

Timber-6539
0 replies
1d4h

A post on HN almost a week ago delves further on the practicalities of shipping lean software. https://news.ycombinator.com/item?id=39240471

TheAceOfHearts
0 replies
1d19h

Lean code doesn't get written because there are crazy time constraints and the market generally doesn't reward or care about code quality. Managers want things shipped last month, and they want to keep churning out new features without long-term planning.

But even going beyond that, we're forced to keep building upon tons of old design decisions which don't always match modern software expectations. It doesn't help that modern operating systems have failed to evolve in order to provide a better ecosystem. And that's not even taking into consideration the barriers created by artificial platform segmentation enforced through copyright abuse. In general, platform owners are very resistant to working together.

The biggest innovation in the OS space during the past decade which I'm aware of has been the proliferation of containers. We've given up on combating software complexity and decided that the best thing to do is throw everything into a giant opaque ball of software and ship that.

Anyway, for all my ranting all this bloat has at least enabled a lot of people to ship code when they probably wouldn't have otherwise shipped anything at all. The choice is rarely between good code and bad code, it's often going to be between nothing and bad code. And a lot of this shitty horrible code is often solving real world problems, even if it's bloated.

RomanHauksson
0 replies
1d20h

I would have liked to see some statistics cited instead of anecdotes about individual security incidents. Is industry software really less secure than it used to be because of larger attack surfaces, proportional to the size of the software industry?

ChrisArchitect
0 replies
1d19h

[dupe]

Syndication of article from last month:

More discussion: https://news.ycombinator.com/item?id=39049956

Ancalagon
0 replies
1d20h

Leave the bloat, that’ll ensure employment for me forever.