Most libraries on npm are bloated. The authors don't know good design and instead try to make every library do everything. Oh, my library converts strings from one encoding to another? I'll make it load files for you, save files for you, download them over the internet, and give you a command-line tool, all in the same repo. The library should just do its one thing. The rest should be left to the user.
I get the impression it's no better in Rust land. Go try to edit the Rust docs and watch it install ~1000 crates.
The issue is not the language, it's that anyone can post a library (not complaining) and anyone does. People who "just want to get stuff done" choose the library with the most features and then beg for even more, since they can't be bothered to write the three lines of code it would take to solve the problem outside the library. "Can you add render to PDF?"
I don't know how to solve it. My one idea is to start a "Low Dependency" advocacy group with a badge or something, and push to get authors to want that badge on their libraries and users to look for that badge on the libraries they choose.
I feel this so much.
I once had a contract for Ruby on Rails work. They were experiencing severe performance issues, so bad that they were developing in release mode (the server won't detect a file change and reload for you).
One day I got tired of it and started digging in. I don't remember how many gems they were pulling in, but it was A LOT. I came across one particular gem that saved three lines of code. Seriously.
I stayed away from the RoR community after that. I recently picked up a contract for more RoR work (after all these years, lol) and ... it's not nearly as bad, but it's not good either.
Some communities just do NOT respect the risk that dependencies bring.
Given you've seen the same effect in both Ruby and JS, maybe the takeaway should instead be that there is a group of devs who will always reach for a package first, not that a specific language has a problem.
and that group of devs tends to flock to certain ecosystems.
I've never seen a .net project with 100+ dependencies, I've easily seen that multiple times for RoR and node.
On the flip side, it takes way longer to get things done in .NET. There should be a balance here, but it's not going to happen. There will always be a fair number of users installing packages for everything.
That’s not my experience with .NET, at least not on any project of non-trivial scope.
What kinds of things do you find take longer in .NET?
If you're talking about initial productivity they're probably right. If you're talking about long term productivity, they're most definitely wrong.
I say this as someone with extensive experience in .NET, Ruby, PHP (Laravel, et al.), and so on.
Even something like ActiveRecord is going to blow .NET out of the water in terms of pure development speed, but long term the typing and the stability of .NET give it the advantage.
Of all the ecosystems, I didn't think you'd hold .NET up as the pillar of lean dependency trees. Maybe it's bad luck, or even just me, but NuGet hell is a real place I've been to, and .NET's packaging system is brutal at resolving once you pass even 50 packages.
I can say that's not been my experience, but I've never seen 50 NuGet packages in a single project. It doesn't necessarily surprise me that it becomes painful after 50, but what in the world were those 50 packages for?
You probably know this already, but just in case:
I would also strongly recommend not using packages.config, or upgrading away from it if you can. PackageReference can deal with transitive dependencies because NuGet resolves them for you: https://learn.microsoft.com/en-us/nuget/consume-packages/pac...
So you don't have to add a dependency to a project just because another dependency has a dependency on it. It might allow you to start removing dependencies.
I've been told packages.config allows pre/post events that PackageReference doesn't, so it may be that it's not an option for you, but even then I'd try really hard to move away from packages.config.
Here's some migration documentation just in case: https://learn.microsoft.com/en-us/nuget/consume-packages/mig...
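For anyone following along, the shape of it is roughly this: with an SDK-style project and PackageReference you list only your direct dependencies, and restore pulls in the transitive ones. A minimal sketch (the package name, version, and target framework below are just illustrative placeholders):

    <!-- Minimal SDK-style project using PackageReference.
         Only direct dependencies are listed; transitive ones are restored automatically.
         Package name/version/framework are illustrative placeholders. -->
    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <TargetFramework>net8.0</TargetFramework>
      </PropertyGroup>
      <ItemGroup>
        <PackageReference Include="Newtonsoft.Json" Version="13.0.3" />
      </ItemGroup>
    </Project>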
In Java land you can have a huge nested tree of dependencies, but you often don't have a "Gemfile.lock" or the like to list them all in a file in your repo...
I'm not a fan of the Java community either; they love their structure. People jest about FactoryFactoryFactory, but you'll legitimately see that stuff out in the wild.
Paradoxically, I like the PHP community because they're so practical (yes, the language itself is ugly). RoR built rake for running jobs (it does more things, but that was the primary motivation); the PHP community just used cron. Although, having said that, Laravel takes a lot of its cues from RoR and has the same sort of tooling.
When Laravel first hit the scene, a lot of people criticized it for its use of statics, but it was legitimately nice to use and that won out in the end.
I don't make a judgement of Rails as a community. Most of my Ruby work has been Ruby on Rails, so that's the lens I see things through. Rubyists may be the most practical of all; it's just that so much Ruby code is RoR that it's difficult for me to separate the two.
Speaking as a .NET developer, I suspect this is because the base libraries are somewhat extensive.
My experience from having worked at a mostly Python shop (and loving Python myself), and then at a Node shop, is that the latter is by far the worse of the two.
Python probably has just as many shitty packages as Node, but Python's stdlib is so vast that you often don't need them, if you bother to read the docs. Just today I was marveling at a function someone at work wrote to cast strings to title case; I learned that JS has no built-in for that.
That’s a tiny example, but there are far more.
That's also just a one-line function with template literals in JS, if by title case you mean uppercasing the first character. Maybe a bit overkill to bake into a stdlib.
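For what it's worth, a minimal sketch of that one-liner, assuming "title case" here just means uppercasing the first character (the titleCase name is mine):

    // Uppercase the first character, keep the rest as-is.
    const titleCase = (s: string): string =>
      s ? `${s[0].toUpperCase()}${s.slice(1)}` : s;

    titleCase("hello world"); // "Hello world"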
It's not about ease, it's about frequency of use. If it's used often, it should be in a stdlib.
This is why I like Go and its stdlib. It's very much "do it yourself" unless we're talking about monumental libraries for very specific applications. Outside of that, if we're talking about simple libraries that do one simple thing, it's better to just roll your own using the stdlib. Almost all of the building blocks are there.
On the other hand, it's really nice that any git repo can host a Go lib and anyone can use it from that URL.
I've never used Go, but I much prefer a proper stdlib that covers most base cases, so that there isn't a need to roll your own each time, or worse, to use a badly made/maintained, bloated third-party lib.
Examples of excellent stdlibs, in my opinion, would be Kotlin's and Python's.
I think that's what they were saying (and what I would agree with). The Go std lib is very complete for important things, especially in the context of networked services. So the default is that you don't have to pull in many libraries for common needs.
Important things like sets[0], or get-with-default from maps[1], or enums[2], or appending to[3] or mapping over[4] slices?
[0] https://stackoverflow.com/questions/34018908/golang-why-dont... [1] https://www.digitalocean.com/community/tutorials/understandi... [2] https://stackoverflow.com/questions/14426366/what-is-an-idio... [3] https://go.dev/tour/moretypes/15 [4] https://stackoverflow.com/questions/71624828/is-there-a-way-...
I took this to mean that Go's stdlib required a lot of 'do-it-yourself' implementations.
I too have noticed the Go community's tendency to prefer that everyone reimplement basic functionality from scratch rather than have it provided.
On this note, I wish more npm library authors would emulate sindresorhus's approach of making small packages that each do very specific things in a predictable and well-documented manner.
Hell no. This person is probably the worst offender in terms of supply-chain problems and bloat in the NPM ecosystem.
A terminal spinner ("ora") with 15 dependencies (including transitive) is not an example of good design.
Inflating the download numbers of your own packages does no good for the world of software.
What you hope for is shallow dependency trees, not too branchy, but probably also not branchless (an indicator of vendoring).
Larger projects will have, and likely require, deeper trees though, but the branchiness should be relatively independent. I wonder if this has ever been formalized.
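Not aware of a formalization either, but the two metrics in question are easy to sketch over a dependency graph. Everything below (the adjacency-map shape, the package names, the numbers) is a made-up illustration, not any real tool's output:

    // Toy metrics for a dependency graph: maximum depth and average
    // branching factor (out-degree). The graph is a hypothetical adjacency map.
    type DepGraph = Map<string, string[]>;

    function maxDepth(graph: DepGraph, root: string, onPath = new Set<string>()): number {
      if (onPath.has(root)) return 0; // break cycles
      onPath.add(root);
      const deps = graph.get(root) ?? [];
      const depth =
        deps.length === 0
          ? 1
          : 1 + Math.max(...deps.map((d) => maxDepth(graph, d, onPath)));
      onPath.delete(root);
      return depth;
    }

    function avgBranching(graph: DepGraph): number {
      const degrees = [...graph.values()].map((deps) => deps.length);
      return degrees.reduce((a, b) => a + b, 0) / degrees.length;
    }

    // A shallow, moderately branchy tree: depth 3, average out-degree 0.8.
    const graph: DepGraph = new Map([
      ["app", ["http-lib", "date-lib", "log-lib"]],
      ["http-lib", ["url-parse"]],
      ["date-lib", []],
      ["log-lib", []],
      ["url-parse", []],
    ]);

    console.log(maxDepth(graph, "app"), avgBranching(graph)); // 3 0.8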
What I hope for is for code that is actually auditable, and doesn't pull a lot of unused bloat just to pump up someone else's vanity metrics.
This applies to both large and small projects. Babel also doesn't need to do what it does: it pulls in a couple hundred non-reusable sub-packages even though they're all written by the same people and maintained in the same monorepo.
Honestly, I’ve often considered chunking all those libs into a single big one to eliminate 50% of my npm dependency chain.
The "3 more lines" problem can be solved by LLMs.
A black box that turns a few megawatts and 60TB of data into a model that can write three lines is antithetical to lean anything.
Yeah, honestly, one key thing about Copilot has been that I can write the definition of a very common function I need and it will do the boilerplate for me. It feels dirty not using a library, but in the end I save bytes and have clear control over a function that is otherwise quite standard.
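As a hypothetical example of the kind of standard function I mean, something that is often an entire dependency but is only a handful of lines to write (or have generated) yourself:

    // Debounce: delay calls to fn until `wait` ms pass without a new call.
    // The sort of utility that is often pulled in as a whole package.
    function debounce<A extends unknown[]>(
      fn: (...args: A) => void,
      wait: number,
    ): (...args: A) => void {
      let timer: ReturnType<typeof setTimeout> | undefined;
      return (...args: A) => {
        clearTimeout(timer);
        timer = setTimeout(() => fn(...args), wait);
      };
    }

    const log = debounce((msg: string) => console.log(msg), 200);
    log("a"); log("b"); // only "b" is logged, ~200ms later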
It sounds like you're conflating low bloat and low dependency, which is like trying to have your cake and eat it too. If you want low bloat libraries, then it's likely you're going to be pulling a lot in that don't independently do much. If you want low dependency libraries, then you'll be pulling in a few that do do a lot.
From my perspective I'd rather have slightly fatter libraries if they're low dependency and by authors I trust. Lodash, for instance: sure, it's big, but the ES module version (lodash-es) supports tree shaking, and it's basically the standard library JS never had. Same for something like date-fns for Date. I pretty much pull these two into any project by default to fill those holes in JS's core library.
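A sketch of what that looks like in practice, assuming lodash-es and date-fns are installed and a tree-shaking bundler (esbuild, Rollup, webpack) drops the unused exports:

    // Named imports from ES-module builds; the bundler keeps only what's used.
    import { groupBy } from "lodash-es";
    import { format } from "date-fns";

    const byFirstLetter = groupBy(["ant", "apple", "bee"], (w) => w[0]);
    console.log(byFirstLetter); // { a: ["ant", "apple"], b: ["bee"] }

    console.log(format(new Date(2024, 0, 31), "yyyy-MM-dd")); // "2024-01-31"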
What I've seen way too often is a project using a dependency for only one tiny function. That's how we get a calendar app with a 2.5GB node_modules.
The problem comes from two sides.
First, you have package makers who want to turn this into a career, and the only way to generate buzz is by having thousands of them. So they focus on generating packages that depend on other packages they've created, and on trying to get their one or two useful packages into someone else's code.
The second is people who believe that the only solution to a problem involves including a new package, without caring how many dependencies it has, or whether the problem is even hard to solve. So instead of learning how to solve the problem, they have to learn the API of some wrapper with 4 GitHub stars.
Libraries as pure functions wherever possible, and a build system that not only enforces this but warns you when an update introduces a new or changed syscall. Yes please!
I know it's not exactly low-dependency, but it's at least a lower attack surface.
Reminds me of that disk defragmenting utility with an embedded mp3 player from the early 2000s.
They actually think saner devs have not-invented-here syndrome :0
Ironically, solving for your criticism leads to the criticism in the comment next to yours. They complain about a text-encoding library that also reads and writes files, makes network connections, etc. This is a form of bloat, but it minimizes my number of dependencies.
So which is the worse approach?
Libraries that do everything are better than 50 different tiny libraries, any one of which could be the next left-pad. Even if the big library uses a bazillion small libraries, the big library's team usually at least does some testing.
Maybe we could move to having big teams maintain collections of small libraries... But with tree shaking I'm not sure why we'd really need to. I'm happy just sticking with things that have lots of GitHub stars.
I think the solution is just don't use stuff posted by random people on npm if there's an alternative that's widely used, with lots of eyeballs on it. Which there usually is, but people probably choose the smaller one because it's simpler.
While "add a black box complexity extremely resource intensive AI to sort out the dependency trees" is somewhat of a tragic answer to the problem, I think it's the real one we're gonna get - and we'll still be better off than this current clusterfuck.
Hopefully these will also spend the tedious hours pruning unnecessary packages, minimizing responsibilities of each, and overall organizing the environment in a sensible way, then caching that standalone so there's no LLM dependency - just an update checker/curator to call in when things go to shit.
Honestly, this is one of the worst problems of modern software and makes 50+% of projects unusable. It seems like a very tedious yet solvable problem - perfect for the right LLM agent. It would be a godsend.
By definition, the only languages where this isn't true are languages with a high barrier to entry, and because of that, they would also be unpopular. Perhaps if the government mandated a standard set of programming tools...
That is true for all other programming languages.
And that’s why Java is the only non-meme language.
Need a hashmap? Need a webserver? Need a UI? It’s all built in. It just works on any device with the same code. People who actually know what they’re doing wrote it and you can trust it. No random downloads or weird packages.
If you want to download random things you still can, but realistically you don’t need to.