Wow, this is sad news indeed for the world at large, but also for me personally. I'm suddenly in deep regret for having procrastinated reaching out to thank him. He doesn't know this, but he was somewhat of a mentor to me.
In the early 2010s I bought a laptop that had an RTL8188CE card in it that ran terribly under Linux. I forked his driver and made a number of changes to it and eventually got my wifi working really well (by doing things that could never be upstreamed due to legal/regulatory restrictions). Over the years I rebased often and reviewed the changes, and learned a lot from watching his code. It took a bit of getting used to, but a certain amount of beauty and appreciation emerged. One thing was very clear: this man was doing a lot of the work to hold the ship together. Even just keeping the driver compilable with each kernel release often took some non-trivial refactoring, which he did reliably and well.
Larry, you will be missed, my friend. RIP.
If you are waiting to reach out to somebody, don't wait too long or it may suddenly be too late. The years can slip by in a flash.
I realise that the kernel contributors themselves are mostly volunteers and as such can just do what they want, and writers of non-upstreamed drivers will need to get used to chasing trunk, but to me this sort of thing seems like a waste of human talent. Once something works, it should never not work again. It seems to me that all the layers of cruft we've accumulated over the decades haven't really helped much with this; in fact, they probably mean more work just to stay still.
I've struggled philosophically with that very question as well (especially having to update my own WORKING code just to conform to API changes), and I'm very torn on it. I just don't think there's a good point at which to "freeze" the API and say no more changes are allowed. It would greatly hamper innovation and IMHO ultimately lead to the Linux kernel having major forks or being displaced by something more adaptable. Despite the pain points, I think it's an overall good, though that doesn't stop the sting of having to keep up. It really forces you to either be in or out. It can be very difficult to be a part-time contributor.
Just accept the fact that software ages with time. Even if it doesn't change a literal bit, the real world it runs for and the software ecosystem it runs on did. Any software older than 10 years should be rewritten from absolute scratch, not kept on life support at ever-increasing expense.
Virtual machines exist, and it might be easier to run something in one than to port it every two years.
I'm talking about innovation and progress. In an ideal world retro computing remains a hobby.
In my ideal world, innovation and progress mean new software brings new functionality - not half the old functionality, barely working, because "rewrite".
Progress is in the eye of the beholder. Case in point: Fiddler. A debugging HTTP proxy, originally a one-man-show; the Classic version is still amongst the most powerful tools out there. Except...well...there's no way to monetize it. So, a New and Innovated version, which is less powerful (yes, it has some new features, but at the expense of dropping the actual power tools), more bloated, and subscription-based. Progress!
I would encourage you to read this if you haven’t: https://www.joelonsoftware.com/2000/04/06/things-you-should-...
There are exceptions to every rule; rewriting old software just because it's old needs a rare exception indeed.
I know all that: rewrites are bad, it's harder to read code than to write it, a rewrite doomed Netscape and many others, whatever. I aim to share a view I know is radical and controversial: that unequivocally all software needs a rewrite every decade, provided the finances are there to do it.
Why?
I've always wondered about how versioning could help. I don't think it would work with a monolithic kernel, but a microkernel might allow you to run multiple versions of different things, and they would still work.
I've also fantasised about the same thing in application programming languages - e.g. the way Racket runs loads of different dialects, you could also have versioning for different things running at the same time.
POV-Ray does exactly this, and warns you if you do not include a #version statement in your source file.
It not only makes it difficult to be a part-time contributor, I imagine it also makes life more difficult for maintainers: before they accept a patch from a part-time contributor, they always have to ask themselves, "Will this guy still be around to update his code when needed?"
The Linux kernel has an incredibly strong userspace API stability guarantee. I think in practice this guarantee is unmatched in modern computing, and it's what allows things like containers to deliver such incredible long-term value.
Software is defined by its interfaces, and unlike userspace APIs, this guarantee does not apply to internal kernel APIs. The ability to update the latter is what enables the former. Inability to update internal implementation details inevitably leads to ossification and obsolescence of the whole system. Linus Torvalds spoke about this recently in the context of new Rust code in the kernel.
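To make the distinction concrete, here's a minimal sketch of the stable side of that contract (assuming Linux with glibc; the program itself is just an illustration): userspace talks to the kernel only through the syscall boundary, and everything behind that boundary is free to churn.

    /* A sketch of the userspace side of the kernel's stability contract.
     * Assumes Linux with glibc; _GNU_SOURCE is needed for syscall(). */
    #define _GNU_SOURCE
    #include <sys/syscall.h>   /* SYS_write */
    #include <unistd.h>        /* syscall() */

    int main(void)
    {
        const char msg[] = "hello from the stable side\n";

        /* write has kept the same number (per architecture), arguments and
         * semantics for the kernel's entire history, so a binary doing this
         * decades ago still runs unmodified on a current kernel. */
        syscall(SYS_write, 1, msg, sizeof msg - 1);

        /* The drivers and filesystems servicing this call have been
         * rewritten many times; that churn stays behind the boundary. */
        return 0;
    }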
No, in practice I think it leads to growing complexity, because the system then adds additional APIs (v1, v2, v3, and so on) to support new features, but has to continue to maintain the old APIs for backwards compatibility, leading to lots of extra code and a huge maintenance burden.
Like the Windows API with its ...Ex variants and numbered functions (NdrClientCall4).
Or Linux with its numbered syscalls (dup, dup2, dup3).
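As a toy illustration of that accretion (assuming Linux with glibc; dup3 is Linux-specific and needs _GNU_SOURCE), all three generations of "duplicate a file descriptor" still have to work side by side:

    /* dup vs dup2 vs dup3: three generations of one operation.
     * Assumes Linux with glibc; _GNU_SOURCE exposes dup3(). */
    #define _GNU_SOURCE
    #include <fcntl.h>    /* O_CLOEXEC */
    #include <stdio.h>
    #include <unistd.h>   /* dup, dup2, dup3 */

    int main(void)
    {
        int a = dup(STDOUT_FILENO);                 /* v1: lowest free fd      */
        int b = dup2(STDOUT_FILENO, 10);            /* v2: caller picks the fd */
        int c = dup3(STDOUT_FILENO, 11, O_CLOEXEC); /* v3: adds a flags arg    */

        printf("dup=%d dup2=%d dup3=%d\n", a, b, c);

        /* None of the older variants can be removed without breaking
         * existing userspace, so all three are maintained indefinitely. */
        close(a); close(b); close(c);
        return 0;
    }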
Yes, precisely. The escalating complexity, maintenance burden and constraints eventually cause the whole system to lose momentum, lose support and become obsolete.
Is there some reason that the Linux kernel couldn't offer a stable API for device drivers? At least for common types of devices. Would that really lead to ossification? Stable APIs usually make it easier to change code at lower levels.
Politics. Offering a stable-ish internal API would instantly lead to hardware vendors shipping closed source drivers for Linux.
Currently, you have to be the size of NVIDIA or AMD to be able to afford a closed source driver; it's simply a huge amount of work to keep up with the constant improvements in the Linux kernel.
Not true, there are smaller companies providing closed source kernel modules as well. Check any Android image for a device with an SD card slot from the era before an open source exFAT driver existed, and you'll likely find one of the two proprietary exFAT driver modules.
Embedded/mobile projects don't tend to use bleeding-edge kernels and stay stuck on one version regardless of closed source modules anyway.
Yeah, and these closed source exFAT drivers were a true pain to maintain, which is why Paragon (one of the vendors) was pretty pissed when Samsung's driver was upstreamed to Linux [1] - it killed off their business.
[1] https://arstechnica.com/information-technology/2020/03/the-e...
These guarantees are not that strong, unfortunately. For example, the API between the kernel-mode and user-mode halves of the GPU driver is unstable. To get 3D accelerated graphics on an RK3288 you need either the libmali-midgard-t76x-r14p0-r0p0-gbm.so or the libmali-midgard-t76x-r14p0-r1p0-gbm.so userspace library, depending on the minor hardware revision of that RK3288.
Much of the Windows OS's weirdness can be attributed to the fact that they have a relentless dedication to backwards compatibility.
Much of Linux's weirdness can be attributed to that as well -- at least the userland API.
Kernel contributors are mostly employees of hardware companies getting paid to contribute: https://lwn.net/Articles/941675/
It may be that, in the hypothetical absence of a salary, many of these individuals would still choose to volunteer, but it's wrong to call contributors "mostly volunteers" when it's their day job. Changesets from those known to be contributing on their own time are a relatively small fraction of the changes.
What things did you do that weren't legal to make your WiFi work better?
Increasing the cap on max transmit power. I compared the output of that card to other cards using my spectrum analyzer, and max tx on that card was extremely weak compared to the others.
Probably increasing the maximum tx power or extending the frequency range beyond what's allowed in the region to enable more channels, and hence getting decent speed/quality in a crowded area.
These things are illegal for a reason.
This is what makes me come to HN every single day.
This is what makes me come here every single minute my anti-procrastination setting allows me to.
This is what makes me delete my anti-procrastination setting and quit my job to spend more time here.
It's really nice to read comments like this one. Thanks for sharing it. Don't feel too regretful or guilty; these things happen.
Thanks for sharing your story, it meant a lot to me today.