What gets me is that much of the OSS/Linux ecosystem consists of thousands of lashed-together piles of code written by independent and only very loosely coordinated groups, much of it written and lashed together by amateurs for free, and it is still more robust than software created by multi-billion dollar corporations.
Perhaps one reason is that OSS system programmers are washing their dirty linen in public; not a matter of "many eyes make bugs shallow", but that "any eyes make bad code embarrassing".
Just for example, I'm planning to make one of my commercial projects open source, and I am going to have to do a lot of fixing up before I'm willing to show the source code in public. It's not terrible code, and it works perfectly well, but it's not the sort of code I'd be willing to show to the world in general. Better documentation, TODO and FIXME fixing, checking comments still reflect the code, etc. etc.
But for all my sense of shame for this (perfectly good and working) software, I've seen the insides of several closed-source commercial code bases and seen far, far worse. I would imagine most "enterprise" software is written to a similar standard.
Linus would disagree, and there’s a reason why the kernel keeps a 1000 ft wall up at all times. He would outright reject the majority of userland code, with good reason. It’s a miracle that anything outside of the kernel works, and it shows very often. People seem to forget how often distros shit the bed on every update and how frequently people have to rebuild the entire fucking thing to get it to work again. Updating is an incredibly stressful procedure for production anything. I haven’t even gotten to audio and video - that’s a separate Fuck You ft. Wayland mix. So no, Windows is the gold standard in many ways, precisely because it’s mercilessly fucked from every which way and manages to run every day across a bajillion machines doing critical things. I don’t care about who is being financially compensated and who isn’t - the depth of decision making shows itself in the musings of Raymond Chen and others, and that level of thorough thinking is very rare, even in the OSS world.
Tbf, internally windows may be in a similar situation, it’s just not in the open. So there could be some visibility bias.
IMO the difference is, it’s usually pretty easy to excise offending code from the Linux ecosystem, but not on windows.
Don’t like Wayland? Stick with X.
Don’t like systemd? Don’t use it.
Don’t like Cortana or Recall? Tough, it’s gonna be on your machine.
Sure, but can you realistically run an up-to-date server today without systemd? Especially for the kind of organisations that run stuff like CrowdStrike.
Devuan? Void? MX? Guix?
Do sysadmins actually run any of them for large systems like stores, hospitals, airports etc?
Alpine Linux. I'm sure they do run that one.
I left Alpine off of my list because the only place I've ever seen it used is inside of containers, which usually don't run their distro's init system at all.
The point is that they can if they have the want/need to.
Sysadmins at the places you listed use windows because that’s where the software support is and Active Directory exists.
It's a matter of a single click to disable cortana, recall or anything you don't like in Windows, with tools like w10privacy.
Sure, but that’s a whole other tool.
Cortana and recall are just examples. Microsoft (or OEMs) can put anything they want in the OS and make it difficult to remove.
It’s harder to do that kind of stuff for the Linux foundation and the kernel team.
Is it? Where Linux = Red Hat or Ubuntu in the real world, Ubuntu managed to force snaps and advertising for Ubuntu Pro down everybody's throats, and the Linux Foundation was utterly helpless against that.
Sure, but that’s Ubuntu.
If one was fed up enough with Ubuntu, they could switch to Debian or mint and all their programs would still run and their workflows likely will not change too much.
But for Windows you’d have to switch to macOS or Linux, neither of which is going to easily support your software (unless it happens to run great under Wine, but again that’s a different tool)
I feel like the argument of "don't like X, use Y" is often missing the point when people are expressing their pain with OSS. I find X painful because of reason A, B, C so I take the advice to switch to Y, be happy for a half day before I try to do anything complex and find pain points A', B' and C'. It's often a carousel of pain, and a question of choosing your poison over things that should just work, just working.
Just as an example, I spent a couple hours yesterday fighting USB audio to have linear scaling on my fresh Debian stable install, and I'm not getting that time back ever. Haven't had that sort of issue in more opinionated platforms like Windows/MacOS in living memory.
Linux is a more complicated and more powerful (at least more obviously powerful) tool than windows or macOS. Daily Linux use isn’t for everyone. It can be a hobby in and of itself.
The knowledge floor for productivity is much higher because most Linux projects aren’t concerned with mass appeal or ease of use (whether or not they should be is another discussion)
Debian, for example, is all about extreme stability. They tend to rely on older packages that they know are very very stable, but which may not contain newer features or hardware support.
The strength is extreme customization and control. Something that’s harder to get with windows and even harder to get with macOS.
OSS software has few to no profit incentives. It is written to do something, not to sell something. It also has little time pressure. If a release slips, there is no impact to quarterly numbers. Commercial software is not an engineering effort, it is a marketing exercise.
But it's interesting that, with no time pressure, OSS doesn't lack lots of features of commercial products. Often they are even ahead.
I disagree. OSS is only ahead if there is no money in it, like new programming languages. Whenever it is profitable, OSS projects just cannot compete with professionals working full time.
OSS is usually reinventing the wheel free from commercial pressures (Linux, GNU, Apache). Or they are previous commercial products (Firefox, LibreOffice, Kubernetes, Bazel).
Can you explain why apple, google and microsoft all use a fork of a browser made by KDE?
They use a fork of a browser engine and there is no money in building a browser engine.
The money is in building a browser around the engine because there you can inject tracking and try to make your product unique.
Many OSS projects have professionals working full time on them.
Windows is profitable, but Linux is competing well on servers.
Not when it comes to video/image/vector editing. DaVinci Resolve, Adobe and Affinity are still miles ahead of FOSS creativity tools like The GIMP.
Try Krita, Darktable, Scribus and Blender.
You're comparing household name with household name. Commercial software has a marketing budget, but free software spreads more by word-of-mouth (or association with a big and professional organisation like GNU), so that's an apples-to-oranges comparison. GIMP isn't very good, as free software image editors go: Script-Fu, plugins, or UI familiarity are basically the only reasons to choose it these days.
Is there any feature in Krita, Darktable, Scribus, or Blender, which Adobe products do not have? It certainly is the case the other way round.
Darktable has some features that RawTherapee doesn't, and vice versa. I imagine that some of that stuff isn't in the Adobe software. (I've heard that recent versions of Lightroom have removed local file management support, which both these programs still have – though don't quote me on that.)
Krita has a lot that Photoshop doesn't: https://docs.krita.org/en/user_manual/introduction_from_othe... .
I'm curious as to which ones they do not have compared to Adobe products.
The only one I can think of is proper material layer painting in Blender, you can get there with addons but haven't found one that's as good. Genuinely the only thing that I miss, and I do this full time.
You obviously haven’t compared GUIs: the most hodgepodge mix of "we have that feature!" in a sea of confusion and disrespect for interface standards.
To take the technology part out of it for a moment:
When Komatsu decided to go after Caterpillar's market, they set quality as their first strategic intent. They then made sure that later strategic steps were beholden to that earlier one.
The XP/Agile manifesto emphasized 'working software', which in theory was to have a similar intent.
But the problem with manifestos is that people package them and sell them.
Agile manifesto signatories like Jeff Sutherland selling books with titles promising twice the code in half the time don't help.
OSS has a built in incentive to maintain quality, at least for smaller projects.
Companies could, but unfortunately the management practices that made people quite successful become habits that are hard to change, even when they want to change them.
Hopefully these big public incidents start to make the choice to care about quality an easier sell.
The point is that quality is still an important thing for profit-oriented companies, but it is easy to drop it and only notice after it is too late.
Showing that it aligns with long term goals is possible, but getting people to do so is harder.
Yep. Some of the garbage I've seen out there is shocking. It scares me.
Then I try to get fractional scaling working on Wayland with my NVidia card and want to gouge my eyes out with frustration that after a decade I still can't do what I can do on a closed source thing that came free with my computer. Actually, make that 25 years now. The enterprise crap, while horrible, actually mostly works reasonably well. Sometimes I feel dirty about this.
Quality is therefore relative to the consumer. With Linux, I find the attention is on what the engineers care about, not what the users care about. Where there's an impedance mismatch, there are a lot of unhappy users.
I don't know the specifics, but there's a good chance that your issue is ultimately because Nvidia wants to keep stuff closed, and Linux is not their main market - at least for actual graphics, I guess these days it's a big market for GPU computing. So it's the interface between closed and open source that's giving you grief.
I don't think this is an nVidia issue. Wayland/nVidia woes are primarily about flickering, empty windows and other rendering problems. I may be wrong, but I believe HiDPI support is a mostly hardware-independent issue.
If it isn't a hardware independent issue, they really fucked up Wayland.
Here's a decent summary that tries to explain why things are bad: https://news.ycombinator.com/item?id=40909859
Oh wow. They really did fuck up Wayland.
It doesn't work properly on Intel or AMD either. It just sucks worse on NVidia.
I have been using only 4k monitors in Linux for at least a decade and I have never had any problem with fractional scaling.
I continue to be puzzled whenever I hear about this supposed problem. AFAIK, this is something specific to Gnome, which has a setting for enabling "fractional scaling", whatever that means. I do not use Gnome, so I have never been prevented from using any fractional scaling factor that I liked for my 4k monitors (which have been most frequently connected to NVIDIA cards), already since a decade ago (i.e. by setting whatever value I desired for the monitor DPI).
All GUI systems had fractional scaling already before 1990, including the X Window System and MS Windows, because all had a dots-per-inch setting for the connected monitors. Already around 1990, and probably much earlier, the recommendation for writing GUI applications was to use only dimensions in typographic points, or in other display-independent units, for fonts and any other graphic elements.
For any properly written GUI program, changing the DPI of the monitor has always provided fractional scaling without any problems. There have always been some incompetent programmers who have used dimensions in pixels, making their graphic interfaces unscalable, but that has been their fault and not the fault of the X Window System or of any other window system that was abused by them.
It would have been better if no window system had allowed dimensions given in pixels in any API function. Forty years ago there was the excuse that scaling the graphic elements could sometimes be too slow, so the use of dimensions in pixels could improve performance, but this excuse had already become obsolete a quarter of a century ago.
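As a rough back-of-the-envelope illustration of why point-based sizing gives you fractional scaling for free (the 96 and 162 DPI figures are just assumed values for a typical older panel and roughly a 27-inch 4K monitor):

    # pixels = points * DPI / 72  (a typographic point is 1/72 inch)
    echo "12 * 96 / 72"  | bc -l   # 12 pt text on a ~96 DPI panel  -> 16 px
    echo "12 * 162 / 72" | bc -l   # 12 pt text on a ~162 DPI panel -> 27 px

Any program that sizes its fonts and widgets in points gets the correct pixel size automatically once the monitor's DPI is set; only UIs with hard-coded pixel dimensions break.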
Parent comment was talking about Wayland, and Wayland does not even have a concept of display DPI (IIRC, XWayland simply hardcodes it to 96).
You're correct - in theory. In practice, though, it's a complete mess and we're already waaay past the point of no return, unless, of course, somehow an entirely new stack emerges and gains traction.
I don't have any hard numbers to back this, but I have a very strong impression that most coders use pixels for some or all the dimensions, and a lot of them mix units in a weird way. I mean... say, check this very website's CSS and see how it has an unhealthy mix of `pt`s and `px`es.
In a window manager or KDE (X11) you can use nvidia-settings: click Advanced in the monitor settings and set ViewPortIn and ViewPortOut. If you set ViewPortOut to the actual resolution and ViewPortIn to some multiple of the actual resolution, you get fractional scaling, and if you make the chosen factor a function of the relative DPI of your respective monitors, you can make things perceptibly the same size across monitors. That's right, fractional scaling AND mixed DPI!
You can achieve the same thing with xrandr --scale, and it's easier to automate happening at login.
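For illustration, a minimal sketch of that xrandr approach, assuming an output called eDP-1 (check xrandr --query for your actual output names):

    # Render a 1.25x larger virtual desktop for this output and let it be
    # scaled back down to the panel's native resolution - the xrandr
    # analogue of the ViewPortIn/ViewPortOut trick described above.
    xrandr --output eDP-1 --scale 1.25x1.25

Since it's a single command, it's easy to drop into whatever your session runs at login (~/.xprofile or a display-manager-specific equivalent on many setups).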
You can also achieve pretty good fractional scaling in Cinnamon (x11) directly via its configuration. You enable fractional scaling on the right tab and suddenly global scale is replaced with a per monitor scale. Super user friendly.
Also, your copy of Windows was just as free as your computer was. You paid someone to configure Windows acceptably for you, and Microsoft and the various OEMs who make Windows hardware split your money, giving you something usable in return.
You then decided that you wanted Linux on it and now you are the OEM which means you get to puzzle out integration and configuration issues including choosing a DE that supports the features you desire and configuring it to do what you want it to do.
I do not think "amateurs" is a good description of the people writing the code - most will be highly technical people with lots of experience. And "loosely coordinated" can be applied to many "corporations" as well.
I think it matters that people coding in open source do it because they care (similar to your idea but on the positive side). If you want to make something nice/working/smart you have more chances to succeed if you care than if you are just being paid to do it (or afraid that you will be embarrassed)
If you get paid you're a professional, otherwise an amateur, at least in the original meaning of the words.
in this case would a programmer with a day job that also hacks on linux in their free time be a professional at work and an amateur on anything they do independently? Or really any sort of engineer, contractor, person who makes stuff, etc?
Just my interpretation: that programmer is a professional. They are paid to do programming. They are still a professional even when they are working on their hobby project, because it is not a function of whether they are paid for that particular code, but of whether they are paid for any coding at all.
If that programmer went and coached their friend’s basketball team for free they would be an amateur coach, but they are still a professional programmer even while coaching.
I agree - and in that case I'd bet a lot of Linux and open source etc is written by professionals to some degree that's likely significant.
Yes, they wouldn't be considered doing the Linux stuff in a professional capacity.
If we're doing originalism, an amateur is someone who does not engage in any wage-earning labor.
I don't know; of the software crises I can remember off the top of my head, 2 (Heartbleed and Log4Shell) are from FOSS.
how many ransomware attacks have you been a part of remediation for?
I admit none, but isn't that just because ransomware frequently comes into a network through user error which usually is using a Windows machine?
But way less damage.
How many computers crashed?
I'm trying to agree with you here, not shame you, but I do think there's something to the idea that you just shouldn't write code that you wouldn't want to be public. In the long run, it's a principle that will encourage growth in beneficial directions.
Also proprietary code is harder to write because you can't just solve the problem, you have to solve the problem in a way that makes business sense--which often means solving it in a way that does not make sense.
When prototyping something to see if a concept works, or building something for your own private use, you really shouldn't waste time trying to make the code perfect for public consumption. If later you find you want to open source something you wrote, there will inevitably be some clean-up involved, but thinking of writing the cleanest and most readable code on a blue sky project just hampers your ability to create something new and test it quickly.
but that's the reality of writing software.
the problem is that no matter how sincere the promise of "I'll clean it up and release the code" is, it rings very hollow, because realistically few people ever actually get there.
if a developer is so afraid of judgement that they can't release the code to something they want to release, we have a cultural problem (which we do), but the way forwards from that is to normalize that sharing code that is more functional than it is pretty is better than future promises of code.
as the saying goes, one public repo up on GitHub is worth two in the private gitlab instance
Leave it broken, that's fine, just don't leave it misleading. And leave hints for how a passer by might improve it in the future.
Probably you'll be that passer by in the future, and you'll thank yourself. Or you won't, and someone will thank you.
I think deadlines are probably also a big factor. Many OSS developers build their projects in their free time for themselves and others. So it could be a passion project where someone takes more pride in their work. I'm a big advocate for that actually.
Much commercial software feels like it is duct-taped together to meet some manager's deadline, so you feel like 'whatever' and are happy to be done with it.
Also, pressure from product owners and business can affect delivery negatively.
don't romanticize the situation too much, open source software is almost entirely written by professional software developers, mostly at their day jobs
For larger projects there's still usually the benefit of having developers/maintainers from multiple institutions with different goals.
Linus is not gonna merge some hacky crap just because somebody's boss says that it must be merged for the next TPS report.
Per FOSS survey, most of it is written by professionals and they get paid for it.
Then, most of the rest is written by professionals who do something on the side. The amateur for free thing is mostly a myth.
This is such a gross mischaracterization.
Linux enjoys stability and robustness because multi-billion dollar corporations like RedHat and Canonical throw tons of money and resources at achieving this end and turning the loose collection of scripts plus a kernel into a usable OS.
If they didn't, Linux would have single-digit adoption by hobbyists and these companies would still be running Solaris and HP/UX.
I've committed sins in production code that I would never dream of doing in one of my published open source projects. The allure of "no one will ever see this" is pretty strong.
Resiliency of OSS varies a lot between projects and even between parts of certain projects.
For example Linux ACPI support was pretty flaky until Linus pushed for no breakages in that area.
At least for this one you can argue that they can't test every Linux and Unix flavour. But Windows...
The things I build because I'm paid to, in ways I disagree with because MBAs have the final say, are terrible compared to my hobby and passion projects. I don't imagine I'm entirely alone. I hope there's always communities out there building because they want useful things and tools for doing useful things that they control.
I mostly agree, but I think that there's a delayed effect for OSS projects falling apart. Most of these projects are literally just 1 or 2 people coding in their spare time with maybe a few casual contributors. The lack of contributors working on extremely important software makes them vulnerable to bad actors (e.g. the XZ backdoor) or to the maintainers going AWOL. The upside is that it's easy for anybody to just go in and fix the issue once it's found, but the problem needs to happen first before anybody does that.
It's a hunch, but I feel like open source has more churn and chaos & is multi-party. And that milieu resembles nature, with change, evolution & dynamics.
Corporations are almost always bound into deep deep path dependence. There's ongoing feature development upon their existing monolithic applications. New ideas are typically driven by the org & product owners, down to the workers. Rarely can engineers build the mandate to do big things, imo.
Closed source's closed nature is a horrible disadvantage. Being situated not alone & by yourself but as part of a broader environment, where new ideas & injections can happen, works to reduce the risk of cruft, maladaptation & organizational mismanagement & malpractice. Participating in a broader world & ecosystem engenders a dynamism, and resists being beset by technical & organizational stasis.
this is one of larry wall's 3 virtues of great programmers:
hubris: the quality that makes you write (and maintain) programs that other people won't want to say bad things about.
There's something to it. Anecdote of one: at one time management threatened^Wannounced that they planned to open the code base. I for one was not comfortable with that. In a commercial setting, I code with time to release in mind. No frills, no optimizations, no additional checks unless explicitly requested. I just wrote too much code which was never released (customer/sales team changed its mind). And time to market was typically of utmost importance. If the product turns out to be viable, one can fix the code later (which late in my career I spent most time on).
but...it's all the open, and it might have more bugs, but the bugs get fixed faster
Well, in the industry usually fewer eyes are looking at the code than in open source. Nobody strives to make it perfect overall, people are focused on what is important. Numbers, performance, stability, priorities depend on the project. There are small tasks approved by managers, developers aren't interested in doing more. Bigger company works the same, it has just more projects.
The tradition of writing tests in OSS projects plays a huge role here.