CrowdStrike broke Debian and Rocky Linux months ago

lambdaone
70 replies
23h55m

What gets me is that much of the OSS/Linux ecosystem consists of thousands of lashed-together piles of code written by independent and only very loosely coordinated groups, much of it coded and lashed together by amateurs for free, and it is still more robust than software created by multi-billion dollar corporations.

Perhaps one reason is that OSS system programmers are washing their dirty linen in public; not a matter of "many eyes make bugs shallow", but that "any eyes make bad code embarrassing".

Just for example, I'm planning to make one of my commercial projects open source, and I am going to have to do a lot of fixing up before I'm willing to show the source code in public. It's not terrible code, and it works perfectly well, but it's not the sort of code I'd be willing to show to the world in general. Better documentation, TODO and FIXME fixing, checking comments still reflect the code, etc. etc.

But for all my sense of shame for this (perfectly good and working) software, I've seen the insides of several closed-source commercial code bases and seen far, far worse. I would imagine most "enterprise" software is written to a similar standard.

stonethrowaway
13 replies
22h15m

Linus would disagree and there’s a reason why the kernel keeps a 1000 ft wall up at all times. He would outright reject the majority of userland code with good reason. It’s a miracle that anything outside of the kernel works and it shows very often. People seem to forget how often distros shit the bed on every update and how frequently people have to rebuild the entire fucking thing to get it to work again. Updating is an incredibly stressful procedure for production anything. I haven’t even gotten to audio and video - that’s a separate Fuck You ft. Wayland mix. So no, Windows is the gold standard in many ways, precisely because it’s mercilessly fucked from every which way and manages to run every day across a bajillion machines doing critical things. I don’t care about who is being financially compensated and who isn’t - the depth of decision making shows itself in the musings of Raymond Chen and others, and that level of thorough thinking is very rare, even in the OSS world.

dartos
12 replies
22h9m

Tbf, internally windows may be in a similar situation, it’s just not in the open. So there could be some visibility bias.

IMO the difference is, it’s usually pretty easy to excise offending code from the Linux ecosystem, but not on windows.

Don’t like Wayland? Stick with X.

Don’t like systemd? Don’t use it.

Don’t like Cortana or Recall? Tough, it’s gonna be on your machine.

tored
5 replies
18h12m

Sure, but can you realistically run an up-to-date server today without systemd? Especially for the organisations that run stuff like CrowdStrike.

josephcsible
4 replies
16h27m

Devuan? Void? MX? Guix?

tored
3 replies
12h36m

Do sysadmins actually run any of them for large systems like stores, hospitals, airports etc?

spookie
1 replies
9h20m

Alpine Linux. I'm sure they do run that one.

josephcsible
0 replies
10m

I left Alpine off of my list because the only place I've ever seen it used is inside of containers, which usually don't run their distro's init system at all.

dartos
0 replies
6h22m

The point is that they can if they have the want/need to.

Sysadmins at the places you listed use windows because that’s where the software support is and Active Directory exists.

EVa5I7bHFq9mnYK
5 replies
21h56m

It's a matter of a single click to disable cortana, recall or anything you don't like in Windows, with tools like w10privacy.

dartos
4 replies
21h51m

Sure, but that’s a whole other tool.

Cortana and recall are just examples. Microsoft (or OEMs) can put anything they want in the OS and make it difficult to remove.

It’s harder to do that kind of stuff for the Linux foundation and the kernel team.

fragmede
3 replies
20h50m

is it? where Linux = Redhat or Ubuntu in the real world, Ubuntu managed to force snaps and advertising for Ubuntu pro down everybody's throats and the Linux foundation was utterly helpless against that.

dartos
2 replies
18h24m

Sure, but that’s Ubuntu.

If one was fed up enough with Ubuntu, they could switch to Debian or mint and all their programs would still run and their workflows likely will not change too much.

But for windows you’d have to switch to osx or Linux, neither of which is going to easily support your software (unless it happens to run great under wine, but again that’s a different tool)

whatevertrevor
1 replies
13h11m

I feel like the argument of "don't like X, use Y" is often missing the point when people are expressing their pain with OSS. I find X painful because of reason A, B, C so I take the advice to switch to Y, be happy for a half day before I try to do anything complex and find pain points A', B' and C'. It's often a carousel of pain, and a question of choosing your poison over things that should just work, just working.

Just as an example, I spent a couple hours yesterday fighting USB audio to have linear scaling on my fresh Debian stable install, and I'm not getting that time back ever. Haven't had that sort of issue in more opinionated platforms like Windows/MacOS in living memory.

dartos
0 replies
6h15m

Linux is a more complicated and more powerful (at least more obviously powerful) tool than windows or macOS. Daily Linux use isn’t for everyone. It can be a hobby in and of itself.

The knowledge floor for productivity is much higher because most Linux projects aren’t concerned with mass appeal or ease of use (whether or not they should be is another discussion)

Debian, for example, is all about extreme stability. They tend to rely on older packages that they know are very very stable, but which may not contain newer features or hardware support.

The strength is extreme customization and control. Something that’s harder to get with windows and even harder to get with macOS.

kermatt
13 replies
22h32m

it is still more robust than software created by multi-billion dollar corporations

OSS software has few to no profit incentives. It is written to do something, not to sell something. It also has little time pressure. If a release slips, there is no impact to quarterly numbers. Commercial software is not an engineering effort, it is a marketing exercise.

f1shy
11 replies
22h28m

But it is interesting that with no time pressure, OSS does not miss lots of features of commercial products. Often they are even ahead.

qznc
4 replies
22h18m

I disagree. OSS is only ahead if there is no money in it, like new programming languages. Whenever it is profitable, OSS projects just cannot compete with professionals working full time.

OSS is usually reinventing the wheel free from commercial pressures (Linux, GNU, Apache). Or they are previous commercial products (Firefox, LibreOffice, Kubernetes, Bazel).

LtWorf
1 replies
12h16m

Can you explain why apple, google and microsoft all use a fork of a browser made by KDE?

qznc
0 replies
9h42m

They use a fork of a browser engine and there is no money in building a browser engine.

The money is in building a browser around the engine because there you can inject tracking and try to make your product unique.

sillywalk
0 replies
14h59m

OSS projects just cannot compete with professionals working full time.

Many OSS projects have professionals working full time on them.

Thorrez
0 replies
19h48m

Whenever it is profitable, OSS projects just cannot compete with professionals working full time.

Windows is profitable, but Linux is competing well on servers.

aAaaArrRgH
4 replies
20h45m

Not when it comes to video/image/vector editing. DaVinci Resolve, Adobe and Affinity are still miles ahead of FOSS creativity tools like The GIMP.

wizzwizz4
3 replies
20h40m

Try Krita, Darktable, Scribus and Blender.

You're comparing household name with household name. Commercial software has a marketing budget, but free software spreads more by word-of-mouth (or association with a big and professional organisation like GNU), so that's an apples-to-oranges comparison. GIMP isn't very good, as free software image editors go: Script-Fu, plugins, or UI familiarity are basically the only reasons to choose it these days.

qznc
2 replies
9h37m

Is there any feature in Krita, Darktable, Scribus, or Blender, which Adobe products do not have? It certainly is the case the other way round.

wizzwizz4
0 replies
2h37m

Darktable has some features that RawTherapee doesn't, and vice versa. I imagine that some of that stuff isn't in the Adobe software. (I've heard that recent versions of Lightroom have removed local file management support, which both these programs still have – though don't quote me on that.)

Krita has a lot that Photoshop doesn't: https://docs.krita.org/en/user_manual/introduction_from_othe... .

spookie
0 replies
9h11m

I'm curious as to which ones they do not have compared to Adobe products.

The only one I can think of is proper material layer painting in Blender, you can get there with addons but haven't found one that's as good. Genuinely the only thing that I miss, and I do this full time.

philistine
0 replies
3h47m

You obviously haven’t compared GUIs, the most hodgepodge mix of "we have that feature!" in a sea of confusion and disrespect for interface standards.

nyrikki
0 replies
20h12m

To step away from the technology part:

When Komatsu decided to go after Caterpillar's market, they set quality as their first strategic intent. They then made sure that later strategic steps were beholden to that earlier one.

The XP/Agile manifesto emphasized 'working software', which in theory was to have a similar intent.

But the problem with manifestos is that people package them and sell them.

Agile manifesto signatories like Jeff Sutherland selling books with titles promising twice the code in half the time don't help.

OSS has a built in incentive to maintain quality, at least for smaller projects.

Companies could too, but unfortunately the management practices that made people quite successful become habits that are hard to change, even when they want to.

Hopefully these big public incidents start to make the choice to care about quality an easier sell.

The point is that quality is still an important thing for profit-oriented companies, but it is easy to drop it and only notice after it is too late.

Showing that it aligns with long term goals is possible, but getting people to do so is harder.

teeheelol
9 replies
23h43m

Yep. Some of the garbage I've seen out there is shocking. It scares me.

Then I try and get fractional scaling working on Wayland with my NVidia card and want to gouge my eyes out with frustration that after a decade I still can't do what I can do on a closed source thing that came free with my computer. Actually make that 25 years now. The enterprise crap, while horrible, actually mostly does work reasonably well. Sometimes I feel dirty about this.

Quality is therefore relative to the consumer. The attention is on what the engineers care about with Linux, not what the users care about, I find. Where there's an impedance mismatch there are a lot of unhappy users.

takluyver
5 replies
23h27m

after a decade I still can't do what I can do on a closed source thing that came free with my computer

I don't know the specifics, but there's a good chance that your issue is ultimately because Nvidia wants to keep stuff closed, and Linux is not their main market - at least for actual graphics, I guess these days it's a big market for GPU computing. So it's the interface between closed and open source that's giving you grief.

drdaeman
3 replies
22h7m

I don't think this is an nVidia issue. Wayland/nVidia woes are primarily about flickering, empty windows and other rendering problems. I may be wrong, but I believe HiDPI support is a mostly hardware-independent issue.

teeheelol
2 replies
21h47m

If it isn't a hardware independent issue, they really fucked up Wayland.

teeheelol
0 replies
20h42m

Oh wow. They really did fuck up Wayland.

teeheelol
0 replies
23h21m

It doesn't work properly on Intel or AMD either. It just sucks worse on NVidia.

adrian_b
1 replies
23h6m

I have been using only 4k monitors in Linux for at least a decade and I have never had any problem with fractional scaling.

I continue to be puzzled whenever I hear about this supposed problem. AFAIK, this is something specific to Gnome, which has a setting for enabling "fractional scaling", whatever that means. I do not use Gnome, so I have never been prevented from using any fractional scaling factor that I liked for my 4k monitors (which have been most frequently connected to NVIDIA cards), already since a decade ago (i.e. by setting whatever value I desired for the monitor DPI).

All GUI systems had fractional scaling already before 1990, including X Window System and MS Windows, because all had a dots-per-inch setting for the connected monitors. Already around 1990, but probably much earlier, the recommendation for writing any GUI application was to use, for fonts or any other graphic elements, only dimensions in typographic points or in other display-independent units.

For any properly written GUI program, changing the DPI of the monitor has always provided fractional scaling without any problems. There have always been some incompetent programmers who have used dimensions in pixels, making unscalable their graphic interfaces, but that has been their fault and not of X Window System or of any other window system that was abused by them.

It would have been better if no window system had allowed the use of any kind of dimensions given in pixels in any API function. Forty years ago there was the excuse that scaling the graphic elements could sometimes be too slow, so the use of dimensions in pixels could improve the performance, but this excuse had already become obsolete a quarter of a century ago.
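
For example, a minimal sketch of what I mean by "setting whatever value I desired for the monitor DPI" on X11 (the value 144 is just an illustration, pick whatever suits your monitor):

    # ~/.Xresources -- 144 DPI is 1.5x the 96 DPI baseline
    Xft.dpi: 144

    # load it (or call xrdb from ~/.xinitrc / ~/.xsession)
    xrdb -merge ~/.Xresources

    # alternatively, tell the X server itself the monitor DPI
    xrandr --dpi 144

This only scales programs that honour the DPI setting; anything hard-coded in pixels stays small, which is exactly the incompetence described above.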

drdaeman
0 replies
22h9m

For any properly written GUI program, changing the DPI of the monitor has always provided fractional scaling without any problems.

Parent comment was talking about Wayland, and Wayland does not even have a concept of display DPI (IIRC, XWayland simply hardcodes it to 96).

You're correct - in theory. In practice, though, it's a complete mess and we're already waaay past the point of no return, unless, of course, somehow an entirely new stack emerges and gains traction.

There have always been some incompetent programmers who have used dimensions in pixels

I don't have any hard numbers to back this, but I have a very strong impression that most coders use pixels for some or all the dimensions, and a lot of them mix units in a weird way. I mean... say, check this very website's CSS and see how it has an unhealthy mix of `pt`s and `px`es.

michaelmrose
0 replies
22h48m

In a window manager or KDE (X11) you can use nvidia-settings: click Advanced in the monitor settings and set ViewPortIn and ViewPortOut. If you set the ViewPortOut to the actual resolution and the ViewPortIn to some multiple of the actual resolution you get fractional scaling, and if you make the chosen factor a function of the relative DPI of your respective monitors you can make things perceptibly the same size across monitors. That's right, fractional scaling AND mixed DPI!

You can achieve the same thing with xrandr --scale, and it's easier to automate happening at login.
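
Roughly like this (output names and factors are just an example; pick the factor from your monitors' relative DPI and your desktop's base scale):

    # show a 2880x1620 region of the X screen, downscaled onto a 1920x1080 panel,
    # so things look roughly the same physical size as on a neighbouring hidpi monitor
    xrandr --output HDMI-1 --scale 1.5x1.5
    # keep the hidpi monitor at its native 1:1 mapping
    xrandr --output DP-1 --scale 1x1

Drop that in a script run at login (~/.xprofile or your DE's autostart) and it just happens.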

You can also achieve pretty good fractional scaling in Cinnamon (x11) directly via its configuration. You enable fractional scaling on the right tab and suddenly global scale is replaced with a per monitor scale. Super user friendly.

Also, your copy of Windows was just as free as your computer was. You paid someone to configure Windows acceptably for you, and Microsoft and the various OEMs who make Windows hardware split your money, giving you something usable in return.

You then decided that you wanted Linux on it and now you are the OEM which means you get to puzzle out integration and configuration issues including choosing a DE that supports the features you desire and configuring it to do what you want it to do.

vladms
6 replies
23h14m

I do not think "amateurs" is a good description about the people writing the code - most will be highly technical people with lots of experience. And "loosely coordinated" can be applied to many "corporations" as well.

I think it matters that people coding in open source do it because they care (similar to your idea but on the positive side). If you want to make something nice/working/smart you have more chances to succeed if you care than if you are just being paid to do it (or afraid that you will be embarrassed).

croes
5 replies
22h58m

If you get paid you're a professional, otherwise an amateur, at least in the original meaning of the words.

mrmetanoia
3 replies
22h50m

in this case would a programmer with a day job that also hacks on linux in their free time be a professional at work and an amateur on anything they do independently? Or really any sort of engineer, contractor, person who makes stuff, etc?

krisoft
1 replies
22h30m

Just my interpretation: that programmer is a professional. They are paid to do programming. They are still professional even when they are working on their hobby project, because it is not a function of whether they are paid for that particular code, but of whether they are paid for any coding at all.

If that programmer went and coached their friend’s basketball team for free they would be an amateur coach, but they are still a professional programmer even while coaching.

mrmetanoia
0 replies
22h20m

I agree - and in that case I'd bet a lot of Linux and open source etc is written by professionals to some degree that's likely significant.

throwaway3306a
0 replies
19h36m

Yes, they wouldn't be considered doing the Linux stuff in a professional capacity.

Talanes
0 replies
19h24m

If we're doing originalism, an amateur is someone who does not engage in any wage-earning labor.

jowea
3 replies
23h6m

I don't know, out of the software crises I can remember off the top of my head, 2 (Heartbleed and Log4Shell) are from FOSS.

fragmede
1 replies
20h33m

how many ransomware attacks have you been a part of remediation for?

jowea
0 replies
14h30m

I admit none, but isn't that just because ransomware frequently comes into a network through user error which usually is using a Windows machine?

croes
0 replies
22h57m

But way less damage.

How many computers crashed?

__MatrixMan__
3 replies
23h20m

I'm trying to agree with you here, not shame you, but I do think there's something to the idea that you just shouldn't write code that you wouldn't want to be public. In the long run, it's a principle that will encourage growth in beneficial directions.

Also proprietary code is harder to write because you can't just solve the problem, you have to solve the problem in a way that makes business sense--which often means solving it in a way that does not make sense.

noduerme
2 replies
22h51m

When prototyping something to see if a concept works, or building something for your own private use, you really shouldn't waste time trying to make the code perfect for public consumption. If later you find you want to open source something you wrote, there will inevitably be some clean-up involved, but thinking of writing the cleanest and most readable code on a blue sky project just hampers your ability to create something new and test it quickly.

fragmede
0 replies
20h34m

but that's the reality of writing software.

the problem is that no matter how sincere the promise of "I'll clean it up and release the code" is, it rings very hollow because few people realistically actually ever get there.

if a developer is so afraid of judgement that they can't release the code to something they want to release, we have a cultural problem (which we do), but the way forwards from that is to normalize that sharing code that is more functional than it is pretty is better than future promises of code.

as the saying goes, one public repo up on GitHub is worth two in the private gitlab instance

__MatrixMan__
0 replies
19h0m

Leave it broken, that's fine, just don't leave it misleading. And leave hints for how a passer by might improve it in the future.

Probably you'll be that passer by in the future, and you'll thank yourself. Or you won't, and someone will thank you.

sva_
1 replies
22h45m

I think deadlines are probably also a big factor. Many OSS developers build their projects in their free time for themselves and others. So it could be a passion project where someone takes more pride in their work. I'm a big advocate for that actually.

Much commercial software feels like it is duct-taped together to meet some manager's deadline, so you feel like 'whatever' and are happy to be done with it.

lwhi
0 replies
19h26m

Also, pressure from product owners and business can affect delivery negatively.

Palomides
1 replies
22h35m

don't romanticize the situation too much, open source software is almost entirely written by professional software developers, mostly at their day jobs

jampekka
0 replies
22h15m

For larger projects there's still usually the benefit of having developers/maintainers from multiple institutions with different goals.

Linus is not gonna merge some hacky crap just because somebody's boss says that it must be merged for the next TPS report.

watwut
0 replies
10h23m

much of it coded and lashed together by amateurs for free

Per FOSS survey, most of it is written by professionals and they get paid for it.

Then, most of the rest is written by professionals who do something on the side. The amateur for free thing is mostly a myth.

wannacboatmovie
0 replies
22h12m

What gets me is that much of the OSS/Linux ecosystem consists of thousands of lashed-together piles of code written by independent and only very loosely coordinated groups, much of it coded and lashed together by amateurs for free, and it is still more robust than software created by multi-billion dollar corporations.

This is such a gross mischaracterization.

Linux enjoys stability and robustness because multi-billion dollar corporations like RedHat and Canonical throw tons of money and resources at achieving this end and turning the loose collection of scripts plus a kernel into a usable OS.

If they didn't, Linux would have single-digit adoption by hobbyists and these companies would still be running Solaris and HP/UX.

utensil4778
0 replies
21h33m

Perhaps one reason is that OSS system programmers are washing their dirty linen in public; not a matter of "many eyes make bugs shallow", but that "any eyes make bad code embarrassing".

I've committed sins in production code that I would never dream of doing in one of my published open source projects. The allure of "no one will ever see this" is pretty strong.

raverbashing
0 replies
22h40m

Resiliency of OSS varies a lot between projects and even between parts of certain projects.

For example Linux ACPI support was pretty flaky until Linus pushed for no breakages in that area.

rakkhi
0 replies
19h40m

At least this one you can argue that they can't test every Linux and Unix flavour. But windows..

mrmetanoia
0 replies
22h43m

The things I build because I'm paid to, in ways I disagree with because MBAs have the final say, are terrible compared to my hobby and passion projects. I don't imagine I'm entirely alone. I hope there's always communities out there building because they want useful things and tools for doing useful things that they control.

matteoraso
0 replies
22h26m

What gets me is that much of the OSS/Linux ecosystem consists of thousands of lashed-together piles of code written by independent and only very loosely coordinated groups, much of it coded and lashed together by amateurs for free, and it is still more robust than software created by multi-billion dollar corporations.

I mostly agree, but I think that there's a delayed effect for OSS projects falling apart. Most of these projects are literally just 1 or 2 people coding in their spare time with maybe a few casual contributors. The lack of contributors working on extremely important software makes them vulnerable to bad actors (e.g. the XZ backdoor) or to the maintainers going AWOL. The upside is that it's easy for anybody to just go in and fix the issue once it's found, but the problem needs to happen first before anybody does that.

jauntywundrkind
0 replies
23h32m

It's a hunch, but I feel like open source has more churn and chaos & is multi-party. And that milieu resembles nature, with change, evolution & dynamics.

Corporations are almost always bound into deep deep path dependence. There's ongoing feature development upon their existing monolithic applications. New ideas are typically driven by the org & product owners, down to the workers. Rarely can engineers build the mandate to do big things, imo.

Closed source's closedness is a horrible disadvantage. Being situated not alone & by yourself but as part of a broader environment, where new ideas & injections can happen, works to reduce the risk of cruft, maladaptation & organizational mismanagement & malpractice. Participating in a broader world & ecosystem engenders a dynamism, resists being beset by technical & organizational stasis.

harrison_clarke
0 replies
22h55m

this is one of larry wall's 3 virtues of great programmers:

hubris: the quality that makes you write (and maintain) programs that other people won't want to say bad things about.

guenthert
0 replies
23h15m

Perhaps one reason is that OSS system programmers are washing their dirty linen in public; not a matter of "many eyes make bugs shallow", but that "any eyes make bad code embarrassing".

There's something to it. Anecdote of one: at one time management threatened^Wannounced that they planned to open the code base. I for one was not comfortable with that. In a commercial setting, I code with time to release in mind. No frills, no optimizations, no additional checks unless explicitly requested. I just wrote too much code which was never released (customer/sales team changed its mind). And time to market was typically of utmost importance. If the product turns out to be viable, one can fix the code later (which late in my career I spent most time on).

caycep
0 replies
23h7m

but...it's all the open, and it might have more bugs, but the bugs get fixed faster

astromaniak
0 replies
22h40m

it is still more robust than software created by multi-billion dollar corporations

Well, in the industry usually fewer eyes are looking at the code than in open source. Nobody strives to make it perfect overall; people are focused on what is important. Numbers, performance, stability - which priorities matter depends on the project. There are small tasks approved by managers, and developers aren't interested in doing more. A bigger company works the same, it just has more projects.

EugeneOZ
0 replies
22h19m

The tradition of writing tests in OSS projects plays a huge role here.

can16358p
22 replies
1d

Product quality is in freefall: from aircraft to software. Lack of QA is the norm nowadays as everyone just cares about the extra penny.

bloopernova
13 replies
1d

The economic consequences for doing things wrong are less than the profit made.

Until that changes, nothing else will.

nothercastle
5 replies
23h52m

I’m not even sure it’s more profitable to do things wrong; it’s just easier and more advantageous for individual managers

trashtensor
4 replies
23h48m

Why is it advantageous though? Surely the behaviors of managers are incentivised by something.

kermatt
1 replies
22h35m

CYA incentives:

If I don't install one of these system (and global IT) killing EDR systems, and I have a breach, _I_ am responsible.

If my company requires it, and I install it and the entire network falls over, responsibility is passed to the EDR vendor. Every time the EDR platform in my org kills an app, much of the reaction internally is "Oh well, at least we are protected. Let's open a support ticket."

"Security" software has been troublesome since the first AV platform was released. But the personal risk for management to not deploy it is high.

nothercastle
0 replies
19h0m

Yeah this allows outsourcing both risk and responsibility. The institutional risk that you take in exchange is acceptable because it lowers personal risk

stackskipton
0 replies
23h34m

Generally the individual managers who make these decisions are acting as short-sighted as the companies they belong to. Along with that, company interests and individual manager interests don't always align.

I've been plenty of places where an individual manager comes up with some grand plan, implements it, gets praise and leverages it into a new job. Meanwhile, that plan never makes it past MVP and is massive tech debt that will weigh down the company, but they don't care.

rbanffy
0 replies
23h44m

Because the benefits are reaped well before the full cost becomes evident. By the time everything catches fire, the person responsible has retired into their Mediterranean villas and will never come back to fix what they caused.

fifteen1506
4 replies
1d

Penalty 1: stock buybacks forbidden

Penalty 2: separate the company into two, separating financialization procedures from manufacturing ones.

Penalty 3: greenlight a union by default.

photonbeam
2 replies
23h33m

Penalty 2 is the same as closing the company down

nicce
0 replies
22h2m

One could say that most of the companies outside the U.S. are non-financial

AnimalMuppet
0 replies
22h28m

No, why? If there's a company that provides actual value, why should splitting off the financialization part kill the part that provides real value? It's maybe the same as closing down the financialization part of the company, but if so, what loss to society?

rbanffy
0 replies
23h43m

Penalty 3: greenlight a union by default.

This should be a fundamental feature in any functioning society that expects (or wishes) to remain functional.

sudosysgen
1 replies
1d

They are higher, actually, especially in this case and otherwise by definition. The problem is that most of the cost ends up externalized.

nicce
0 replies
22h24m

Indeed. A company should go close to bankruptcy in an incident like this. But CrowdStrike pays fractions.

stanleykm
4 replies
1d

ive said this before but we have min-maxed our economy to optimize for profit. We may be entering the reaping phase of that now.

its the same reason we cant make enough artillery shells for ukraine or onshore chipmaking or build ships.

zrav
1 replies
22h51m

ive said this before but we have min-maxed our economy to optimize for profit. We may be entering the reaping phase of that now.

Expanding the scope beyond the economy, one could certainly make the claim that the Age of Consequences is upon us, and that William Gibson's "Jackpot" isn't far off either. We're increasingly and collectively impacted by the fallout from decades of bad decisions.

utensil4778
0 replies
21h23m

Age of consequences, indeed.

It really does feel like we (humanity) are on the precipice of something. We're smack in the middle of an era that entire books will be written about. I really don't like thinking about the decades to come and what kind of world our grandchildren will have.

rbanffy
1 replies
23h45m

Who would imagine that optimising for profit and profit alone would result in such a fragile ecosystem?

Sorry for the sarcasm.

abbadadda
0 replies
23h31m

Boeing enters the chat

whatwhaaaaat
0 replies
1d

It’s also the proliferation of not-really-technical people calling the shots and even developing technical programs.

15 years ago most people in the industry actually wanted to be there except for perhaps a few exceptions.

Now it’s just like any other job with the majority of developers not giving a shit about what comes out which turns out to be shit.

I haven't run into a senior or lower developer that actually bothered to test their junk in at least a few years.

teeheelol
0 replies
23h52m

This is not about product quality.

This is marketoid head-shitting influencers selling garbage to people who have no idea what the fuck they are doing other than compliance box ticking to cover their paranoid vendor induced psychosis. They have no idea about threat modelling, no idea about how an operating system even works. Using the CIA triad, they traded off AVAILABILITY entirely, CONFIDENTIALITY partially by sending every fart a system makes to some cloud company for some false sense of INTEGRITY that the vendor does not even guarantee. In fact I've found fuck all evidence that the product actually does anything at all in a correctly layered security architecture.

This is the diametric opposite of a security proposition. A house of cards built on lies and incompetence. Literally this product is the definition of malware. It egresses data you have no control over. It has built in remote command and control that can take you out. And it runs completely arbitrary code in ring 0 (clearly written by a blind monkey)

I called this ENTIRE thing in March 2023 as a critical business risk and the above people steamrolled it. Literally I have the objections recorded and I'm going to use it at this point to steamroll them the fuck back.

It doesn't matter if it's made of shit if people buy it. It happens to be made of shit though.

I am done with this now. I am so fucking done.

TestingWithEdd
0 replies
1d

Yeah it’s sad. Some companies are better than others, and it’s personally my favorite part of software engineering, but a lot of large companies cut it at the first opportunity.

pipes
9 replies
23h46m

What is CrowdStrike's unique selling point? Genuine question, because I'd never heard of them before this.

rbanffy
4 replies
23h40m

“Dear CTO, do you want to be seen as the person who didn’t take every measure to stop hackers from stealing your data? Then buy our stuff”

Something like that.

teeray
3 replies
23h31m

Close, but instead of an email, it’s probably a conversation during a game of golf at an exclusive private club.

rbanffy
1 replies
23h27m

“It’s a nice IT infrastructure you have at XCorp. Would be a shame if something happened to it”…

Or

“You really don’t want to be the last one in the finger pointing chain. Come on, sign here and you can point to us. Our lawyers will deal with the mess”.

Am4TIfIsER0ppos
0 replies
22h58m

“It’s a nice IT infrastructure you have at XCorp. Would be a shame if something happened to it”…

That happens to be the Cloudflare sales pitch too.

marcosdumay
0 replies
22h18m

It's an auditor, from insurance or some financial service, with a list of 2 or 3 companies, 1 of them being famously invaded since the turn of the century (once, continually).

jdgoesmarching
2 replies
22h12m

Really weird to see HN fail to explain a simple software question.

Crowdstrike Falcon specifically is their AV offering, the selling point is the ease of deploying and managing their agents alongside the rest of their security platform.

Many compliance frameworks either require or are interpreted as requiring AV, so regardless of utility this type of tool is necessary for many orgs. Hopefully this whole debacle will shine a light on that assumption, but that’s a separate conversation.

Deploying AV agents in bulk is a huge pain in the ass and most companies that make it easier aren’t going to be cheap. I imagine C-suites are more likely to approve expensive RFPs if they’ve heard of this company that sponsors a major F1 team.

LtWorf
1 replies
11h3m

Are you a salesman for crowdstrike?

jdgoesmarching
0 replies
3h57m

If that’s the conclusion you’ve drawn from me questioning their entire business model and arguing that their selling point is advertising on race cars, I don’t know how to help you.

CSMastermind
0 replies
23h41m

At face value they sell it as a security product. In reality it's a tool for employers to spy on employees, control what they do on the device, etc.

The unique selling point in a sense is plausible deniability about the true purpose of the software.

xyst
6 replies
1d

Need a “crowdstrike.sucks” or maybe a general “it.sucks/{company}” to gather all of these company misgivings.

Avoid this company at all costs. Move to competitors which offer the same “audit passing” requirements

jsheard
5 replies
1d

Funnily enough crowdstrike.sucks is already registered through CSC, the same fancy corporate registrar which handles crowdstrike.com.

The people behind .sucks have a great racket going, every big brand has to pay $300/year for their name dot sucks just so nobody else can get it.

rbanffy
3 replies
23h39m

They also own “clownstrike.com”. The fact they proactively registered it says a lot.

j-bos
1 replies
23h29m

Proactive marketing culture > proactive testing culture

svnt
0 replies
23h7m

Probably the relationship is more causative: A decided lack of a proactive testing culture feeds back into the org through sales interactions but only makes it as far as the discretionary credit card of someone in the marketing group

smsm42
0 replies
20h51m

They've been called that way before they made Jul 19 the International Bluescreen Day.

xyst
0 replies
23h32m

Wow, $300/year is wild. In this influencer age, negative attention is the cash grab.

<researching how to setup TLD, maybe set it up as a DAO>

yolo3000
4 replies
23h27m

Is anyone here using Crowdstrike, what does it do? I see it referred to as an 'anti-virus'? I have it installed on my work laptops and I see it as a keylogger and activity monitor. "I got nothing to hide", but still bothers me when some corporate super users spy on me.

surfingdino
1 replies
22h40m

It gets you a sign-off from the security, compliance, and legal teams.

jdgoesmarching
0 replies
22h11m

This right here

kchr
1 replies
20h43m

Instead of calling out AV/EDR solutions as malware and spyware, I have a better solution: stop using your workstations for private stuff. They belong to the company, and they are a liability since you use them to access the company environment and could cause damage if actual malware found its way onto them. Use a separate device for private stuff if you want privacy.

josephcsible
0 replies
18h43m

Yes, you should be using a separate computer for personal stuff, but you should still be calling out that spyware for what it is too.

paholg
4 replies
23h56m

I used to work in this space, and I always had the nagging question of "is any of this stuff actually useful?"

It seems a hard question to answer, but are there any third party studies of the effectiveness of Crowdstrike et al. or are we all making our lives worse for some security theater?

marcosdumay
2 replies
22h22m

Have you seen it actually stop anything? (I'm sure the company that made the tool used it too, right?)

If I pose a WWW-wide question of "has anybody seen it?", somebody will appear. But the number of people that got a security flaw caused by those tools is huge, and the number of people that got stability and availability problems because of them is basically the number of people that use them.

paholg
1 replies
22h0m

I worked on something different, but we integrated with Crowdstrike and such.

Maybe someone could do a study of like breaches in Fortune 500 companies that use an EDR vs. those that don't, but they probably all do at this point.

sam_bristow
0 replies
19h48m

I would imagine any study like that would also be just packed with confounding factors.

nocsi
0 replies
19h13m

It’s like trying to study the effectiveness of antivirus. But you already said it. As long as it produces consumable metrics a c-level can ingest, then it’s worth it. Because really, how does it make sense to add something so invasive? Anyways in the 90s, antivirus makers also wrote viruses. They’d go on to flood networks with their creations, but magically block infection for their subscribers.

lkdfjlkdfjlg
4 replies
23h56m

( ... ) experienced significant disruptions as a result of CrowdStrike updates, raising serious concerns about the company's software update and testing procedures

To me the issue isn't CrowdStrike's testing procedures. To me the issue is why does Debian depend on CrowdStrike? Does anyone understand this?

Sakos
2 replies
23h45m

Debian doesn't depend on CrowdStrike. CS provides Linux clients for organizations that want to deploy it with their Linux workstations/servers.

From the article:

In April, a CrowdStrike update caused all Debian Linux servers in a civic tech lab to crash simultaneously and refuse to boot. The update proved incompatible with the latest stable version of Debian, despite the specific Linux configuration being supposedly supported. The lab's IT team discovered that removing CrowdStrike allowed the machines to boot and reported the incident.

Debian is as at fault for CrowdStrike's incompetence and negligence as Microsoft is. That is to say, not at all.

lkdfjlkdfjlg
1 replies
23h33m

Right, thanks for that. My fault for skipping every other word.

This had really soured my day for a bit there, but you brought it back.

Debian is one of those things that I consider a source of stability in my software life, together with git, wikipedia, and openstreetmap. Believing that it depends on some dodgy company really put me in a bad mood.

rbanffy
0 replies
23h29m

Indeed. Debian is the BSD of the Linux world, CentOS/RHEL being the Solaris.

As usual, this was a self-inflicted pain by companies wishing to check yet another box to externalise liability.

jccooper
0 replies
23h43m

Debian doesn't depend on CrowdStrike. That's about people who installed it on such systems.

righthand
3 replies
1d

“No one noticed” which is a cute way to say that Crowdstrike suppressed the media noticing. The day of the bug, the HN post had comments about how people tried reporting the issue months ago.

Even the article is written as people noticing. So who didn't notice? Or were the issues just not popular enough to avoid being ignored?

rbanffy
1 replies
23h32m

I remember something about it, but, at the time I thought that very few people would install such software in Linux (and, indeed, very few companies do).

The blast radius was minimal, probably smaller than a bad Nvidia driver.

LtWorf
0 replies
11h7m

My company installs falcon on servers!

I know because our apt mirror takes much longer now to sync. That's because the crowdstrike agent is using all the CPU to scan every .deb package.

And they're ar-tar packed, so unless there's a special algorithm to scan for them, nothing will ever be found in them anyway.
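
For the curious, a .deb really is just an ar archive wrapping tarballs, so there isn't much for a scanner to see without unpacking it (the package name below is a placeholder and the compression suffix varies by package):

    $ ar t some-package_1.0_amd64.deb     # list the ar members
    debian-binary
    control.tar.xz
    data.tar.xz
    $ ar x some-package_1.0_amd64.deb     # extract them
    $ tar tf data.tar.xz                  # the files that actually get installed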

WillPostForFood
0 replies
23h28m

Crowdstrike suppressed the media noticing

You think they sent checks to the NY Times, WaPo, and major networks or something? Media doesn't care if some servers crash unless it is noticeable in the world at large (like the airline groundings).

andyjohnson0
2 replies
1d

Relevant comment from yesterday's Crowdstrike mega-thread:

"Crowdstrike did this to our production linux fleet back on April 19th, and I've been dying to rant about it." [1]

Continues with a multi-para rant.

[1] https://news.ycombinator.com/item?id=41005936

ufo
0 replies
23h44m

Not just a relevant comment; this HN thread seems to be a primary source for the article.

rkagerer
0 replies
1d

...weeks later had a root cause analysis that didn't cover our scenario (Debian stable running version n-1, I think, which is a supported configuration) in their test matrix. In our own post mortem there was no real ability to prevent the same thing from happening again -- "we push software to your machines any time we want, whether or not it's urgent, without testing it"

FreakLegion
0 replies
23h42m

That's completely different. The injected DLL is an optional, off-by-default feature that admins are specifically warned to check for compatibility against their fleet before rolling out. It has four levels of increasingly unstable hooks it can apply, and there are repeated warnings about using it in the UI and documentation. E.g.:

> Extended User Mode Data (XUMD) allows the sensor to monitor information in running processes by loading a library that can hook various user-mode APIs.

Some endpoint telemetry can be gathered only through user-mode hooking. XUMD provides a flexible way to provide information about which APIs a process is leveraging. This information feeds a variety of prevention mechanisms that are available to the sensor based on the accumulated behavior observed.

Unlike Additional User Mode Data (AUMD), the cloud can dynamically modify XUMD visibility without a sensor update.

Supported prevention policy settings for XUMD:

Disabled: The extended visibility, detection, and prevention capabilities of XUMD are disabled. The hooking library is not loaded into processes.

Cautious: XUMD is enabled with high-confidence hooks that are accessible to detection and prevention logic. Performance and compatibility impact at this setting is expected to be negligible, but we recommend testing this setting in a staging environment before deploying it to production.

Moderate: XUMD is enabled with high- and medium-confidence hooks that are accessible to detection and prevention logic. This setting can result in performance or application-compatibility impact but provides expanded visibility. Performance impact at this setting is expected to be negligible, but we recommend testing this setting in a staging environment before deploying it to production.

Aggressive: XUMD is enabled with high-, medium-, and low-confidence hooks that are accessible to detection and prevention logic. This setting can result in significant performance or application-compatibility problems. This setting is not recommended for production environments without significant prior testing in a staging environment.

Extra Aggressive: XUMD is enabled with high-, medium-, low-, and experimental-confidence hooks that are accessible to detection and prevention logic. This setting can result in significant performance problems or application compatibility problems. This setting is not recommended for any production environment but might be appropriate for penetration and stress testing in specific limited deployments.

Because XUMD is loaded in user processes that were not developed with it, negative interactions with other software might occur. This is most common when other security products are installed. In certain software environments, conflicting software might crash, fail to start, or suffer degraded performance. In these scenarios, move a test system into a policy where XUMD is disabled, reboot the host, and then retry the software. If the issue is resolved, open a Support case and request assistance in resolving the conflict. Support can assist in diagnosing and resolving these issues between XUMD and specific software.

To determine which processes have loaded the XUMD DLL, run the following command at the command line:

tasklist /m csxumd*

utensil4778
0 replies
21h30m

Huh, this story sounds familiar. I read a HN comment the other day telling this same story. They didn't just turn a random HN comment into a news article, did they?

Yup. They did. At least they cited it I suppose.

smsm42
0 replies
20h56m

The update proved incompatible with the latest stable version of Debian, despite the specific Linux configuration being supposedly supported.

The analysis revealed that the Debian Linux configuration was not included in their test matrix.

This is suspiciously close to actual fraud. They declare they support configuration X, but they do not actually do any testing on configuration X. That's like telling me my car will have seatbelts, but nowhere in manufacturing is it ensured that the seatbelts are actually installed and work. I think a car maker that did something like that would be prosecuted. Why isn't CrowdStrike? I mean, it's one thing if they don't support some version of Linux - ok, too many of them, I can get it. But if you advertise support for it without even bothering to test on it - that's at best willful negligence and possibly outright fraud.

romaniitedomum
0 replies
18h43m

Not just Debian and Rocky, but RHEL too. https://access.redhat.com/solutions/7068083

I ran into this doing CentOS7 to Alma9 upgrades. The bug was in RHEL, Alma and Rocky and any other distro derived from RHEL. I had a VM go into an endless reboot cycle and the only way to get back in was to boot to an emergency rescue console and disable falcon-sensor.

The problem was something to do with eBPF, and one of the workarounds was to tell falcon sensor to use kernel mode and not auto or user (bpf) mode.

We don't allow automatic updates on hosts, however, so thankfully this was contained, but it certainly raises the question of just what testing CrowdStrike are doing.
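
For reference, the workaround looked roughly like this (from memory, and falconctl options can differ between sensor versions, so treat it as a sketch and check your own docs):

    # see which backend the sensor is using (auto / bpf / kernel)
    sudo /opt/CrowdStrike/falconctl -g --backend
    # force the legacy kernel-module backend instead of eBPF
    sudo /opt/CrowdStrike/falconctl -s --backend=kernel
    sudo systemctl restart falcon-sensor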

neilwilson
0 replies
1d

This is all a consequence of firms being able to contract out of consequential liability.

Perhaps we should render such clauses unenforceable, as we do with contracting out of consequential loss of life.

Or at least limit them.

kermatt
0 replies
22h28m

Is it possible that these events had less impact because the damage was less / more easily fixed due to the nature of the OS?

Or perhaps because the admins of Linux systems are typically more knowledgeable about how to run their platforms, and not just install them?

Or is it due to sheer numbers of enterprise software running on Windows?

kayo_20211030
0 replies
22h27m

In the end, because of regulatory pressure, the only pressure that matters in a commercial environment, there will be three supported OS's: a windows one, a mac one, and probably one, and only one, Linux distribution, or flavor thereof. Everything else will be toast in a commercial environment. For Linux there might be this AWS one, and that Google one, but they'll be close. And, in order to satisfy regulatory requirements, they'll be very, very close. Commercial organizations have bosses and, more ominously, regulators. We, and they, need a checkbox checked. So let's not fool ourselves with thoughts of freedom and liberty. There's a real world out there.

CrowdStrike screwed up, but there's more chance that a 1000 linux's go to one than 1 CrowdStrike goes to zero.

egorfine
0 replies
20m

CrowdStrike should prioritize rigorous testing across all supported

They should not.

Testing costs money and they aren't selling their product to a company that needs or wants it on a competitive market. Their business model is based on shoving the product down the throat of enterprises due to compliance and therefore they have zero incentive to invest any money into quality.

bigcat12345678
0 replies
22h32m

I think someone noticed. And was thinking: people won't be happy to fix this and I am not allowed to fix it either. Well, it might be just like the 3000 other rare issues that would one day break the world's IT. Who cares...

3np
0 replies
19h52m

Anecdote from SWIM: Employer corp supposedly has CS deployed on all endpoints. Been getting away with just running it in a VM with restricted resources and not hearing anything about it. Did notice the VM failing around that time.

Also heard from an ex-coworker in another large corp where IT just gave up on enforcing compliance for Linux endpoints. I wouldn't be surprised if some IT admins effectively adopt a "don't ask, don't tell" policy here: if you can figure out the alternative stack by yourself without causing noise or lying about it, you're on your own. It'd certainly make sense if the motivation for enforcement is largely checkbox compliance.

I wonder just how widespread this kind of admittedly malicious compliance is and how much it contributed to the April incident not being bigger news...