
Timeline of the xz open source attack

nrawe
42 replies
5h9m

"It's also good to keep in mind that this is an unpaid hobby project." ~ Lasse Collin, 2022-06-08.

As someone working in security, the fact that _foundational_ pieces of computing/networking infrastructure rely on motivated individuals and, essentially, goodwill is mind-blowing.

There are great aspects to the FOSS movement, but the risks – particularly the social engineering demonstrated here – and the potential blast radius of supply chains like this are sobering. We take it all for granted, and that is lining up to bite us hard as an industry.

zoeysmithe
9 replies
4h46m

Sorta-relevant (if flawed) xkcd: https://xkcd.com/2347/

I don't see how this is meaningfully different from some unappreciated skilled worker in a corporation. It's interesting, the double standard we have for FOSS. Meanwhile, in the commercial world, supply chain attacks are commonplace and barely elicit headlines.

Yes, FOSS needs to be able to address these kinds of attacks, but the world runs on the efforts of the low-level few, generally. The percentage of people who work to build and maintain core infrastructure has always been small in any economic system. The world is held up by the unsung labor of the anonymous working class. Think of all the people working right now to make sure you have clean water, electricity, sanitation, etc. It's a tiny fraction of the people in your city.

Conversely, why aren't all these corporations who depend on this contributing themselves? Or reaching out? There's a real parasitic aspect here that gets swept under the rug too.

I'd even argue this isn't really a hobby for many, especially on higher-profile projects. For many it's done for social capital, to build up one's reputation, which has all sorts of benefits: corporate advancement, creating connections for startups, etc. It's career-adjacent. And that's ignoring all the companies that contribute to FOSS explicitly with on-the-clock staff.

So there are motivators beyond just "I'm bored and need a hobby." It's a little dismissive to call FOSS development just a hobby. Is what Linus does a hobby? I don't think most people would think so. Things like this have important social and economic motivators. The hypothetical guy in the comic isn't some weirdo doing something irrational; he has rational motivators.

I'd also argue that it's pretty harmful to FOSS adoption if the community takes on a "well, it's a hobby, don't expect quality, security, or professionalism" attitude. This is a great way to chase people away from FOSS. We can't just say "Oh, FOSS is better than much closed software" when things are good, then immaturely reply "it's just a dumb hobby, you're dumb for trusting me" when things go south. I think it's pretty obvious there's a lot of defensiveness right now, with people being protective of their social capital and projects, but I think this path is just the wrong way to go.

Comms, PR, and image management in FOSS are usually bad (see Linus's rage, high-profile flame wars, dramatic forkings, ideological battles, etc.), so the optics here aren't great, because optics are something FOSS struggles with. The community is, at best, herding cats, with all manner of big personalities and egos, and it's usually a bit of a controlled car crash on the best of days.

phicoh
6 replies
4h33m

I think there is a fundamental difference between how corporations used to work and how open source typically works.

In a traditional corporation, people would come to an office. It would be known where they live. If you would require something like (code) review, it becomes a lot harder to plant something. Obviously not impossible, but hard for all but the most dedicated attackers.

In contrast, with open source and poorly funded projects, people don't always have money to travel, so the people working on a project may only know each other by online handles. Nerds typically don't like video conferencing, so it is quite possible to keep almost everything about an identity secret.

And that makes it a lot more attractive to just try something. If something goes wrong, the perpetrator is likely in a safe jurisdiction.

zoeysmithe
2 replies
4h20m

tbf, most security issues aren't from some insider, but from outsiders discovering exploits. The insider scenario here is extremely rare in both commercial and FOSS software.

Corporate insiders do stuff like this too; it's just, how often do we hear about it? FOSS has high visibility but closed source doesn't. Think of all the shady backdoors out there. Or what Snowden and others revealed.

On average, a 100% FOSS organization is going to be much, much more secure than a 100% commercial closed-source one. Think of all the effort it takes to even moderately secure a Windows/closed-source stack environment. It's an entire massive industry! CrowdStrike alone has a $76bn market cap, and that's just one AV vendor!

Commercial software obeys the dictates of modern capitalism. Projects get rushed; code review and security take a back seat to quarterly reports and launch dates. This makes closed-source security issues common.

Usually when the exploit is discovered the attacker is far outside the victim's jurisdiction. See all the crypto gangs operating from non-Western non-extradition states.

pixl97
1 replies
3h15m

> Corporate insiders do stuff like this too, its just how often do we hear about it?

Pretty much never.

One particularly terrible case I saw was when a developer left a testing flag in a build that got pushed to production and used for years. Had you set the right &whatever flag in the URL, you'd have had unauthenticated access to everything. It was discovered years after the fact, when the software was no longer supported, so nothing was ever written up and disclosed to users. "They shouldn't have been using it by now anyway; no use in bad press and getting users worried."
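The anti-pattern described above can be sketched in a few lines (hypothetical names; a minimal Python illustration, not the actual software in question):

```python
def render_admin_console():
    return "admin console"

def redirect_to_login():
    return "login page"

def handle_request(params, session):
    # Forgotten test hook: anyone who knows the flag bypasses auth entirely.
    if params.get("debug_bypass") == "1":
        return render_admin_console()
    # The "real" code path, which every review focuses on.
    if not session.get("authenticated"):
        return redirect_to_login()
    return render_admin_console()
```

Because the hook sits on the production request path, only a review of the "innocuous" handler code, or an audit of every accepted parameter, would catch it.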

01HNNWZ0MV43FF
0 replies
2h2m

And I'm guessing there was no Five Whys or equivalent to ask how to prevent this from happening again.

No time to do things right...

singularity2001
1 replies
4h13m

You may want to read Kevin Mitnick on how (relatively) easy it is to infiltrate physical spaces.

Fnoord
0 replies
4h3m

Mitnick has since passed away.

Read up on red teaming and social engineering in general; there are many more examples out there. I thoroughly enjoy these specific stories on the Darknet Diaries podcast.

nradov
0 replies
4m

True, but we have to assume that nation-states are now actively inserting or recruiting intelligence agents at prominent tech companies. US authorities already caught a Saudi spy at Twitter. How many haven't been caught yet? If I were running foreign intelligence for China or Israel or any other major country, I would certainly try to place agents inside Google, Apple, OpenAI, etc.

nradov
0 replies
9m

If you're not getting paid then it's just a hobby. And there's nothing wrong with hobbies. As a FOSS contributor myself I feel no obligation to promote FOSS adoption. Quality, security, and professionalism are not my problem; anyone who cares about those things is welcome to fork my code.

asa400
0 replies
1h31m

This is one of the sanest comments I've ever seen describing what FOSS actually is. I think you nailed it when you said:

> We can't just say "Oh FOSS is better than much closed software" when things are good, then immaturely reply "it's just a dumb hobby, you're dumb for trusting me," when things go south.

It's weird. There are the explicit expectations of FOSS (mostly just licenses, which say very little), and the implicit expectations (everything else).

It's anarchic and ad hoc in a way that leaves the question of "what are we actually doing with this project(s)" up for all kinds of situational interpretation, as you noted. This is bad, because this ambiguity leads to conflict when the various actors are forced to reveal their expectations, and in doing so show that their expectations are actually quite divergent (i.e., "this is my fun hobby project!" vs. "my company fails without this bugfix!" vs. "I thought this was a community project!" vs. "This project is for me and my company, I call the shots, you're welcome to look at the code, though").

It's a little bit like the companies that are like "we have a flat management hierarchy, no one really reports to anyone else". It's just not true. It's almost always used as a ruse to dupe a certain class of participant that isn't sophisticated enough to know that these kinds of implicit power hierarchies leave them at a disadvantage. There's always a structure, it's just whether that structure is explicit or not. This kind of wishy-washy refusal to codify project roles/importance in FOSS is not doing us any favors. In fact I think it prevents us from actively recognizing the "clean water" role that an enormous number of projects play.

There's real labor power here if we want it, but our continued desire to have FOSS be everything to everyone is choking it.

riskable
8 replies
4h37m

This event should be a wake-up call to businesses everywhere: It's not just a small number of "core" FOSS projects that need their support (funding and assistance!). Before this event who was thinking about a compression library when considering the security of their FOSS dependencies?

The scope of "what FOSS needs to be supported and well-funded" just increased by an order of magnitude.

kashyapc
2 replies
4h24m

Yeah, as I note elsewhere in this thread, the OpenSSL "Heartbleed" saga should've taught some lessons, but alas, it's "classic" human nature to repeat our mistakes.

mistrial9
1 replies
3h16m

No. Funding is going to places where profit is returned on investment, NOT to the tedious, long-term work that all of this sits on. It is not "human nature": the tedious, high-skill maintenance is done by humans who built the infrastructure and continue to be crucial.

There is no accountability; in fact there are high-fives and star turns for those taking piles of money and placing it on more piles of money, instead of doing what appears obvious to almost everyone in this thread: paying long-term engineers.

kashyapc
0 replies
3h10m

No counter-argument; fully agree. FWIW, that's what I was referring to when I said "fat companies" (and executives) are glossing over the important tedious work here[1].

[1] https://news.ycombinator.com/item?id=39905802

raxxorraxor
1 replies
3h37m

This would of course be nice. The fact that so much of our infrastructure is based on the work of people sharing it openly – a practice heavily in contrast to industry behavior – is sadly still little known outside of software development.

The demand for more assistance is exactly the angle that was played in the social engineering here, specifically the pressure to take on more maintainers due to workload. Such "support" is especially dangerous when it takes the form of providing source archives with manipulated build scripts that are rarely checked by third parties.

There is also the problem of a badly behaving industry that tries to take control of "hobby projects". Speaking of which, these "hobby" projects often have much better code than many, many industry codebases.

I think FOSS overall still lessens the risk. It has become riskier since development got entangled with social media, which often allows developers to be shouted down or exploited much more easily.

consp
0 replies
3h26m

> allow for developer being shouted down or exploited much more easily

"No" is an answer. And blocking people should be a thing.

Personally, as a result of such demands, I no longer publish anything that isn't "non-commercial only"; you can do it yourself if you want to make money off it (or if you demand things, for that matter). Fortunately my online stuff isn't used much, but even then it's possible to get "requests".

xhkkffbf
0 replies
2h52m

I'm all for funding FOSS, but how would the money have made any difference here? It just would have made "Jia Tan" a bit richer, right?

lolinder
0 replies
3h56m

Funding all of these deep dependencies may have helped in this case, but it wouldn't address the root of the problem, which is that every business out there runs enormous amounts of unsandboxed third-party code. Funding may have kept xz specifically from falling to the pressure to switch to a malicious maintainer, but it does nothing about the very real risk that any one of the tens of thousands of projects we depend on has always been a long con.

The solution here has to be some combination of a dramatic cut in the number of individual projects we rely on and well-audited technical solutions that treat all code—even FOSS dependencies—as potentially malicious and sandbox it accordingly.
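As a rough illustration of "treat all code as potentially malicious", third-party tooling can at least be run in a child process with a stripped environment and a CPU cap. This is a minimal sketch only: real isolation needs seccomp, namespaces, or containers, and `preexec_fn` is POSIX-only.

```python
import resource
import subprocess
import sys

def run_untrusted(code: str, timeout: int = 5) -> subprocess.CompletedProcess:
    """Run third-party Python code in a separate process.

    env={} keeps secrets out of the child's environment variables, and a
    CPU-seconds rlimit stops runaway loops. This is NOT a real sandbox:
    the child can still touch the filesystem and the network.
    """
    def limits():
        resource.setrlimit(resource.RLIMIT_CPU, (timeout, timeout))

    return subprocess.run(
        [sys.executable, "-c", code],
        env={},                # no inherited secrets
        preexec_fn=limits,     # CPU cap applied inside the child
        capture_output=True,
        text=True,
        timeout=timeout,       # wall-clock backstop
    )
```

Even this crude layer would not have stopped the xz payload (which ran at build time and modified build output), which is why the comment argues for auditing and reducing dependencies as well.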

jdsalaro
0 replies
4h14m

> This event should be a wake-up call to businesses everywhere

This ought to be a wake-up call not only for businesses, but also for hobbyists and members of the public in general; our code, our projects, and our social "code-generating systems" must be hardened, or they will face rampant abuse and ultimately be weaponised against us.

In a way, these issues FOSS is facing, which are now becoming apparent, are no different from those democracy has been subjected to since time immemorial.

phicoh
5 replies
4h41m

I think a problem is that there doesn't seem to be any way to automatically check this. If we assume that anything used at build time can be malicious, then figuring out those dependencies is already hard enough. Mapping that to organizational stability is one step further.

dylan604
4 replies
3h18m

This is where the "FOSS is reviewable, so it is trusted" argument falls down. This situation is a prime example of how that notion is misconstrued. Being FOSS didn't make it trustworthy; it just meant that people had a fighting chance to find out why after something happened. That's closing the barn door after the horses have already left.

I'm not knocking FOSS at all; I just think some people have the concept twisted. It's just like the meme that being written in Rust means the code is fast/safe by the mere fact it was written in Rust. I don't write Rust, but if I did, I guarantee that sheer not knowing WTF I'm doing would result in bad code. The language will not protect me from myself. FOSS will not protect the world from itself, but it does at least allow for decent investigations and after-action reports.

jethro_tell
2 replies
3h6m

You don't think every nation state has people inside private software shops? Especially big tech?

Look at stuff getting signed with MS keys, hardware vendors with possible backdoors.

Social engineering is social engineering, and it can happen anywhere, no matter the profit motivation or lack thereof.

A money interest in software won't save you any more than FOSS will.

nrawe
1 replies
2h39m

It'd be naive to assume that nation state actors are not trying to penetrate the supply chain at all levels, as it just takes a single weak link in the chain. That weak link could be behind corporate doors or in the open.

The main issue is that this attack shows how a relatively unknown component, as part of a much larger and more critical infrastructure, is susceptible to pressure as a result of "this is a hobby project, lend a hand".

At what point do these components become seen as a utility and in some way adopted into a more mainline, secure, well-funded approach to maintenance? That maintenance can, and probably should, happen in the open, but with the requisite level of scrutiny and oversight worthy of a critical component.

We got very lucky, _this time_.

dylan604
0 replies
2h30m

> this attack shows how a relatively unknown component

why just this one? do we collectively have the memory of a goldfish? just recently, log4j had a similar blast radius. is it because one was seemingly malicious that the other doesn't count?

phicoh
0 replies
2h8m

We should not think in absolutes, but in terms of tools, and what risks come with using a certain tool.

In your Rust example, using C is like using a power tool without any safety measures. That doesn't mean that you are going to get hurt, but there is an expectation that a sizable fraction of users of such tools will get hurt.

Rust is then the same tool with safety measures. Of course it is still a power tool, you can get hurt. But the chances of that happening during normal operation is a lot lower.

I think xz is a good example of where open source happened to work as intended. Somebody noticed something weird, raised the alarm, and other people could quickly identify what other software might have the same problem.

jddil
5 replies
4h3m

Never understood why our industry seems unique in our willingness to do unpaid work for giant corps. Your compression library isn't saving the world, it's making it easier for amazon to save a few bucks.

You have the right to be paid for your time. It's valuable.

I enjoy coding too... but the only free coding I do is for myself.

Use a proper license, charge for your time and stop killing yourself doing unpaid hobby projects that cause nothing but stress.

dugite-code
1 replies
3h55m

> why our industry seems unique in our willingness to do unpaid work for giant corps.

Because it never starts that way. It scratches an itch, solves an interesting puzzle, and people thank and praise the work. Deep down we all want to be useful, and it helps that it looks great on a résumé.

After it's established, the big corps come along, but the feeling of community usefulness remains. It's also why so many devs burn themselves out: they don't want to disappoint.

maerF0x0
0 replies
1h27m

IMO this is exactly why. The payout comes later, if the project is successful.

cesarb
1 replies
3h25m

> Never understood why our industry seems unique in our willingness to do unpaid work for giant corps. Your compression library isn't saving the world, it's making it easier for amazon to save a few bucks.

The work was not being done "for giant corps"; it was being done for everyone, and giant corps just happen to be a part of "everyone", together with small corps, individual people, government, and so on.

> You have the right to be paid for your time. It's valuable.

When you think of free software contributions as "volunteer labor" instead of just a hobby, it makes more sense. Yes, my time is valuable; when I'm working on free software, I'm choosing to use this valuable time to contribute to the whole world, without asking for anything in return.

squigz
0 replies
2h59m

Weird that this is a concept people struggle to understand...

bombcar
0 replies
2h4m

Did you get paid for the post you just wrote? People at giant corps are reading it right now; they're getting value from it. You deserve to be paid!

If you understand why you'd post without being paid, you're 80% of the way to understanding why people program without being paid.

JTbane
3 replies
4h30m

That's just the nature of free software: you have to either trust the maintainer or do it yourself. There isn't really a way around it.

- Corporate maintainers are great until they enshittify things in the pursuit of profit (see Oracle)

- Nonprofits are probably the best but can go the same route as corps (see Mozilla)

- Hobbyists are great until they burn out (see xz)

apantel
2 replies
4h18m

I think the level of complexity is the problem. A bad actor can be embedded in any of the above contexts: corp, non-profit, FOSS hobbyist. It doesn’t matter. The question is: when software is so complex that no one knows what 99.9% of the code in their own stack, on their own machine, does (which is the truth for everyone here including me), how do you detect ‘bad action’?

Eisenstein
1 replies
3h30m

The level of complexity involved in making sure that electrical plants work, that water gets to your home, that planes don't crash into each other, that food gets from the ground to a supermarket shelf, etc., is unfathomable, and no single person knows how all of it works. Code is not some unique part of human infrastructure in this aspect. We specialize and rely on the fact that, by and large, people want things to work, and as long as the incentives align, people won't do destructive things. There are millions of people acting in concert to keep the modern world working, every second of every day, and it is amazing that more crap isn't constantly going disastrously wrong, and that when it does we are surprised.

01HNNWZ0MV43FF
0 replies
2h27m

> Code is not some unique part of human infrastructure in this aspect

It kinda is. The fact that code costs fractions of a penny to copy and scale endlessly, changes everything.

There's hard limits on power plants, you need staff to run them, it's well-understood.

But software - You can make a startup with 3 people and a venture capitalist who's willing to gamble a couple million on the bet that one really good idea will make hundreds of millions.

Software actually is different. It's the only non-scarce resource. Look at the GPL - Software is the only space where communism / anarchy kinda succeeded, because you really can give away a product with _nearly_ no variable costs.

And it's really just the next step on the scale of "We need things that are dangerous" throughout all history. Observe:

- Fire is needed to cook food, but fire can burn down a whole city if it's not controlled

- Gunpowder is needed for weapons, but it can kill instantly if mishandled

- Nuclear reactors are needed for electricity, but there is no way to generate gigawatts of power in a way that those gigawatts can't theoretically cause a disaster if they escape containment

- Lithium-ion batteries are the densest batteries yet, but again they have no moral compass between "The user needs 10 amps, I'm giving 10 amps" and "This random short circuit needs 10 amps, I'm giving 10 amps"

- Software has resulted in outrageous growth and change, but just like nuclear power, it doesn't have its own morality, someone must contain it.

Even more so than lithium and nuke plants, software is a bigger lever that allows us to do more with less. Doing more with less simply means that a smaller sabotage causes more damage. It's the price of civilization.

So the genie ain't going back in. And private industry is always going to be a tragedy of the commons.

I'm not sure what government regulation can do, but there may come a point where we say, even if it means our political rivals freeload off of us, it's better for the USA to bear the cost of auditing and maintaining FOSS than to ask private corporations to bear that cost duplicating each other's work and keeping it secret.

Is that a handout to Big Tech? 100%. Balance it with UBI and a CO2 tax that coincidentally incentivizes data centers to be efficient. We'll deal with it.

jrochkind1
2 replies
3h9m

What it reminds me of is my recurring thought, not just about open source but including this aspect, that we've built up a society based on software that we could literally only afford because we've built it unsustainably. The economy could not bear the cost of producing all this software in an actually reliable, sustainable way. So... now what?

mschuster91
0 replies
3h1m

It might be a good idea for governments to coordinate with computing advocacy groups/associations (e.g. the German CCC, NANOG, popular Linux distributions such as Ubuntu, Debian, Fedora, Arch): set up a fund of maybe 10 million euros, have the associations identify critical, shared components, and work out funding with their core developers. 10M a year should fund anywhere from 100-200 developers (assuming European wages); that should be more than enough, and it's pocket change for the G20 nations.
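The back-of-the-envelope math above checks out under an assumed all-in cost of 50k-100k euros per developer per year (the salary range is my assumption, implied by the 100-200 figure):

```python
fund = 10_000_000  # euros per year, the proposed fund
cost_low, cost_high = 50_000, 100_000  # assumed all-in cost per developer

# Integer division gives the headcount range the fund could sustain.
print(f"{fund // cost_high} to {fund // cost_low} developers")  # → 100 to 200 developers
```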

If that's too much bureaucracy or people fear that governments might exert undue influence: hand the money to universities, go back to the roots - many (F)OSS projects started out in universities after all. Only issue there is that projects may end up like OpenStack in the end ;)

Filligree
0 replies
3h5m

Could it really not afford it? I'm not convinced that is the case, so much as that we don't have a way to pay people for their effort.

Inf0s3c
1 replies
3h9m

Also in security and 100% tired of the code slingers who get annoyed by security reviews

Here on HN it’s been derided as “company just checking a box for compliance but adds no functionality. It slows us down when we want to disrupt!” - developer of yet another todo list or photo editor app…

Buffer overflows and the like are one thing. But the notion from this blog that certain "normal" files won't be well reviewed is a bad smell in software. Innocuous files should be just as rigorously reviewed; it's all part of the state of the machine.

“This is how it’s always worked” is terrible justification

Startup script kiddies git-pulling the internet and single-maintainer open source projects aren't cutting it; if it's that important to the whole, those in charge of the whole need to make sure it's properly vetted.

I’m an EE first; this really just makes me want to see more software flashed into hardware.

ok123456
0 replies
34m

A security review or snake-oil AI black box didn't stop this. It was stopped by a "code slinger" who noticed a performance regression in ssh.

Deloitte coming in with a checklist would NEVER have stopped this one.

djvdq
0 replies
3h0m

Nothing will really change, sadly. Remember log4j? There was also a lot of talk then about why people working on FOSS should be paid. And after a month, almost no one remembered those voices except for a small minority of people.

agumonkey
0 replies
2h21m

What's gonna happen now? A team of FOSS-sec chaos monkeys trying to run checks on a core set of libs?

psanford
36 replies
2h42m

One big takeaway for me is that we should stop tolerating inscrutable code in our systems. M4 has got to go! Inscrutable shell scripts have got to go!

It's time to stop accepting that the way we've done this in the past is the way we will continue doing it ad infinitum.

kibwen
19 replies
2h38m

That's a great first step, but ready your pitchforks for this next take, because the next step is to completely eliminate Turing-complete languages and arbitrary I/O access from standard build systems. 99.9% of all projects have the capability to be built with trivial declarative rulesets.

hyperman1
9 replies
2h18m

Java's Maven is an interesting case study, as it tried to be this: a standard project layout, a standard dependency mechanism, pom.xml as a standard metadata file, and a standard workflow with standard targets (clean/compile/test/deploy). Plugins for what's left.

There might have been a time when it worked, but people started to use plugins for all kinds of reasons, quite good ones in most cases: FindBugs, code coverage, source code generation, ...

Today, a Maven project without plugins is rare. Maven got us 95% of the way, but there is a long tail left to cover.

wongarsu
3 replies
1h54m

Most of these could still be covered with IO limited to the project files though.

There is a sizable movement in the Rust ecosystem to move all build-time scripts and procedural macros to (be combined to) WASM. This allows you to write turing-complete performant code to cover all use-cases people can reasonably think of, while also allowing trivially easy sandboxing.

It's not perfect, for example some build scripts download content from the internet, which can be abused for information extraction. And code generation scripts could generate different code depending on where it thinks it's running. But it's a lot better than the unsandboxed code execution that's present in most current build systems, without introducing the constraints of a pure config file.

comex
1 replies
1h7m

But the xz backdoor didn’t involve a build script that tried to compromise the machine the build was running on. It involved a build script that compromised the code being built. Sandboxing the build script wouldn’t have helped much if at all. Depending on the implementation, it might have prevented it from overwriting .o files that were already compiled, maybe. But there would still be all sorts of shenanigans it could have played to sneakily inject code.

I’d argue that Rust’s biggest advantage here is that build scripts and procedural macros are written in Rust themselves, making them easier to read and thus harder to hide things in than autotools’ m4-to-shell gunk.

But that means it’s important to build those things from source! Part of the movement you cite consists of watt, which ships proc macros as precompiled WebAssembly binaries mainly to save build time. But if watt were more widely adopted, it would actually make an attacker’s life much easier since they could backdoor the precompiled binary. Historically I’ve been sympathetic to dtolnay’s various attempts to use precompiled binaries this way (not just watt but also the serde thing that caused a hullabaloo); I hate slow builds. But after the xz backdoor I think this is untenable.

mjw1007
0 replies
19m

Maybe not untenable.

If everything is done carefully enough with reproducible builds, I think using a binary whose hash can be checked shouldn't require a great extension of trust.

You could have multiple independent autobuilders verifying that particular source does indeed generate a binary with the claimed hash.

RedShift1
0 replies
1h11m

How does the sandboxing help if the compiler and/or build scripts or whatever modifies its own output?

shawnz
1 replies
1h42m

When your programmatic build steps are isolated in plugins, then you can treat them like independent projects and apply your standard development practices like code review and unit tests to those plugins. Whereas when you stuff programmatic build steps into scripts that are bundled into existing projects, it's harder to make sure that your normal processes for assuring code quality get applied to those pieces of accessory code.

dotancohen
0 replies
1m

My standard development practices like code review and unit tests do not scale to review and test every dependency of every dependency of my projects. Even at company-wide scale.

kibwen
1 replies
2h11m

> Findbugs, code coverage, source code generation,...

For the purpose of this conversation we mostly just care about the use case of someone grabbing the code and wanting to use it in their own project. For this use case, dev tools like findbugs and code coverage can be ignored, so it would suffice to have a version of the build system with plugins completely disabled.

Code generation is the thornier one, and we can at least be more principled about it than "run some arbitrary code", and at least it should be trivial to say "this codegen process gets absolutely no I/O access whatsoever; you're a dumb text pipeline". But at the end of the day, we have to Just Say No to things like this. Even if it makes the codebase grodier to check in generated code, if I can't inspect and audit the source code, that's a problem, and arbitrary build-time codegen prevents that. Some trade-offs are worth making.
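A "dumb text pipeline" codegen step, in the sense above, is just a pure function from input text to output text, so a build system could run it with all I/O revoked (a hypothetical example, not any existing tool):

```python
def generate_constants(spec: str) -> str:
    """Turn 'NAME=value' lines into a C header.

    Pure text in, text out: no filesystem or network access needed,
    which is exactly what makes it easy to sandbox and audit.
    """
    lines = ["/* generated -- do not edit */"]
    for raw in spec.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, value = line.partition("=")
        lines.append(f"#define {name.strip()} {value.strip()}")
    return "\n".join(lines)
```

A generator like this can be reviewed as ordinary code, and its output is fully determined by its visible input, unlike an arbitrary build script that can read the environment and decide where it is running.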

hyperman1
0 replies
1h32m

The xz debacle happened partially because the generated autoconf code was provided. Checking in generated code is not much better: it's a bit more visible, but few people will spend their limited time validating it, as it's not worth it for generated code. xz also had checked-in inscrutable test files, and nobody could know one was encrypted malware.

I'm not a fan of generated code. It tends to cause misery, being in a no-man's-land between code and not-code. But it is useful sometimes, e.g. Rust generating an API from the OpenGL XML specs.

Sandboxing seems the least-worst option, but it will still be uninspected half-code that one day ends up in production.

ptx
0 replies
2h4m

Maybe now that we have things like GitHub Actions, Bitbucket Pipelines, etc., which can run steps in separate containers, maybe most of those things could be moved from the Maven build step to a different pipeline step?

I'm not sure how well isolated the containers are (probably not very – I think GitHub gives access to the Docker socket) and you'd have to make sure they don't share secret tokens etc., but at least it might make things simpler to audit, and isolation could be improved in the future.

smallmancontrov
1 replies
2h24m

What does modern C project management look like? I'm only familiar with Autotools and CMake.

infamouscow
0 replies
1h38m

Redis. Simple flat directory structure and a Makefile. If you need more, treat it as a sign from God you're doing something wrong.

klysm
1 replies
1h43m

until you have to integrate with the rest of the world sure

kibwen
0 replies
1h35m

Looking at the state of software security in the rest of the world, this may not be much of a disincentive. At some point we need to knuckle down and admit that times have changed, the context of tools built for the tech of the 80s is no longer applicable, and that we can do better. If that means rewriting the world from scratch, then I guess we better get started sooner rather than later.

eadmund
1 replies
1h13m

I think that what we need is good sandboxing. All a sandboxed Turing-complete language can do is perform I/O on some restricted area, burn CPU and attempt to escape the sandbox.

I would like to see this on the language level, not just on the OS level.

semi-extrinsic
0 replies
39m

I have been thinking the exact same thing, and specifically I would like to try implementing something that works with the Rye python manager.

Say I have a directory with a virtualenv and some code that needs some packages from PyPI. I would very much like to sandbox anything that runs in this virtualenv to just disk access inside that directory, and network access only to specifically whitelisted URLs. As a user I should only need to add "sandbox = True" to the pyproject.toml file, and optionally "network_whitelist = [...]".

From my cursory looking around, I believe Cloudflare sandbox utils, which are convenience wrappers around systemd Seccomp, might be the best starting point.

You mention sandboxing on the language level, but I don't think it is the way. Apparently sandboxing within Python itself is a particularly nasty rabbit hole that is ultimately unfruitful because of Python's introspection capabilities. You will find many dire warnings on that path.
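The "dire warnings" are easy to reproduce. A minimal illustration of why in-language sandboxing of Python fails: even with builtins stripped from `eval`, untrusted code can introspect its way from a bare literal back up to `object` and enumerate every loaded class, which is the standard first step toward reaching `os` or `subprocess` and escaping the "sandbox".

```python
# Classic introspection escape: no builtins are available inside the
# eval, yet attribute access alone walks from a tuple literal to
# `object` and lists every class loaded in the interpreter.
payload = "[c.__name__ for c in ().__class__.__base__.__subclasses__()]"
names = eval(payload, {"__builtins__": {}})
print("type" in names)  # object's direct subclasses include `type`
```

This is why OS-level mechanisms like seccomp are the viable path for Python: the language itself offers no boundary to enforce.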

azemetre
1 replies
2h31m

Can you explain more in-depth what you mean? I'm also unaware of how you could have declarative rulesets in a non turing-complete language.

Sounds like it would be impossible but maybe my thinking is just enclosed and not free.

ngruhn
0 replies
50m

Haven’t used it much but Dhall is a non Turing complete configuration language: https://dhall-lang.org/
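To give a flavor of it (a made-up snippet; the field names are arbitrary): Dhall has functions, imports, and types, but no recursion, so evaluating a config is guaranteed to terminate and can never run arbitrary code.

```dhall
-- Hypothetical config: functions and records, but no recursion,
-- so evaluation always terminates.
let makeHost = \(name : Text) -> { host = name, port = 8080 }
in  { servers = [ makeHost "alpha", makeHost "beta" ] }
```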

koito17
0 replies
2h28m

In this case, I think the GP is absolutely right. If you look at the infamous patch with a "hidden" dot, you may think "any C linter should catch that syntax error and immediately draw suspicion." But the thing is, no linter at the moment exists for analyzing strings in a CMakeLists.txt file or M4 macro. Moreover, this isn't something one can reliably do runtime detection for, because there are plenty of legitimate reasons a program could fail to compile, and our tooling has no way to clearly communicate that a syntax error was the reason for a compilation failure.

Dalewyn
5 replies
2h27m

The internet would become a nicer place (again) if we killed off JavaShit and the concept of obfuscating ("minifying") code to make View Source practically unusable.

Sammi
2 replies
2h12m

Javascript is pretty much guaranteed to be permanent. It is the language of the web.

(There's webassembly too, but that doesn't remove js)

homarp
0 replies
2h6m

I don't think anything is "pretty much guaranteed": things evolve

Dalewyn
0 replies
2h8m

Thanks for being part of the problem.

davedx
1 replies
1h58m

Nonsensical argument. JavaScript and TypeScript projects are developed with patches against the original source, not some compressed artefact. Take your trolling elsewhere.

Dalewyn
0 replies
1h11m

The vast majority of JavaShit executed by the end user in their user agent are "compressed artefacts" that cannot be read by human eyes in any practical way. Likewise most HTML and CSS which are also obfuscated to impracticality.

This is coincidentally drawing a parallel with the xz attack: The source code and the distributed tarballs (the "compressed artefacts") are different.

tomxor
4 replies
1h50m

That would be an improvement for sure... but this is not fundamentally a technical problem.

Reading the timeline, the root cause is pure social engineering. The technical details could be swapped out with anything. Sure, there are aspects unique to xz that were exploited, such as the test directory full of binaries, but those just happened to be the easiest targets for implementing their changes in an obfuscated way. That misses the point: once an attacker has gained maintainership you are basically screwed, because they will find a way to insert something, somehow, eventually, no matter how many easy targets for obfuscation are removed.

Is the real problem here in handing over substantial trust to an anonymous contributor? If a person has more to lose than maintainership then would this have happened?

That someone can weave their way into a project under a pseudonym and eventually gain maintainership without ever risking their real reputation seems to set up quite a risk free target for attackers.

usefulcat
2 replies
1h30m

> Is the real problem here handing over substantial trust to an anonymous contributor?

Unless there's some practical way of comprehensively solving The Real Problem Here, it makes a lot of sense to consider all reasonable mitigations, technical, social or otherwise.

> If a person has more to lose than maintainership then would this have happened?

I guess that's one possible mitigation, but what exactly would that look like in practice? Good luck getting open source contributors to accept any kind of liability. Except for the JiaTans of the world, who will be the first to accept it since they will have already planned to be untouchable.

tomxor
0 replies
1h20m

> I guess that's one possible mitigation, but what exactly would that look like in practice? Good luck getting open source contributors to accept any kind of liability. Except for the JiaTans of the world, who will be the first to accept it since they will have already planned to be untouchable.

It's not necessary to accept liability, that's waived in all FOSS licenses anyway. What I'm suggesting would risk reputation by association, and from what I've seen, most of the major FOSS contributors and maintainers freely use their real identity or associate it with their pseudonym anyway, since that attribution comes with real life perks.

I don't know exactly what it would look like in practice, and as with any kind of social engineering, it can only be mitigated. But disallowing anonymous pseudonyms would raise the bar quite a bit and require more effort from attackers to construct or steal plausible looking online identities for each attack, especially when they need to hold up for a long time as with this attack.

patmorgan23
0 replies
19m

Create an organization of paid professionals who are responsible for maintaining these libraries (and providing support to library users).

It's Heartbleed all over again (though those were honest mistakes, not an intentional attack).

flockonus
0 replies
1h39m

Agreed it's mainly a social engineering problem, BUT also can be viewed as a technical problem, if a sufficiently advanced fuzzer could catch issues like this.

It could also be called an industry problem, where we rely on other's code without running proper checks. This seems to be an emerging realization, with services like socket.dev starting to emerge.

rurban
0 replies
2h9m

Great, let the cmake dummies wander off into their own little dreamworld, and keep the professionals dealing with the core stuff.

pera
0 replies
1h25m

While I do agree that M4 is not great, I don't believe any alternative would have prevented this attack: you could try translating that build-to-host file to, say, Python, with all the evals and shell one-liners, and it would still not be immediately obvious to a distro package maintainer doing a casual review after work: to them it would look just like average hacky weekend-project code. Even if you also translated the one-liners it wouldn't be immediately obvious unless you were already suspicious. My point is, you could write similarly obfuscated code in any language.

patmorgan23
0 replies
22m

That and we need to pay open source maintainers and find new ways to support them.

And all code that gets linked into security critical applications/libraries needs to be covered by under some sort of security focused code review.

So no patching the compression code that OpenSSL links to with random junk, distribution maintainers.

orthecreedence
0 replies
7m

This doesn't read as a technical failure to me. This was 99% social engineering. I understand that the build system was used a vector, but eliminating that vector doesn't mean all doors are closed. The attacker took advantage of someone having trouble.

ok123456
0 replies
36m

With all the CI tooling and containerization, it seems to be going in the opposite direction.

lukaslalinsky
35 replies
6h30m

The social side of this is really haunting me over the last days. It's surprisingly easy to pressure people into giving up control. I've been there myself. I can't even imagine how devastating this must be to the original author of xz, especially if he is dealing with other personal issues as well. I hope at least this will serve as a strong example to other open source people, to never allow others to pressure them into something they are not comfortable with.

tommiegannert
19 replies
6h13m

The Jigar Kumar nudges are so incredibly rude. I would have banned the account, but perhaps they contributed something positive as well that isn't mentioned.

I wonder if it would be possible to crowdsource FOSS mailing list moderation.

goku12
7 replies
5h43m

There is a good chance that everyone in that thread except the original maintainer is in on the act. It's likely that all those accounts are managed by a single person or group. Targeting just one account for rudeness isn't going to help, if that's true.

lukaslalinsky
4 replies
5h30m

It does help on the social/psychological side. If you, as an open source project maintainer, have a policy that such rudeness is not acceptable, you are much less likely to become a successful victim of a social attack like this.

sigmar
2 replies
3h46m

That would be true if you could ban the person from creating new emails, but you can't when the thread is rife with sock puppet accounts. You ban the first rude email account, and then there will be 2 new accounts complaining about both the lack of commits and the "heavy-handed mailing-list moderation" stifling differing views.

pixl97
0 replies
3h7m

Yep, as the attacker you bias the entire playing field to your side. If a mailing list has 20 or so users on it, you create 50 accounts over time that are nice, helpful, and set a good tone. Then later you come in with your attack and the pushy assholes. Suddenly those 50 puppets just slightly side with the asshole. Most people break under that kind of social pressure and cave to the jerk's request.

geodel
0 replies
3h23m

Absolutely right. Considering there is a whole cottage industry about asshole replies from Linus Torvalds on linux mailing lists.

For lesser/individual maintainers there is no way to survive this kind of mob attack. Corporate maintainers may be able to manage as it could be considered just as paid job and there are worse ways to make money.

michaelt
0 replies
3h12m

It's entirely possible for an evildoer to make someone feel bad while remaining completely polite.

First send a message to the mailing list as "Alice" providing a detailed bug report for a real bug, and a flawless patch to fix it.

Then you reply to the mailing list as "Bob" agreeing that it's a bug, thanking "Alice" for the patch and the time she spent producing such a detailed bug report, then explaining that unfortunately it won't be merged any time soon, then apologising and saying you know how frustrating it must be for Alice.

Your two characters have been model citizens: Alice has contributed a good quality bug report, and code. Bob has helped the project by confirming bug reports, and has never been rude or critical - merely straightforward, realistic and a bit overly polite.

imglorp
0 replies
5h16m

The mechanism employed here seems like the good cop, bad cop interrogation/negotiation technique. There is the one person who has taken care to show cultural and mission alignment. Then there are several misaligned actors applying pressure which the first person can relieve.

How to identify and defuse: https://www.pon.harvard.edu/daily/batna/the-good-cop-bad-cop...

cduzz
0 replies
5h30m

Reminds me of the "no soap radio" joke. "Joke" being a euphemism for collective gaslighting, though typically a "joke" played by kids on each other.

Play is just preparing for the same game but when stakes are higher?

https://en.wikipedia.org/wiki/No_soap_radio

npteljes
2 replies
3h18m

> I wonder if it would be possible to crowdsource FOSS mailing list moderation.

I think this could be a genuine use of an AI: to go through all of the shit, and have it summarized in a fashion that the user wants: distant and objective, friendly, etc. It could provide an assessment on the general tone, aggregate the differently phrased requests, many things like that.

Crowdsourcing would work best with the reddit / hacker news model, I feel, where discussion happens in tree-styled threads, and users can react to messages in ways that are not text, but something meta, like a vote or a reaction indicating tone.

Both of these have significant downsides, but significant upsides too. People pick the mailing list in a similar way.

johnny22
1 replies
1h42m

A big problem is that people allow this sort of thing as part of the culture. I've followed the Fedora and PHP development mailing lists a few different times over the years and this sort of thing was tolerated across the board. It doesn't matter if you crowdsource the moderation if nobody thinks the behavior is bad in the first place.

Trying to do something about it was called censorship.

npteljes
0 replies
58m

I'm sorry I don't understand your point clearly. Why is it a big problem, and whose problem it is?

soraminazuki
1 replies
4h27m

Not surprising, unfortunately. You'd think malicious actors would be nice to people they're trying to deceive. But after watching a few Kitboga videos, I learned that they more often yell, abuse, and swear at their victims instead.

pixl97
0 replies
3h4m

Being nice gives people time to think.

Being mean is stressful and stops your brain from working properly. If someone doesn't allow you to be abusive, then they are not a mark. Predators look for prey that falls into certain patterns.

delfinom
1 replies
4h51m

Yea people are too accepting of allowing asshats like the Jigar messages.

Simple ban and get the fuck out. Too often I've dealt with people trying to rationalize it as much as "o its just cultural, they don't understand". No, get the fuck out.

But hey, I'm a NYer and telling people to fuck off is a pastime.

nindalf
0 replies
3h43m

Jigar was the same person/group as Jia. They were the bad cop and Jia was the good cop. Banning wouldn't have changed anything. Even if Jigar had been banned, the maintainer would still have appreciated the good cop's helpful contributions in contrast to the unhelpful bad cop. Jia would have become a maintainer anyway.

unethical_ban
0 replies
2h34m

It seems from the reading of this article that jigar is in on the scam. That said, I agree.

jeltz
0 replies
4h15m

I think that is intentional and that the goal would have been achieved even if Jigar (who probably is the same guy as Jia) had been banned.

coldpie
0 replies
4h59m

> I would have banned the account

Yeah, same. We should be much more willing to kick jerks out of our work spaces. The work is hard enough as it is without also being shit on while you do it.

TheCondor
0 replies
5h4m

The xz list traffic was remarkably low. More than a few times over the years, I thought it broke or I was unsubscribed.

Messages like Jigar’s are kind of par for the course.

lr1970
4 replies
4h48m

Look how brilliantly they selected their target project:

(1) xz and the lib are widely used in the wild, including in the Linux kernel, systemd, and OpenSSH; (2) single maintainer, low rate of maintenance; (3) the original maintainer had other problems in his life distracting him from paying closer attention to the project.

I am wondering how many other OSS projects look similar and can be targeted in similar ways?

baq
1 replies
4h39m

I'm thinking 95% of home automation which is full of obscure devices and half baked solutions which get patched up by enthusiasts and promptly forgotten about.

Controlling someone's lights is probably less important than Debian's build fleet but it's a scary proposition for the impacted individual who happens to use one of those long tail home assistant integrations or whatever.

davedx
0 replies
1h53m

A lot of home automation controls EV charging these days too. Imagine an attack that syncs a country's EV fleet to charge in a minute when demand is at its peak. I bet you could cause some damage at the switchgear, if not worse.

lenerdenator
0 replies
3h20m

Many.

We're in a tech slowdown right now. There are people who got used to a certain lifestyle who now have "seeking work" on their LinkedIn profiles, and who have property taxes in arrears that are listed in county newspapers-of-record. If you're an intelligence operative in the Silicon Valley area, these guys should be easy pickings. An envelope full of cash to make some financial problems go away in exchange for a few commits on the FOSS projects they contribute to or maintain.

apantel
0 replies
4h16m

Yes it seems a lot like a case of a predator picking off a weak and sick individual.

nathell
2 replies
4h59m

It makes Rich Hickey's "Open Source Is Not About You" [0] particularly poignant.

As a hobbyist developer/maintainer of open source projects, I strive to remember that this is my gift to the world, and it comes with no strings attached. If people have any expectations about the software, it’s for them to manage; if they depend on it somehow, it’s their responsibility to ensure timely resolution of issues. None of this translates to obligations on my part, unless I explicitly make promises.

I empathize with Lasse having been slowed down by mental issues. I have, too. And we need to take good care of ourselves, and proactively prevent the burden of maintainership from exacerbating those issues.

[0]: https://gist.github.com/g1eny0ung/9e7d4d0f72547a8d156452e76f...

raxxorraxor
0 replies
3h28m

This is why I find the disclaimers in some open source projects, that the software is provided as is without any warranty, somewhat superfluous. Of course it is; this should be the obvious default.

If there is a law that would entitle a user to more, it is a bug in legislation that needs urgent fixing.

pixl97
0 replies
3h11m

> having been slowed down by mental issues

Anyone and everyone in the OSS world should be concerned about this too. You have nation state level actors out there with massive amounts of information on you. How much information have you leaked to data brokers? These groups will know how much debt you're in. The status of your relationships. Your health conditions and medications? It would not take much on their part to make your life worse and increase your stress levels. Just imagine things like fake calls from your bank saying that debt of yours has been put in collections.

oefrha
1 replies
4h23m

I’ve given semi-popular projects that I no longer had the bandwidth to maintain to random people who bothered to email, no pressuring needed. While those projects are probably four to five orders of magnitude less important than xz, still thousands of people would be affected if the random dude who emailed turned out to be malicious. What should I have done? Let the projects languish? Guess I’ll still take the chance in the future.

01HNNWZ0MV43FF
0 replies
37m

I guess all you can do is not give the brand away.

Put a link saying "Hey this guy forked my project, I won't maintain it anymore, he may add malware, review and use at your own risk"

mirekrusin
1 replies
4h51m

It's bizarre enough as it is to start asking questions about whether the "mental issue" had a natural cause.

couchand
0 replies
3h31m

Your experiences may differ, but I'd say pretty much anyone who lived through the past few years has reason enough to pay careful attention to their mental health.

nebulous1
0 replies
1h5m

In "Ghost in the Wires" Kevin Mitnik details one of the ways he obtained information was via a law enforcement receptionist* who he managed to trick into believing he was law enforcement over the phone. He obtained information this way multiple times over multiple years, and fostered a phone based friendship with this woman. He seemed to have no qualms in doing this.

He was also turned in by multiple people he considered close friends. In the book it did not seem that he had considered that it might not be a "them" problem.

*my details may be off here, I read it some time ago

lenerdenator
0 replies
3h22m

I feel for Lasse.

It's time for more of the big vendors who use these projects in their offerings to step up and give people running these small projects more resources and structure. $20k to have maintainers for each project actually meet twice a year at a conference is chump change for the biggest vendors, especially when compared against the cost of the audits they'll now be doing on everything Jia Tan and Co. touched.

jrochkind1
17 replies
6h4m

This seems very difficult to defend against. What is a project with a single burnt-out committer to do?

rsc
5 replies
5h50m

lcamtuf's two posts argue that this may simply not be an open-source maintainer's job to defend against. ("The maintainers of libcolorpicker.so can’t be the only thing that stands between your critical infrastructure and Russian or Chinese intelligence services. Spies are stopped by spies.")

That doesn't mean we shouldn't try to help burnt out committers, but the problem seems very hard. As lcamtuf also says, many things don't need maintenance, and just paying people doesn't address what happens when they just don't want to do it anymore. In an alternate universe with different leadership, an organization like the FSF might use donated funds to pay a maintenance staff and an open-source maintainer might be able to lean on them. Of course, that still doesn't address the problem of Jia Tan getting a job with this organization.

https://lcamtuf.substack.com/p/technologist-vs-spy-the-xz-ba... https://lcamtuf.substack.com/p/oss-backdoors-the-allure-of-t...

cb321
1 replies
3h8m

Why are people assuming it's any intelligence service/state actor? With cryptocurrency valuations, it would seem like remote rooting gajillions of machines would be highly incentivized for a private person/collective. Not to mention other financial incentives. Our digital infrastructure secures enormous value much of which can be pilfered anonymously.

I admit, the op has a "professional/polished vibe" to me as well, but we seem to know very little except for what working hours/time zones were preferred by the possibly-collective, possibly-singular human(s) behind the Jia Tan identity. Does anyone have slick linguistic tools to assess whether the writer is a single author? Maybe an opportunity to show off. It's sort of how they caught Ted Kaczynski.
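Such tools exist under the name stylometry. A toy sketch of the idea (illustrative only; real forensic authorship analysis uses far richer features and much more text): compare relative frequencies of common function words across writing samples.

```python
# Toy stylometry sketch: function-word frequency profiles compared by
# cosine similarity. The word list is a small arbitrary sample; real
# analyses use hundreds of features.
from collections import Counter
import math

FUNCTION_WORDS = ["the", "a", "of", "to", "and", "is", "that", "it", "in", "for"]

def profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two function-word profiles."""
    u, v = profile(a), profile(b)
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u)) or 1.0
    nv = math.sqrt(sum(y * y for y in v)) or 1.0
    return dot / (nu * nv)

print(similarity("the patch is in the queue and it is fine",
                 "the patch is in the tree and it is good"))
```

Whether the Jia Tan commit messages and emails contain enough text for this to be conclusive is another question entirely.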

It also absolutely makes sense to think of all the state actors (I agree including as you say the US/UK) as part of the ongoing threat model. If the KGB/Ministry of State Security/NSA/MI6 were not doing this before then they surely might in the future. Maybe with more gusto/funding now! They all seem to have an "information dominance at all costs" mentality, at least as agency collectives, whatever individuals inside think.

maerF0x0
0 replies
1h31m

People often assume state actors are the pinnacle of sophistication, especially at long games. (Also notably, Chinese culture is very attuned/trained for long games, relative to American impatience.) This was a sophisticated attack, hence the presumption.

uluyol
0 replies
5h38m

There are efforts from industry to try to secure open source, e.g., https://cloud.google.com/security/products/assured-open-sour...

I suspect some variant of this will grow so that some companies, MS/GitHub for example, audit large body of code and vet it for everyone else.

coldpie
5 replies
5h13m

It's not a perfect solution, but I think Big Companies could have a role to play here. Some kind of tenure/patronage system, where for every $1B a Big Company makes in profit, they employ one critical open source maintainer with a decent salary (like $200k or something). The job would only have two requirements: 1) don't make everyone so mad that everyone on the Internet asks for you to be fired, and 2) name a suitable replacement when you're ready to move on. The replacement would become an employee at Big Company, which means Big Company would need to do whatever vetting they normally do (background checks, real address to send paychecks and taxes, etc).

In this scenario, Jia Tan would not be a suitable replacement, since they don't friggin' exist.

Yes, there's problems with this approach. When money gets involved, incentives can get distorted. It limits the pool of acceptable maintainers to those employable by a Big Company. But I think these are solvable problems, especially if there's a strong culture of maintainer independence. It provides a real improvement over the current situation of putting so much pressure on individuals doing good work for no benefit.

jodrellblank
3 replies
3h22m

"It's not a perfect solution"

Is it a solution at all? Say Oracle offer Lasse Collin $200k for maintaining xz but he doesn't want to work for Oracle so he refuses, then what? Amazon offer Lasse $200k but they require fulltime work on other open source packages which he isn't experienced with or interested in, so he refuses, then what? Google employ someone else for $200k but they can't force Lasse Collin to hand over commit rights to xz to Google, or force him to work with a fulltime Google employee pestering with many constant changes trying to justify their job, and they can't force Debian to accept a new Google fork of xz, so then what? And NetFlix, Microsoft, Facebook, Uber, they can't all employ an xz maintainer, xz doesn't need that many people, but if they just employ 'open source maintainers' scattering their attention over all kinds of random projects they have no prior experience with, how would they catch this kind of subtle multi-pronged long-term attack on some low-attention, slow moving, project?

Google already employ a very capable security team who find issues in all kinds of projects and publicise them, they didn't find this one. Is it likely this attack could have made its way into ChromeOS and Android if it wasn't noticed now, or would Google have noticed it?

"1) don't make everyone so mad that everyone on the Internet asks for you to be fired"

So it's a sinecure position, doing nothing is the safest thing?

"and 2) name a suitable replacement when you're ready to move on"

How could Lasse Collin have named a more suitable replacement than someone who seemed technically capable, interested in the project, and motivated to work on some of the boring details like the build system and tests and didn't seem to be doing it for the hype of saying "I improved compression by 10%" for their resume? Are they needing to be skilled in hiring and recruitment now?

coldpie
2 replies
2h39m

I think you've misunderstood my suggestion. I said the job has two responsibilities. No more. You added a bunch of other responsibilities, I'm saying those wouldn't be allowed. It would be in the employment agreement that is purely payment for doing the maintainership tasks they were already doing. It would be a culture expectation that the company not apply pressure on maintainers.

> but if they just employ 'open source maintainers' scattering their attention over all kinds of random projects they have no prior experience with

They would pay the people who are doing the work now. Under this hypothetical, one of the Big Companies would have hired Lasse as the xz maintainer, for example. His job responsibilities are to maintain xz as he had been doing, and identify a successor when he's ready to move on. Nothing else.

> So it's a sinecure position, doing nothing is the safest thing?

No. Not doing the maintenance tasks would make everyone mad, violating one of the two job responsibilities.

> How could Lasse Collin have named a more suitable replacement than someone who seemed technically capable, interested in the project, and motivated to work on [it]

Lasse would suggest Tan as a suitable replacement. Big Company's hiring pipeline would approach Tan and start the hiring process (in person interviews, tax docs, etc etc). At some point they would realize Tan isn't a real person and not hire him. Or, the adversary would have to put up "a body" behind the profile to keep up the act, which is a much higher bar to clear than what actually happened.

jodrellblank
1 replies
1h2m

Leaving aside issues of how it could work, Lasse Collin wasn't the one who saw this attack and stopped it so how would paying him have helped against this attack?

"Lasse would suggest Tan as a suitable replacement. Big Company's hiring pipeline would approach Tan and start the hiring process (in person interviews, tax docs, etc etc). At some point they would realize Tan isn't a real person and not hire him"

What if they find that Tan is a real person but he doesn't want to work for Amazon or legally can't (This is before knowing of his commits being malicious, we're assuming he's a fake profile but he could be a real person being blackmailed)? Collin can't leave? Collin has to pick someone else out of a candidate pool of people he's never heard of? Same question if they find Tan isn't a real person - what then; is there an obligation to review all of Tan's historical commits? Just committing under a pseudonym or pen name isn't a crime, is it? Would the new maintainer be obliged to review all historic commits or audit the codebase or anything? Would Amazon want their money back from Lasse once it became clear that he had let a bad actor commit changes which opened a serious security hole during his tenure as maintainer?

"No. Not doing the maintenance tasks would make everyone mad, violating one of the two job responsibilities."

What he was doing already was apparently ignoring issues for months and making people like Jigar Kumar annoyed. Which is fine for a volunteer thing. If "Jigar Kumar" is a sock puppet, nobody knew that at the time of their posts; Lasse Collin's hypothetical employer wouldn't have known and would surely be on his case about paying him lots of money for maintenance while complaints are flowing on the project's public mailing list, right? Either they're paying him to do what he was doing before (which involved apparently ignoring work and making some people mad) or they're paying him to do more than he was doing before (which is not what you said).

It doesn't seem like it would work - but if it did work it doesn't seem like it would have helped against this attack?

coldpie
0 replies
24m

> Lasse Collin wasn't the one who saw this attack and stopped it so how would paying him have helped against this attack?

Well, there's a few issues I'm trying to target. I'm trying to work backwards from "how do we stop bad actor Tan from getting maintainer access to the project?" Creating an identity-verified relationship (employment) is a good fit for that, I think. And it nicely solves some other related issues with the current volunteer maintainership model. Lasse may not have felt the strong pressure/health issues if he was being paid to do the work. Or, if he was feeling burnt out, he may have felt more comfortable passing the torch earlier if there was a clear framework to do so, backed by an entity that can do some of the heavy lifting of naming/validating a successor.

What if they find that Tan is a real person but he doesn't want to work for Amazon or legally can't

I think this would be a fairly rare occurrence, but it's one I called out as a potential problem in my original post, yeah ("smaller pool of possible maintainers"). If there isn't a clear successor, I think the maintainer could inform the Big Company that they'd like to move on in the next year or two, and Big Company could maybe find an internal engineer who wants to take over the role. Or maybe this more formal sponsored-maintainership arrangement would create incentives for outside contributors to aim for those positions, so there's more often someone waiting to take over (and then be verified by Big Company's hiring process).

is there an obligation [for the maintainer] to review all of Tan's historical commits? Would the new maintainer be obliged to review all historic commits or audit the codebase or anything? Would Amazon [fire] Lasse once it became clear that he had let a bad actor commit changes which opened a serious security hole during his tenure as maintainer?

(I tweaked your questions a tiny bit to rephrase them as I interpreted them. I think the spirit of your questions was kept, I apologize if not.) If these tasks fall under the "don't make everyone mad" job responsibility, then yes. If not, then no, to all of these. There are no obligations other than the two I mentioned: don't piss off the community and help name a successor. It's up to the project's community to decide if the maintainer is not meeting their obligations, not the sponsoring Big Company.

What he was doing already was apparently ignoring issues for months and making people like Jigar Kumar annoyed.

I'm not sure. It seems like Kumar was a bad actor. Was there actually a real maintenance issue? If so, maybe it could have been avoided in the first place by the sponsorship arrangement, like I mentioned at the top of this reply. Or, the community could raise the problem to Big Company, who can do the work of verifying that there is a problem and working with the maintainer to resolve it. Instead, what happened here was one burned-out guy deciding to hand the keys over to some email address.

sloowm
0 replies
3h18m

I think the best solution would be governments forcing companies to secure the entire pipeline and setting up a nonprofit that does this for open source packages. Have security researchers work for the nonprofit and force companies that use software from some guy in Nebraska to pay into it (could be in the form of labor) to get the code checked and certified.

The guy in Nebraska is still not getting anything but will also not have the stress of becoming one of the main characters/victims in a huge attack.

2devnull
1 replies
3h30m

Check the GitHub profile of anybody that commits. Is there a photo of the person? Can you see a commit history and repos that help validate who they seem to be.

In this instance, noticing the people emailing to pressure you have fake looking names that start with adjacent letters and the same domain name.

Be more paranoid.

pixl97
0 replies
2h46m

Is there a photo of the person?

Does that even matter these days?

Especially if we're talking nation state level stuff convincing histories are not hard to create to deflect casual observers.

Be more paranoid.

Most people in OSS just want to write some code to do something, not defend the world against evil.

meowface
0 replies
5h43m

As a burnt out creator of open source projects with thousands of GitHub stars who's received numerous questionable maintainership requests: either very carefully vet people or let it stagnate. I chose the latter and just waited until others forked it, in large part because I didn't want to be responsible for someone hijacking the project to spread malware.

If I had ever received a request from a well-known figure with a longstanding reputation, who's appeared in-person at talks and has verifiable employment, I might've been more receptive. But all the requests I got were from random internet identities that easily could've been fabricated, and in any case had no previous reputation. "Jia Tan" and their other sockpuppets very likely are not their real identities.

clnhlzmn
0 replies
4h13m

This is not Lasse Collin’s responsibility. What is a burnt out committer supposed to do? Absolutely nothing would be fine. Doing exactly what Lasse Collin did and turning over partial control of the project to an apparently helpful contributor with apparent community support is also perfectly reasonable.

caoilte
0 replies
4h8m

get the project taken over by a foundation eg the Apache Foundation.

sebstefan
13 replies
5h24m

Maybe one of the outcomes of this could be a culture change in FOSS towards systematically banning rude consumers in GitHub issues, or, just in general, a heightened community awareness that makes us come down on them way harder when we see it happen.

Aurornis
4 replies
4h48m

The attackers will leverage any culture that helps them accomplish their goals.

If being rude and pushy doesn’t work, the next round will be kind and helpful. Don’t read too much into the cultural techniques used, because the cultural techniques will mirror the culture at the time.

coldpie
1 replies
4h38m

Even if the security outcome is the same, I would still count people being kind and helpful online instead of rude as an improvement.

saghm
0 replies
1h25m

As always, there's an xkcd for that https://xkcd.com/810/

tamimio
0 replies
1h5m

Spot on. The counter should be sound regardless of any social or cultural context, a process where being polite or rude, pushy or not is irrelevant.

advaith08
0 replies
3m

Agree. I think a more core issue here is that only 1 person needed to be convinced in order to push malware into xz

apantel
3 replies
4h14m

The Jia Tan character was never rude. If you make rudeness the thing that throws a red flag, then ‘nice’ fake accounts will bubble up to do the pressuring.

sebstefan
0 replies
3h53m

Pressuring the maintainer is already rude in itself and being polite about it won't help them

If they want things done quickly they can do it themselves

patmorgan23
0 replies
10m

Jia Tan wasn't rude, but the original maintainer Lasse Collin probably wouldn't have been as burned out, or as willing to give responsibility to them, if the community hadn't been so rude and demanding of someone doing free work for them.

I think we need to start paying more of these open source maintainers and have some staff/volunteers who can help them manage their GitHub issue volume.

genter
0 replies
2h45m

The assumption is that the group behind this attack had sock puppets that were rude to Lasse Collin, to wear him down, and then Jia Tan swept in as the savior.

publius_0xf3
0 replies
1h22m

I want to caution against taking a good thing too far.

There's a certain kind of talented person who is all too conscious of their abilities and is arrogant, irascible, and demanding as a result. Linus Torvalds, Steve Jobs, Casey Muratori come to mind. Much as we might want these characters to be kinder, their irascibility is inseparable from their more admirable qualities.

Sometimes good things, even the best things, are made by difficult people, and we would lose a lot by making a community that alienates them.

ok123456
0 replies
2h1m

People have been bullied out of 'nice' communities. See the 'Actix' debacle in Rust.

Vicinity9635
0 replies
1m

Being rude is... unimportant. A lot of people think being passive aggressive is being polite when it's actually being rude + deceitful. There's nothing wrong with being direct, which some mistake for rude. I find it refreshing.

GoblinSlayer
0 replies
2h26m

Just don't do anything crazy. There are legitimately crazy people asking for crazy things, not necessarily backdoors.

userbinator
12 replies
4h50m

I think one of the good things to come out of this may be an increased sense of conservatism around upgrading. Far too many people, including developers, seem to just accept upgrades as always-good instead of carefully considering the risks and benefits. Raising the bar for accepting changes can also reduce the churn that makes so much software unstable.

sebstefan
4 replies
4h47m

All things considered I'm not sure that'd be such a good thing

How many security issues spring from outdated packages vs packages updated too hastily?

Hackbraten
2 replies
3h27m

On top of that:

A newly-introduced security issue tends to have very limited exploitability, because it's valuable, not-yet well understood, and public exploits are yet to be developed.

Compare to that a similar vulnerability in an older package: chances are that everything about it has been learned and is publicly known. Exploits have become a commodity and are now part of every offensive security distro on the planet. If you run that vulnerable version, there's a real risk that a non-targeted campaign will randomly bite you.

maerF0x0
1 replies
1h39m

valuable, not-yet well understood, and public exploits

Except in the scenario that is this exact case: Supply chain attacks that are developed with the exploit in mind.

Hackbraten
0 replies
55m

I agree in principle. But even if the backdoor is deliberate (as is the case here), there’s limited risk for the average person. Nobody in their right mind is going to attack Jane Doe and risk burning their multi-million dollar exploit chain.

For an old vulnerability, however, any unpatched system is a target. So the individual risk for the average unpatched system is still orders of magnitude higher than in the former scenario.

denimnerd42
0 replies
4h9m

And when you do get a security issue while you're using a 10-year-old version, the upgrade is going to be really, really difficult vs incremental upgrades when they are available. Or are you going to fork and assume responsibility for that library code too?

cesarb
2 replies
4h3m

Far too many people, including developers, seem to just accept upgrades as always-good instead of carefully considering the risks and benefits.

Another example of this was log4j: if you were still using the old 1.x log4j versions, you wouldn't have been vulnerable to the log4shell vulnerability, since it was introduced early in the 2.x series. The old 1.x log4j versions had other known vulnerabilities, but only if you were using less common appenders or an uncommon server mode or a built-in GUI log viewer (!); the most common use of log4j (logging into a local file) was not exposed to any of these, and in fact, you could remove the vulnerable classes and still have a functional log4j setup (see for instance https://www.petefreitag.com/blog/log4j-1x-mitigation/ which I just found on a quick web search).
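That class-removal mitigation is easy to sketch. Here is a minimal Python stand-in: the class paths are ones commonly cited for log4j 1.x (check the linked post for the authoritative list), but the jar itself is a scratch archive, not a real log4j build.

```python
import zipfile

# Sketch of the log4j 1.x mitigation: strip the risky classes instead of
# upgrading. zipfile can't delete entries in place, so we rewrite the jar
# without them. Demonstrated on a scratch jar, not a real log4j release.
VULNERABLE = {
    "org/apache/log4j/net/JMSAppender.class",
    "org/apache/log4j/net/SocketServer.class",
}

def strip_classes(src_jar: str, dst_jar: str, bad=VULNERABLE) -> None:
    with zipfile.ZipFile(src_jar) as src, zipfile.ZipFile(dst_jar, "w") as dst:
        for info in src.infolist():
            if info.filename not in bad:
                dst.writestr(info, src.read(info.filename))

# Build a dummy jar standing in for a real log4j-1.x jar:
with zipfile.ZipFile("log4j-demo.jar", "w") as jar:
    jar.writestr("org/apache/log4j/Logger.class", b"")
    jar.writestr("org/apache/log4j/net/JMSAppender.class", b"")

strip_classes("log4j-demo.jar", "log4j-stripped.jar")
print(zipfile.ZipFile("log4j-stripped.jar").namelist())
# prints ['org/apache/log4j/Logger.class']
```

The same trick (`zip -q -d` on the jar) is what many shops ended up doing for log4j 2.x's JndiLookup.class during log4shell, too.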

Did log4shell (and a later vulnerability which could only be exploited if you were using Java 9 or later, because it depended on a new method which was introduced on Java 9) lead people to question whether always being on the "latest and greatest" was a good thing? No, AFAIK the opposite happened: people started to push even harder to keep everything on the latest release, "so that when another vulnerability happens, upgrading to a fixed version (which is assumed to be based on the latest release) will be easy".

JohnMakin
1 replies
1h59m

Another example of this was log4j: if you were still using the old 1.x log4j versions, you wouldn't have been vulnerable to the log4shell vulnerability

Lol, this exact thing happened at my last gig. When I first learned of the vulnerability I panicked, until I found out we were so outdated it didn't affect us. We had a sad laugh about it.

"so that when another vulnerability happens, upgrading to a fixed version (which is assumed to be based on the latest release) will be easy".

I think there is some truth to this motivation though - if you are on an ancient 1.X version and have to jump a major version or two, that almost always causes pain depending on how critical the service or library is. I don't pretend to know the right answer but I always tend to wait several versions before upgrading so any vulnerabilities or fixes can come by the time I get to the upgrade.

cesarb
0 replies
1h51m

A lot of people were in that exact same situation. So many, that the original author of log4j 1.x released a fork to allow these people to keep using the old code while technically being "up to date" and free of known vulnerabilities: https://reload4j.qos.ch/

paulmd
0 replies
8m

well, it's the bazaar vs the cathedral, isn't it? bazaar moves a lot faster. Everyone likes that part, except when it breaks things, and when they have to chase an upstream that's constantly churning, etc. but most people don't consider that a cathedral itself might have some engineering merit too. cathedrals are beautiful and polished and stable.

I highly encourage people to try FreeBSD sometime. Give ports a try (although the modern sense is that poudriere is better even if you want custom-built packages). See how nicely everything works. All the system options you need go into rc.conf (almost uniformly). Everything is documented and you can basically operate the system out of the FreeBSD Handbook (it's not at all comparable to the "how to use a window or a menu" level intro stuff that Linux provides). You can't do that when everything is furiously churning every release.

just like "engineering is knowing how to build a bridge that barely doesn't fall over", engineering here is knowing what not to churn, and fitting your own work and functionality extensions into the existing patterns etc. like it doesn't have to be even "don't make a bigger change than you have to", you just have to present a stable userland and stable kernel interface and stable init/services interface. the fact that linux doesn't present a stable kernel interface is actually fairly sketchy/poor engineering, it doesn't have to be that way, a large subset of kernel interfaces probably should be stable.

nilsherzig
0 replies
9m

Wouldn't that just result in exploits written for old versions? A successful exploit for something that everyone is running might be worse than a backdoor on bleeding-edge systems.

Everyone being on different versions results in something like a moving target

Vicinity9635
0 replies
4m

I'm kinda the opposite. Way too many times I've seen "upgrades" actively remove things I liked and add things I hate. I hold off on letting mobile apps update because they almost always get worse, not better.

TheKarateKid
0 replies
1h3m

About a decade ago, the industry shifted from slow, carefully evaluated infrequent updates (except for security) to frequent, almost daily updates that are mandatory. I'd say this was pioneered by the Chromium team and proved to be beneficial. The rest of the industry followed.

Now we're in a position where most projects update so quickly, that you don't really have a choice. If you need to update one component, there's a good chance it will require that many more as dependencies in your project will require an update to be compatible.

The industry as a whole sacrificed stability and some aspects of security for faster advancements and features. Overall I'd say the net benefit is positive, but its times like these that remind us that perhaps we need to slow things down just a little and do a bit of a course correction to bring things into a better balance.

exacube
12 replies
5h51m

Is the real identity of Jia Tan known, even by Lasse Collin?

I would think a "real identity" should be required by linux distros for all /major/ open source projects/library committers which are included in the distro, so that we can hold folks legally accountable

rsc
6 replies
5h48m

Open source fundamentally does not work that way. There are many important open source contributors who work pseudonymously.

Google's Know, Prevent, Fix blog post floated the idea of stronger identity for open source in https://security.googleblog.com/2021/02/know-prevent-fix-fra... and there was very significant pushback. We learned a lot from that.

The fundamental problem with stronger identity is that spy agencies can create very convincing ones. How are distros going to detect those?

kashyapc
2 replies
5h38m

While "open source" fundamentally doesn't work that way, the point here is about maintainers, not regular contributors. Identity of new maintainers must be vetted (via in-person meetups and whatever other mechanisms) by other "trusted" maintainers whose identities are "verified".

I realize, it's a hard problem. (And, thanks for the link to the "Know, Prevent, Fix" post.)

PS: FWIW, I "win my bread" by working for a company that "does" open source.

Edit: Some projects I know use in-person GPG key signing, or maintainer summits (Linux kernel), etc. None of them are perfect, but raises the bar for motivated anonymous contributors with malicious intent, wanting to become maintainers.

oefrha
1 replies
3h59m

I’ve worked with a few very talented pseudonymous developers on the Internet over the years. I can’t think of any way to vet their identities while maintaining their anonymity (well, it’s basically impossible by definition), plus if you’re talking about in-person meetups, traveling from, say, Asia to North America isn’t cheap and there could be visa issues. The distinction between maintainers and non-maintainers isn’t that meaningful because non-maintainers with frequent and high quality contributions will gain a degree of trust anyway. The attack we’re discussing isn’t about someone ramming obviously malicious code through as a maintainer, they passed or could have passed code review.

cesarb
0 replies
3h38m

traveling from, say, Asia to North America isn’t cheap and there could be visa issues.

And there are other reasons some people might not want to travel outside their nearby region. For instance, they might be taking care of an elderly relative. Or they might be the elderly relative, with travel counter-indicated for health reasons.

nrvn
0 replies
5h34m

I was initially thinking that one of the core non-tech causes of this was the single-person maintenance mode of the xz project.

But you have a point. As an agency you can seed two Jia Tans to serve diligently for a couple of years, following the strict 2-person code reviews, and then still poison the project. On the other hand, if the xz build process had been automated and transparent, and release artifacts had been reproducible and verifiable, then even in this poor condition of xz-utils as a project it would have been much harder to squeeze in a rogue m4/build-to-host.m4.

in3d
0 replies
5h10m

The blog post clarified it's about maintainers of critical packages, not all contributors. This could be limited to packages with just one or two maintainers, especially newer ones. And they could remain somewhat anonymous, providing their information to trusted third parties only. If some maintainers don’t accept even this, their commits could be put into some special queue that requires additional people to sign off on them before they get accepted downstream. It's not a complete fix, but it should help.

delfinom
0 replies
4h47m

My problem with stronger identity is it violates open source licenses.

Source code is provided without warranty and this statement is clear in the license.

Putting a verified identity behind the source code publishing starts to twist that no-warranty clause. Fuck that.

tester457
0 replies
4h45m

If this was done by a state actor then this policy wouldn't help at all. States have no shortage of identities to fake.

tamimio
0 replies
32m

Nope, identities won’t solve it, you can have people coerced, blackmailed, threatened, or simply just a “front” while there’s a whole team of spies in the background. The process should be about what’s being pushed and changed in the code, but I would be lying to say I have a concrete concept how it is possible.

mapmeld
0 replies
5h3m

What would prevent a known person from accepting a govt payout to sabotage their project, or to merge a plausible-looking patch? Relying on identity just promotes a type of culture of reputation over code review.

gquere
0 replies
5h48m

This will never be accepted by the community.

asvitkine
0 replies
5h49m

How would that even work? Are distros expected to code their own alternative versions of open source libraries when they can't get the maintainers to send their IDs? And what stops forged IDs from being used?

maerF0x0
8 replies
1h35m

Two points I'd be interested in discussing.

1. It seems a lot of people are assuming Jia Tan was compromised all along. I haven't seen a reason to believe this was a long play, but rather that they were compromised along the way (and selected for their access). Why play the long game of let's find a lone package, w/ a tired maintainer, and then try to get in good with them vs. let's survey all packages w/ a tired maintainer, and compromise a new entrant (i.e. bribe/beat them)?

2. IMO this also is a failing on behalf of many government agencies. NSA, NIST, FBI all should, IMO spend less time compromising their own citizenry, and more time focusing on shoring up the security risks Americans face.

nebulous1
3 replies
1h18m

Completely conclusive proof? No, but it seems unlikely that there ever would be conclusive proof of such.

They don't seem to exist outside of this incident and things related to this.

There were multiple people who also don't seem to exist outside of their posts to the xz mailing list applying pressure for the original maintainer to bring Jia on board. This occurred around the time that Jia was first making contact with the project, not only recently.

Apparently the IP addresses that were logged for Jia belong to a VPN based in Singapore

They have vanished.

Honestly, there's very little evidence that they weren't always intending this.

doakes
2 replies
1h13m

What's your source for the IP addresses?

nebulous1
0 replies
47m

I can't remember where I read that but it would likely have been from a HN link or possibly a comment.

I just found this: https://news.ycombinator.com/item?id=39868773 which is definitely not where I originally read it, but libera is probably ultimately the source

lmkg
1 replies
1h27m

Regarding point 1: The timeline in the linked article describes some communications as "pressure emails." I've heard the theory, but haven't seen solid evidence, that the pressure emails weren't just regular impatience from outside devs, but actually part of the attack. To convince the primary maintainer into granting access to another party.

maerF0x0
0 replies
1h23m

Occam's razor and a variation on Hanlon's razor suggest to me that Jia actually wanted to solve the issues and work on code. They contributed some things to other repos too, correct? (I saw another thread about Microsoft documentation, for eg.)

Here's the thing, if the maintainer was so tired and needed the help, the pressure is more risk than reward. The maintainer would be relieved to have help... But pressure risks estranging...

To be clear I'm aware this is just my own musing though.

saulpw
0 replies
22m

I took a look at Jia Tan's early behavior, and I found it to be consistent with being "compromised" from the beginning. They had months of contributions on private repos before forking a test library and making superficial changes to it, and then diving headlong into archival libraries. It all looks set up and I see no evidence of an actual person at any point.

I also think it is more difficult to get away with bribing/beating an existing contributor than you suggest, especially since failure means likely exposure.

PUSH_AX
0 replies
1h13m

It seems a lot of people are assuming Jia Tan was compromised all along. I haven't seen a reason to believe this was a long play, but rather that they were compromised along the way

So I assume Jia has spoken out since? How do you go this long without realising someone else is making plays as you?

dhx
8 replies
12h33m

I think this analysis is more interesting if you consider these two events in particular:

2024-02-29: On GitHub, @teknoraver sends pull request to stop linking liblzma into libsystemd.[1]

(not in the article) 2024-03-20: The attacker is now a co-contributor on a patchset proposed to the Linux kernel, with the patchset adding the attacker as a maintainer, mirroring how the attacker gained trust over the development of xz-utils.[2]

A theory is that the attacker saw the sshd/libsystemd/xz-utils vector as closing soon with libsystemd removing its hard dependency on xz-utils. When building a Linux kernel image, the resulting image is compressed by default with gzip [3], but can also be optionally compressed using xz-utils (amongst other compression utilities). There's a lot of distributions of Linux which have chosen xz-utils as the method used to compress kernel images, particularly embedded Linux distributions.[4] xz-utils is even the recommended mode of compression if a small kernel build image is desired.[5]

If the attacker can execute code during the process of building a new kernel image, they can cause even more catastrophic impacts than targeting sshd. Targeting sshd was always going to be limited due to targets not exposing sshd over accessible networks, or implementing passive optical taps and real time behavioural analysis, or receiving real time alerts from servers indicative of unusual activity or data transfers. Targeting the Linux kernel would have far worse consequences possible, particularly if the attacker intended to target embedded systems (such as military transport vehicles [6]) where the chance of detection is reduced due to lack of eyeballs looking over it.

[1] https://github.com/systemd/systemd/pull/31550

[2] https://lkml.org/lkml/2024/3/20/1004

[3] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

[4] https://github.com/search?q=CONFIG_KERNEL_XZ%3Dy&type=code

[5] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

[6] https://linuxdevices.org/large-military-truck-runs-embedded-...

daghamm
4 replies
7h54m

I don't understand how this could have worked.

If you compile and build your own image, would that be able to trigger the backdoor?

You can of course change an existing image to something that triggers the backdoor, but with that level of access you won't really need a backdoor, do you?

dhx
2 replies
6h47m

An attack would look something like:

1. A new "test" is added to the xz-utils repository, and when xz is being built by a distribution such as Debian, the backdoor from the "test" is included into the xz binary.

2. The backdoored xz is distributed widely, including to a target vendor who wishes to compile, from a Debian development environment, a kernel for an embedded device that is usually used in a specific industry and/or set of countries.

3. When backdoored xz is then asked to compress a file, it checks whether the file is a kernel image and checks whether the kernel image is for the intended target (e.g. includes specific set of drivers).

4. If the backdoored xz has found its target kernel image, search for and modify random number generation code to effectively make it deterministic. Or add a new module which listens on PF_CAN interfaces for a particular trigger and then sends malicious CAN messages over that interface. Or modify dm_crypt to overwrite the first 80% of any key with a hardcoded value. Plenty of nasty ideas are possible.
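Step 3 is the technically interesting one. A minimal sketch of that kind of fingerprinting is below. This is illustrative only, not the real xz backdoor (which keyed on sshd, not kernel images): the "HdrS" magic at offset 0x202 is real, from the x86 Linux boot protocol, but the driver-marker strings and the "payload" are invented stand-ins.

```python
import lzma

# Illustrative only: how a trojaned compressor could fingerprint its
# target before deciding whether to tamper with the data it compresses.

def looks_like_target_kernel(image: bytes) -> bool:
    # x86 bzImage kernels carry the "HdrS" magic at offset 0x202 (real fact).
    is_bzimage = len(image) > 0x206 and image[0x202:0x206] == b"HdrS"
    # Only fire on builds shipping a specific driver (hypothetical marker).
    return is_bzimage and b"hypothetical_target_driver" in image

def evil_compress(data: bytes) -> bytes:
    if looks_like_target_kernel(data):
        # Stand-in "payload": patch the driver before compressing it.
        data = data.replace(b"hypothetical_target_driver",
                            b"hypothetical_patched_driver")
    return lzma.compress(data)

kernel_like = (b"\x00" * 0x202 + b"HdrS" + b"\x00" * 16
               + b"hypothetical_target_driver")
print(b"patched" in lzma.decompress(evil_compress(kernel_like)))       # prints True
print(lzma.decompress(evil_compress(b"plain file")) == b"plain file")  # prints True
```

Everything except the target image passes through untouched, which is exactly why this class of tampering is so hard to spot in testing.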

Denvercoder9
1 replies
5h37m

Note that none of these steps require the attacker to have any code in the kernel. The kernel patchset is completely orthogonal to the possibility of this attack, and seems to be benign.

sandstrom
0 replies
8m

Yeah, but gaining trust with benign patchset would be the first step.

bandrami
0 replies
7h35m

It's Thompson's "Trusting Trust"[1], right? To the extent XZ is part of the standard build chain you could have a source-invisible replicating vulnerability that infects everything. And if it gets into the image used for, say, a popular mobile device or IoT gadget...

1: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...

tamimio
0 replies
21m

target embedded systems (such as military transport vehicles

You are giving a lot of credit to that. I have seen military ones with ancient software; even the "new" updated ones are still on Ubuntu 18.04 because of some driver/SDK compatibility. But it isn't a major issue since most of the time they are not connected to the public internet.

rsc
0 replies
5h54m

Thanks for this comment. I've added that LKML patch set to the timeline.

delfinom
0 replies
4h32m

I don't think the attacker saw the systemd change at all personally.

The way the exploit was set up, they could have pivoted to targeting basically any application server because there are so many interdependencies. Python, PHP and Ruby could be targeted because liblzma is loaded via libxml2, as an example.

Gaining trust for linux kernel commits would have just let them continue maximizing profit on their time investment.

particularly if the attacker intended to target embedded systems (such as military transport vehicles [6])

Said vehicles aren't networked on the public internet and from experience in this particular sector, probably haven't been nor will be updated for decades. "Don't break what isn't broken" applies as well as "military doesn't have a budget to pay defense contractors $1 million to run apt-get update per vehicle".

chubot
8 replies
4h25m

Something to add to the timeline: when did this avenue of attack become available?

It only happened in the last 10 years apparently.

Why do sshd and xz-utils share an address space?

When was the sshd -> systemd dependency introduced?

When was the systemd -> xz-utils dependency introduced?

---

To me this ARCHITECTURE issue is actually bigger than the social engineering, the details of the shell script, and the details of the payload.

I believe that for most of the life of xz-utils, it was a "harmless" command line tool.

In the last 10 years, a dependency was silently introduced on some distros, like Debian and Fedora.

Now maintainer Lasse Collin becomes a target of Jia Tan.

If the dependency didn't exist, then I don't think anyone would be bothering Collin.

---

I asked this same question here, and got some good answers:

https://lobste.rs/s/uihyvs/backdoor_upstream_xz_liblzma_lead...

Probably around 2015?

So it took ~9 years for attackers to find this avenue, develop an exploit, and create accounts for social engineering?

If so, we should be proactively removing and locking down other dependencies, because it will likely be an effective and simple mitigation.

duped
3 replies
4h11m

I don't think these questions are that interesting. SSHD shared an address space with xz-utils because xz utils provides a shared library, and that's how dynamic linking works. sshd uses libsystemd on platforms with systemd because systemd is the tool that manages daemons and services like sshd, and libsystemd is the bespoke way for daemons to talk to it (and more importantly, it is already there in the distro - so you're not "adding a million line dependency" so much as linking against a system library from the OS developers that you need).

Linking against libsystemd on Debian is about as suspicious as linking against libsystem on MacOS. It's a userspace library that you can hypothetically avoid, but you shouldn't.

As for why systemd links against xz, I don't know, and it's a bit surprising that an init system needs compression utils but not particularly surprising given the kitchen sink architecture of systemd.

chubot
1 replies
3h54m

and that's how dynamic linking works -- really ignorant comment

Read the lobste.rs thread for some quotes on process separation and the Unix philosophy.

There are mechanisms other than dynamic linking -- this is precisely the question.

Also, Unix supported remote logins BEFORE dynamic linking existed.

---

What about sshd didn't work prior to 2015?

Was the dependency worth it?

not particularly surprising given the kitchen sink architecture of systemd

That's exactly the point -- does systemd need a kitchen sink architecture?

---

The questions are interesting because they likely lead to simple and effective mitigations.

They are interesting because critical dependencies on poorly maintained projects may cause nation states to attack single maintainers with social engineering.

Solutions like "let's create a Big tech funded consortium for security" already exist (Linux foundation threw some money at bash in 2014 after ShellShock).

That can be part of the solution, but I doubt it's the most effective one.

duped
0 replies
3h40m

I don't think it's acceptable to create a subprocess for what's effectively a library function call because it comes from a dependency.

The problem is the design of rtld and the dynamic linking model, where one shared library can detect and hijack the function calls of another by using the auditing features of rtld. Hardened environments already forbid LD_PRELOAD for injection attacks like this, but miss audit hooks.

My point is that just saying we should use the Unix process model as the defense for supply chain attacks is like using a hammer to fix a problem that needs a scalpel.

What about sshd didn't work prior to 2015?

systemd notifications, from what it sounds like.
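Worth noting how small that feature is: per the sd_notify(3) documentation, the notify protocol is just one datagram sent to the AF_UNIX socket named in $NOTIFY_SOCKET, which is why it can be implemented without linking libsystemd at all (reportedly the approach OpenSSH adopted upstream after the backdoor). A minimal sketch, with illustrative names, assuming only the documented protocol:

```python
import os
import socket

def sd_notify(state: str) -> bool:
    """Send a systemd readiness notification without linking libsystemd.

    Per sd_notify(3), the protocol is one datagram to the AF_UNIX socket
    named in $NOTIFY_SOCKET; a leading '@' denotes an abstract socket and
    is replaced by a NUL byte."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False  # not supervised by systemd: silently a no-op
    if addr.startswith("@"):
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(state.encode(), addr)
    return True
```

A daemon would call sd_notify("READY=1") once its listening sockets are bound; the point is that this is ~15 lines, versus transitively pulling liblzma into sshd's address space via libsystemd.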

cesarb
0 replies
3h58m

As for why systemd links against xz, I don't know, and it's a bit surprising that an init system needs compression utils

It's for the journal. It can be optionally compressed with zlib, lzma, or zstd. That library had not only the sd_notify function which sshd needed, but also several functions to manipulate the journal.

sloowm
1 replies
3h45m

Another point relevant on the timeline is when downstream starts using binaries instead of source.

I think people are flying past that important piece of the hack. Without that this would not have been possible. If there is a trusted source in the middle building the binaries instead of the single maintainer and the hacker this attack becomes extremely hard to slip by people.

Denvercoder9
0 replies
2h35m

Another point relevant on the timeline is when downstream starts using binaries instead of source.

No downstream was using binaries instead of source. Debian and Fedora rebuild everything from source, they don't use the binaries supplied by the maintainer. The backdoor was inserted into the build system.

JeremyNT
1 replies
4h11m

It's certainly a reasonable question to ask in this specific case. In hindsight Debian and Red Hat both bet badly when patching OpenSSH in a way that introduced the possibility of this specific supply chain attack.

If so, we should be proactively removing and locking down other dependencies, because it will likely be an effective and simple mitigation.

I think this has always been important, and it remains so, but incidents like this really drive the point home. For any piece of software that has such a huge attack surface as ssh does, the stakes are even higher, and so the idea of introducing extra dependencies here should be approached with extreme caution indeed.

chubot
0 replies
3h55m

Yup, "trusted computing base" is classic concept that we've forgotten

Or we pay lip service to, but don't use in practice

mseepgood
4 replies
6h56m

So all binaries built with a Microsoft compiler must be considered compromised?

progressof
3 replies
6h54m

no

jhoechtl
2 replies
6h46m

Care to enlighten us as to how you came to such a knee-jerk reaction given a highly critical observation? What obvious thing are we missing?

yuriks
0 replies
6h43m

It's just a documentation change. Likely made to add reputation to the account.

progressof
0 replies
6h38m

I didn't come to any conclusion... and I don't think you missed anything... I'm just posting links... you think it's better if I didn't post anything because this is stupid? if so then ok...

BLKNSLVR
0 replies
6h0m

Regarding the AbuseIPDB link: some of the SSH payloads mentioned in the instances of 'attack' contain the username jiat75.

Doesn't necessarily validate anything though. Could be progressof planting misdirection given that the IP address only started being detected basically today (and the VPS was likely only just setup today as well, if the hostname is to be trusted).

... and that progressof's account is about an hour old.

kevindamm
7 replies
6h25m

Small nit to pick, but in the introductory paragraph it reads "unauthenticated, targeted remote code execution." I recall that there was a special private/public key pair that made this exploit only reproducible by the author (or only possible after re-keying the binary).

I believe this means it was unauthorized, not unauthenticated.

mcpherrinm
4 replies
5h56m

I think it is unauthenticated from the point of view of SSH’s own authentication. The backdoor has its own credential, but the RCE is accessible if you don’t have an account on the system.

kevindamm
3 replies
5h49m

That's the basis for my preferring the term relating to authorization. The two terms have distinct and well-defined meanings in the domain. They're both critical aspects of security but for different reasons.

dist-epoch
2 replies
4h24m

Remote code execution already implies unauthorized. There is no such thing as authorized remote code execution.

kevindamm
0 replies
3h49m

What makes you say that? SSH, RDP, even hitting a web service are all valid cases of authorized remote code execution. It's not the remote or execution parts that are bad.

gquere
0 replies
3h12m

There totally is authenticated RCE, for instance a PHP page that contains a RCE but needs a prior authentication to access the resource.

All RCEs are classified in either unauthenticated or authenticated, the former being the worst (or best if you're a researcher/hacker).

kevindamm
0 replies
5h47m

Fairly standard term for a kind of access similar to this, but the distinction is important.

Consider the impact if anybody else, not just this attacker, could have exploited it -- on the other hand, it may have been discovered sooner (had it also not been discovered by accident first).

wufocaculura
3 replies
6h52m

There's a single dot on a line between #include <sys/prcntl.h> and void my_sandbox(void). It is easy to miss, but it makes the compile fail, so HAVE_LINUX_LANDLOCK is never enabled.
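The reason one stray character can silently disable a security feature: configure-time probes compile a tiny test program and define the HAVE_* flag only if the compile succeeds, so any failure, for any reason, just reads as "feature missing" with no error surfaced. A Python analogy of that fail-silent shape (not the actual CMake/autoconf probe; names are illustrative):

```python
def feature_probe(snippet: str) -> bool:
    """Analogy for an autoconf/CMake compile check: try to build a tiny
    test program, and on ANY failure silently report 'feature missing'."""
    try:
        compile(snippet, "<probe>", "exec")
        return True
    except SyntaxError:
        return False

PROBE = "import os\ndef my_sandbox():\n    pass\n"
HAVE_LINUX_LANDLOCK = feature_probe(PROBE)
# a single stray '.' anywhere makes the probe fail, silently flipping the flag off
sabotaged_flag = feature_probe(PROBE.replace("import os\n", "import os\n.\n"))
```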

arrowsmith
1 replies
5h51m

Can someone explain to n00bs like me: what's "landlock" anyway and why is it significant here?

Thorrez
0 replies
5h34m

prctl, not prcntl

swsieber
0 replies
6h36m

How do you get "Jigar Kumar"'s email address?

Hit reply

cced
0 replies
6h53m

I think it's the period on the line above void my_sandbox.

mseepgood
6 replies
7h34m

Never allow yourself to be bullied or pressured into action. As a maintainer, the more a contributor or user nags, the less likely I am to oblige.

pixl97
2 replies
2h53m

The issue here is the attackers will quickly move away from an individual attacking you to the group attacking you. The person writing the infected code will never be a jerk to you at all. You'll just suddenly see a huge portion of your mailing list move against you ever so slightly.

We've complained about bots in social media for a long time, but how many people in open source discussions are shady manipulative entities?

galleywest200
1 replies
2h24m

These days you can even have these emails automatically taken in by an LLM and have the LLM argue with the maintainer for you, no humans needed!

snerbles
0 replies
1h42m

Maintainers will need LLM sockpuppets of their own to automatically answer these automatic emails.

resource_waste
0 replies
5h50m

That sounds nice.

I did an engineering/program manager role for 8 years and people pretty much always did what I asked if I showed up at their desk or bothered their boss.

"Squeaky wheel gets the grease?"

But I too like to think that I prioritize my children on merit rather than fuss level. For some reason they continue to cry despite me claiming I don't react to it.

kjellsbells
0 replies
2m

True, but a determined adversary like JiaTan/Jigar has an ace up their sleeve: they are good enough, and patient enough, to fork the base project, spend a year or two making it better than the original (releasing the head of steam built up from legitimate requests that the old, overworked maintainer never got to, building goodwill in the process) and then convince the distros to pick up their fork instead of the older original. At which point it really is game over.

kenjackson
0 replies
5h29m

But in this case he was getting hit by both someone willing to help and then multiple people complaining that things were taking too long. And when you yourself feel like things are taking too long then you’re probably more susceptible to all this.

jvanderbot
6 replies
5h38m

It's also good to keep in mind that this is an unpaid hobby project.

That's the root cause. At some point a corp/gov consortium needs to adopt key projects and hire out the maintainers and give them real power and flexibility, perhaps similar to the way a nation might nationalize key infrastructure.

But this is against the core ethos at some level. Freedom and safety can be antagonistic.

tamimio
2 replies
42m

You are implying that governments are more technically competent and more trustworthy than open source communities..

mttpgn
0 replies
9m

Many open source projects already receive US government funding, mostly through an onerous grant-application process. Nationalizing American open source projects could make them operate more like European open source, where EU funding is open and clear. The detrimental trade-off, however, is that the American agencies most capable of supporting and securing infrastructure have burned away all trust from the rest of the world. Direct contributions from those USG agencies would reduce global trust in those projects even further.

jvanderbot
0 replies
15m

I said nothing of the sort.

I'm implying they are richer.

maerF0x0
1 replies
1h30m

gov consortium

Personally I wouldn't trust a govt not to backdoor everything.

jvanderbot
0 replies
14m

A consortium is a great way to get money and power into those maintainers. Never said they should take the power from them or provide code. I think people are hearing their own mind here, not mine.

none_to_remain
0 replies
1h26m

Avoid compromise with one simple trick: surrender to the attackers

gregwebs
6 replies
2h32m

I think this can be made much more difficult by enforcing a policy of open builds for open source. It shouldn't be possible to inject build files from a local machine. All build assets should come from the source repository. Artifacts should come from Github Actions or some other tool that has a clear specification of where all inputs came from. Perhaps Github could play a role in helping to automate any inconveniences here.

dboreham
2 replies
2h28m

Yes the whole "let's take a mystery meat tarball from a repo that isn't the project repo" seems suspect.

Github+ even has a scheme for signing artifacts such that you have some level of trust they came from inside their Actions system, derived from some git commit. This would allow the benefits of a modular build for a large product like a distro, while preserving a chain of trust in its component parts.

+Not advocating a dependency on Github per se -- the same sort of artifact attestation scheme could be implemented elsewhere.

shp0ngle
1 replies
1h43m

as I wrote in a different thread, some projects don't have any source control.

Of the big ones, 7z and ncurses are both tarball-only.

patmorgan23
0 replies
7m

They need to join us in the 80s and start using source control.

maclockard
1 replies
2h28m

I think that trust needs to be 'pushed deeper' than that so to speak. While this would be an improvement, what happens if there is a malicious actor at Github? This may be unlikely, but would be even harder to detect since so much of the pipeline would be proprietary.

Ideally, we would have a mechanism to verify that a given build _matches_ the source for a release. Then it wouldn't matter where it was built, we would be able to independently verify nothing funky happened.

gregwebs
0 replies
2h14m

Vendor-independent build provenance is certainly the long-term goal. In the immediate term, moving away from mystery tarballs towards version control gets us a step closer.

One of the best things about Golang is that packages are shared direct via source repositories (Github) rather than a package repository containing mystery tarballs. I understand the appeal of package repositories, but without proper security constraints it's a security disaster waiting to happen.

qerti
0 replies
56m

Wasn’t the payload in a blob in the tests, which is in the source repo? If you were to clone the repo then build from source, you’d have the backdoor, right? Surely distros aren’t using binaries sent by maintainers

jhoechtl
5 replies
6h49m

merges hidden backdoor binary code well hidden inside some binary test input files. [...] Many of the files have been created by hand with a hex editor, thus there is no better "source code" than the files themselves.

So much for the folks advocating for binary (driver) blobs in OSS t support otherwise unsupported hardware.

It's either in source form and reproducable or it's not there.

pixl97
2 replies
2h50m

It's either in source form and reproducable or it's not there.

Wanna know how I know you haven't read into the discussion much?

There are a whole lot of binary test cases in software. Especially when you're dealing with things like file formats and test cases that should specifically fail on bad data of particular types.

TacticalCoder
1 replies
2h30m

There are a whole lot of binary test cases in software.

That's not how I read GP's point. If even binary blobs in test cases are a place where backdoors are, now as a matter of fact, hidden then, certainly, among the folks advocating for binary drivers in FOSS, there are some who are already --or planning to-- add backdoors there.

Binary blobs are all terrible, terrible, terrible ideas.

Builds should be 100% reproducible from source, bit for bit. At this point it's not open up for discussion anymore.

pixl97
0 replies
1h21m

Then you figure out how to build a 'source' test case of a bad zip, or bad jpg, or word document or whatever else exists out there. Also figure out how to test that your bit4bit perfect binary isn't doing the wrong damned thing in your environment with actual real data.

tredre3
0 replies
2h43m

Are you running linux-libre?

j1elo
5 replies
5h30m

A "good" side effect of this for OSS maintainers going on, is that now any time an entitled user starts being too pushy or too... well, entitled, they can be given a canned response:

Are you trying to pull an xz attack on me?

ceejayoz
4 replies
5h14m

Ah, the old "undercover cops can't lie about not being a cop, just ask them" technique.

Fnoord
3 replies
3h37m

If you casually ask this while you can study (and preferably record!) the person's posture, and they react in real-time, then you can apply the interrogation techniques which the CIA et al use.

ceejayoz
2 replies
3h2m

So becoming an open source maintainer will involve an in-person trip to an interrogation?

The xz attack involves, in significant part, a maintainer burned out and happy to accept offered help. I don't think making it substantially harder to receive genuine help is likely to improve the situation.

Fnoord
1 replies
1h58m

I just wanted to bring up that technique (it is not unique to CIA; LE also uses it). I never asserted the technique would've been useful in this very situation. However, in your framing you ignored the option of video conferencing.

Also, you are forgetting that bringing up 'are you trying to pull an xz on me?' is a yellow flag towards the person who said it. It isn't definitive; it just puts people on alert and gathers the attention of watchful eyes. We should be careful not to overdo it though.

j1elo
0 replies
1h12m

I made the original comment and must say that it was only meant as a joke. But, maybe some cases would merit using it.

This is of course based on how the xz attack needed a couple of apparently innocent community members to start being too pushy, to the point of almost bullying the author and pointing fingers at his supposed passivity towards maintaining the project.

To some levels of such behavior, after all that has happened, one could reasonably (even if just jokingly) wonder if it's not a similar attempt. IMO, being framed as a possible attacker would probably either calm the shit out of some too entitled users, or provoke them into even more whining.

Now seriously, there is an attitude I wish were more popular among FOSS maintainers, in order to have a mentally healthy relationship with their role: the ability to work on their own terms, never forgetting even for a split second that this is all a hobby activity, and that FOSS licenses are explicitly written to allow A LOT of freedoms (such as hiring 1st or 3rd party support services, forking, or whatever else) but not to demand anything from the author.

edg5000
4 replies
5h23m

I wonder, once the attacker gained commit permissions, were they able to rewrite and force push existing commits? In that case rolling back to older commits may not be a solution.

If my speculation is correct, then the exact date on which access was granted must first be established; after that, a trusted backup of the repo from before that date is needed. Ideally Lasse Collin would have a daily backup of the repo.

Although perhaps the entire repo may have to be completely audited at this point.

ajross
1 replies
5h18m

Force pushes tend to be noticed easily. All it takes is for one external developer to try to pull to see the failure. And it's actually hard to do because you need to comb through the tree to update all the tags that point to the old commits. On top of that it obviously breaks any external references to the commit IDs (e.g. in distro or build configurations), all the way up to cryptographic signatures that might have been made on releases.

I think it's a pretty reasonable assumption that this didn't happen, though it would be nice to see a testimony to that effect from someone trustworthy (e.g. "I restored a xz checkout from a backup taken before 5.6.0 and all the commit IDs match").
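Assuming one has `git show-ref` output from both a pre-compromise backup and a current clone, that comparison is mechanical; a sketch (function names are illustrative):

```python
def parse_show_ref(dump: str) -> dict[str, str]:
    """Parse `git show-ref` output ("<sha> <refname>" per line) into a map."""
    refs = {}
    for line in dump.strip().splitlines():
        sha, ref = line.split(maxsplit=1)
        refs[ref] = sha
    return refs

def rewritten_refs(trusted_dump: str, current_dump: str) -> list[str]:
    """Refs present in the trusted backup whose commit IDs now differ:
    candidates for a force push / history rewrite."""
    old, new = parse_show_ref(trusted_dump), parse_show_ref(current_dump)
    return sorted(r for r in old if r in new and old[r] != new[r])
```

Any hit on a release tag or main branch would be strong evidence of a rewrite; an empty result from a backup taken before 5.6.0 is the "commit IDs match" testimony described above.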

fl7305
0 replies
2h52m

And it's actually hard to do because you need to comb through the tree to update all the tags that point to the old commits.

Isn't this part just a few pages of code, if that?

I agree that it will be blindingly obvious for the reasons you list.

sloowm
0 replies
3h59m

The way the hack works is incredibly sophisticated and has specifically sought out how to get past all the normal checks. If messing with commits were possible, this entire Rube Goldberg hack would not have been set up.

Denvercoder9
0 replies
2h39m

There are trusted copies of historic releases from third-party sources (at least Linux distributions, but there's probably other sources as well), it's pretty easy to check whether the tags in the git repository match those. (This can be done as the tarballs are a superset of the files in the git repository, the other way around doesn't work).
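A sketch of that superset check, assuming the release tarball has been extracted next to a git checkout (function names are illustrative). In the actual attack, the malicious build-to-host.m4 existed only in the tarball, which is exactly the kind of "extra" file this flags for review:

```python
import hashlib
from pathlib import Path

def file_hashes(root: Path) -> dict[str, str]:
    """Relative path -> sha256 for every regular file under root."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file() and ".git" not in p.parts
    }

def compare_release(repo: Path, tarball_dir: Path):
    """Check an extracted release tarball against the git checkout:
    every tracked file must appear unmodified, and anything extra
    (generated configure scripts, injected m4 files, ...) is flagged."""
    repo_h, tar_h = file_hashes(repo), file_hashes(tarball_dir)
    modified = [f for f in repo_h if f in tar_h and tar_h[f] != repo_h[f]]
    missing = [f for f in repo_h if f not in tar_h]
    extra = sorted(f for f in tar_h if f not in repo_h)
    return modified, missing, extra
```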

xyst
3 replies
4h20m

Timeline reads like a "pig butchering" or romance scam. Except the goal is not money, but control.

Attackers find a vulnerable but critical library in the supply chain. Do a cross check on maintainers or owners of the code (reference social media and other sources to get more information). Execute social engineering attack. Talk to maintainer(s) "off list" to gain rapport. Submit a few non-consequential patches that are easy to grep to gain trust. Use history of repository or mailing list against victim to gaslight ("last release was X years ago!1", "you are letting project rOt!", "community is waiting for you!").

BuildTheRobots
1 replies
3h46m

I'm amazed more people aren't talking about the "off list" part or asking Collin if he's willing to provide those emails/conversations.

xyst
0 replies
1h49m

That’s more of a LEO concern. Maybe attackers got sloppy and leaked info in email headers?

DrammBA
0 replies
2h47m

Timeline reads like a "pig butchering"

To me this is the polar opposite of pig butchering. This was a targeted and unromantic attack, unrelated to investing or cryptocurrency, the original maintainer was not "fattened like a hog" in any way, if anything he was bullied and abused into submission.

vb-8448
3 replies
6h21m

I wonder if netflix will make a movie from this story, if you read the timeline it really sounds like a well written thriller.

publius_0xf3
1 replies
3h39m

It does, but has there ever been a movie that successfully portrayed a compelling drama taking place entirely on a computer monitor? It's hard to even imagine. It's why I think the novel is still relevant in our age: all the great stories that unfold on a screen can't be acted out on a sound stage.

bckygldstn
0 replies
2h20m

Searching [0] gets you halfway there: its about a compelling drama which takes place mostly in real life, but is portrayed entirely on a computer monitor.

[0] https://en.wikipedia.org/wiki/Searching_(film)

in3d
0 replies
5h4m

It’s not compelling enough unless we find out who was behind it, which is probably unlikely.

mattxxx
3 replies
4h3m

A service that measured the "credibility" or "activity" of a username/email could be really useful here. At least, it would be a leading indicator that something _might_ be up. In particular, the aside here about the email addresses being suspect:

https://research.swtch.com/xz-timeline#jia_tan_becomes_maint...

would have been useful info for Lasse Collin before taking the pressure campaign seriously.

iso8859-1
2 replies
3h34m

What about just using 'web of trust', for example with GPG? If the user's key is signed by people that met up with the actual person, it would be much harder to make fake identities.

ptx
0 replies
1h48m

There was an article from 2019 [0] that someone on HN linked recently about how "web of trust is dead", but it seems to concern scalability problems with the keyserver, which resulted in DoS attacks, which made them disable the feature by default. The concept should presumably still be good, assuming the issues specific to the GPG keyserver can be avoided.

[0] https://inversegravity.net/2019/web-of-trust-dead/

carols10cents
0 replies
1h34m

What would prevent the sock puppet accounts from signing each others' keys?

goombacloud
3 replies
13h10m

This might not be complete, because the statement "More patches that seem (even in retrospect) to be fine follow." lacks some backing facts. There were more patches before the SSH backdoor, e.g.: "Lasse Collin has already landed four of Jia Tan's patches, marked by "Thanks to Jia Tan"", plus the other stuff before and after the 5.4 release. So far I haven't seen anyone make a list of all the patches and gather opinions on whether the changes could be maliciously leveraged.

VonGallifrey
1 replies
6h1m

I get that there is a reason not to trust those Patches, but I would guess they don't contain anything malicious. This early part of the attack seems to only focus on installing Jia Tan as the maintainer, and they probably didn't want anything there that could tip Lasse Collin off that this "Jia" might be up to something.

rsc
0 replies
5h54m

Yes, exactly. I did look at many of them, and they are innocuous. This is all aimed at setting up Jia as a trusted contributor.

deathanatos
3 replies
2h36m

Evan Boehs observes that Jigar Kumar and Dennis Ens both had nameNNN@mailhost email addresses

This is the second time I've read this "observation", but this observation is just wrong? Jigar's email is "${name}${number}@${host}", yes, but Dennis's is just "${name}@${host}" — there's not a suffixed number. (There's a 3, but it's just a "leetcode" substitution for the E, i.e., it's semantically a letter.)

(They could still both be sockpuppets, of course. This doesn't prove or disprove anything. But … let's check our facts?)

__float
2 replies
2h7m

Where are the email addresses visible? I've also seen this a few times, but never the actual addresses.

glitchcrab
0 replies
47m

I couldn't spot email addresses directly in plaintext for those who weren't submitting patches (e.g. Jigar), however if you look at one of the links to his (?) responses then there's a mailto link with the text 'Reply via email'

XorNot
3 replies
7h14m

What stands out to me is this particular justification:

2024-02-23: Jia Tan merges hidden backdoor binary code well hidden inside some binary test input files. The associated README claims “This directory contains bunch of files to test handling of .xz, .lzma (LZMA_Alone), and .lz (lzip) files in decoder implementations. Many of the files have been created by hand with a hex editor, thus there is no better "source code" than the files themselves.”

This is, perhaps, the real thing we should think about fixing here because the justification is on the surface reasonable and the need is quite reasonable - corrupted test files to test corruption handling.

But there has got to be some way to express this which doesn't depend on, in essence, "trust me bro", since binary files don't appear in diffs (which is to say: I can think of a number of ways of doing it, but there are definitely no conventions in the community that I'm aware of).
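One such convention: commit a small, reviewable generator plus a known-good sample instead of opaque hand-edited binaries, and let the build regenerate the corrupt inputs. A hedged sketch (the seeded bit-flip scheme here is illustrative, not xz's actual practice):

```python
import random

def corrupt(data: bytes, *, seed: int, flips: int = 4) -> bytes:
    """Derive a corrupted test input from a valid sample, deterministically.

    A fixed seed means anyone can re-run this generator and confirm the
    checked-in "bad" files are exactly its output, so the review surface
    is this function plus one known-good sample, not opaque binaries."""
    rng = random.Random(seed)
    out = bytearray(data)
    for i in rng.sample(range(len(out)), flips):  # distinct byte positions
        out[i] ^= 1 << rng.randrange(8)           # flip one bit in each
    return bytes(out)
```

For example, a hypothetical bad-crc.xz test file would be checked in alongside the line that produced it, e.g. corrupt(good_sample, seed=1), and CI would verify the two still match.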

OJFord
1 replies
6h49m

Also that when dynamically linking A against B, A apparently gets free rein to overwrite B.

It sort of makes sense, since at the end of the day it could just be statically linked or implement B's behaviour itself and do whatever it wants, but it's not really what you expect is it.

acdha
0 replies
6h7m

Yeah, that part struck me as something we should be able to block - the number of times where you actually want that must be small enough to make it practical do something like write-protect pages with a small exception list.

asvitkine
0 replies
5h39m

Well, test files shouldn't be affecting the actual production binary.

But in practice that's not something that can be enforced for arbitrary projects without those projects having set something up specifically.

For example, the project could track the effect on binary size of the production binary after every PR. But then it still requires a human (or I guess an AI bot?) to notice that the increase would be unexpected.

rwmj
2 replies
5h31m

Missing the whole Fedora timeline. I was emailed by "Jia Tan" between Feb 27 and Mar 27, in a partially successful attempt to get the new xz into Fedora 40 & 41. Edit: I emailed Russ with the details.

itslennysfault
1 replies
3h0m

I wondered about this. I saw the note at the bottom "RedHat announces that the backdoored xz shipped in Fedora Rawhide and Fedora Linux 40 beta" but saw nothing in the timeline explaining when/how it made it into Fedora.

Longlius
1 replies
1h55m

I don't even think it's necessarily moderation so much as maintaining these high-traffic projects is now akin to a full-time job minus the pay.

At a certain point, companies need to step up and provide funding or engineering work or they should just keep expecting to get owned.

Vegenoid
0 replies
22m

If there is a larger shift to companies paying for the software they rely on that is currently free, it's not going to be companies paying open-source maintainers, it's going to be companies paying other companies that they can sign contracts with and pass legal responsibility to.

benob
2 replies
5h22m

What makes you think that the accounts were not compromised only recently?

wasmitnetzen
0 replies
5h15m

Not OP, but the fact that they only appear in this context makes me believe that they were specifically set up for this task. No regular person has that good opsec just to push patches to a random library.

sloowm
0 replies
3h34m

If the account was compromised only recently the real co-maintainer would have done everything to warn the maintainer. If you're maintaining some core piece of infrastructure and your account gets compromised it would be trivial to let at least someone know you're not the one pushing these commits.

1024core
2 replies
28m

I am reminded of the infamous Sendmail worm from 1989(?).

If this compromised OpenSSHd had become the default across millions of systems all over the world, could a worm-like thing have brought a major chunk of the Internet down? Imagine millions of servers suddenly stuck in a boot-loop, refusing to boot.

And all because one owner of a library had some mental health issues. We should not have such SPOFs.

juliusdavies
0 replies
15m

Impossible here because the exploit was carefully engineered to be unreplayable and NOBUS (nobody but us) so it couldn’t go viral. Even if you intercepted a complete tcp byte trace of the attack there was nothing you could do with that to attack other systems.

dijit
0 replies
19m

And all because one owner of a library had some mental health issues

Wrong takeaway.

tamimio
1 replies
56m

This attack doesn't exploit a technical issue or bug; it exploits the open source philosophy, and unless the community comes up with a systematic process to counter it, expect more sophisticated attacks like it in the future. This time we got lucky that some smart nerd (I am a nerd too; this is praise, not to be taken badly) noticed and notified the community less than 20 days after the second backdoor implementation. Next time the attack may undergo more comprehensive "rehearsals", making it impossible to detect.

Vegenoid
0 replies
40m

Can you explain what aspects of the open source philosophy were exploited, and what possible mitigations might be?

softwaredoug
1 replies
5h29m

Open Source is a real tragedy of the commons.

Everyone wants to consume it. Nobody wants to participate.

People are upset when a company like Elastic or Mongo switches to a "non open" license. But at the same time, the market doesn't leave much choice. Companies won't be incentivized to contribute to projects when they can freeload. The market actually wants vendors, it doesn't want to participate in open source. But they don't want to _pay_ for vendors.

So I think its entirely appropriate that anyone / any entity that creates "open source" to change their license, set limits, say "no", and let users be damed unless they're willing to make it financially appealing. It's literally "Without Warranty" for a reason.

Letting your passion project becoming hijacked into determining your mental health is really depressing. F' the people who can't get on board with your boundaries, etc around it. They deserve the natural consequences of their lack of support.

rwmj
0 replies
5h25m

Yeah whatever. Closed source software is much easier to subvert, just have your agents join the company and they can push whatever they want without any external (or even internal) review.

fizlebit
1 replies
2h28m

What are the chances this is the first such attack, not just the first one discovered? Presumably every library or service running as root is open to attack, and maybe also some running in userspace. The attack surface is massive. Time for better sandboxing? Once attackers get into the build systems of Debian and others, is it game over?

2OEH8eoCRo0
0 replies
2h19m

I wouldn't be surprised if there are others but this specific one seems special. It was caught because on connection it does an extra decryption operation and I'd assume there is no way around this extra work. They'd have to re-architect this to not require that decryption.

I'm not a security expert though.

dxxvi
1 replies
2h33m

I guess that Lasse Collin will have more mental health issues after all of this :-)

Do we know who really is Jia Tan? Any photo of him? The email addresses he has been using? His location?

phreeza
0 replies
2h28m

What makes you think they are an actual person?

dekhn
1 replies
2h11m

Why do I feel like this set of posts by Russ about supply chain security will end up with a proposal/concrete first implementation of a supply-chain-verified build process that starts at chip making (simple enough to analyze and rebuild independently) to bootstrap a Go runtime that provides an environment to do secure builds.

Reflections on Trusting Trust becomes even more interesting if you consider photolithography machines being backdoored.

RyanShook
0 replies
1h52m

I’m of the opinion that there are backdoors in most of our software and a lot of our hardware. xz just happened to be caught because it was hogging resources.

dboreham
1 replies
2h23m

Something I've wondered about wrt this debacle: presumably the smart part of xz was the compression algorithm. I'm guessing but that's probably less than 500 lines of code. The rest is plumbing to do with running the algorithm in a CLI utility, on various different OSes, on different architectures, as a library, and so on. All that stuff is at some level of abstraction the same for all things that do bulk processing on bytes of data. Therefore perhaps we should think about a scheme where the boilerplate-ish stuff is all in some framework that is well funded with people to ensure it doesn't have cutout maintainers injecting backdoors, and is re-used for hundreds of bulk-data-processing utilities; and the clever part would then be a few hundred lines of code that's easy to review, and actually probably never needs to change. Like...a separation of concerns.

calvinmorrison
0 replies
2h5m

we were discussing this on the IRC. Imagine spinning up a thread, then running a BSD-style pledge(2) on it before calling liblzma. Kinda janky but it would work. Another option would be to just go out and call the xz util and not rely on a library to do so. That process can be locked down with pledge to only have stdin/stdout. That's all you need.
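A minimal sketch of the second option in Python, assuming the `xz` binary is on PATH (pledge(2) itself is OpenBSD-specific and not shown here): the compressor runs as a separate process that is only handed stdin/stdout pipes, rather than being linked into the caller's address space.

```python
import subprocess

def xz_compress(data: bytes) -> bytes:
    # Run the external xz binary instead of linking liblzma into this
    # process; the child sees only stdin/stdout pipes.
    proc = subprocess.run(["xz", "--stdout"], input=data,
                          capture_output=True, check=True)
    return proc.stdout

def xz_decompress(data: bytes) -> bytes:
    proc = subprocess.run(["xz", "--decompress", "--stdout"], input=data,
                          capture_output=True, check=True)
    return proc.stdout
```

Even without pledge, this isolates liblzma's code from the caller's process (no shared symbols for an ifunc-style hook to intercept); on OpenBSD the child could additionally be pledged down to "stdio".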

So, like, UNIX does have this plumbing; it's just that reaching for libraries and tight integration has been the pursuit of Lennart Poettering and his clan for years.

stockhorn
0 replies
3h46m

I feel like release tarballs shouldn't differ from the repo sources. And if they do, there should be a pipeline which generates and uploads the release artifacts....

Can somebody write a script which diffs the release tarballs against the git sources for all Debian packages and detects whether there are any differences apart from the files added by autotools? :)
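The core of such a script is small. A rough Python sketch (the helper name is hypothetical; real tooling would compare against `git archive` output for the release tag and carry a per-package list of expected autotools-generated files):

```python
import hashlib
import pathlib
import tarfile

def _sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def diff_tarball_vs_tree(tarball_path, tree_path, ignore=()):
    """Compare a release tarball against a checked-out source tree.

    Returns {"added", "removed", "changed"}: files only in the tarball,
    only in the tree, or present in both with different contents."""
    tree = pathlib.Path(tree_path)
    tree_files = {str(p.relative_to(tree)): _sha256(p.read_bytes())
                  for p in tree.rglob("*") if p.is_file()}
    tar_files = {}
    with tarfile.open(tarball_path) as tf:
        for m in tf.getmembers():
            if not m.isfile():
                continue
            # strip the top-level "pkg-1.0/" directory from member names
            name = m.name.split("/", 1)[1] if "/" in m.name else m.name
            tar_files[name] = _sha256(tf.extractfile(m).read())
    added = [n for n in tar_files if n not in tree_files and n not in ignore]
    removed = [n for n in tree_files if n not in tar_files and n not in ignore]
    changed = [n for n in tar_files
               if n in tree_files and tar_files[n] != tree_files[n]]
    return {"added": sorted(added), "removed": sorted(removed),
            "changed": sorted(changed)}
```

Note that in the xz case the malicious `build-to-host.m4` would show up as an "added" file not present in the git tag, which is exactly the kind of delta this check is meant to surface.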

pluc
0 replies
4h10m

The most interesting question I have with all this is:

Do you think this was a planned effort or was it opportunistic? Did they know what they were doing and social engineered towards it, or did they figure out what to do based on the day-to-day context they discovered?

peter_d_sherman
0 replies
2h43m

My takeaways:

First from the article itself:

"At this point Lasse seems to have started working even more closely with Jia Tan. Evan Boehs observes that Jigar Kumar and Dennis Ens both had nameNNN@mailhost email addresses that never appeared elsewhere on the internet, nor again in xz-devel."

That is an important observation!

Takeaway: Most non-agenda-driven actual people on the Internet leave a trail -- an actual trail of social media and other posts (and connected friends) that could be independently verified by system/social media/website administrators via voice or video calls (as opposed to CAPTCHA or other computer-based "Is it a human?" tests, which can be gamed) for stronger confidence in the identity / trustworthiness of the remote individual at the other end of a given account...

Next takeaway from the linked article: (https://boehs.org/node/everything-i-know-about-the-xz-backdo...):

"AndresFreundTec @AndresFreundTec@mastodon.social writes:

Saw sshd processes were using a surprising amount of CPU, despite immediately failing because of wrong usernames etc. Profiled sshd,

showing lots of cpu time in liblzma, with perf unable to attribute it to a symbol.

Got suspicious. Recalled that I had seen an odd valgrind complaint in automated testing of postgres, a few weeks earlier, after package updates."

Takeaway: We could always use more observability of where exactly CPU time is spent in given programs...

Binaries without symbol tables (which is the majority of programs on computers today) make this task challenging, if not downright impossible, or at least very impractical -- too complex for the average user...

Future OS designers should consider including symbol tables for all binaries they ship -- as this could open up the capability for flame graphs / detailed CPU usage profiling (and the subsequent ability to set system policies/logging around these) -- for mere mortal average users...
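As a userspace analogy to the perf workflow that caught the backdoor (the real investigation profiled native sshd with `perf`, which needed symbols to attribute samples), even a high-level profiler makes it obvious which library a hot path is spending its time in. A Python sketch with the standard cProfile module, using lzma as the stand-in "surprisingly expensive" dependency:

```python
import cProfile
import io
import lzma
import pstats

def handle_connection(payload: bytes) -> bytes:
    # stand-in for a server hot path that leans on a compression library
    return lzma.decompress(lzma.compress(payload))

prof = cProfile.Profile()
prof.enable()
handle_connection(b"x" * 200_000)
prof.disable()

# The report attributes CPU time to named functions (the equivalent of
# perf resolving samples to symbols), so lzma.compress shows up by name.
buf = io.StringIO()
pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(10)
report = buf.getvalue()
```

This is exactly the attribution step that failed for Andres at first: perf could see the time was in liblzma but, with the backdoor's injected code, could not resolve it to a symbol, which is what made it suspicious.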

klysm
0 replies
1h32m

What's crazy to me about how we've set up computers to do things, is that xz itself is a pure function, but somehow there's all of this shit that has to happen to just use a pure function! It's just bytes in and bytes out, but we have an astoundingly complex build system and runtime linking system which somehow allows this pure function to run arbitrary commands on a system.
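From a caller's point of view the pure-function core really is that small. In Python, for instance, the entire contract is bytes in, bytes out:

```python
import lzma

original = b"the quick brown fox" * 1000
compressed = lzma.compress(original)            # bytes in, bytes out
restored = lzma.decompress(compressed)          # and back again
assert restored == original
assert len(compressed) < len(original)          # repetitive input shrinks
```

Everything else (autotools, shared-library loading, ifunc resolution) is machinery wrapped around that contract, and it was the machinery, not the algorithm, that the backdoor lived in.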

kashyapc
0 replies
4h32m

Given this disaster, one or other "foundation" will now embrace the `xz` project, start paying a maintainer or two so that they don't accidentally end up dying from burnout.

Rinse, repeat for all critical-path open source software. A bit like the OpenSSL "Heartbleed" disaster[1]. OpenSSL is now part of Linux Foundation's (they do a lot of great work) "Core Infrastructure Initiative".

Many fat companies build their applications on these crucial low-level libraries, and leave the drudgery to a lone maintainer in Nebraska, chugging away in his basement[2].

[1] https://en.wikipedia.org/wiki/Heartbleed

[2] https://xkcd.com/2347/

gawa
0 replies
5h44m

Excellent summary of the events, with all the links in one place. This is the perfect resource for anyone who wants to catch up, and also to learn about how such things (especially social engineering) unfold in the wild, out in the open.

One thing that could be added, for the sake of completeness: in the part "Attack begins", toward the end, when they are pushing for updating xz in the major distros, Ubuntu and Debian are mentioned but not Fedora.

Looks like the social engineering/pressuring for Fedora started at least weeks before 2024 March 04, according to a comment by @rwmj on HN [1]. I also found this thread on Fedora's devel list [2], but didn't dig too much.

[1] https://news.ycombinator.com/item?id=39866275

[2] https://lists.fedoraproject.org/archives/list/devel@lists.fe...

Solvency
0 replies
2h33m

Why does it seem like so many open source developers suffer from chronic mental health issues? as shown here, in TempleOS, etc. It's a weird but sad pattern I see all of the time.

1024core
0 replies
44m

We should set up a GoFundMe to reward Andres Freund.