The weird thing about this one is how it seems super professional in some ways, and rather amateur in others. Professional in the sense of spending a long time building up an identity that seemed trustworthy enough to be made maintainer of an important package, of probably involving multiple people in social manipulation attacks, of not leaking the true identity and source of the attack, and the sophistication and obfuscation techniques used. Yet also a bit amateur-ish in the bugs and performance regressions that slipped out into production versions.
I'm not saying it's amateur-ish to have bugs. I'm saying, if this had been developed by a highly competent state-sponsored organization, you'd think they would have developed the actual exploit and tested it heavily behind closed doors, fixing all of the bugs and ensuring there were no suspicion-creating performance regressions before any of it was submitted to a public project. If there had been no performance regression, there's a much higher chance this never would have been discovered at all.
You probably have too high expectations when you hear the "state-sponsored" part. Every large organization inevitably ends up like any other: bureaucracy, deadlines, production cycles, poor communication between teams. The recent iOS "maybe-a-backdoor" story also shows that they don't always care about burning vulnerabilities, because they've amassed a huge pile of them.
Eh. Take a look at other state-sponsored attackers. We know they have 0-days for iOS, we know they've been used, but even Apple doesn't know what they are since they are so good at hiding their tracks. I don't think a state-sponsored attack would upload their payload to the git repo and tarball for all to stare at after it's been found out, which only took about a month.
NSO Group built a Turing-complete VM out of a use-after-free exploit in some JBIG2 decompression code. Uploading a payload to the world wide web and calling it bad-3-corrupt_lzma2.xz is clownshoes by comparison.
My best guess as to what this is is an amateur hacker ring, possibly funded by a ransomware group.
That's why I mentioned the recent story. [0] [1] "Apple doesn't know" when the chain uses an internal backdoor in Apple hardware is a... stretch. And the chain gives strong vibes of corporate-style development, with all its redundancy and the mismatch between its two parts. It's not alchemy, really.
[0] https://securelist.com/operation-triangulation-the-last-hard...
[1] https://news.ycombinator.com/item?id=38783112 - HN discussion
Re the secret knock in the Apple silicon, a friend of mine once said "that's how you lose the NOBUS on a backdoor", and I think they were absolutely right.
The one thing which most leads me to believe this was an intentional backdoor? The S-boxes.
I don't believe it's intentional, for the reason you mentioned. Although it could theoretically be like that for plausible deniability, Apple's reputation is definitely more valuable than one patchable backdoor out of god knows how many others. But a debug backdoor is still a backdoor.
Very large companies are definitely at the mercy of governments. Just look at how they are bending over backwards to comply with the DMA, etc. So it is not at all inconceivable that governments force them to put backdoors into their products.
Except Apple is known for having very publicly fought the FBI’s attempt to force a backdoor into iOS.
https://en.m.wikipedia.org/wiki/Apple–FBI_encryption_dispute
Thankfully! At least in a democracy, the government is chosen, megacorps are accountable to no one else.
Intentional doesn't mean Apple approved. It could be a couple of compromised employees on the right teams.
If you look at that HN discussion, you'll find a link to a Mastodon post from an Asahi Linux developer explaining that these "S-boxes" are actually an ECC calculation, and that the registers are probably cache debug registers that allow writing directly to the cache, bypassing the normal hardware ECC calculation. So you have to do the ECC calculation yourself if you don't want a hardware exception from an ECC mismatch on read (of course, when testing the cache, sometimes you do want to cause an ECC mismatch, to test the exception handling).
While the world is still trying to understand the backdoor, you, sir, have decided that it's "clownshoes". I can only blindly defer to your expertise... "Clownshoes, amateur, hacker ring, ransomware group." Done.
The quality of work you attract depends in part on how much you pay. Go check out how much is paid for a persistent iOS exploit compared to a Linux user-space exploit. From that, you can draw conclusions about their relative perceived difficulty and desirability. That explains why iOS exploits are done more professionally: they are rarer, much better paid, and thus attract better people and more effort to guard them from discovery.
Welp. Ok, well now my newest worst nightmare is a jira board with tickets for "Iran" and "North Korea" stuck in the wrong column and late-night meetings with "product" about features.
The realization that there IS NOT an all powerful super intelligent cabal running everything is the worst one.
What we have is an Illuminati that is using Jira. :(
Who do you think wrote Jira?
And WHY do you think they wrote Jira?
Poor Illuminati bastards. Forced to dogfood Jira.
That's the joke
I think this is a key insight, with some nuance: there isn't an entire shadowy org that operates without Jira, but there are teams of people, usually small, who do amazing things without it. I imagine the Manhattan Project ran this way, and you still have elite teams like this in every org. Eventually they need to hand the work off to a Jira crew, and that's unavoidable.
Since it happened, I've said that the “Year of Snowden” turned “It's theoretically possible to backdoor the firmware of a hard drive” (for example), into “Somewhere in the NSA, there's a Kanban board with post-its for each major manufacturer of hard drives, and three of them are already in the 'done' column.”
https://cybercoe.army.mil/CDID/ and yeah some use agile practices.
State-sponsored organizations in this space usually have a military-like organizational structure, with people whose dedication and motivation are unlike a corporate worker's. A command structure can mean less bureaucracy (it can also mean more in some ways) when it is directly aligned with the mission. Patriotic motivation can make people more dedicated and focused than a typical corporate worker. So yeah, there would be quality differences.
Patriots for the most part suck at coding as much as everyone else.
not just at coding. as a class of people, they tend to not be the sharpest tools in the shed.
And necessarily at that. A well-educated person will not swallow the nationalist dogma as easily.
I would say you put too much faith in military-like organization as well. From what I can tell, it's usually just ordinary security researchers and devs with dubious morals (some are probably even former cybercriminals) who don't have need-to-know and aren't necessarily aware of every single aspect of their work. The entire thing is likely compartmentalized to hell.
You can't conjure quality from nothing (especially not from pure patriotism/jingoism); large organizations are bound to work with mediocrities and dysfunctional processes, and geniuses don't scale. (I feel like I'm stating the obvious.)
Not sure we should be so quick to judge offsec and the public-private partnership that provides intel and offensive capabilities.
Whether they're ex-criminals or deserve to be accused of "dubious morals" depends on whether their clients (or targets) are what one considers the enemy.
And what about the guy who silently works on a "dubious project" patiently for years... and then, at the right moment, knowingly throws a spanner in the works? Aren't they the true hero?
Modifying the sshd behavior without crashes seems pretty difficult by itself. Conceptually it isn't hard if you're in the same process and can patch various functions, but doing that in a way that stays "production ready" across multiple Linux distros, all the time, is a challenge.
This thing wasn't around for very long, but yet another thing to consider is keeping it working across multiple versions of OpenSSH, libcrypto, etc.
I picture some division in [nation-state] where they're constantly creating personas, slowly working all sorts of languishing open source packages with few maintainers (this is the actual hard, very slow part), then once they have a bit of an in, they could recruit more technical expertise. The division is run by some evil genius who knows this could pay off big, but others are skeptical, so their resources are pretty minimal.
Moxie's reasons for disallowing Signal distribution via F-droid always rang a little flat to me ( https://github.com/signalapp/Signal-Android/issues/127 ). Lots of chatter about the supposedly superior security model of Google Play Store, and as a result fewer eyes independently building and testing the Signal code base. Everyone is entitled to their opinions, but independent and reproducible builds seem like a net positive for everyone. Always struggled to understand releasing code as open source without taking advantage of the community's willingness to build and test. Looking at it in a new light after the XZ backdoor, and Jia Tan's interactions with other FOSS folk.
He says the decision not to distribute prebuilt APKs is because:
Which is a compelling argument from my perspective. I also think that people who can’t compile code should probably not root their phone.
That seems like a great way to talk down to your end users, which seems like a security smell all by itself. Many users of F-Droid are technology professionals themselves and are quite aware of the security implications of the choices they make for the devices they own, and F-Droid is often a component of that outlook.
Further, I don't think it applies to the F-Droid maintainers, who routinely build hundreds of different Android apps for all our benefit. They even directly addressed his concerns about the signing key(s) and other issues by improving F-Droid and met with continued rejection.
The link provides interesting reading, but I believe Moxie must have changed his opinion later: I have never had Google Play Store on my phone, but I could install Signal. I am pretty sure I did not install it from any dodgy site. It warned when it got outdated. Not sure how updates work, not using it anymore.
Let's never forget that the google play store requires giving google the ability to modify your app code in any way they want before making it available for download. Oh sure, that backdoor will never be abused.
I don't think we should assume a state actor. We don't know.
It's kind of similar to Stuxnet, but attacking Linux distros is so broad and has such a huge risk of being exposed, as it was within a few weeks of deployment. A good nation-state attack would put more effort into not being caught.
But we don't know. So maybe I'm wrong.
Assuming a state-actor is a cope though. It's looking at the problem and saying "well we were fighting god himself, so really what could we have done?"
Whereas given the number of identities and the time involved, the thing we really see is "it took what, 2-3 burner email accounts and a few dozen hours over 2 years to almost hack the world?"
The entire exploit was within the scope of capability of one guy. Telling ourselves "nation-state" is pretending there isn't a really serious problem.
Yeah, it is a really good scapegoat. You get cover from warmongers in a "don't blame the victim" way too.
I read somewhere that some recent changes in systemd would've made the backdoor useless, so they had to rush it out, which caused them to be reckless and get discovered.
This refers to the fact that systemd was planning to drop the hard dependency on liblzma (the compression library shipped by xz), and instead dlopen it at runtime when needed. Not for security reasons, but to avoid pulling the libs into initramfs images.
The backdoor relies on sshd being patched to depend on libsystemd to call sd_notify(), which several distros had done.
OpenSSH has since merged a new patch upstream that implements similar logic to sd_notify() in sshd itself to allow distros to drop that patch.
So the attack surface of both sshd and libsystemd has since shrunk a bit.
I remember when we added sd_notify support to our services at work, I was wondering why one would pull in libsystemd as a dependency for this. I mean, there's a pure-Python library [1] whose whole job basically boils down to opening the $NOTIFY_SOCKET Unix datagram socket and writing the state string.
With proper error handling, that's about 50 lines of C code. I would vendor that into my application in a heartbeat.
[1]: https://raw.githubusercontent.com/bb4242/sdnotify/master/sdn...
Writing proper error handling in C is a very tedious and error prone task. So it doesn't surprise me that people would rather call another library instead.
Managing C dependencies is even more tedious and error prone. And even in C, opening a UNIX domain socket to write a single packet is not that hard.
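For what it's worth, here's a minimal sketch of that. It assumes only the documented notify protocol (a datagram containing "READY=1" sent to the AF_UNIX socket named in $NOTIFY_SOCKET, where a leading '@' means an abstract-namespace socket); the function name and the exact error handling are illustrative, not taken from any particular project:

    /* Minimal sd_notify-style readiness notification (illustrative sketch).
     * Protocol: send a datagram with the state string to the AF_UNIX socket
     * named by $NOTIFY_SOCKET; a leading '@' means an abstract-namespace
     * socket, encoded by replacing it with a NUL byte. */
    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int notify_ready(void)
    {
        const char *path = getenv("NOTIFY_SOCKET");
        if (!path || (path[0] != '/' && path[0] != '@'))
            return 0;                    /* not running under a notify-aware manager */

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        size_t len = strlen(path);
        if (len == 0 || len >= sizeof(addr.sun_path))
            return -1;
        memcpy(addr.sun_path, path, len);
        if (addr.sun_path[0] == '@')
            addr.sun_path[0] = '\0';     /* abstract socket namespace */

        int fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_CLOEXEC, 0);
        if (fd < 0)
            return -1;

        const char *msg = "READY=1";
        ssize_t sent = sendto(fd, msg, strlen(msg), 0, (struct sockaddr *)&addr,
                              (socklen_t)(offsetof(struct sockaddr_un, sun_path) + len));
        close(fd);
        return sent < 0 ? -1 : 0;
    }

Real code would add a bit more error reporting, but the protocol itself really is that small.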
Which will be harder to justify now: "You're calling a gigantic library full of potential security holes just to call one function, to save writing a few lines of code. Are you trying to JIA TAN the project?"
There's also a pure C library (libsystemd itself) which already does all that, and you don't need to test all the error handling cases in your 50 lines of C code. It makes sense to use the battle-tested code, instead of writing your own.
The problem is people keep focusing on the libsystemd element, because systemd has its big hate-on crew and because the vector here was something deemed "simple".
The better question, though, is: okay, what if the code involved was not simple? xz is a full compression algorithm, and compressors have been exploit vectors for a while, so rolling your own is a terrifically bad idea in almost all cases. There are plenty of other, more sophisticated libraries where you could have tried to pull the exact same trick; there's nothing about this being a "simple" inclusion that implies vendoring or rolling your own is a good mitigation.
The saying goes that everyone is always preparing to fight the last war, not the next (particularly relevant because adversaries are likely scouring OSS looking for other projects that might be amenable to this sort of attack - how many applications have network access these days? An RCE doesn't need to be in sshd).
We could be faced with a form of Survivorship Bias here[0]. I find that thought rather chilling.
[0] https://en.wikipedia.org/wiki/Survivorship_bias
Which would mean that we have all kinds of active backdoors in our systems without us knowing it.
But wouldn't they be detected at some point by someone? Or would they be silently removed again after some time so the attackers are not revealed?
NSO Pegasus has existed for years without all of their exploits being detected.
You have to remember there is a lot of code in our systems. And almost every developer is inadequately trained on how to write secure software.
Attacks on a given system or site can be expected to be removed (or auto-removed) after the operation ends, potentially without a trace. But for supply-chain attacks there's always a history (if someone bothers to investigate).
There was a recent Dwarkesh Patel interview of Dario Amodei, CEO of Anthropic, which now has ex-national-security people on its staff. He said that the assumption is that if a tech company has 10,000 or more employees, then not only will you almost certainly have leakers, but there is also a high likelihood there is an actual spy among your staff. This is why they use need-to-know compartmentalization, etc.
I wonder what our success rate is in identifying industrial spies? 50%?
It’s also possible that this could be a change in personnel. Maybe the one who earned trust and took over was no longer working for them, and an amateur took over, with tight deadlines that led to this gaffe.
The abrupt change in the time of day when commits occurred supports the theory that Jia Tan is more than one person: https://twitter.com/birchb0y/status/1773871381890924872
The text near the box makes it sound like these are just the fixes - not adding the test files but updating them.
At that point it would have been clear “the race is on” to avoid detection, so it’s not too surprising someone would work late to salvage the operation.
Whoops, you're right. So this isn't really evidence of anything.
Out of interest I looked up the other commit at that time of day visible in that graph, lying on the arrow. It's [1], which changes the git URL from git.tukaani.org to github.com. Of course, moving the project hosting to GitHub was part of the attack.
[1] https://git.tukaani.org/?p=xz.git;a=commitdiff;h=e1b1a9d6370...
If the mistakes align with the time of day change, perhaps the author had a distraction that pushed the hours and compromised judgement.
This tracks with other nation-state-sponsored attack patterns. I've had that same reaction before. Most APTs are like this, but some Chinese, US, and Russian APTs are so well funded that every aspect of their attacks is impressive.
Many hackers who work for nation states also have side gigs as crimeware/ransomgang members or actual pentesting jobs.
Reminds me of apt3/boyusec:
https://www.bleepingcomputer.com/news/security/chinese-gover...
It still boggles my mind that Americans are against banning companies like Huawei and ByteDance. The MSS and PLA don't mess around.
The problem is that many others would have as much reason to ban US companies. I mean, the US has a much more extensive history of using its security apparatus for both intelligence and economic ends, even against its allies.
Now if everyone bans everyone else we will let the world economy grind to a halt pretty quickly.
China already bans US companies, and this is an active malicious threat, not a vague possibility of harm. You can jail Mark Zuckerberg, but you can't jail Xi Jinping.
For ByteDance/TikTok the question is not really "should the US ban this company" but rather "should the government be able to ban any company it wants (and also make it illegal for VPNs to allow US users to access the relevant services/websites) without having to provide any substantial evidence?"
Which is a very different question in practice
US companies? I'm with you, no! Foreign companies, absolutely. Even the suspicion of malicious abuse should be enough to ban a foreign company. Foreign persons and entities have no rights in the US, and our government owes them as much explanation as they give us when they ban US companies on a whim.
I find this the most plausible explanation by far:
* The highly professional outfit simply did not see teknoraver's commit coming, the one removing liblzma as a standard dependency of the systemd build scripts.
* The race was on between their compromised code and that commit. They had to win it, with as large a window as possible.
* This caused the amateurish aspects: haste.
* The performance regression is __not__ big. It's lucky Andres caught it at all. It's also not necessarily all that simple to remove; it's not just a bug in a loop or some such. If I were the xz team, I'd have had enough faith in all the work that was done to give it high odds that they'd fix it before discovery. That they'd have time; months, even.
* The payload of the 'hack' contains fairly easy ways for the xz hackers to update it. They actually used that to fix a real issue where their hackery caused problems with valgrind that might have led to discovery, and they also used it to release 5.6.1, which rewrites significant chunks; I've not yet read, nor do I know of, any analysis of why they changed so much. Point is, it's reasonable to think they had months, and therefore months to find and fix issues that risked discovery.
* I really can't spell this out enough: WE GOT REALLY LUCKY / ANDRES GOT LUCKY. 9 times out of 10 this wouldn't have been found in time.
Extra info for those who don't know:
https://github.com/systemd/systemd/commit/3fc72d54132151c131...
That's a commit that changes how liblzma is a dependency of systemd. Not because the author of this commit knew anything was wrong with it; but, pretty much entirely by accident (although removing deps was part of the point of that commit), it almost entirely eliminates the value of all those 2 years of hard work.
And that was with the finish line in sight for the xz hackers: on 24 Feb 2024, they released liblzma 5.6.0, the first fully operational compromised version. __12 days later, systemd merged a commit that means it won't work__.
So now the race is on. Can they get 5.6.0 integrated into stable releases of major OSes _before_ teknoraver's commit that removes liblzma's status as direct dep of systemd?
I find it plausible that they knew about teknoraver's commit _just before_ Feb 24th 2024 (when liblzma v5.6.0, the first backdoored release, was published), and rushed to release ASAP, before doing the testing you describe. Buoyed by their efforts to add ways to update the payload, which they indeed used: on March 8th (after teknoraver's commit was accepted) it was used to fix the valgrind issue.
So, no, I don't find this weird, and I don't think the amateurish aspects should be taken as some sort of indication that parts of the outfit were amateurish. As long as it's plausible that those aspects were simply due to time pressure, it seems like a really bad idea to make assumptions in this regard.
They could also have regrouped and found another way to do the exploit, given the relative ease of updating the payload (though it's probably a limited number of times you could change the test blobs without causing suspicion?). But I agree this explanation is plausible.
If lzma isn't loaded as part of sshd, the path from an lzma backdoor to sshd gets a hell of a lot more circuitous and/or easier to catch. You'd pretty much need to modify the sshd binary while compressing a package build, or do something like that to the compiler, to then modify sshd components while compiling.
True, in the context of sshd logins it isn't that big, but ~500ms to get from _start() to main() isn't small either, compared to the normal cost of that phase of library startup. Their problem was that the sshd daemon fork+exec's itself to handle a connection, so they had to redo a lot of the work for each connection.
I suspect they started off with much smaller overhead and then it increased gradually with every feature they added, just as happens with many software projects. Here's the number of symbols being looked up that a reversing effort has documented: https://github.com/smx-smx/xzre/blame/ff3ba18a39bad272ff628b... https://github.com/smx-smx/xzre/blob/ff3ba18a39bad272ff628bb...
Afaict all of this happens before there's any indication of the attacker's keys being presented - that's not visible to the fork+exec'd sshd until a lot later.
They needed to do some of the work before main(), to redirect RSA_public_decrypt(). That would have been some measurable overhead, but not close to 500ms. The rest of the startup could have been deferred until RSA_public_decrypt() was presented with something looking like their key as part of the SSH certificate.
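For anyone wondering how work ends up between _start() and main() at all: the published analyses describe the backdoor kicking off from liblzma's GNU ifunc resolvers, and an ifunc resolver runs inside the dynamic loader during relocation, before main() is ever entered. A toy example of the mechanism (nothing here resembles the actual payload; the names and values are made up):

    /* Toy GNU ifunc: the resolver runs while the dynamic loader performs
     * relocations, i.e. before main(), so any expensive work placed there
     * shows up as exec/startup latency rather than as time spent in any
     * function the program visibly calls. Build with gcc on an ELF target. */
    #include <stdio.h>

    static unsigned crc_generic(unsigned x) { return x ^ 0xEDB88320u; }
    static unsigned crc_clmul(unsigned x)   { return x ^ 0x82F63B78u; }

    /* Called by the loader to pick the implementation of my_crc. Real code
     * would check CPU features here; the point is only *when* this runs. */
    static unsigned (*resolve_my_crc(void))(unsigned)
    {
        return crc_clmul;
    }

    unsigned my_crc(unsigned) __attribute__((ifunc("resolve_my_crc")));

    int main(void)
    {
        printf("%08x\n", my_crc(0));
        return 0;
    }

So anything heavy done in (or triggered from) a resolver like this is paid on every exec of the binary, which is exactly where the ~500ms showed up.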
I thought performance was actually fine? It only dragged when using valgrind, hence the rhetoric that it took some really unlikely circumstances for it to be detected that quickly.
No, it wasn't fine. From the advisory (https://news.ycombinator.com/item?id=39865810):
2-3x slower 0m0.299s -> 0m0.807s [1]
[1] https://www.openwall.com/lists/oss-security/2024/03/29/4
Yeah, you're right. 500ms vs 10ms on an older server. I was thrown off by this statement and thought only the perf/valgrind/gdb attachments were what really brought it to the surface.
Performance was much worse but not enough to actually nudge the author into digging into it until the random ssh logins started piling up:
https://twitter.com/AndresFreundTec/status/17741907437768663...
https://nitter.poast.org/AndresFreundTec/status/177419074377...
The fact that the guys developing the code weren't also simultaneously running valgrind and watching performance isn't hard to believe. They were targeting servers and appliances; how many servers and appliances do you know of that run valgrind in their default image?
Sure, in hindsight that's a "duh, why didn't we think of that" - but also it's not very hard at all to see why they didn't think of that. They were likely testing against the system images they were hoping to compromise, not joe-schmoe developer's custom image.
In theory they should probably be testing against the CI pipelines of Debian and Fedora / CentOS, as that's the moat their backdoor has to cross.
They put code in to, in theory, avoid running in a development environment.
Mistakes can happen to anyone, even competent state-sponsored organisations. And intelligence agencies are sometimes rather less ruthlessly competent than imagined (Kremlin assassinations in the UK have been a comedy of errors [1]).
Maybe another backdoor, or alternative access mechanism they were using, got closed and they wanted another one in a hurry.
[1] https://en.wikipedia.org/wiki/Poisoning_of_Alexander_Litvine...
And they are getting better at it:
https://www.bbc.com/news/world-us-canada-68706317
Or maybe the opportunity window for the mechanism this backdoor would use was closing. According to the timeline at https://research.swtch.com/xz-timeline there was a github comment from poettering at the end of January which implied that the relevant library would be changed soon to lazy load liblzma ("[...] Specifically the compression libs, i.e. libbz2, libz4 and suchlike at least. And also libpam. So expect more like this sooner or later."), as in fact happened a month later. The attacker had to get the backdoor into the targeted Linux distributions before the systemd change got into them.
Of course, the attacker could instead take the loss and abandon the approach, but since they had written that amount of complex code, it probably felt hard to throw it all away.
That performance regression could also be a way to identify compromised systems.
If all systems are compromised you don't need to identify anymore.
Well, for now the payload delivery relies on things like x86 and glibc... still enough.
Why do we assume the person who built up the trust is the attacker?
Isn't it possible the attacker simply took over the account of someone genuinely getting involved in the community, either by hacking it or just with the $5 wrench, and then committed the malicious code?
Given the behavior of the accounts that applied pressure on the original xz maintainer, this seems unlikely to me.
Or they just bought the guy at some point, because as I understand it the malicious behaviour started quite recently.
IIRC, according to Andres Freund the perf regression only happened in machines using the -fno-omit-frame-pointer setting, which was not the default at that point.
The -fno-omit-frame-pointer bit is separate from the slowdown. -fno-omit-frame-pointer led to valgrind warnings, but no perceptible slowdown in most cases (including this one).
Well, there is a pretty logical explanation.
Libsystemd was moving to a dlopen architecture for its dependencies.
This means that the backdoor would not load, since the sshd patch only used libsystemd for notify, which does not need liblzma at all.
So IMHO they gave it a last shot. It's OK if it burns, since it would be useless in 3 months (or even less) anyway.
The collateral is the backdoor binary, but given enough engineering power that will be irrelevant in 2-3 months too.
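For the curious, the dlopen approach looks roughly like the sketch below (the wrapper name and error handling are made up; the real libsystemd code differs). The point is that liblzma only gets mapped into the process when some caller actually asks for compression, so an sshd that only ever calls sd_notify() never loads it, and the backdoor never gets a chance to run in that process:

    /* Illustrative lazy-loading wrapper in the style of the dlopen approach:
     * the shared object is mapped only when compression is actually needed,
     * so consumers of unrelated APIs never pull liblzma in at all. */
    #include <dlfcn.h>
    #include <stddef.h>
    #include <stdio.h>

    static void *lzma_handle;

    static int ensure_liblzma(void)
    {
        if (lzma_handle)
            return 0;
        /* RTLD_NOW so missing symbols fail here rather than at first use. */
        lzma_handle = dlopen("liblzma.so.5", RTLD_NOW);
        if (!lzma_handle) {
            fprintf(stderr, "liblzma unavailable: %s\n", dlerror());
            return -1;
        }
        return 0;
    }

    /* Example caller: only this path maps liblzma; a notify-only user of the
     * library never reaches it. Real code would dlsym() the lzma_* symbols
     * it needs after ensure_liblzma() succeeds. */
    int compress_blob(const void *data, size_t size)
    {
        if (ensure_liblzma() < 0)
            return -1;
        (void)data; (void)size;
        return 0;
    }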
I think this is probably the right answer.
The only thing that makes me think this was amateurs/criminals instead of professionals is that I tend to think that professionals are more interested in post attack security.
So if the gate was closing, an amateur would say, "Act now! Let's get what we can!" A professional would say, "This is all going to come to light real soon; our exploit won't be ready and there's a high chance of this all falling apart. Pull out everything, cover our tracks to the degree we can, and find another opportunity to pull this off."
But then again, I also think only professionals would work an exploit that takes years. Criminals by their nature want a quick payout (if I had the patience for a long con I'd just get a job), and motivated individual amateurs (i.e. crazy people) rarely have a wide enough skill set.
(really tinfoil hatty - ) I almost wonder if it's misdirection?
Or a whitehat who couldn't get attention another way?
So this has to be coordinated teamwork instead of a single hacker, right?
It could be somebody who was just good at some things and not at other things.
There are thousands of ways that performance can be impacted. No matter how good you are at developing, there will be some workload that takes a performance hit. Phoronix has several times reported issues to the Linux kernel because of performance regressions found with their test suite. Performance tests tend to take more time than correctness tests.
I'm not seeing that as a point. It's probably not possible to have no performance hit whatsoever when somebody is checking the exact nanosecond count of every little thing, but usually nobody is doing that. It shouldn't be hard to avoid a performance regression in sshd logins substantial enough that somebody who wasn't already monitoring them would notice and decide to dig into what's going on.
I'm not sure if it's been revealed yet what this thing actually does, but it seems like all it really needs to do is to check for some kind of special token in the auth request or check against another keypair.
Different departments.
For all we know, earlier operations may have been high quality (and are still undetected), while this one for some reason may have been comparatively unimportant and the actor decided to cut costs.
Sometimes I think it could be someone who was forced to embed the backdoor but was smart enough to make it detectable by others without raising suspicion from the entity that was forcing him.
The tin foiler in me still suspects it could be Microsoft who planted it to make FOSS look bad.
If Google could release Gemini with a straight face, is it so hard to believe that this shadowy org might fail in an even subtler way?
Patrick McKenzie observes elsewhere that criminal orgs operate more or less like non-criminal ones — by committee and by consensus. By that light, it’s not so hard to fathom why this ship could have been sunk by performance degradation arising from feature bloat. Companies I’ve worked at have made more amateurish mistakes.
This doesn’t seem too surprising to me.
Any moderately competent developer could gain maintainership of a huge percentage of open source projects if they’re being paid to focus solely on that goal… after all, they’re competing mostly against devs working on it part-time or as a hobby.
If this is state sponsored, they likely have similar programs in a large number of other projects.
The maintainer account (the identity) could have been sold to a third party. There are secondary markets [1] for this.
[1] https://ogusers.gg/ is the largest clear net marketplace for buying and selling usernames on popular sites
My favorite theory is that Jia Tan is a troll. They tried some silly patches and were surprised they got accepted. What started as a little joke on the side, because COVID made them stay at home, slowly spiraled into "I wonder how far I can push this?"
Two years are enough to make yourself familiar with OpenSSH, ifuncs, etc.
Then you do silly things like "hey um I need to replace the bad test data with newly generated data, this time using a fixed seed so that they are reproducible", but you don't actually tell anyone the seed for the new data. Then you lol when that gets past the maintainers no questions asked.
In the end, maybe they just wanted to troll a coworker, like playing fart noises while they listen to music, and since that coworker uses Debian, well, you'd better find a way to backdoor something in Debian to get into their machine.
Like back in the day when Sasser sabotaged half the internet and "security experts" said they had a plausible lead to Russia, which, as it turned out, was because said security experts ran strings on the binary and found Russian text, put there by the German teen who wrote Sasser "for teh lulz".
My thought is they have 50-200 systems programmers where it's like "Hey George, I know you like to contribute to Linux, have at it, we just need you to put a little pinhole here, here, and here." Then they have 5-20 security gurus / hackers who are like "Thanks George, I made this binary blob you need to insert, here's how you do it."
So the systems developer job is to build a reputation over years as we see, the security guy's job is to handle the equation group heavily encrypted/obfuscated exploits.
This would explain a mix of code quality / professionalism: multiple people, multiple roles. Of course, the former "systems programmer" role need not be a full-time employee. They could have motivated a white-hat developer to change hats, either financially or through an offer they can't refuse.
So, much like any other code base.
Bit of a https://en.wikipedia.org/wiki/Curate%27s_egg
States have spent decades if not centuries building up espionage apparatus.
All of the testing and development that goes into avoiding bugs has a tendency to run counter to the goals of secrecy or speed.
Secret, fast, well tested. Choose 2.
You have probably seen lots of films deifying state-sponsored organisations.
State-sponsored organisations are made of humans under real constraints on time, money, and personnel. It is almost impossible to make something perfect that nobody in the world could detect.
There are people in the world with different backgrounds who could use techniques you never accounted for. Maybe it's a technique used in biology to study DNA, or for counting electrons in a microscope, or for measuring background radiation in astronomy.
For example, I have seen strange people like that reverse engineer encrypted chips that were "impossible" to crack, in a very easy way, because nobody expected their approach. They spent 10 million dollars protecting something, and someone removed the protection using $10.
What's also surprising is how quickly the community seems to be giving someone the benefit of the doubt. A compromised maintainer would probably do exactly this: introduce a fake member who joins the project to make certain commits. They might have a contact providing the sophisticated backdoor that they need to (amateurishly) implement.
On 2024-02-29, a PR was sent to stop linking liblzma into libsystemd [0].
Kevin Beaumont speculated [1] that "Jia Tan" saw this and immediately realized it would have neutered the backdoor and thus began to rush to avoid losing the very significant amount of work that went into this exploit; I think he's right. That rush caused the crashing seen in 5.6.0 and the lack of polish which could have eliminated or reduced the performance regressions which were the entire reason this was caught in the first place; they simply didn't have the time because the window had started to close and they didn't want all their work to be for nothing.
[0]: https://github.com/systemd/systemd/pull/31550
[1]: https://doublepulsar.com/inside-the-failed-attempt-to-backdo...
At this point we don't know who did this. It could have been a single really smart person, it could have been criminal, it could have been a state intelligence agency.
We don't know shit.