
The xz sshd backdoor rabbithole goes quite a bit deeper

ufmace
105 replies
18h13m

The weird thing about this one is how it seems super professional in some ways and rather amateur in others. Professional in the sense of spending a long time building up an identity that seemed trustworthy enough to be made maintainer of an important package, of probably involving multiple people in social-manipulation attacks, of not leaking the true identity and source of the attack, and of the sophistication of the obfuscation techniques used. Yet also a bit amateurish in the bugs and performance regressions that slipped out into production versions.

I'm not saying it's amateurish to have bugs. I'm saying: if this was developed by a highly competent state-sponsored organization, you'd think they would have developed the actual exploit and tested it heavily behind closed doors, fixing all of the bugs and ensuring there were no suspicion-creating performance regressions, before any of it was submitted to a public project. If there had been no performance regression, there's a much higher chance this would never have been discovered at all.

orbital-decay
26 replies
13h22m

You probably have too high expectations when you hear the "state-sponsored" part. Every large organization inevitably ends up like any other: they have bureaucracy, deadlines, production cycles, and poor communication between teams. The recent iOS "maybe-a-backdoor" story also shows that they don't always care about burning vulnerabilities, because they've amassed a huge pile of them.

Jasper_
10 replies
12h27m

Eh. Take a look at other state-sponsored attackers. We know they have 0-days for iOS, we know they've been used, but even Apple doesn't know what they are, since the attackers are so good at hiding their tracks. I don't think a state-sponsored attacker would upload their payload to the git repo and tarball for all to stare at after it's been found out - which only took about a month.

NSO Group built a Turing-complete VM out of a use-after-free exploit in some JBIG2 decompression code. Uploading a payload to the world wide web and calling it bad-3-corrupt_lzma2.xz is clownshoes by comparison.

My best guess as to what this is is an amateur hacker ring, possibly funded by a ransomware group.

ThePowerOfFuet
6 replies
10h10m

Re the secret knock in the Apple silicon, a friend of mine once said "that's how you lose the NOBUS on a backdoor", and I think they were absolutely right.

The one thing which most leads me to believe this was an intentional backdoor? The S-boxes.

orbital-decay
4 replies
9h18m

I don't believe it's intentional, for the reason you mentioned. Although it could theoretically be like that for plausible deniability, Apple's reputation is definitely more valuable than one patchable backdoor out of god knows how many others. But a debug backdoor is still a backdoor.

vinay_ys
2 replies
9h8m

Very large companies are definitely at the mercy of governments. Just look at how they are bending over backwards to comply with the DMA, etc. So it is not at all inconceivable that governments force them to put backdoors into their products.

1over137
0 replies
7h24m

Very large companies are definitely at the mercy of governments.

Thankfully! At least in a democracy the government is chosen; megacorps are accountable to no one.

lozenge
0 replies
7h12m

Intentional doesn't mean Apple approved. It could be a couple of compromised employees on the right teams.

cesarb
0 replies
9h20m

The one thing which most leads me to believe this was an intentional backdoor? The S-boxes.

If you look at that HN discussion, you'll find a link to a Mastodon post from an Asahi Linux developer explaining that these "S-boxes" are actually an ECC calculation, and that the registers are probably cache debug registers. These allow writing directly to the cache, bypassing the normal hardware ECC calculation, so you have to do the ECC calculation yourself if you don't want an ECC mismatch on read to cause a hardware exception. (Of course, when testing the cache, sometimes you do want to cause an ECC mismatch, to test the exception handling.)

zvmaz
0 replies
9h52m

Uploading a payload to the world wide web and calling it bad-3-corrupt_lzma2.xz is clownshoes by comparison.

While the world is trying to understand the backdoor, you sir decided that it's "clownshoes". I can only blindly defer to your expertise... "Clownshoes, amateur, hacker ring, ransomware group." Done.

dmitrygr
0 replies
10h46m

is clownshoes by comparison

The quality of work you attract depends in part on how much you pay. Go check out how much is paid for a persistent iOS exploit compared to a Linux userspace exploit. From that, you may draw conclusions about their relative perceived difficulty and desirability. This explains why iOS exploits are done more professionally: they are rarer, much better paid, and thus attract better people and more work on guarding them from discovery.

amoss
8 replies
9h49m

Welp. Ok, well now my newest worst nightmare is a jira board with tickets for "Iran" and "North Korea" stuck in the wrong column and late-night meetings with "product" about features.

bombcar
5 replies
6h23m

The realization that there IS NOT an all powerful super intelligent cabal running everything is the worst one.

What we have is an Illuminati that is using Jira. :(

imglorp
3 replies
6h6m

Who do you think wrote Jira?

znpy
1 replies
5h54m

And WHY do you think they wrote Jira?

Ygg2
0 replies
5h45m

Poor Illuminati bastards. Forced to dogfood Jira.

beastman82
0 replies
5h53m

That's the joke

Aurelius108
0 replies
4h41m

I think this is a key insight, with some details: there isn't an entire shadowy org that operates without Jira, but there are teams of people, usually small, who do amazing things without it. I imagine the Manhattan Project ran this way, but you still have elite teams like this in every org. Eventually they need to hand it off to a Jira crew, and that's unavoidable.

zellyn
0 replies
3h53m

Since it happened, I've said that the “Year of Snowden” turned “It's theoretically possible to backdoor the firmware of a hard drive” (for example), into “Somewhere in the NSA, there's a Kanban board with post-its for each major manufacturer of hard drives, and three of them are already in the 'done' column.”

vinay_ys
5 replies
9h11m

State-sponsored organizations in this space usually have a military-like organizational structure, with people who have dedication and motivation unlike corporate workers. A command structure can mean less bureaucracy (it can also mean more, in some ways) when it is directly aligned with the mission. Patriotic motivation means they are more dedicated and focused than a typical corporate worker. So yeah, there would be quality differences.

varjag
2 replies
9h8m

Patriots for the most part suck at coding as much as everyone else.

DyslexicAtheist
1 replies
4h42m

Not just at coding. As a class of people, they tend to not be the sharpest tools in the shed.

voakbasda
0 replies
2h2m

And necessarily so. A well-educated person will not swallow nationalist dogma as easily.

orbital-decay
1 replies
8h44m

I would say you put too much faith into military-like organization as well. However, from what I can tell it's usually just ordinary security researchers and devs with dubious morals (some are probably even former cybercriminals) who often don't even have need-to-know and aren't necessarily aware of every single aspect of their work. The entire thing is likely compartmentalized to hell.

You can't conjure quality from nothing (especially out of pure patriotism/jingoism); large organizations are bound to work with mediocrities and dysfunctional processes, and geniuses don't scale. (I feel like I'm stating the obvious.)

DyslexicAtheist
0 replies
4h43m

it's usually just ordinary security researchers and devs with dubious morals (some are probably even former cybercriminals)

Not sure if we should so easily judge offsec and the private-public partnership that provides intel and offensive capabilities.

Whether they're ex-criminals or deserve to be accused of "dubious morals" depends on whether their clients (or targets) are what one considers the enemy.

And what about the guy who silently works on a "dubious project" patiently for years... and then, at the right moment, knowingly throws a spanner in the works? Isn't he the true hero?

asveikau
9 replies
15h13m

Modifying sshd behavior without crashes seems pretty difficult by itself. Conceptually it isn't hard if you are in the same process and can patch various functions, but doing so and having it be "production ready" to ship in multiple Linux distros all the time is a challenge.

This thing wasn't around for very long, but another thing to consider would be keeping it working across multiple versions of OpenSSH, libcrypto, etc.

nick238
8 replies
14h37m

I picture some division in [nation-state] where they're constantly creating personas, slowly working all sorts of languishing open source packages with few maintainers (this is the actual hard, very slow part), then once they have a bit of an in, they could recruit more technical expertise. The division is run by some evil genius who knows this could pay off big, but others are skeptical, so their resources are pretty minimal.

timschmidt
4 replies
13h51m

Moxie's reasons for disallowing Signal distribution via F-Droid always rang a little hollow to me ( https://github.com/signalapp/Signal-Android/issues/127 ). Lots of chatter about the supposedly superior security model of Google Play Store, and as a result fewer eyes independently building and testing the Signal code base. Everyone is entitled to their opinions, but independent and reproducible builds seem like a net positive for everyone. I've always struggled to understand releasing code as open source without taking advantage of the community's willingness to build and test. I'm looking at it in a new light after the XZ backdoor and Jia Tan's interactions with other FOSS folk.

sethherr
1 replies
13h16m

He says the decision not to distribute prebuilt APKs is because:

if you aren't able to build TextSecure from source, you probably aren't capable of managing the risks associated with 3rd party sources.

Which is a compelling argument from my perspective. I also think that people who can’t compile code should probably not root their phone.

timschmidt
0 replies
13h9m

That seems like a great way to talk down to your end users, which seems like a security smell all by itself. Many users of F-Droid are technology professionals themselves and are quite aware of the security implications of the choices they make for the devices they own, and F-Droid is often a component of that outlook.

Further, I don't think it applies to the F-Droid maintainers, who routinely build hundreds of different Android apps for all our benefit. They even directly addressed his concerns about the signing key(s) and other issues by improving F-Droid and met with continued rejection.

usr1106
0 replies
7h54m

The link provides interesting reading, but I believe Moxie must have changed his opinion later: I have never had Google Play Store on my phone, but I could install Signal. I am pretty sure I did not install it from any dodgy site. It warned when it got outdated. Not sure how updates work, not using it anymore.

jjav
0 replies
10h51m

supposedly superior security model of Google Play Store

Let's never forget that the google play store requires giving google the ability to modify your app code in any way they want before making it available for download. Oh sure, that backdoor will never be abused.

asveikau
2 replies
12h55m

I don't think we should assume a state actor. We don't know.

It's kind of similar to Stuxnet, but attacking Linux distros is so broad and has such a huge risk of being exposed, as it was within a few weeks of deployment. A good nation-state attack would put more effort into not being caught.

But we don't know. So maybe I'm wrong.

XorNot
1 replies
10h13m

Assuming a state actor is a cope, though. It's looking at the problem and saying "well, we were fighting god himself, so really what could we have done?"

Whereas given the number of identities and time involved, what we really see is: it took what, 2-3 burner email accounts and a few dozen hours over 2 years to almost hack the world?

The entire exploit was within the scope of capability of one guy. Telling ourselves "nation-state" is pretending there isn't a really serious problem.

rightbyte
0 replies
3h7m

Ye, it is a really good scapegoat. You get cover from warmongers in a "don't blame the victim" way too.

richin13
7 replies
17h22m

I read somewhere that some recent changes in systemd would've made the backdoor useless, so they had to rush it out, which caused them to be reckless and get discovered.

sho_hn
6 replies
15h54m

This refers to the fact that systemd was planning to drop its dependency on liblzma (the compression library installed by xz) and instead dlopen it at runtime when needed. Not for security reasons, but to avoid pulling the libs into initramfs images.

The backdoor relies on sshd being patched to depend on libsystemd to call sd_notify(), which several distros had done.

OpenSSH has since merged a new patch upstream that implements similar logic to sd_notify() in sshd itself to allow distros to drop that patch.

So the attack surface of both sshd and libsystemd has since shrunk a bit.

rav
5 replies
14h0m

The backdoor relies on sshd being patched to depend on libsystemd to call sd_notify

I remember when we added sd_notify support to our services at work, I was wondering why one would pull in libsystemd as a dependency for this. I mean, there's a pure-Python library [1] that basically boils down to:

  import os, socket

  def notify(state=b"READY=1"):
      sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
      addr = os.getenv('NOTIFY_SOCKET')
      if addr[0] == '@':
          addr = '\0' + addr[1:]
      sock.connect(addr)
      sock.sendall(state)
With proper error handling, that's about 50 lines of C code. I would vendor that into my application in a heartbeat.

[1]: https://raw.githubusercontent.com/bb4242/sdnotify/master/sdn...
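For comparison, here is what the "proper error handling" version might look like in the same Python shape (a hypothetical sketch, not the linked sdnotify library's actual code): a missing NOTIFY_SOCKET just means the service isn't running under systemd, and send failures are reported rather than raised.

```python
import os
import socket


def notify(state=b"READY=1"):
    """Send a readiness notification to systemd; return True on success.

    An unset NOTIFY_SOCKET means we're not running under a supervising
    systemd, which is not an error - we just report False.
    """
    addr = os.getenv("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr[0] == "@":  # abstract-namespace socket: leading NUL byte
        addr = "\0" + addr[1:]
    try:
        with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
            sock.connect(addr)
            sock.sendall(state)
        return True
    except OSError:
        return False
```

Translated to C, most of the extra lines are the same checks spelled out as errno handling around socket(), connect(), and send(), which is roughly where the "about 50 lines" estimate comes from.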

bdd8f1df777b
2 replies
10h4m

With proper error handling, that's about 50 lines of C code.

Writing proper error handling in C is a very tedious and error prone task. So it doesn't surprise me that people would rather call another library instead.

shiomiru
0 replies
4h36m

Writing proper error handling in C is a very tedious and error prone task.

Managing C dependencies is even more tedious and error prone. And even in C, opening a UNIX domain socket to write a single packet is not that hard.

TacticalCoder
0 replies
9h8m

So it doesn't surprise me that people would rather call another library instead.

Which shall be harder to justify now: "You're calling a gigantic library full of potential security holes just to call one function, to save writing a few lines of code, are you trying to JIA TAN the project?".

cesarb
1 replies
10h25m

I was wondering why one would pull in libsystemd as a dependency for this. I mean, there's a pure-Python library [...] With proper error handling, that's about 50 lines of C code.

There's also a pure C library (libsystemd itself) which already does all that, and you don't need to test all the error handling cases in your 50 lines of C code. It makes sense to use the battle-tested code, instead of writing your own.

XorNot
0 replies
10h8m

The problem is people keep focusing on the libsystemd element because systemd has its big hate-on crew and the vector happened to be something deemed "simple".

The better question though is: okay, what if the code involved was not simple? xz is a full compression algorithm, and compressors have been exploit vectors for a while, so rolling your own is a terrifically bad idea in almost all cases. There are plenty of other, more sophisticated libraries where you could have tried to pull the exact same trick; there's nothing about this being a "simple" inclusion which implies vendoring or rolling your own is a good mitigation.

The saying goes that everyone is always preparing to fight the last war, not the next (particularly relevant because adversaries are likely scouring OSS looking for other projects that might be amenable to this sort of attack - how many applications have network access these days? An RCE doesn't need to be in sshd).

gitaarik
2 replies
11h2m

Which would mean that we have all kinds of active backdoors in our systems without us knowing it.

But wouldn't they be detected at some point by someone? Or would they be silently removed again after some time so the attackers are not revealed?

threeseed
0 replies
7h23m

NSO's Pegasus has existed for years without all of its exploits being detected.

You have to remember there is a lot of code in our systems. And almost every developer is inadequately trained on how to write secure software.

guenthert
0 replies
8h56m

Attacks on a given system or site can be expected to be removed (or auto-removed) after the operation ends, potentially without a trace. But for supply-chain attacks there's always a history (if someone bothers to investigate).

HarHarVeryFunny
0 replies
6h53m

There was a recent Dwarkesh Patel interview of Dario Amodei, CEO of Anthropic, which now has ex-national-security people on its staff. He said the assumption is that if a tech company has 10,000 or more employees, then not only will it almost certainly have leakers, but there is also a high likelihood of an actual spy among the staff. This is why they use need-to-know compartmentalization, etc.

I wonder what our success rate is in identifying industrial spies? 50%?

dabbledabble
4 replies
16h48m

It's also possible that this could be a change in personnel. Maybe the one who earned trust and took over was no longer working for them, and an amateur took over with tight deadlines that led to this gaffe.

swid
1 replies
15h14m

The text near the box makes it sound like these are just the fixes - not adding the test files but updating them.

At that point it would have been clear “the race is on” to avoid detection, so it’s not too surprising someone would work late to salvage the operation.

versteegen
0 replies
8h40m

Whoops, you're right. So this isn't really evidence of anything.

Out of interest I looked up the other commit at that time of day visible in that graph, lying on the arrow. It's [1], which changes the git URL from git.tukaani.org to github.com. Of course, moving the project hosting to GitHub was part of the attack.

[1] https://git.tukaani.org/?p=xz.git;a=commitdiff;h=e1b1a9d6370...

hinkley
0 replies
12h39m

If the mistakes align with the time of day change, perhaps the author had a distraction that pushed the hours and compromised judgement.

badrabbit
4 replies
9h52m

This tracks with other nation-state-sponsored attack patterns. I've had that same reaction before. Most APTs are like this, but some Chinese, US, and Russian APTs are so well funded that every aspect of their attacks is impressive.

Many hackers who work for nation states also have side gigs as crimeware/ransomgang members or actual pentesting jobs.

Reminds me of apt3/boyusec:

https://www.bleepingcomputer.com/news/security/chinese-gover...

It still boggles my mind that Americans are against banning companies like Huawei and ByteDance. The MSS and PLA don't mess around.

cycomanic
1 replies
6h20m

It still boggles my mind that Americans are against banning companies like Huawei and ByteDance. The MSS and PLA don't mess around.

The problem is that many others would have as much reason to ban US companies. I mean, the US has a much more extensive history of using its security apparatus for both intelligence and economic ends, even against its allies.

Now, if everyone bans everyone else, the world economy will grind to a halt pretty quickly.

badrabbit
0 replies
1h26m

China already bans US companies, and this is an active malicious threat, not a vague possibility of harm. You can jail Mark Zuckerberg, but you can't jail Xi Jinping.

afiori
1 replies
4h7m

For ByteDance/TikTok the question is not really "should the US ban this company" but rather "should the government be able to ban any company it wants (and also make it illegal for VPNs to allow US users to access the relevant services/websites) without having to provide any substantial evidence?"

Which is a very different question in practice

badrabbit
0 replies
1h24m

US companies? I'm with you, no! Foreign companies, absolutely. Even the suspicion of malicious abuse should be enough to ban a foreign company. Foreign persons and entities have no rights in the US, and our government owes them as much explanation as they give us when they ban US companies on a whim.

rzwitserloot
3 replies
5h24m

I find this the most plausible explanation by far:

* The highly professional outfit simply did not see teknoraver's commit to remove liblzma as standard dependency of systemd build scripts coming.

* The race was on between their compromised code and that commit. They had to win it, with as large a window as possible.

* This caused the amateurish aspects: haste.

* The performance regression is __not__ big. It's lucky Andres caught it at all. It's also not necessarily all that simple to remove; it's not simply a bug in a loop or some such. If I were the xz team I'd have enough faith in all the work that was done to give it high odds that they'd get there before discovery. That they'd have time; months, even.

* The payload of the 'hack' contains fairly easy ways for the xz hackers to update it. They actually used this to fix a real issue where their hackery caused problems with valgrind that might have led to discovery, and they also used it to release 5.6.1, which rewrites significant chunks; I've not yet read, nor do I know of, any analysis of why they changed so much. Point is, it's reasonable to think they had months, and therefore months to find and fix issues that risked discovery.

* I really can't spell this out enough: WE GOT REALLY LUCKY / ANDRES GOT LUCKY. 9 times out of 10 this wouldn't have been found in time.

Extra info for those who don't know:

https://github.com/systemd/systemd/commit/3fc72d54132151c131...

That's a commit that changes how liblzma is a dependency of systemd. Not because the author of this commit knew anything was wrong with it; but, pretty much entirely by accident (although removing deps was part of the point of that commit), it almost entirely eliminates the value of all those 2 years of hard work.

And that was with the finish line in sight for the xz hackers: On 24 feb 2024, the xz hackers release liblzma 5.6.0 which is the first fully operational compromised version. __12 days later systemd merges a commit that means it won't work__.

So now the race is on. Can they get 5.6.0 integrated into stable releases of major OSes _before_ teknoraver's commit that removes liblzma's status as direct dep of systemd?

I find it plausible that they knew about teknoraver's commit _just before_ Feb 24th 2024 (when liblzma v5.6.0 was released, the first backdoored release), and rushed to release ASAP, before doing the testing you describe. Buoyed by their efforts to add ways to update the payload which they indeed used - March 8th (after teknoraver's commit was accepted) it was used to fix the valgrind issue.

So, no, I don't find this weird, and I don't think the amateurish aspects should be taken as some sort of indication that parts of the outfit were amateurish. As long as it's plausible that those aspects were simply due to time pressure, it sounds like a really bad idea to make assumptions in this regard.

olejorgenb
1 replies
3h3m

They could also have regrouped and found another way to do the exploit, given the relative ease of updating the payload (though it's probably a limited number of times you could change the test blobs without causing suspicion?). But I agree this explanation is plausible.

anarazel
0 replies
1h18m

If lzma isn't loaded as part of sshd, the path from an lzma backdoor to sshd gets a hell of a lot more circuitous and/or easier to catch. You'd pretty much need to modify the sshd binary while compressing a package build, or do something like that to the compiler, to then modify sshd components while compiling.

anarazel
0 replies
1h6m

The performance regression is __not__ big. It's lucky Andres caught it at all. It's also not necessarily all that simple to remove; it's not simply a bug in a loop or some such. If I were the xz team I'd have enough faith in all the work that was done to give it high odds that they'd get there before discovery. That they'd have time; months, even.

True, in the context of sshd logins it isn't that big, but ~500ms to get from _start() to main() isn't small either, compared to the normal cost of that phase of library startup. Their problem was that the sshd daemon fork+exec's itself to handle a connection, so they had to redo a lot of the work for each connection.

I suspect they started off with much smaller overhead, which then increased gradually with every feature they added, just like in many software projects. Here's the number of symbols being looked up that a reversing effort has documented: https://github.com/smx-smx/xzre/blame/ff3ba18a39bad272ff628b... https://github.com/smx-smx/xzre/blob/ff3ba18a39bad272ff628bb...

Afaict all of this happens before there's any indication of the attacker's keys being presented - that's not visible to the fork+exec'd sshd until a lot later.

They needed to do some of the work before main(), to redirect RSA_public_decrypt(). That would have been some measurable overhead, but not close to 500ms. The rest of the startup could have been deferred until RSA_public_decrypt() was presented with something looking like their key as part of the ssh certificate.

heyoni
3 replies
17h24m

I thought performance was actually fine? It only dragged when using valgrind, hence the rhetoric that it took some really unlikely circumstances for it to be detected that quickly.

heyoni
0 replies
3h14m

Yeah, you're right. 500ms vs 10ms on an older server. I was thrown off by this statement and thought only the perf/valgrind/gdb attachments were what really brought it to the surface.

Initially starting sshd outside of systemd did not show the slowdown, despite the backdoor briefly getting invoked. This appears to be part of some countermeasures to make analysis harder.

Performance was much worse, but not enough to actually nudge the author into digging into it until the random ssh logins started piling up:

https://twitter.com/AndresFreundTec/status/17741907437768663...

https://nitter.poast.org/AndresFreundTec/status/177419074377...

tw04
2 replies
15h29m

The fact that the guys developing the code weren't also simultaneously running valgrind and watching performance isn't hard to believe. They were targeting servers and appliances; how many servers and appliances do you know of that run valgrind in their default image?

Sure, in hindsight that's a "duh, why didn't we think of that" - but it's also not very hard to see why they didn't. They were likely testing against the system images they were hoping to compromise, not joe-schmoe developer's custom image.

dralley
1 replies
12h27m

In theory they should probably have been testing against the CI pipelines of Debian and Fedora/CentOS, as that's the moat their backdoor had to cross.

tw04
0 replies
11h39m

They put code in to, in theory, avoid running in a development environment.

rgmerk
2 replies
17h14m

Mistakes can happen to anyone, even competent state-sponsored organisations. And intelligence agencies are sometimes rather less ruthlessly competent than imagined (Kremlin assassinations in the UK have been a comedy of errors [1]).

Maybe another backdoor, or alternative access mechanism they were using, got closed and they wanted another one in a hurry.

[1] https://en.wikipedia.org/wiki/Poisoning_of_Alexander_Litvine...

cesarb
0 replies
10h33m

Maybe another backdoor, or alternative access mechanism they were using, got closed and they wanted another one in a hurry.

Or maybe the opportunity window for the mechanism this backdoor would use was closing. According to the timeline at https://research.swtch.com/xz-timeline there was a github comment from poettering at the end of January which implied that the relevant library would be changed soon to lazy load liblzma ("[...] Specifically the compression libs, i.e. libbz2, libz4 and suchlike at least. And also libpam. So expect more like this sooner or later."), as in fact happened a month later. The attacker had to get the backdoor into the targeted Linux distributions before the systemd change got into them.

Of course, the attacker could instead take the loss and abandon the approach, but since they had written that amount of complex code, it probably felt hard to throw it all away.

renk
2 replies
11h0m

That performance regression could also be a way to identify compromised systems.

Rygian
1 replies
10h20m

If all systems are compromised you don't need to identify anymore.

renk
0 replies
9h27m

Well, for now the payload delivery relies on things like x86 and glibc... still enough.

manquer
2 replies
15h56m

Why do we assume the person who built up the trust is the attacker?

Is it not possible the attacker simply took over the account of someone genuinely getting involved in the community, either hacked or just with a $5 wrench, and then committed the malicious code?

ericpruitt
1 replies
14h44m

Is it not possible the attacker simply took over the account of someone genuinely getting involved in the community, either hacked or just with a $5 wrench, and then committed the malicious code?

Given the behavior of the accounts that applied pressure on the original xz maintainer, this seems unlikely to me.

ilvez
0 replies
12h29m

Or they just bought the guy at some point, because as I understand it the malicious behaviour started quite recently.

ufo
1 replies
6h30m

IIRC, according to Andres Freund the perf regression only happened on machines using the -fno-omit-frame-pointer setting, which was not the default at that point.

anarazel
0 replies
1h24m

The -fno-omit-frame-pointer bit is separate from the slowdown. -fno-omit-frame-pointer led to valgrind warnings, but no perceptible slowdown in most cases (including this one).

treffer
1 replies
2h51m

Well, there is a pretty logical explanation.

Libsystemd was moving to a dlopen architecture for its dependencies.

This means the backdoor would not load, as the sshd patch only used libsystemd for sd_notify(), which does not need liblzma at all.

So they IMHO gave it a last shot. It's OK if it burns as it would be useless in 3 months (or even less).

The collateral is the backdoor binary, but given enough engineering power it will be irrelevant in 2-3 months too.

hackeraccount
0 replies
1h59m

I think this is probably the right answer.

The only thing that makes me think this was amateurs/criminals instead of professionals is that I tend to think that professionals are more interested in post attack security.

So if the gate was closing, an amateur would say, "Act now! Let's get what we can!" A professional would say, "This is all going to come to light real soon; our exploit won't be ready and there's a high chance of this all falling apart. Pull out everything, cover our tracks to the degree we can, and find another opportunity to pull this off."

But then again, I also think only professionals would work an exploit that takes years. Criminals by their nature want a quick payout (if I had the patience for a long con I'd just get a job), and motivated individual amateurs (i.e. crazy people) rarely have a wide enough skill set.

k8svet
1 replies
14h14m

(really tinfoil hatty - ) I almost wonder if it's misdirection?

c6400sc
0 replies
13h58m

Or a whitehat who couldn't get attention another way?

est
1 replies
14h44m

super professional in some ways, and rather amateur in others

so this has to be coordinated teamwork instead of a single hacker, right?

bee_rider
0 replies
14h13m

It could be somebody who was just good at some things and not at other things.

braiamp
1 replies
17h51m

There are thousands of ways that performance can be impacted. No matter how good you are at developing, there will be a workload that takes a performance hit. Phoronix has several times reported issues to the Linux kernel because of performance regressions in their test suite. Performance tests tend to take more time than correctness tests.

ufmace
0 replies
17h34m

Not seeing that as a point. It's probably not possible to have no performance hit whatsoever when you're checking the exact nanosecond count of every little thing. But usually nobody is doing that. It shouldn't be hard to not cause a substantial enough performance regression in SSHD logins that somebody who wasn't already monitoring that would notice and decide to dig into what's going on.

I'm not sure if it's been revealed yet what this thing actually does, but it seems like all it really needs to do is to check for some kind of special token in the auth request or check against another keypair.

varjag
0 replies
9h9m

Different departments.

thih9
0 replies
6h25m

For all we know earlier operations may have been high quality (and still undetected), this one for some reason may have been comparatively not that important and the actor decided to cut costs.

tdudhhu
0 replies
7h26m

Sometimes I think it could be someone who was forced to embed the backdoor but was smart enough to make it detectable by others without raising suspicion by the entity that was forcing him.

swed420
0 replies
5h45m

The tin foiler in me still suspects it could be Microsoft who planted it to make FOSS look bad.

setgree
0 replies
7h37m

If Google could release Gemini with a straight face, is it so hard to believe that this shadowy org might fail in an even subtler way?

Patrick McKenzie observes elsewhere that criminal orgs operate more or less like non-criminal ones — by committee and by consensus. By that light, it’s not so hard to fathom why this ship could have been sunk by performance degradation arising from feature bloat. Companies I’ve worked at have made more amateurish mistakes.

rlt
0 replies
12h13m

This doesn’t seem too surprising to me.

Any moderately competent developer could gain maintainership of a huge percentage of open source projects if they’re being paid to focus solely on that goal… after all, they’re competing mostly against devs working on it part time or as a hobby.

If this is state sponsored, they likely have similar programs in a large number of other projects.

jcpham2
0 replies
6h8m

The maintainer account (the identity) could have been sold to a third party. There are secondary markets [1] for this.

[1] https://ogusers.gg/ is the largest clear net marketplace for buying and selling usernames on popular sites

iforgotpassword
0 replies
8h29m

My favorite theory is that Jia Tan is a troll. They tried some silly patches and were surprised they got accepted. What started as a little joke on the side because covid made you stay at home slowly spiraled into "I wonder how far I can push this?"

Two years are enough to make yourself familiar with open ssh, ifuncs etc.

Then you do silly things like "hey um I need to replace the bad test data with newly generated data, this time using a fixed seed so that they are reproducible", but you don't actually tell anyone the seed for the new data. Then you lol when that gets past the maintainers no questions asked.

In the end they maybe just wanted to troll a coworker, like play some fart noises while they listen to music, and since they use Debian well, you better find a way to backdoor something in Debian to get into their machine.

Like back in the day when Sasser sabotaged half the internet and "security experts" said they had a plausible lead to Russia – which as it turned out was because said security experts ran strings on the binary and found Russian text – put there by the German teen who wrote Sasser "for teh lulz".

fasa99
0 replies
1h39m

My thought is they have 50-200 systems programmers where it's like "Hey George, I know you like to contribute to linux, have at it, we just need you to put a little pinhole here here and here" Then they have 5-20 security gurus / hackers who are like "Thanks George, I made this binary blob you need to insert, here's how you do it"

So the systems developer's job is to build a reputation over years, as we see; the security guy's job is to handle the Equation Group-style heavily encrypted/obfuscated exploits.

This would explain a mix of code quality / professionalism - multiple people, multiple roles. Of course the former "systems programmer" role need not be a full-time employee. They could have motivated a white hat developer to change hats, either financially or through an offer they can't refuse.

cykros
0 replies
8h38m

States have spent decades if not centuries building up espionage apparatus.

All of the testing and development that goes into avoiding bugs has a tendency to run counter to the goals of secrecy or speed.

Secret, fast, well tested. Choose 2.

cladopa
0 replies
8h59m

You have probably seen lots of films deifying state-sponsored organisations.

SSOs are made of humans under real constraints on time, money, and personnel. It is almost impossible to make something perfect that nobody in the world could detect.

In the world there are people with different backgrounds who could use techniques that you never accounted for. Maybe it is a technique used in biology to study DNA, or for counting electrons in a microscope, or for measuring background radiation in astronomy.

For example, I have seen strange people like that reverse-engineer "impossible" encrypted chips in a very easy way because nobody expected it. Someone spends $10 million protecting something and someone else removes the protection using $10.

bartimus
0 replies
11h50m

What's also surprising is how quickly the community seems to be giving someone the benefit of the doubt. A compromised maintainer would probably exactly introduce a fake member joining the project to make certain commits. They might have a contact providing the sophisticated backdoor that they need to (amateurishly) implement.

ThePowerOfFuet
0 replies
7h19m

On 2024-02-29, a PR was sent to stop linking liblzma into libsystemd [0].

Kevin Beaumont speculated [1] that "Jia Tan" saw this and immediately realized it would have neutered the backdoor and thus began to rush to avoid losing the very significant amount of work that went into this exploit; I think he's right. That rush caused the crashing seen in 5.6.0 and the lack of polish which could have eliminated or reduced the performance regressions which were the entire reason this was caught in the first place; they simply didn't have the time because the window had started to close and they didn't want all their work to be for nothing.

[0]: https://github.com/systemd/systemd/pull/31550

[1]: https://doublepulsar.com/inside-the-failed-attempt-to-backdo...

AndyMcConachie
0 replies
9h7m

At this point we don't know who did this. It could have been a single really smart person, it could have been criminal, it could have been a state intelligence agency.

We don't know shit.

atomicnumber3
39 replies
18h28m

The sophistication here is really interesting. And it all got caught because of a fairly obvious perf regression. It reminds of a quote I heard in one of those "real crime" shows: "There's a million ways to get caught for murder, and if you can think of half of them, you're a genius."

TheDudeMan
11 replies
17h39m

Yet most murders go unsolved.

TheBlight
5 replies
17h23m

Depends on locale. In Germany something like 90% of murder cases are solved/cleared.

In the U.S., I suspect a majority of the murders technically unsolved by police are cases where the identity of the perpetrators is somewhat of an open secret within communities that don't trust law enforcement (and LE similarly has little interest in working with them either.)

kybernetyk
4 replies
16h59m

In Germany something like 90% of murder cases are solved

You must watch out when reading the German crime statistics. "Solved" which is marked as "aufgeklärt" in those statistics just means that a suspect has been named. Not that someone actually did it/has been sentenced for the crime.

https://de.wikipedia.org/wiki/Aufkl%C3%A4rungsquote#Deutschl... 2nd sentence
TheBlight
3 replies
16h47m

Is it reasonable to assume a material number of cleared murders in Germany result in no charges and/or no conviction? (Genuinely curious.)

OJFord
2 replies
7h38m

Surely it's pretty common everywhere to have at some point a suspect ('solved!') who is then released, because you lack evidence, realise it's not them, whatever. A suspect isn't necessarily convicted even if you do ultimately convict someone.

trogdor
1 replies
2h6m

A suspect isn't necessarily convicted even if you do ultimately convict someone.

What does that mean?

OJFord
0 replies
1h28m

Turns out it was someone else, and you convict that other person. You thought you had them, were wrong, but did then ultimately solve the case.

It happens loads too; frequently in high-profile stuff on the news they'll have a suspect who's somehow close to it, arrest them, but then they're released once police are satisfied with their alibi or whatever.

ssl-3
4 replies
17h35m

Then most murderers are geniuses.

Or most murder investigations are (by definition) incompetent.

Or (more likely): The old idiom quoted above is stupid and useless. (That it presumes that murdering and getting away with it is somehow a noble or esteemed deed should be damning enough.)

graphe
3 replies
17h22m

Wrong.

There’s no money or benefits in solving crimes. It could be done easily in many cases but nobody cares about certain people like gang members. Lots of cases where the murderer tells everyone but nobody cares.

ssl-3
2 replies
16h57m

Wrong?

Which part is wrong? Only two of these three choices can be wrong. The remaining one must be correct.

withinboredom
1 replies
13h54m

Technically, all 3 could be wrong and an unknown 4th option could be correct. That seems to be what they are proposing here.

In both cases, the premise is unclear so good luck!

ssl-3
0 replies
12h14m

Eh, good call I guess. I didn't see that aspect.

The 4th option they may appear to propose suggests that murder investigators don't get paid -- neither in money, nor in benefits.

So, to that end: As far as I know, that's not usually the case with government employees, and it is always actionable when it does happen to be the case.

xorvoid
10 replies
18h20m

Maybe I’m just being naive or too trusting, but this is sort of what I think when folks are getting worried about other backdoors like this in the wild.

Is it that they just got unlucky to get caught, or is this type of attack just too hard to pull off in practice?

I’d like to think the latter. But, we really don’t know.

lyu07282
6 replies
17h36m

One measure might be that we never really found that many backdoors. Over time there is quite a large accumulation of hackers looking at the most mundane technical details.

This may be supported by the regular vulnerabilities found in sometimes decades-old software, since vulnerabilities are much harder to find than backdoors. For example, Shellshock was in ~30-year-old code, PwnKit ~12, and Log4Shell ~10.

So if backdoors were commonplace, we probably would've found more by now.

Perhaps that's changing now, the xz backdoor will for sure attract many copycats.

sjs382
1 replies
17h2m

Over time there is quite a large accumulation of hackers looking at the most mundane technical details.

Are there though? Even if true, there are probably enough places with very few eyes on them.

almostnormal
0 replies
10h0m

Maybe something could be built to put more eyeballs on things.

A kind of online tool that collects the sources used to build some relevant distributions, with a web front-end that shows a random piece of code (filtered by language, with the probability of being shown increasing the less recently/frequently/qualified it has been reviewed) to a volunteering visitor to review. The reviewer leaves a self-assessment of their own skills (fed back into the selection probability) and any potential findings. Tool staff double-check findings (so that the tool does not create too much noise) and forward them to the original authors (bugs) or elsewhere (backdoors).

A bit like Wikipedia's random-page feature.

hinkley
1 replies
12h36m

I’m not convinced that if I found a bug I’d notice all the security implications of fixing it. Occasionally yes, but I wonder how many people have closed back doors just by fixing robustness issues without appreciating how big a bug they found.

beeboobaa3
0 replies
11h32m

Sure, but this xz backdoor is far, far more involved than that.

cjbprime
1 replies
15h1m

Doesn't your data prove the opposite point? There are so many vulnerabilities and so few people looking for them that even the thirty year old ones have barely been found.

A healthy feedback loop would have trended the average age of each vulnerability at the time of detection to be *short*.

formerly_proven
0 replies
7h33m

Most backdoors that are found are really obvious garbage. Like hardcoded credentials or keys in appliances.

devcpp
0 replies
12h59m

Note he's not a cybersecurity researcher; he's mostly a database engineer (a great one, making significant PostgreSQL contributions), so I'm not sure he's familiar with the statistics and variety of backdoor attempts.

ordu
0 replies
17h42m

I feel the same way. It is too much complexity in one place, it couldn't work without hiccups.

anamax
5 replies
17h54m

"There's a million ways to get caught for murder, and if you can think of half of them, you're a genius."

Does "think of half" apply to the folks trying to solve murders?

atomicnumber3
2 replies
15h40m

Nah, it applies to the person trying to get away with the murder. People will do really, really intricate jobs of trying to cover up, then slip up because like, they leave a receipt in their car that accidentally breaks their alibi.

hackeraccount
0 replies
1h46m

My favorite get away with murder stories are the imperfect frame up type stories. So commit a crime and lay a trail of bread crumbs to a false path that will be picked up by the investigators and then later on easily refuted by yourself - because you did it but not in the way you're accused of.

graemep
0 replies
9h23m

A clever murderer will disguise the murder as an accident, suicide or natural death. It will not even show in the stats as unsolved.

I got the idea from fiction (specifically Dorothy Sayers), but the number of murders Harold Shipman committed before anyone even noticed makes it plausible that people with relevant expertise (doctors, pharmacists, cops, etc.) could easily get away with murder. If Shipman had stopped after the first 100 or so he would have gotten away with it.

beautifulfreak
0 replies
6h0m

That's from Body Heat, said by Mickey Rourke to William Hurt. "...you got fifty ways you're gonna fuck up. If you think of twenty-five of them, then you're a genius - and you ain't no genius." (But a million sounds closer to the truth.)

ball_of_lint
0 replies
13h54m

Even if you can think of 10 relatively uncorrelated reasons, that lets you catch the genius murderer 1-(1/2^10) of the time, which is quite good.

dboreham
4 replies
18h22m

And it all got caught because of a fairly obvious perf regression

Always possible that was "parallel construction" evidence.

Someone at a TLA discovered the attack by some other means, had a quiet Signal chat with a former colleague who works at MS...

rtpg
0 replies
17h59m

Wouldn't it have been easier to just have someone drive-by comment on the changes in the source tree? Like "what's up with this?"

Though I guess you end up with some other questions if it's totally anonymous. But I often will do a quick look over commits of things that I upgrade (more for backwards compat questions than anything else).

paleotrope
0 replies
18h18m

Interesting possibility, but it seems like the "discovery" story is too complex and unbelievable.

eli
0 replies
18h18m

There doesn’t seem to be any evidence to support this whatsoever yet it’s nearly impossible to disprove. Classic conspiracy theory.

GrantMoyer
0 replies
18h10m

It seems like a much more suitable parallel construction story to invent in this instance would be something like "there were valgrind issues reported, but I couldn't reproduce them, so I sanity checked the tarball was the same as the git source. It wasn't."

rdtsc
2 replies
17h46m

I can believe it’s because it was a team behind the account. Someone developed the feature and another more careless or less experienced one integrated it. Another one possibly managing sock puppets and interacting in comments and PRs.

akira2501
1 replies
13h49m

I wonder what the web admin control panel for the "fake human" looks like, or if it even rises to that level of sophistication yet.

cookiengineer
0 replies
12h7m

It's called AIMS (Advanced Impact Media Solutions) and is used by several state-level actors these days, both pro- and contra-NATO.

Well, at least that one is the most sophisticated one on the market (as of now) and Team Jorge is probably making shitloads of money with it while not giving a damn about who uses their software in the end.

INTPenis
0 replies
11h58m

Given enough time and local testing they could have gotten away with it.

I'm positive their deadline changed due to @teknoraver's patch in libsystemd.

MBCook
20 replies
18h36m

Luckily, thanks to Elon, we’ll never know, since you have to have a Twitter account to view the thread.

ethanwillis
9 replies
18h18m

No thanks, I support an open web.

mkl
5 replies
17h54m

Ignoring the closed web is not the same as supporting the open web. I support whatever mirrors and tools get closed knowledge into the open.

(Edited to remove snark.)

kibwen
3 replies
17h43m

> Choosing ignorance over knowledge

If there's some piece of knowledge that's absolutely, positively critical to my life, it will exist somewhere that actually matters, not on Twitter.

mkl
1 replies
17h34m

Sure, but almost no knowledge that is interesting, valuable, useful, etc., is absolutely, positively critical to your life. Almost nothing on HN has that level of importance, but you are here learning interesting things, and unfortunately the first place some of those things appear is still Twitter.

k8svet
0 replies
17h16m

Where does it end, fellow person? What is going to be the excuse/defense/workaround when Nitter instances are completely suffocated? Just suck it up and sign up so you can continue to participate on an increasingly hostile, toxic, manipulated platform in service of a narcissist's deranged ego? Because Joe Bob Expert is too lazy to post elsewhere? No, I'm sorry, but when is enough, enough?

I know I'm missing out on good content, and I don't care. I have _some_ self-respect.

k8svet
0 replies
17h33m

I don't even know if that's true, but I don't care. Twitter, at this point, is far more egregious than reddit, and I swore months ago I'd never contribute there. It blows my mind that people still play in Elon's piss-filled sandbox because they love the dopamine hits of bot-inflated engagement metrics.

Yes, you casual reader who keeps posting on Twitter out of laziness and momentum, I'm absolutely talking about you. Your laziness is hurting everyone. And I'm not alone; you're limiting your audience and prioritizing, well, people too lazy to get off Twitter, while ignoring the technical, prescient (observant, at this point?), informed crowd that has left for elsewhere. /shrug

ethanwillis
0 replies
17h47m

And yet a short time later: "Instance has been rate limited. Use another instance or try again later."

beepbooptheory
2 replies
17h59m

You support the open web by stealing from the closed one!

int_19h
0 replies
7h23m

Hack the planet!

Rodeoclash
0 replies
17h42m

I'm glad you get it!

publius_0xf3
0 replies
13h16m

Wow, it works. But it's not random, afaict. I keep getting the privacydev instance.

How do they keep theirs up and running?

opello
0 replies
17h48m

Aren't all the nitter instances going to die with the anonymous guest account restrictions? I've not closely followed those developments.

jxyxfinite
0 replies
17h55m

I thought nitter was officially dead. Nice to see some instances are still working

justinclift
0 replies
17h54m

Oh. Nitter didn't stop working a few weeks ago after all?

Oops, now that's giving:

    Instance has been rate limited.
    Use another instance or try again later.
So maybe "kind of working" is the better description. :)

avalys
4 replies
18h0m

How much does a Twitter account cost?

ethanwillis
0 replies
17h46m

Your personal information.

defrost
0 replies
17h49m

Dignity ...

GaggiX
0 replies
17h56m

Probably a bit of mental health.

Brian_K_White
0 replies
17h5m

approximately twice as many principles as it's worth

UncleOxidant
17 replies
17h29m

This. Could people stop posting xitter links and post threadreaderapp links like this instead. Thank you.

scubbo
5 replies
17h6m

Amusing. I was always irritated by the very concept of threadreaderapp and by people's propensity for posting the links (just read it on the website! There's no need to spend extra compute to join up some divs!) - but Elon's ever-increasing breakage of the site now makes it genuinely useful.

k8svet
4 replies
17h0m

"ever increasing"? Twitter is completely, 110% unusable without an account (and dear god, I dare some of you to make a new account and see what the process and default content is. It's gross).

I say 110% not to be hyperbolic -- It shows you non-latest tweets on profiles, it doesn't let you see tweet threads or replies, even from the original poster when they post a chain of tweets. I literally can't read any of this content save for the threadreadapp link.

...

Dalewyn
2 replies
16h7m

I made an account after Musk took over and Mysterious Twitter X started mandating logging in, in large part since he made the place tolerable and I follow illustrators and official accounts for games I play anyway.

Making the account wasn't that annoying. Once upon a time they demanded my phone number and that was obnoxious to the point of noping out, but nowadays (after Musk took over?) they also take email instead. Email for registering accounts is nothing new, so no big deal; been doing that since 2002 when I registered my first forum account.

After I made my account I went and followed all the accounts I usually follow, and my recommendations got relevant in very short order: Posts from illustrators, the games, and players who play those games.

So, thanks Musk. You've at least convinced one guy to make an account where Dorsey flatly couldn't, and made the guy even happy about it which was pleasantly surprising.

k8svet
1 replies
16h4m

in large part since he made the place tolerable

oh yeah? Interesting you chose not to elaborate on how, given the statements I made about ruining public access stand.

But in summary, you're saying you use the platform the same way it was usable before Elon bought it, other than all of the things I mentioned that make it unusable for those not logged in? Let's be frank, Elon made it 200% worse, then relinquished to only 150% worse, and that's a win?

Dalewyn
0 replies
16h2m

You know what happened when Musk took over and quite literally fired all of Twitter Japan literally overnight?

The political manipulating stopped. Those Twitter trends? Before it was always about politics nobody gave a stinking fuck about. After it was always about games, anime, manga, music, and other pop cultural things almost everyone cares about.

As a side bonus, his mere presence as the new owner made the political asshats exit themselves into a completely separate corner of the internet.

So yes, he made the place tolerable and I have absolute appreciation for him for achieving that.

manquer
0 replies
15h48m

ever increasing

I don’t think it means currently it is reasonable , just that things are continuing to worsen and we have not yet reached a bottom even if a bottom exists.

pquki4
5 replies
16h40m

I'd say the original author should just post something as an article instead of tweets. A blog post. Github md file. Github gist. Even pastebin. I don't care its format or where it is hosted; it does not need to be well formatted and could be as casual as can be -- I don't expect to read a well-written article, and I know that would take a lot of effort. I just want to see something that is not a series of tweets.

joshmanders
2 replies
16h19m

I don't care its format or where it is hosted

Except not formatted as a tweet thread nor hosted on Twitter, right?

creato
1 replies
16h12m

Twitter is literally unusable if not logged in. All I see is the first tweet, and every link I can find that might reveal the rest of the thread takes me to a login page.

BlueFalconHD
0 replies
14h3m

Also unusable on simpler hardware. The browser on the Kindle Paperwhite I am typing this on is just slightly too old to run Twitter. I get the unsupported browser page, which funnily enough still uses the old colors and logo.

wrs
0 replies
16h17m

I agree with you 200%, especially because now it’s not only an idiotic format, reading it lends support to Elon’s X (and I no longer have an X account for that reason so can’t read it). But I’m afraid this sentiment long since reached the point of “yelling at clouds”.

m3kw9
0 replies
13h38m

Then how will he gain followers?

ranger_danger
2 replies
17h27m

How do they get around the account/resource limits?

callalex
0 replies
17h0m

Web scraping is a cat-and-mouse game but all the cats got laid off.

baobun
0 replies
17h23m

Nice try, Elon.

pvg
0 replies
13h30m

The site conventions are to post original sources and workarounds in the thread. And to avoid gumming up threads with annoyances-of-everyday-web-life meta.

noman-land
13 replies
16h27m

I haven't been following this story super closely but I find it extremely odd that I've heard zero discussion about the perpetrator of this hack.

db48x
4 replies
15h51m

What’s to discuss? Nothing is known about him.

chrismartin
3 replies
14h52m

There's a lot of metadata about when/how they used git and IRC, and some preliminary analysis on same. Another surname in one of the commits. An apparent LinkedIn account. (See heading "OSINT" in https://boehs.org/node/everything-i-know-about-the-xz-backdo... .)

A lot of these tracks could be intentionally manipulated by a sophisticated actor to disguise their identity, but it's not "nothing".

db48x
2 replies
12h39m

Like I said, we don't know anything worth having a real discussion about. Maybe he was in the +03 time zone, and pretending to be in +08, but that's not enough to base a discussion on.

qarl
1 replies
12h16m

You're discussing it.

samus
0 replies
5h41m

We are all speculating.

Adverblessly
2 replies
7h44m

but also Israel (IST)

I had the same thought myself initially, but the analysis suggests a work-week that includes Fri, which precludes Israel (where the work week is Sun-Thu and not Mon-Fri), as well as celebrating Christmas and New Year's which are not official holidays in Israel. It isn't uncommon for younger people to take a day off for New Year's since it is an excuse to party, or for Jews with eastern European origins to celebrate Novy God, but I don't know of any Christmas celebrations.

Obviously these could be faked, but then why fake a Mon-Fri work week and not also fake the work hours to match it? To me it seems like an unlikely hypothesis.

compsciphd
1 replies
7h10m

and I believe someone pointed out that there were commits on Yom Kippur? That is a day basically no one works. The skies are closed, the roads are empty and everyone is bicycling on all the available streets, including highways.

yonatan8070
0 replies
3h17m

Israel isn't home only to Jewish people who don't work on Yom Kippur; there are significant populations of both Muslims and Christians.

I don't think you can rule out any country based on email and commit timestamps; the attacker could have been further east with a late work day, or further west with an early work day.

sofrimiento
2 replies
8h36m

I would be interested in semantic analysis of the communication from the involved online personas, similar to what was done for Satoshi, to point to a cultural direction. Would also be interesting to see if there were semantic style differences over time pointing to different people acting as the personas.

Since it would be quite a lot of code that has been committed as well, would also be interesting to see if code style differences could be found pointing to more people involved than only one.

samus
0 replies
5h42m

There is nothing of value to gain from such analysis. Even if evidence turns up, it would be even more flimsy than graphology.

alcover
0 replies
3h38m

I fully agree. Stylometry is surprisingly accurate (as proven on this very forum's corpus) and would be quite involved to hide from.

SAI_Peregrinus
0 replies
15h47m

There's been lots of speculation. It seems likely to be more than one person, likely either a ransomware group or a national intelligence agency.

EvanAnderson
7 replies
17h33m

Has anybody done a writeup of the obfuscation in the backdoor itself (not the build script that installs it)? I threw the binary into Ghidra and looked through the functions it found, but having no familiarity with the ifunc mechanism it uses to intercept execution, I gave up and set it aside for others.

I'd have to assume since there's anti-debug functionality that the code is also obfuscated. Since it shipped as an opaque binary I assumed at least some of the code would be encrypted with keys we don't have (similar to parts of the STUXNET payload).

bilekas
6 replies
17h28m

No full dissection of the backdoor itself has been done yet. As for the anti-debug, sure, you can avoid things like that with flags, but this was done at compile time so it's a bit trickier.

I'd have to assume since there's anti-debug functionality that the code is also obfuscated.

Not really; as above, it was done at build time, so the backdoor has already set up its home.

It has shown the problem of package managers not taking source from the right place.

bannable
2 replies
16h56m

What do you mean by "package managers not taking source from the right place"?

sho_hn
1 replies
15h50m

I assume they are advocating for package managers to preferably grab signed git tags from repositories rather than download tarballs.

The backdoor relied on the source in the tarballs being different from the git tag, adding additional script code. This is common for projects that use GNU autotools as a build system; maintainers traditionally run autoconf so that users don't have to, and ship the results in the tarballs.

I agree that this should be discouraged, and that distros should, when possible, at least verify that tarball contents are reproducible / match git tags when importing new versions.

bilekas
0 replies
9h12m

Correct. The onus should now be on the package delivery side to provide transparent packages, maybe? Maybe add the extra step of pulling instead of trusting the push from maintainers? It's just an extra step that might get more eyes. All said, even in hindsight I wouldn't have called this one out.

EvanAnderson
2 replies
17h15m

So ifunc is a link-time thing and not a runtime thing, then? (My background, when it comes to linking, is DOS and Windows.)

asveikau
1 replies
11h50m

It's runtime, but I think dynamic linker. The Windows equivalent would be if a library patched some code at dllmain. Actually the Detours library in the Windows world is similar. But it's for performance; the idea is you would patch some function references based on the CPU revision to get faster code specific to your CPU.

bilekas
0 replies
9h1m

This is a really nice Windows analogy, though it goes without saying this package wasn't aiming for Windows; ironically, it chose the path of least resistance (as we've seen so far). If you're hooked into an sshd service, you're golden. They put 5 checks (maybe comically) in a row to make sure it was Linux in this case... who knows what's next.

jpalawaga
5 replies
17h1m

It's a rather good thing that this was found before it made it out broadly.

Not just for obvious reason of not wanting an unknown party to have RCE on your infrastructure. I think as people keep digging they will eventually formulate a payload which will allow the backdoor to be used by anyone.

As bad as it is for a single party to have access, it's much worse for any (every?) party to have access.

justusthane
4 replies
14h46m

Isn’t that more or less impossible since the payload is a private RSA key?

timschmidt
0 replies
13h30m

See https://en.wikipedia.org/wiki/Dual_EC_DRBG for another backdoor requiring a private key, in which the key was simply replaced in a subsequent supply chain attack(!) with a key known to the attacker:

"In December 2015, Juniper Networks announced[55] that some revisions of their ScreenOS firmware used Dual_EC_DRBG with the suspect P and Q points, creating a backdoor in their firewall. Originally it was supposed to use a Q point chosen by Juniper which may or may not have been generated in provably safe way. Dual_EC_DRBG was then used to seed ANSI X9.17 PRNG. This would have obfuscated the Dual_EC_DRBG output thus killing the backdoor. However, a "bug" in the code exposed the raw output of the Dual_EC_DRBG, hence compromising the security of the system. This backdoor was then backdoored itself by an unknown party which changed the Q point and some test vectors.[56][57][58] Allegations that the NSA had persistent backdoor access through Juniper firewalls had already been published in 2013 by Der Spiegel.[59] The kleptographic backdoor is an example of NSA's NOBUS policy, of having security holes that only they can exploit."

samus
0 replies
5h47m

If it is known to belong to a widely deployed backdoor that can't be patched away in time, then it may be worth recovering the key by brute force using supercomputers. Of course, such capabilities are largely restricted to nation states.

jpalawaga
0 replies
4h42m

That sort of assumes that the backdoor in question is free of all bugs (for example, buffer overflows).

d-z-m
0 replies
3h2m

From my understanding, the command payload is not an RSA private key. It is an SSH certificate's public key field, a section of which contains the signed command to be executed.

bilekas
5 replies
17h56m

Without having a Twitter account I have a really hard time following these threads. Is there some write up?

Edit: Check comments.

Yes, the backdoor hasn't been decompiled/reverse engineered yet. But it feels like clickbait to say "it goes deeper"... obviously. Nobody knows what it fully does yet; there was no assumption of knowing what it did.

aeyes
4 replies
17h37m

Short summary is that it allows auth bypass, not just RCE.

bilekas
2 replies
17h35m

Yes, I understand.. It's just not a surprise.

ckcheng
1 replies
12h50m

It was surprising because previously we had:

'XZ backdoor: "It's RCE, not auth bypass, and gated/unreplayable."' [0].

[0]: https://news.ycombinator.com/item?id=39877267 (811 comments)

bilekas
0 replies
12h2m

Did you understand the XZ backdoor before the rest of us?

We will figure it out. With all of our stubborn ways, we can document it. Clap

mr_mitm
0 replies
12h11m

RCE as root is already the worst case though. The auth bypass is basically just a convenience feature of the backdoor. So yeah it's mildly interesting but not really a new development of the story.

sureglymop
4 replies
15h19m

Does anyone have a good explanation or introduction to the performance testing that was done to find this? And how to get started? Actually measuring performance always seemed to be a very hard task, and I'd like to be able to do similar testing to what the person who found this backdoor did.

cesarb
1 replies
10h3m

From what I understand, it wasn't even the performance testing itself that caught the backdoor. The developer wanted to have the machine as quiescent as possible (so that nothing else running on it would interfere) before starting the performance tests, but sshd was using much more CPU than expected (and this could be observed with simple tools like "top"). My guess is that the usual "backscatter" of password guessing ssh login attempts from all over the Internet normally uses very little CPU time in sshd before being rejected, but the backdoor made each login attempt use a significant amount of CPU time to check whether it was an encrypted request from the attacker (and this particular machine had its sshd open to the Internet because it was a cloud machine being accessed via ssh through the open Internet).

anarazel
0 replies
1h26m

Nearly spot on. I indeed was seeing sshd usage via top.

but the backdoor made each login attempt use a significant amount of CPU time to check whether it was an encrypted request from the attacker

Absurdly enough, the high cpu usage is well before it even tries to figure that out. Due to avoiding "suspicious" strings in both the binary and memory, it has a more expensive string matching routine. Finding all the symbols it needs from the in-memory symbol tables ends up slow due to that.

That's why even sshd -h (i.e. printing help) is slow when started in the right environment. There's not enough visibility into other things that early during process startup (this happens long before main() is called), so they couldn't check what key is being presented or such. They really "should" have deferred much more of their initialization until after the ed448 check happened.

(and this particular machine had its sshd open to the Internet because it was a cloud machine being accessed via ssh through the open Internet)

Unfortunately not. It's a machine I have at home, with some port of my public IP redirected to it, as I often need it when not at home. Oddly enough, my threat model did not include getting attacked with something as sophisticated as this, so I thought that was fine (only pubkey, no root).

intelVISA
0 replies
10h50m

measuring performance always seemed to be a very hard task

It's not hard: it's either easy, or impossible, depending on the culture at your shop.

At the basic level it's trivial: you instrument code and interpret the results against the hardware and lower-level machine counters.

Depending on $lang you have many good libraries for this kicking around; the hard part is working at a corp that values performance (99% do not), so none of the required mindset and surrounding infra will be set up to permit this in any useful capacity.

glibg10b
0 replies
11h1m

https://www.openwall.com/lists/oss-security/2024/03/29/4

  > == Observing Impact on openssh server ==
  > 
  > With the backdoored liblzma installed, logins via ssh become a lot slower.
  > 
  > time ssh nonexistant@...alhost
  > 
  > before:
  > nonexistant@...alhost: Permission denied (publickey).
  > 
  > before:
  > real 0m0.299s
  > user 0m0.202s
  > sys 0m0.006s
  > 
  > after:
  > nonexistant@...alhost: Permission denied (publickey).
  > 
  > real 0m0.807s
  > user 0m0.202s
  > sys 0m0.006s

mattlondon
4 replies
7h7m

I often wonder with these sort of things, where there are lots of write-ups by geeks who dive-deep into the details, why do people say "This has to be state sponsored!", "Look at the timestamps! Irrefutable proof!"

Yes this was sneaky and yes this was a "slow burn", but is there really anything in the xz case that requires more than a single competent person? Anything that requires state-level sponsorship? The fact that random individuals online are able to dissect it and work things out suggests that it is comprehensible by a single person.

What is to say it that this was not just one smart-yet-disgruntled person acting alone?

lolinder
1 replies
5h43m

The biggest thing for me that points to a state actor is the amount of time committed to the social engineering attack versus the expected value of the prize. A for-profit scheme built this way would be irrational, which doesn't preclude it being an irrational actor (or an individual with a motive other than profit) but does point to a state actor as a likely candidate.

The total value of the prize, if successful, would be worth a lot if you could sell it, but the odds of successfully getting an exploit into major infrastructure that goes undetected for long enough for your customers to buy and use it are tiny. States can afford moonshots, but I tend to expect private individuals to seek targets with a higher expected value.

Of course, that doesn't mean it was a competent state actor or that they allocated a ton of resources to it.

fl7305
0 replies
2h13m

I tend to expect private individuals to seek targets with a higher expected value.

If you're a very skilled and dedicated hacker, what other targets do you have that can net you many millions of dollars?

or an individual with a motive other than profit

Isn't one of the most striking things about the hacker community the extreme amounts of time and effort that are put into things that are not expected to generate any profit?

I mean, there are people who spend all their free time over several decades just digging tunnels under their property. Or build a 6502 CPU from discrete transistors. Etc.

fliglr
0 replies
7h0m

There is nothing. Nobody has any idea who he is.

fl7305
0 replies
6h56m

I agree.

The hack took someone with a lot of skills, determination and time.

But doesn't that describe a significant portion of open source developers?

There is also clear motivation. Wouldn't the exploit have been worth many millions on the black market?

yobid20
0 replies
5h54m

Plot twist: it was the NSA

m3kw9
0 replies
13h36m

Everyone now looking at their test data directory

goalieca
0 replies
18h20m

This is fantastic. Great work actually triggering the bug!

dvektor
0 replies
16h7m

Somehow nobody has mentioned this yet but big props to the original author of this post. Super impressive work

coginthemachine
0 replies
16h10m

I'm concerned about the long-game nature of things here.

1. Sure, they bided their time to set up the "infrastructure" to create the backdoors.

2. I'm sure their plan was to play another long game after the exploit got into the wild, in production. It's the right way to spend lottery money. Invest.

3. That makes me wonder if such games are being played today.

Scary.