Xzbot: Notes, honeypot, and exploit demo for the xz backdoor

asveikau
110 replies
1d1h

It's pretty interesting that they didn't just introduce an RCE that anyone can exploit: it requires the attacker's private key. Ironically, it's a very security-conscious vulnerability.

Alifatisk
42 replies
1d1h

For real, it's almost like a state-sponsored exploit. It's crafted and executed incredibly well; it feels like pure luck that the performance issue got it found.

stingraycharles
24 replies
1d1h

I like the theory that actually, it wasn’t luck but was picked up on by detection tools of a large entity (Google / Microsoft / NSA / whatever), and they’re just presenting the story like this to keep their detection methods a secret. It’s what I would do.

est31
9 replies
1d

I doubt that if Google detected it with some internal tool, they'd reach out to Microsoft to hide their contribution.

It was reported by an MS engineer who happens to be involved in another OSS project. MS is doing business with the US intelligence community; for example, there is the Skype story: first, rumors that the NSA was offering a lot of money to anyone who could break Skype's E2E encryption; then MS bought Skype; then MS changed Skype's client to drop E2E encryption and route traffic through MS servers instead of peer-to-peer, allowing undetectable wiretapping of arbitrary connections.

But it's quite a credible story too that it was just a random discovery. Even if it was the NSA, why would they hide that capability? It doesn't take much to run a script to compare git state with uploaded source tarballs in distros like Debian (Debian has separate tarballs for the source and the source with Debian patches applied).
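
A rough sketch of the kind of comparison meant here (version, mirror path, and the expected autotools noise are all illustrative, not Debian's actual tooling):

  ver=5.6.1
  curl -sLO "http://deb.debian.org/debian/pool/main/x/xz-utils/xz-utils_${ver}.orig.tar.xz"
  mkdir tarball && tar -xf "xz-utils_${ver}.orig.tar.xz" -C tarball --strip-components=1
  git clone --quiet --depth 1 --branch "v${ver}" https://github.com/tukaani-project/xz.git gitcopy
  # list files only in the tarball or differing from git; generated autotools files are
  # expected noise, but the malicious build-to-host.m4 would show up as tarball-only here
  diff -rq gitcopy tarball | grep -v '^Only in gitcopy'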

mistrial9
2 replies
19h50m

an MS engineer

No, this engineer is world-known for being a core PostgreSQL developer, a team with high standards... unlike that company you mention

jeltz
0 replies
12h44m

He is also one of the sharpest developers I have had the fortune to speak to (and do some minor work with on the mailing list) and he knows a lot about performance and profiling. I 100% think that he found it on his own. He would also be an odd choice for Microsoft to pick since I doubt he works anywhere close to any of their security teams (unless they went to serious lengths to find exactly the guy for whom it would be 100% believable that he just stumbled on it).

est31
0 replies
18h46m

Yeah definitely, good point. I got that wrong above: he is in fact a postgres contributor who happens to be an MS employee, not an MS employee who happens to be a postgres contributor.

anarazel
2 replies
23h39m

It was reported by an MS engineer who happens to be involved in another OSS project.

I view it as being an OSS or postgresql dev that happens to work at Microsoft. I've been doing the former for much longer (starting somewhere between 2005 and 2008, depending on how you count) than the latter (since 2019-12).

iliane5
0 replies
7h52m

Just wanted to say thank you for your work and attention to detail, it's immensely valuable and we're all very grateful for it.

est31
0 replies
20h2m

Thanks for the explanation. Also thanks for catching this and protecting us all; I think in the end it's way more believable that you indeed found it on your own, above was just brainless blathering into the ether =). Lastly, thanks for your Postgres contributions.

bevekspldnw
1 replies
22h28m

“Even if it was the NSA, why would they hide that capability”

Perhaps you’re not familiar with what NSA historically stood for: Never Say Anything.

est31
0 replies
18h39m

Googling for "site:nsa.gov filetype:pdf" gives tens of thousands of results with documents produced by the NSA about various things (the google counter is known to lie but that's not my point). They do publish things.

t0mas88
0 replies
21h30m

Intelligence agencies are very careful about sharing their findings, even with "friends", because the findings will disclose some information about their capabilities and possibly methods.

Let's say agency A has some scanning capability on open source software that detected this backdoor attempt by agency B. If they had gone public, agency B now knows they have this ability. So agency B will adjust their ways the next time and the scanning capability becomes less useful. While if agency A had told Microsoft to "find" this by accident, nobody would know about their scanning capability. And the next attempt by agency B would only try to avoid having the performance impact this first attempt had, probably leaving it visible to agency A.

jsmith99
4 replies
1d

The attacker changed the project's contact details at oss-fuzz (an automated detection tool). There's an interesting discussion as to whether that would have picked up the vulnerability: https://github.com/google/oss-fuzz/issues/11760

metzmanj
2 replies
21h57m

I work on oss-fuzz.

I don't think it's plausible OSS-Fuzz could have found this. The backdoor required a build configuration that was not used in OSS-Fuzz.

I'm guessing "Jia Tan" knew this and made changes to XZ's use of OSS-Fuzz for the purposes of cementing their position as the new maintainer of XZ, rather than out of worry OSS-Fuzz would find the backdoor as people have speculated.

Aloisius
1 replies
15h41m

How many oss-fuzz packages have a Dockerfile that runs apt-get install liblzma-dev first?

Had this not been discovered, the backdoored version of xz could have eventually ended up in the ubuntu version oss-fuzz uses for its docker image - and linked into all those packages being tested as well.

Except now there's an explanation if fuzzing starts to fail - honggfuzz uses -fsanitize which is incompatible with xz's use of ifunc, so any package that depends on it should rebuild xz from source with --disable-ifunc instead of using the binary package.
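
For reference, the rebuild being described would look roughly like this (a sketch assuming xz's usual autotools flow; prefix and paths are illustrative):

  # build liblzma from source with ifunc turned off so it can coexist
  # with -fsanitize instrumented builds
  git clone https://github.com/tukaani-project/xz.git && cd xz
  ./autogen.sh   # or start from a release tarball and skip this step
  ./configure --disable-ifunc --prefix=/usr/local
  make -j"$(nproc)" && make install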

metzmanj
0 replies
4h35m

This is interesting, but do you think this would have aroused enough suspicion to find the backdoor (after every Ubuntu user was owned by it)? I don't see why this is the case. It wasn't a secret that ifuncs were being used in XZ.

And if that's the case, it was sloppy of "Jia" to disable it in OSS-Fuzz rather than add this:

``` __attribute__((__used__,__no_sanitize_address__)) ```

to the XZ source code to fix the false positive and turn off the compilation warning. No attention would have been drawn to this at all, since no one would have had to change their build script.

With or without this PR, it's very unlikely OSS-Fuzz would have found the bug. OSS-Fuzz also happens to be on Ubuntu 20. I'm not very familiar with Ubuntu release cycles, but I think it would have been a very long time before backdoored packages made their way into Ubuntu 20.

meowface
0 replies
23h22m

That's a fascinating extra detail. They really tried to cover all their bases.

There's some plausible evidence here that they may've tried to use alter egos to encourage Debian to update the package: https://twitter.com/f0wlsec/status/1773824841331740708

edflsafoiewq
3 replies
1d

They could announce it without revealing their detection method. I don't see what the extra indirection buys them.

MOARDONGZPLZ
2 replies
1d

Shutting down 99.9% of speculation on how the vulnerability was found in the first place.

VMG
1 replies
21h47m

(1) obviously not

(2) they could just say they found it during some routine dependency review or whatever

MOARDONGZPLZ
0 replies
17h5m

I mean, yes. I’ve read all the commentary I can get my hands on about this incident because it is fascinating and this is the first instance of some parallel construction theory of finding it I’ve seen.

Second, maybe a routine dependency review is how they _actually_ found it, but they don't want future people like this focusing too much on that, otherwise they may try to mitigate it; whereas now they may focus on something inane like a 0.5 second increase in sshd's load time or whatever.

takeda
0 replies
21h59m

Sorry for the ugly comparison, but that explanation reminds me of the theories when covid started, that it was created by a secret organization that is actually ruling the world.

People love it when there's some explanation that doesn't involve randomness, because with randomness it looks like we don't have a grasp on things.

Google actually had tooling that was detecting it, but he disabled the check that would have shown it.

Google/Microsoft/NSA could just say they detected it with internal tooling and not disclose how exactly. Google and Microsoft would love to have the credit.

meowface
0 replies
23h24m

It's really interesting to think what might've happened if they could've implemented this with much less performance overhead. How long might it have lasted for? Years?

lyu07282
0 replies
7h5m

While we are speculating: there was this case of university students trying to introduce malicious commits into the kernel in order to test open source's resistance to such attack vectors. Perhaps this was similar "research" by some students.

https://www.cnx-software.com/2021/04/22/phd-students-willful...

heavyset_go
0 replies
18h31m

You're describing parallel construction and it is something that happens all of the time.

TacticalCoder
0 replies
23h9m

... and they’re just presenting the story like this to keep their detection methods a secret. It’s what I would do.

Basically "parallel construction". It's very possible it's what happened.

https://en.wikipedia.org/wiki/Parallel_construction

timmytokyo
3 replies
23h47m

I'm not sure why everyone is 100% sure this was a state-sponsored security breach. I agree that it's more likely than not state-sponsored, but I can imagine all sorts of other groups who would have an interest in something like this, organized crime in particular. Imagine how many banks or crypto wallets they could break into with an RCE this pervasive.

hackeraccount
0 replies
4h52m

I'm not 100% sure - it could have been criminals or even a single motivated actor.

That said, it's a lot of work - 2 years at least. It's an exploit that's so good that you'd have to use it incredibly carefully - also because if/when it's discovered it's going to break everywhere.

I've read descriptions of how the NSA (and presumably other such agencies) operate, and they're really careful. The first job is to make sure the target doesn't get confirmation that they are in fact a target. The second is that they always cover their tracks so the target doesn't know they were a target.

Criminals tend to do the first but almost never the second, so a tool like this - while I'm sure they would love it - isn't worth the amount of work it would take to develop.

Again - I'm not 100% on this but ... 40% ? say 20% criminals, 10% lone wolf?

blablabla123
0 replies
23h27m

Especially considering this introduced a 500ms waiting time. But surely this was quite a risky time investment, 2 years. How likely is it that this was the only attempt, if it was done by a group? (And maybe there were failed attempts after trying to take over maintenance of other packages?) Maybe it really was a very well-funded cybercrime group that can afford such moonshot endeavours, or a state group that doesn't completely know yet what it's doing or isn't that well equipped (anymore?). I'm definitely curious about attribution analysis.

bevekspldnw
0 replies
22h24m

Motive and patience. Motive as you point out is shared by many parties.

Typically it's only state agencies that will fund an operation with an uncertain payoff over long periods of time. That type of patience is expensive.

Online criminals are beholden to changing market pressures and short-term investment pressures like any other startup.

webmaven
2 replies
1d

Was the performance issue pure luck? Or was it a subtle bit of sabotage by someone inside the attacking group worried about the implications of the capability?

If it had been successfully and secretly deployed, this is the sort of thing that could make your leaders much more comfortable with starting a "limited war".

There are shades of "Setec Astronomy" here.

papascrubs
0 replies
18h42m

Plot twist:

It was a psyop to increase the scrutiny around OSS components.

Kidding. Mostly...

But given the amount of scrutiny folks are going to start putting into some supply chains... probably cheaper to execute than what most companies' annual security awareness budgets cost.

neodymiumphish
0 replies
17h17m

Considering how difficult (and identifiable) it might be to attempt direct exploitation of this without being sure your target is vulnerable, it's plausible the performance issue provided an identifiable delay. This might be useful in determining whether to attempt the exploit, with an auto-skip if a response came back in less than N milliseconds.

hnthrowaway0328
2 replies
1d1h

Do we have a detailed technical analysis of the code? I read a few analyses but they all seem preliminary. It would be very useful to learn from the code.

coldpie
1 replies
1d

There are a few links down at the bottom of the OP to quite detailed analyses. From there you could join a Discord where discussion is ongoing.

hnthrowaway0328
0 replies
21h47m

Thanks coldpie.

IshKebab
1 replies
22h2m

I don't think it was executed incredibly well. There were definitely very clever aspects but they made multiple mistakes - triggering Valgrind, the performance issue, using a `.` to break the Landlock test, not giving the author a proper background identity.

I guess you could also include the fact that they made it a very obvious back door rather than an exploitable bug, but that has the advantage of only letting the keyholder exploit it, so it was probably an intentional trade-off.

Just think how many back doors / intentional bugs there are that we don't know about because they didn't make any of these mistakes.

richardfey
0 replies
21h19m

Maybe it's the first successful attempt of a state which nobody would right now suspect as capable of carrying this out. Everyone is looking at the big guys but a new player has entered the game.

7373737373
1 replies
23h23m

I read someone speculating that the performance issue was intentional, so infected machines could be easily identified by an internet-wide scan without arousing further suspicion.

If this is or becomes a widespread method, then anti-malware groups should perhaps conduct these scans themselves.

zarzavat
0 replies
21h17m

Very small differences in performance can be detected over the network as long as you have enough samples. Given that every port 22 is being hit by a gazillion attempts per day already, sample count shouldn’t be an issue.

So if distinguishing infected machines was their intention they definitely over-egged it.
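
A back-of-the-envelope sketch of what that sampling could look like (hypothetical target; in practice you'd want a matched baseline host and far more samples to beat network jitter):

  host=203.0.113.10   # hypothetical target
  for i in $(seq 1 200); do
    start=$(date +%s.%N)
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" exit >/dev/null 2>&1
    date +%s.%N | awk -v s="$start" '{printf "%.3f\n", $1 - s}'
  done | awk '{sum += $1; n++} END {printf "mean connection time: %.3fs over %d samples\n", sum/n, n}'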

lyu07282
0 replies
23h57m

One question I still have is what exactly the performance issue was. I heard it might be related to enumeration of shared libraries, decoding of the scrambled strings[1], etc. Does anyone know for sure yet?

One other point for investigation is whether the code is similar to any other known implants: the way it obfuscates strings, the way it detects debuggers, the way it's setting up a vtable. There might be code fragments shared across projects, which might give clues about its origin.

[1] https://gist.github.com/q3k/af3d93b6a1f399de28fe194add452d01

jdewerd
0 replies
1d1h

Yeah, we should probably expect that there are roughly 1/p(found) more of these lurking out there. Not a pleasant thought.

andersa
0 replies
1d1h

It'd make total sense if it was. This way you get to have the backdoor without your enemies being able to use it against your own companies.

haswell
37 replies
1d1h

I suspect the original rationale is about preserving the longevity of the backdoor. If you blow a hole wide open that anyone can enter, it’s going to be found and shut down quickly.

If this hadn’t had the performance impact that brought it quickly to the surface, it’s possible that this would have lived quietly for a long time exactly because it’s not widely exploitable.

declan_roberts
32 replies
1d

I agree that this is probably about persistence. Initially I thought the developer was playing the long-con to dump some crypto exchange and make off with literally a billion dollars or more.

But if that was the case they wouldn't bother with the key. It'd be a one-and-done situation. It would be a stop-the-world event.

Now it looks more like nation-state spycraft.

btown
18 replies
23h8m

It's worth also noting that the spycraft involved a coordinated harassment campaign against the original maintainer, with multiple writing styles, to accelerate a transition of maintainership to the attacker:

https://www.mail-archive.com/xz-devel@tukaani.org/msg00566.h...

https://www.mail-archive.com/xz-devel@tukaani.org/msg00568.h...

https://www.mail-archive.com/xz-devel@tukaani.org/msg00569.h...

While this doesn't prove nation-state involvement, it certainly borrows from a wide-ranging playbook of techniques.

allanbreyes
5 replies
22h2m

Ugh, that this psyops sockpuppetry may have started or contributed to the maintainer's mental health issues seems like the most depressing part of all this. Maintaining OSS is hard enough.

Delk
1 replies
21h1m

Probably didn't start them, considering that he already mentioned (long-term) mental health issues in the mailing list discussion in which the (likely) sock puppets started making demands.

But it's hard to see the whole thing helping, and it is some combination of depressing and infuriating. I hope he's doing ok.

tapland
0 replies
19h3m

You only push after you've established your guy as the likely successor. If this was coordinated it was not the first move.

heavyset_go
0 replies
18h37m

Yeah, that was a hard read. I think it highlights the importance of emotionally distancing yourself from your projects.

It's also interesting because it exploits the current zeitgeist when it comes to maintainers' responsibilities to their users and communities. Personally, I think it puts too much expectation on maintainers' shoulders, especially when they're working for free.

Personally, I think maintainers are doing favors for users, and if they don't like how the project is progressing or not, then too bad. That's not a popular sentiment, though.

btown
0 replies
21h31m

I hope that one takeaway from this entire situation is that if you're a maintainer and your users are pushing you outside your levels of comfort, that's a reflection on them, not on you - and that it could be reflective of something far, far worse than just their impatience.

If you, as a maintainer, value stability of not only your software but also your own mental health, it is entirely something you can be proud of to resist calls for new features, scope increases, and rapid team additions.

__MatrixMan__
0 replies
21h34m

No good deed goes unpunished, what a shame.

yread
4 replies
21h13m

Maybe they weren't all sockpuppets. Here Jigar Kumar was nitpicking Jia Tan's changes:

https://www.mail-archive.com/xz-devel@tukaani.org/msg00556.h...

That was not necessary to gain trust. The writing style is different, too. Later, when Jia gained commit access, he reminded him to merge it.

barkingcat
1 replies
20h38m

that's precisely what sock puppetry does ... talk/write in a different writing style to make others believe it's different people.

bdowling
0 replies
13h21m

For all we know there is a whole cyber-squadron (or whatever their military units are called) behind Jia Tan and other accounts.

asveikau
1 replies
19h39m

This looks like a very phony "debate".

I think the most convincing case made about the sock puppets is around account creation dates, and also people disappearing after they get what they need. Like Jigar disappearing after Jia becomes maintainer. Or the guy "misoeater19" who creates his debian bug tracker account to say that his work is totally blocked on needing xz 5.6.1 to be in debian unstable.

pyinstallwoes
0 replies
10h51m

Ah, the good ol’ lever of urgency. Urgency frequency smells of deception.

mrkramer
3 replies
22h0m

At first I thought the guy who did this was a lone wolf but now I believe it was indeed a state actor. They coordinated and harassed the original maintainer into giving them access to the project; basically they hijacked the open source project. The poor guy (the original maintainer) was alone against a state actor who was persistent in the goal of hijacking and then backdooring the open source project.

It seems like they were actively looking[0] for which open source compression library they could inject with vulnerable code and then exploit and backdoor afterwards.

[0] https://lwn.net/Articles/967763/

sofixa
0 replies
7h54m

At first I thought the guy who did this was a lone wolf but now I believe it was indeed a state actor. They coordinated and harassed the original maintainer into giving them access to the project; basically they hijacked the open source project. The poor guy (the original maintainer) was alone against a state actor who was persistent in the goal of hijacking and then backdooring the open source project.

It could just as easily be a lone wolf capable of writing in different styles (really not difficult; if one spends enough time online, one gets exposed to all sorts of styles) or even just your run-of-the-mill criminal gang.

In fact, "multiaccount" cheating (where a single person has multiple accounts that interact with one another in a favourable manner, e.g. trading stuff for reduced fees, or letting themselves be killed to pad stats, or expressing opinions leaving the impression of mass support, etc.) has been present in many MMO games for more than a decade. I recall an online game I was really into back in high school, eRepublik, which simulated real world politics, economics and warfare (in a really dumbed down version of course), and multiaccounts were especially prevalent in politics, where candidates for election would often be padded by fake groups, or even entire countries were taken over by a gang of real people with like ten accounts each.

The complexity is nothing new. The only indicator this could be a state actor is the long-con aspect of it. The attacker(s) were looking at years between starting and actually being able to exploit it.

pyinstallwoes
0 replies
10h53m

“Recently I've worked off-list a bit with Jia Tan on XZ Utils and”

"In 2021, JiaT75 submitted a pull request to the libarchive repository with the title ‘Added error text to warning when untaring with bsdtar’ which seemed legitimate at first glance. "

Seems “Jia” is related to both

metrxqin
0 replies
13h16m

I'm literally shocked by the conversations in the mailing list; it's blatantly an exploitation of others' kindness.

furstenheim
0 replies
20h36m

I actually wondered how many packages they harassed until they got access to one.

Voultapher
0 replies
21h41m

I mean one person can use sock puppet accounts to write emails.

juitpykyk
9 replies
19h49m

You're talking as if securing a backdoor with public-key cryptography is some unimaginable feat of technology.

It's literally a couple hours' work.

kbenson
3 replies
19h38m

I don't think they were using complexity as the reason for that assumption, but instead goals. Adding security doesn't require a nation state's level of resources, but it is a more attractive feature for a nation state that wants to preserve it over time and prevent adversaries from making use of it.

neodymiumphish
1 replies
19h35m

And on the contrary, creating a vulnerability that's not identifiable to a limited attack group provides a bit more deniability and anonymity. It's hard to say which is more favorable to a nation-state actor.

usrusr
0 replies
19h3m

An interesting angle: if this was somehow observed in use instead of getting discovered without observing use, there would be a glaring bright cui bono associated with what it was being used for.

Theoretically, I could even imagine that the public key security was motivated by some "mostly benign" actor who fell in love with the question "could I pull this off?", dreading the scenario of their future hacker superpower falling into the wrong hands. It's not entirely unthinkable that an alternative timeline where this was never discovered would only ever see this superbackdoor getting used for harmless pranks, winning bets and the like. In this unlikely scenario, the discovery, through the state actor and crime syndicate activities undoubtedly inspired by the discovery, would paradoxically lead to a less safe computing world than the alternative timeline where the person with the key had free RCE almost everywhere.

heavyset_go
0 replies
18h44m

This makes sense in closed source products where you'll never get to audit the source for such exploits, but little sense in open source projects where anyone can audit it.

That's to say an enterprise router or switch would likely have secured exploits put there by corporate and national security agencies, whereas open source exploits would benefit from plausible deniability.

justinclift
2 replies
18h53m

It'll be kind of tragic if this backdoor turns out to be the developer's pet "enable remote debugging" code, and they didn't mean for it to get out into a release. ;)

unkulunkulu
1 replies
16h40m

I would be scared to work near a person who writes his debugging tools this way :D

justinclift
0 replies
13h34m

Heh Heh Heh

"It seemed like a good idea at the time" is a pretty common story though. ;)

parl_match
1 replies
14h26m

Inserting a change like this as a one off would cause lots of scrutiny, which would probably get it detected. Instead, the bad actor spent years contributing to the project before dropping this.

So, while writing the exploit might be a couple of hours work, actually pulling it off is quite a bit more difficult.

londons_explore
0 replies
6h13m

Plenty of open source maintainers spend only a few hours a month on their projects.

For many projects, that is enough to become the main contributor.

meowface
1 replies
23h26m

But if that was the case they wouldn't bother with the key. It'd be a one-and-done situation. It would be a stop-the-world event.

Why not? It's possible someone else could've discovered the exploit before the big attack but decided not to disclose it. Or that they could've disclosed it and caused a lot of damage the attacker didn't necessarily want. And they easily could've been planning both a long-term way to access a huge swath of machines and also biding their time for a huge heist.

They have no reason to not restrict the backdoor to their personal use. And it probably is spycraft of some sort, and I think more likely than not it's a nation-state, but not necessarily. I could see a talented individual or group wanting to pull this off.

varenc
0 replies
20h45m

I think we need to consider the context. The attacker ultimately only had control over the lzma library. I'm skeptical that there's an innocent looking way that lzma could have in the open introduced an "accidental" RCE vuln that'd affect sshd. Of course I agree that they also wanted an explicit stealth backdoor for all the other reasons, but I don't think a plausibly deniable RCE or authentication bypass vuln would have even been possible.

heavyset_go
0 replies
18h47m

Now it looks more like nation-state spycraft.

I disagree; plenty of exploits have implemented WireGuard-like crypto-key access to them.

Beijinger
1 replies
20h28m

This is just a safety measure so that it does not blow up in your own face (country).

CommitSyn
0 replies
15h3m

Not only did it mandate authentication, but there was also a kill switch, and there's no other reason for that than having reasons to suspect someone may gain access to this sensitive world-stopper exploit and use it against you. That leaves either a (reasonably, but still) paranoid person - and people worried about that and its consequences are unlikely to have the mental well-being to pull this off - or a group of people.

eli
0 replies
22h33m

More to the point it prevents your enemies from using the exploit against friendly targets.

The tradeoff is that, once you find it, it's very clearly a backdoor. No way you can pretend this was an innocent bug.

chpatrick
0 replies
21h22m

Also people can come and immediately undo whatever you did if it's not authenticated.

linsomniac
9 replies
1d1h

Am I reading it correctly that the payload signature includes the target SSH host key? So you can't just spray it around to servers, it's fairly computationally expensive to send it to a host.

miduil
8 replies
1d1h

*host key fingerprint, but I assume that's what you meant.

It's practically a good backdoor then, cryptographically protected and safe against replay attacks.

amluto
7 replies
1d

Not quite. It still looks vulnerable: an attacker A without the private key impersonates a victim server V and reports V's host key. A careless attacker B with the key tries to attack A, but A ends up recovering a valid payload targeting V.

Denvercoder9
6 replies
23h44m

I'm not too familiar with the SSH protocol, but is it possible to impersonate a victim server V without having the private key to their host key?

pxx
5 replies
23h1m

This stuff is pre-auth.

You can just treat the entire thing as opaque and proxy everything to the host you're trying to compromise; as soon as you have an exploit string for a given host you can just replay it.

robryk
1 replies
19h5m

Ssh does client authentication after handshake. The server is required to sign the handshake result with its private key, so you won't get past handshake if you are a server that claims to have a public key that you don't know the private key for.

E: see RFC 4253, sections 8 and 10, and RFC 4252 for corroboration

pxx
0 replies
15h55m

Huh, I had erroneously thought the exploit string was sent earlier in the connection, before the handshaking completed (note the "handshake failed" error in xzbot on successful exploit, and also the fact that no logging is done).

But you're right: we've verified the hostkey by the time we send the special certificate. So there's no way to effectively replay this without access to the server keys. My original comment is incorrect.

I'm actually surprised there's no logging at INFO or higher after this succeeds, given that openssh typically logs preauth connection closes. But I guess the crutch is that we never log connection opens and we only really log failures in handshaking, and it's not like the backdoor is going to go out of its way to log the fact that it opened itself...

bandrami
1 replies
14h56m

The whole point of asymmetric-key is that a middleman can't do that. Even if you relayed the entire handshake, all traffic after that is just line noise to you unless you have the transmitter's private key. You can't read it or mangle it (well, you can mangle it, but the receiving party will know it was mangled in transit). The exploit string on the wire for that transmission won't work in the context of any other transmission sequence.

pxx
0 replies
14h53m

Yeah I had mistakenly thought the exploit string was transmitted during key exchange (read too quickly on "pre-auth"), which is incorrect; see sibling comment. I'm unfortunately past the edit window now.

amluto
0 replies
18h26m

I suppose the original attacker could give really fancy exploit strings that verify the eventual payload strongly enough to prevent someone replaying the attack from accomplishing much.

takeda
7 replies
22h10m

If you think of it as a state-sponsored attack, it makes a lot of sense to have a "secure" vulnerability in a system that your own citizens might use.

It looks like the whole contribution to xz was an effort just to inject that backdoor. For example, the author created the whole test framework where he could hide the malicious payload.

Before he started work on xz, he made a contribution to libarchive in BSD which created a vulnerability.

pxx
6 replies
21h51m

The libarchive diff didn't create any vulnerability. The fprintf calls were consistent with others in the same repository.

jhugo
3 replies
19h42m

It did, actually: the filename can contain terminal control characters, which thanks to the change from safe_fprintf to fprintf, were printed without escaping, which allows the creator of the archive being extracted to control the terminal of the user extracting the archive.

pxx
2 replies
16h37m

That's

(a) not exploitable without the existence of a much higher-severity exploit -- sure you can clear the screen, but that's low impact

(b) possible to trigger on other extant paths; see https://github.com/libarchive/libarchive/issues/2107 so it seems nothing new was introduced

(c) kind of contrived to get to execute; you have to somehow fail to extract the archive but on the happy path you'll see a bunch of weirdly-named files

I think it's distinctly higher-probability that this change was just meant to build credibility for the GitHub account. The diff is a fairly trivial patch to a minor issue filed around the same time as the pull request.

CommitSyn
1 replies
14h49m

Is it possible it was part of a planned or current exploit chain, some other way it could have been utilized?

wholinator2
0 replies
7h51m

Yes, I think one thing we should learn from this is that suspicious code is suspicious code, and anyone asserting that some suspicious code cannot be exploited is suspicious themselves. I don't think we should inquisition half the industry, but I do think people should be a lot more careful about saying that one small exploitable thing definitely cannot be part of a larger exploit.

It's obvious that basically no one knows what's going on in the _vast_ majority of code running our systems these days. And even if you know 99%, the attackers only need to be right once.

halJordan
3 replies
21h45m

This is not that concept. That concept is that no one but us can technically complete the exploit - technical feasibility, in that you need a supercomputer to do it - not protecting a backdoor with the normal CIA triad.

justinclift
2 replies
18h45m

That doesn't seem correct:

    If they determine the vulnerability is only exploitable by the NSA for reasons
    such as computational resources, budget, or skill set, they label it as NOBUS
    and will not move to patch it, but rather leave it open to exploit against current
    or future targets.
If (!) the NSA regards ssh keys as secure, then from that article it sounds like the NOBUS thing would fit.

CommitSyn
1 replies
12h11m

That would only fit "If (!) the NSA regards ssh keys as secure >for everyone but them<

justinclift
0 replies
5h8m

Not sure why? My thinking in this hypothetical scenario is that the NSA would have the private key, which is why it would be a NOBUS thing.

If they didn't have the key though, then yeah it doesn't fit. Unless they can walk through ssh key security anyway. :)

nialv7
1 replies
1d

how are you going to sell it if anyone can get in?

ryanmerket
0 replies
18h42m

this is probably the right answer. hacking group getting a 0day to sell to nation states on a per-use agreement.

whirlwin
0 replies
12h3m

Imagine how much the private key is worth on the black market

userbinator
0 replies
16h39m

IMHO it's not that surprising; asymmetric crypto has been common in ransomware for a long time, and of course ransomware in general is based on securing data from its owner.

"It's not only the good guys who have guns."

bayindirh
0 replies
11h43m

It's a (failed) case study in "what if we backdoor it in a way only good guys can use but bad guys can't?"

Computers don't know who's / what's good or bad. They're deterministic machines which respond to commands.

I don't know whether there'll be any clues about who did this, but this will be the poster child of "you can't have backdoors and security in a single system" argument (which I strongly support).

Taniwha
0 replies
13h54m

I wonder if anyone has packet logs from the past few weeks that show attempts at sshd - might be some incriminating IP addresses to start hunting with

dec0dedab0de
88 replies
1d1h

Stuff like this is why I like port knocking, and limiting access to specific client IPs/networks when possible.

20 years ago, I was working at an ISP/Telco and one of our vendors had a permanent admin account hardcoded on their gear, you couldn't change the password and it didn't log access, or show up as an active user session.

Always limit traffic to just what is necessary, does the entire internet really need to be able to SSH to your box?

herpderperator
47 replies
1d1h

The thing about port knocking is that if you're on a host where you don't have the ability to port-knock, then you're not able to connect.

This can turn into a footgun: you're away from your usual device, something happens and you desperately need to connect, but now you can't because all the devices in your vicinity don't have the ability to perform $SECURITY_FEATURE_X so that you can connect, and you're screaming at yourself for adding so much security at the expense of convenience.

This could happen as easily as restricting logins to SSH keys only, and not being able to use your SSH key on whatever host you have available at the time, wishing you'd enabled password authentication.

jolmg
16 replies
1d

The thing about port knocking is that if you're on a host where you don't have the ability to port-knock, then you're not able to connect.

If that's important, one should be able to set up port knocking such that you're able to do the knocks even by changing the port in a sequence by hand on e.g. a web browser address bar.
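
For instance, a minimal sketch using iptables' "recent" module (ports and timeouts are arbitrary; each knock can be performed by loading http://host:port in a browser):

  # knock 7000, then 8000, then 9000 within 10s of each other to open port 22 for 60s
  iptables -A INPUT -p tcp --dport 7000 -m recent --name K1 --set -j DROP
  iptables -A INPUT -p tcp --dport 8000 -m recent --name K1 --rcheck --seconds 10 -m recent --name K2 --set -j DROP
  iptables -A INPUT -p tcp --dport 9000 -m recent --name K2 --rcheck --seconds 10 -m recent --name OPEN --set -j DROP
  iptables -A INPUT -p tcp --dport 22 -m recent --name OPEN --rcheck --seconds 60 -j ACCEPT
  iptables -A INPUT -p tcp --dport 22 -j DROP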

mr_mitm
15 replies
1d

Note that port knocking is vulnerable to replay attacks. Single Packet Authentication is better, but requires a private key (can be your SSH key).

https://www.cipherdyne.org/fwknop/

ulrikrasmussen
4 replies
1d

What if the knocking sequence was derived from a TOTP secret?

deepbreath
1 replies
22h7m

Don't have to do anything too complicated. Here's the knocker code in a short Bash script, produced by GPT4:

~ % gpt4 'write a very short bash script that takes the number stored in ~/.ssh/knock_seq, increments it by 1 and saves it to the file. It then takes the new number and concatenates it with the value stored in the file ~/.ssh/secret. It pipes the resulting string to sha1sum, spitting out binary. It then takes both the resulting sha1sum and the number used and pipes their concatenation to "nc -u $host $(cat ~/.ssh/knocking_port)". be brief'

  knock_seq=$(cat ~/.ssh/knock_seq)
  let knock_seq++
  echo $knock_seq > ~/.ssh/knock_seq
  concat_seq_secret=$(echo -n "${knock_seq}$(cat ~/.ssh/secret)")
  sha1_output=$(echo -n $concat_seq_secret | sha1sum -b | awk '{print $1}')
  final_output="${sha1_output}${knock_seq}"
  host=localhost
  knocking_port=$(cat ~/.ssh/knocking_port)
  echo -n $final_output | nc -u $host $knocking_port

deepbreath
0 replies
21h54m

The knockee PoC should also be straightforward, can use socat + udp-listen + fork with a script that checks that input matches `sha1sum(secret||num)||num` and `num>previously_seen_num`, and if so, adds an iptables rule.

This should prevent against replays. Throw in some rate limits somewhere maybe to not get DDoSed, especially if you let socat `fork`.
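
Something along these lines, perhaps (untested sketch matching the knocker above; it assumes socat exports SOCAT_PEERADDR to the child, a default DROP rule already guards port 22, and the file paths are made up):

  #!/usr/bin/env bash
  # run as: socat UDP-LISTEN:60001,fork EXEC:/usr/local/bin/knockee.sh
  secret=$(cat /etc/knock_secret)
  IFS= read -r -t 2 payload                  # datagram body: sha1hex || num
  num=${payload:40}                          # sha1 hex digest is 40 chars
  want=$(printf '%s%s' "$num" "$secret" | sha1sum | awk '{print $1}')
  last=$(cat /var/run/knock_last 2>/dev/null || echo 0)
  # accept only a valid, strictly increasing counter, then open 22 for that peer
  if [ "${payload:0:40}" = "$want" ] && [ "$num" -gt "$last" ] 2>/dev/null; then
    echo "$num" > /var/run/knock_last
    iptables -I INPUT -p tcp --dport 22 -s "$SOCAT_PEERADDR" -j ACCEPT
  fi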

swsieber
0 replies
1d

Ooo, that's a fun idea

discite
0 replies
23h15m

Definitely some fun project to try

jolmg
4 replies
1d

Not if you set it up such that each knocking sequence can only be used once. Port knocking is a flexible concept.

squigz
3 replies
1d

I wonder if one could combine port knocking and TOTP in some way, so the sequence is determined by the TOTP?

(Security is not my thing; don't judge me!)

ametrau
1 replies
23h26m

Yeah you could but wouldn’t it defeat the purpose of being basically a secret knock before you can give the password? The password should be the ssh password.

cesnja
0 replies
23h8m

This would be just to allow you to connect to the server. If there was a vulnerable sshd on port 22, an adversary would have to know the port knocking sequence to connect to sshd and run the exploit.

noman-land
0 replies
1d

Was just having the same thought reading this thread.

mlyle
3 replies
23h13m

Naive knocking isn't good as a primary security mechanism, but it lowers your attack surface and adds defense in depth.

It means that people who can't intercept traffic can't talk to the ssh server-- and that's most attackers at the beginning phases of an attack. And even someone who can intercept traffic needs to wait for actual administrative activity.

gnramires
2 replies
21h55m

Defense in depth has value I agree, but I think it can also be counterproductive in some cases. Every layer can also be buggy and have vulnerabilities, which can often leak (e.g. into code execution) and compromise the whole system (bypassing layers). What happened in this case seems to be a case of maintained hijacking and introducing vulnerabilities. Adding an additional dependency (of say a port-knocking library) doesn't look great in that regard, if the dependency can be hijacked to add remote code execution capabilities. And that library is likely a lot less scrutinized than OpenSSH!

Also underrated, I think, is security by simplicity. OpenSSH should be extremely simple and easy to understand, such that every proposal and change could be easily scrutinized. Cryptographic constructions themselves are almost mathematically proven invulnerable; then a small codebase can go most of the way to mathematically provable security (bonus points for formal verification).

But for this kind of system there's usually some kind of human vulnerability (e.g. system updates for your distro) in the loop, such that the community needs to remain watchful. (It's fun to consider an application that's proven correct and doesn't need updating ever again, but usually that's not practical.)

mlyle
1 replies
21h12m

Adding an additional dependency (of say a port-knocking library) doesn't look great in that regard, if the dependency can be hijacked to add remote code execution capabilities.

Port knocking infrastructure can be minimal, knowing nothing but addresses knocking. It can also be completely outside the protected service on a gateway.

Indeed, it can even be no-code, e.g. https://www.digitalocean.com/community/tutorials/how-to-conf...

OpenSSH should be extremely simple and easy to understand, such that every proposal and change could be easily scrutinized.

But OpenSSH intrinsically is going to have a much larger attack surface.

then a small codebase can go most of the way to mathematically provable security (bonus points for formal verification).

It's worth noting this would not have helped against this attack:

* It was against another dependency, not openssh

* The actual vulnerability didn't occur in the code you'd inspect as part of verification processes today. (I don't think anyone is formally verifying the build process.)

gnramires
0 replies
19h6m

Good points. I would say that defense in depth is useful when the layers of defense all need to be broken (more or less) independently for a successful attack (this fails only if you add layers that expose vulnerabilities compromising your system). E.g. usually a sandbox satisfies this criterion.

Also, whenever some layers may allow compromising everything, the supply chains of the layers should be minimal or correlated (same supplier), to avoid increasing such supply chain risks.

vikarti
0 replies
23h55m

This looks related to some other problem:

- There is Alice's server which provides service X.
- There are clients like Bob who need this service.
- There is Mallory who thinks clients don't need such a service. Mallory has significant resources (more than Alice or Bob).
- Mallory thinks it's ok to block access to Alice's server IF it's known that it's Alice's server and not some random site. Mallory sometimes also thinks it's ok to block if the protocol is unknown.

This problem is solved by XRay in all of its versions. It could be possible (if overkill) to use mostly the same methods to authenticate the correct user and provide eir access.

mlyle
11 replies
1d

The thing about port knocking is that if you're on a host where you don't have the ability to port-knock, then you're not able to connect.

You can type http://hostname:porttoknock in a browser.

As long as you're not behind a super restrictive gateway that doesn't let you connect to arbitrary ports, you're golden.

webmaven
2 replies
1d

TIL another interesting browser feature. Thank you.

Dwedit
1 replies
1d

That's just the usual way of attempting to connect to an HTTP server running on a different port. Sometimes you see websites hosted on port 8080 or something like that.

webmaven
0 replies
23h49m

Oh. Duh.

I suppose I should have said it was a new-to-me use case for that feature.

noman-land
2 replies
1d

I'm a bit of a noob about this. Can you explain what this means?

shanemhansen
1 replies
1d

Port knocking involves sending a packet to certain ports on a host. It's overkill but typing http://host:port/ in your browser will, as part of trying to make a TCP connection, send a packet to that port.

noman-land
0 replies
23h15m

Thanks, I didn't realize port knocking could be done manually like this as a way to "unlock" an eventual regular ssh attempt outside the browser. This makes sense now and is super clever!

toast0
1 replies
20h39m

Likely won't be enough if you're behind CGNAT and you get a different public IP on different connections.

mlyle
0 replies
17h11m

Most decent CGNAT will give you the same source IP when you connect to the same dest IP repeatedly.

bee_rider
0 replies
1d

It seems like there’d be a pretty big overlap between those kinds of hosts.

Dwedit
0 replies
1d

I've actually been behind a firewall that blocked outgoing connections except on several well-known ports. Had to run my SSH server over the port usually used for HTTPS just to get it unblocked.

Joel_Mckay
4 replies
1d

Port knocking with ssh over https using client certs.

And port knocking is one of the few effective methods against standard distributed slow brute-force attacks.

Note: if you blanket-ban IN/RU/UK/CN/HK/TW/IR/MX/BZ/BA + tor/proxy lists, then 99.998% of your nuisance traffic issues disappear overnight. =)
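
As a sketch, one common way to implement such a blanket ban is ipset plus per-country prefix lists (the list source and country codes here are illustrative):

  ipset create blocklist hash:net
  for cc in in ru gb cn hk tw ir mx bz ba; do
    curl -s "https://www.ipdeny.com/ipblocks/data/countries/${cc}.zone" |
      while read -r net; do ipset add blocklist "$net"; done
  done
  # drop anything sourced from the banned prefixes before it reaches sshd
  iptables -I INPUT -m set --match-set blocklist src -j DROP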

oarsinsync
1 replies
23h53m

if you blanket-ban IN/RU/UK/CN/HK/TW/IR/MX/BZ/BA

The list of countries there: India, Russia, United Kingdom, China, Hong Kong, Taiwan, Iran, Mexico, Belize, Bosnia and Herzegovina

I’m amused that the UK is in that group.

Joel_Mckay
0 replies
23h46m

I was too, but the past few years their government server blocks have been pen-testing servers without authorization.

It is apparently a free service they offer people even when told to get stuffed.

=)

My spastic ban-hammer active-response list i.e. the entire block gets temporarily black-holed with a single violation: AU BG BR BZ CN ES EE FR GB HR IL IN ID IR IQ JP KG KR KP KW LV MM MX NI NL PA PE PK PL RO RU RS SE SG TW TH TR YE VN UA ZA ZZ

deknos
1 replies
1d

can you give an example of an implementation of port knocking / ssh over https with client certs?

Joel_Mckay
0 replies
1d

In general, it is a standard Shorewall firewall rule in Perl, plus the standard ssh protocol wrapper mod.

These are very well documented tricks, and when combined with a standard port 22 and interleaved knock-port tripwire 5-day ban rules... are quite effective against scanners too.

I am currently on the clock, so can't write up a detailed tutorial right now.

Best regards, =)

nequo
0 replies
22h44m

This is great but would this neutralize the xz backdoor? The backdoor circumvents authentication, doesn't it?

herpderperator
0 replies
1d

What does this have to do with port knocking?

GabeIsko
2 replies
22h48m

I wouldn't really consider port knocking to be that effective of a way to block connections. It is really only obscure, but port knocking software is very accessible. So if you know a port needs to be knocked, it's not hard for an attacker to get to it.

The main benefit of port knocking is that it allows you to present your ports as normally closed. If you have a system and you are worried about it getting port scanned for whatever reason, it makes sense to have a scheme where the ports are closed unless it receives a knock. So if someone gets access they shouldn't for whatever reason and pulls a portscan off, the information they get about your system is somewhat limited.

In this scheme, it would be much better to have an access point you authenticate with and then that handles the port knocking for the other devices. So it is a kind of obscure method that is really only useful in a specific use case.

As far as SSH keys go, I would argue that SSH support is so ubiquitous, and SSH access is so powerful that it is a reasonable security tradeoff. I also don't think that SSH ports should be exposed to the internet, unless it is a specific use case where that is the whole point, like GitHub. I'm very down on connecting directly through SSH without network access. The xz situation has validated this opinion.

I personally don't know any application where you really need supreme admin access to every device, from any device, from anywhere in the world, while at the same time it has extreme security requirements. That's a pretty big task. At that point, constructing a dedicated, hardened access point that faces the internet and grants access to other devices is probably the way to go.

delusional
1 replies
22h30m

I'm very down on connecting directly through SSH without network access. The xz situation has validated this opinion.

Did it? At the point an attacker has remote code execution, couldn't they just as easily pivot into an outgoing connection to some command and control server? I don't see how some intermediary access point would have alleviated this problem. If the call is coming from inside the house, the jig is already up.

deepbreath
0 replies
22h22m

At the point an attacker has remote code execution

The attacker doesn't have remote code execution in the xz case unless they can speak to your port 22. Port knocking prevents them from doing so, provided they don't know how to knock.

nottorp
1 replies
1d

You just described the problem with using keys for any login, which is the latest fad?

And generally with depending on a device (phone, xxxkey or whatever) for access.

Dwedit
0 replies
23h59m

At least TOTP is just a long, random, server-assigned password, where you don't type the whole thing in during login attempts.

You can write down the full TOTP secret, and make new 'TOTP keys' whenever you want. Suggested apps: Firefox Extension named "TOTP", and Android App named "Secur". You don't need the heavier apps that want you to create accounts with their company.

Using a TOTP key only provides a little more protection than any other complex password, since you're not typing in the whole thing during a login attempt.

godman_8
1 replies
23h0m

My solution to this has been creating a public bastion server and using Wireguard. Wireguard listens on a random UDP port (port knocking is more difficult here). The client is set up to have a dynamic endpoint so I don't need to worry about whitelisting. The key and port information are stored in a password manager like Vaultwarden with the appropriate documentation to connect. Firewall rules are set to reject on all other ports and it doesn't respond to ICMP packets either. A lot of that is security through obscurity but I found this to be a good balance of security and practicality.
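
A minimal sketch of that kind of setup (interface name, addresses and port are made up; sshd would then listen only on the WireGuard address):

  umask 077
  wg genkey | tee server.key | wg pubkey > server.pub
  ip link add wg0 type wireguard
  ip addr add 10.10.0.1/24 dev wg0
  wg set wg0 listen-port 51923 private-key server.key
  wg set wg0 peer "$(cat client.pub)" allowed-ips 10.10.0.2/32
  ip link set wg0 up
  # sshd_config on the bastion: ListenAddress 10.10.0.1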

trelane
0 replies
22h20m

I've seen this discussed a fair bit, and always the recommendation is to use wire guard and expose ssh only to the "local network" e.g. https://bugs.gentoo.org/928134#c38

First, I don't see how this works where there's a single server (e.g. colocation).

Second, doesn't that just make Wireguard the new hack target? How does this actually mitigate the risk?

tamimio
0 replies
1d

The thing about port knocking is that if you're on a host where you don't have the ability to port-knock, then you're not able to connect.

Then you attach a device that can have port knocking to that unsupported host. Also, I remember it was called port punching not knocking.

dec0dedab0de
0 replies
1d

Absolutely, everything in security is a tradeoff. I guess the real point is that there should be layers, and even though you should never rely on security through obscurity, you should still probably have a bit of obscurity in the mix.

codedokode
0 replies
23h45m

You can as well find yourself on a host that doesn't have SSH or network that filters SSH traffic for security reasons.

teddyh
13 replies
23h2m

Port knocking is stupid. It violates Kerckhoffs’s principle¹. If you want more secret bits which users need to know in order to access your system, increase your password lengths, or cryptographic key sizes. If you want to keep log sizes manageable, adjust your logging levels.

The security of port knocking is also woefully inadequate:

• It adds very little security. How many bits are in a “secret knock”?

• The security it does add is bad: It’s sent in cleartext, and easily brute-forced.

• It complicates access, since it’s non-standard.

1. <https://en.wikipedia.org/w/index.php?title=Kerckhoffs%27s_pr...>

VWWHFSfQ
5 replies
22h56m

Port knocking is definitely dumb, but "increase your password lengths, or cryptographic key sizes" does nothing if your ssh binary is compromised and anyone can send a magic packet to let themselves in.

Strict firewalls, VPNs, and defense-in-depth is really the only answer here.

Of course, those things all go out the window too if your TCP stack itself is also compromised. Better to just air-gap.

teddyh
4 replies
22h46m

Many people argue that VPN+SSH is a reasonable solution, since it uses two separate implementations, which are both unlikely to be compromised at the same time. I would argue that the more reasonable option would be to split the SSH project in two, both parts of which validate the credentials of an incoming connection. This would be the same as VPN+SSH but would not convolute the network topology, and would eliminate the need for two keys to be used by every connecting user.

However, in this case, the two-layer approach would not be a protection. Sure, in this case the SSH daemon was compromised, and a VPN before SSH would have protected SSH. But what if the VPN server itself was compromised? Remember that the SSH server was altered, not to allow normal logins, but to call system() directly. What if a VPN server had been similarly altered? This would not have protected SSH, since a direct system() call by the VPN daemon would have ignored SSH completely.

It is a mistake to look at this case and assume that since SSH was compromised this time, SSH must always be protected by another layer. That other layer might be the next thing to be compromised.

PhilipRoman
2 replies
22h35m

This would not have protected SSH, since a direct system() call by the VPN daemon would have ignored SSH completely.

I don't know how VPNs are implemented on Linux, but in principle it should be possible to sandbox a VPN server to the point where it can only make connections but nothing else. If capabilities are not enough, ebpf should be able to contain it. I suspect it will have full control over networking but that's still very different from arbitrary code execution.

teddyh
1 replies
22h28m

That would be possible, yes, but it’s not the current situation. And since we can choose how we proceed, I would prefer my proposal, i.e. that the SSH daemon be split into two separate projects, one daemon handling the initial connection, locked-down like you describe, then handing off the authenticated connection to the second, “inner”, SSH daemon, which also does the authentication, using the same credentials as submitted by the connecting user. This way, the connecting user only has to have one key, and the network topology does not become unduly twisted.

PhilipRoman
0 replies
22h22m

Huh, I briefly looked at the documentation for the UsePrivilegeSeparation option and it looks very similar to what you're describing. It's interesting that it didn't prevent this attack.

mikeocool
0 replies
22h6m

Best practice would have your internet-exposed daemon (vpn or ssh) running on a fairly locked-down box that doesn't also have your valuable “stuff” (whatever that is) on it.

So if someone cracks that box, their access is limited, and they still need to make a lateral move to access actual data.

In the case of this backdoor, if you have SSH exposed to the internet on a locked down jump box, AND use it as your internal mechanism for accessing your valuable boxes, you are owned, since the attacker can access your jump box and then immediately use the same vulnerability to move to an internal box.

In the case of a hypothetical VPN daemon vulnerability, they can use that to crack your jump box, but then still need another vulnerability to move beyond that. Not great, but a lot better than being fully owned.

You could certainly also accomplish a similar topology with two different SSH implementations.

reaperman
3 replies
22h55m

Security-by-obscurity is dumb, yes. But in the context of supply-chain exploits in the theme of this xz backdoor, this statement is also myopic:

If you want more secret bits which users need to know in order to access your system, increase your password lengths, or cryptographic key sizes.

If your sshd (or any other exposed service) is backdoored, then the "effective bits" of any cryptographic key size is reduced to nil. You personally cannot know whether or not your exposed service is backdoored.

Bottom-line is: adding a defense-in-depth like port knocking is unlikely to cause harm unless you use it as justification for not following best-practices in the rest of your security posture.

belorn
2 replies
22h41m

Chaining multiple different login systems can make sense. A more sensible solution than port knocking would be an alternative sshd implementation with a tunnel to the second sshd implementation. Naturally the first one should not run as root (similar to the port knocking daemon).

That way it would not be in clear text, and the number of bits of security would be orders of magnitude larger even with a very simple password. The public-facing sshd can also run more lightweight algorithms and disable logging for lower resource usage.

Regardless of whether one uses two sshds or port knocking software, the public-facing daemon can have backdoors and security bugs. If we want to avoid xz-like problems then this first layer needs to be significantly hardened (with SELinux as one solution). Its only capability should be to open the second layer.

ufo
1 replies
16h46m

Chaining login methods would not help if the outermost login method is backdoored with an RCE.

belorn
0 replies
5h23m

That is where hardening with SELinux comes in. The outermost login method's only capability, beyond communication in the initial connection, should be to open a tunnel to the next level, so any remote code execution could only execute the code that opens the tunnel.

Building security in depth correctly is not simple. It takes work to construct layers so that one compromised layer does not cause whole-system failure.

kelnos
1 replies
22h56m

I don't think I agree. This backdoor means that it doesn't matter how long your key lengths or cryptographic key sizes are; that's kinda the point of a backdoor. But an automated attempt to find servers to exploit is not going to attempt something like port knocking, or likely even look for an sshd on a non-standard port.

Kerckhoff's law goes (from your link):

The principle holds that a cryptosystem should be secure, even if everything about the system, except the key, is public knowledge.

With this backdoor, that principle does not hold; it's utterly irrelevant. Obscuring the fact that there even is a ssh server to exploit increases safety.

(I dislike port knocking because of your third point, that it complicates access. But I don't think your assertions about its security principles hold water, at least not in this case.)

(Ultimately, though, instead of something like port knocking or even running sshd on a non-standard port, if I wanted to protect against attacks like these, I would just keep sshd on a private network only accessible via a VPN.)

teddyh
0 replies
22h37m

This backdoor means that it doesn't matter how long your key lengths or cryptographic key sizes are; that's kinda the point of a backdoor.

If we’re talking about this specific backdoor, consider this: If the attacker had successfully identified a target SSH server they could reasonably assume had the backdoor, would they be completely halted by a port knocker? No, they would brute-force it easily.

Port knocking is very bad security.

(It’s “Kerckhoffs’s <law/principle/etc.>”, by the way.)

e_y_
0 replies
22h22m

Security through obscurity is only a problem if obscurity is the main defense mechanism. It's perfectly fine as a defense-in-depth. This would only be an issue if someone did something stupid like set up a passwordless rlogin or database server expecting port knocking alone to handle security.

Also as pointed out elsewhere, modern port knocking uses Single Packet Authorization which allows for more bits. It's also simpler and uses a different mechanism than ssh (which due to its age, has historically supported a bunch of different login and cryptography techniques), which reduces the chance that an attacker would be able to break both.
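To make the "more bits" point concrete, here is a minimal sketch of the single-packet idea in Python. Everything here is made up for illustration (the shared secret, server address, and listener port are placeholders), and real tools such as fwknop additionally encrypt the payload and describe the access being requested:

    # Toy single-packet authorization client: one UDP datagram carrying a
    # timestamp plus an HMAC over it. A server-side listener recomputes the
    # HMAC with the same shared secret, checks the timestamp is recent to
    # limit replays, and only then opens the firewall for this source IP.
    import hashlib, hmac, socket, struct, time

    SECRET = b"example-shared-secret"   # assumption: provisioned out of band
    SERVER = ("203.0.113.10", 62201)    # assumption: address/port of the listener

    ts = struct.pack("!Q", int(time.time()))
    tag = hmac.new(SECRET, ts, hashlib.sha256).digest()

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(ts + tag, SERVER)

Unlike a fixed knock sequence, an eavesdropper who captures this packet typically cannot replay it outside the short timestamp window (real implementations also track recently seen digests), which is the property that distinguishes SPA from classic port knocking.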

TacticalCoder
13 replies
1d

Stuff like this is why I like port knocking, and limiting access to specific client IPs/networks when possible.

Indeed: I whitelist hosts/IP blocks allowed to SSH in. I don't use port-knocking but I never ever criticized those using port knocking.

I do really wonder if people are still going to say that port knocking is pointless and security theatre: we now have a clear example where people who were using port knocking were protected, while those who weren't were potentially wide open to the biggest backdoor discovered to date (even if it wasn't yet in all the major distros).

... does the entire internet really need to be able to SSH to your box?

Nope and I never ever understood the argument saying: "Port-knocking is security theatre, it doesn't bring any added security".

To me port-knocking didn't lower the security of a system.

It seems that we now have a clear proof that it's actually helping versus certain type of attacks (including source-code supply chain attacks).

bpfrh
7 replies
23h46m

It seems that we now have a clear proof that it's actually helping versus certain type of attacks (including source-code supply chain attacks).

So would have a vpn or using a bastion host with a custom non standard ssh implementation...

At some point you have to make the choice to not implement a security measure and I would argue that should stop at vpn+standard software for secure access.

If you are a bigger company, probably add SSO and network segmentation with bastion hosts and good logging.

Port knocking doesn't add any security benefit in the sense that there is a known, unavoidable security risk: you transmit your password (the knock sequence) in clear text over the network.

You also add another program with potential vulnerabilities, and as port knocking is not as popular as e.g. sshd, wireguard, maybe it gets less scrutiny and it leads to a supply chain attack?

Security measures are also not free in the sense that somebody has to distribute them and keep the configuration up to date; even if that person is you, that means syncing that connect-to-server script and keeping it in a secure location on your devices.

tredre3
3 replies
23h40m

    You also add another program with potential vulnerabilities, and as port knocking is not as popular as e.g. sshd, wireguard, maybe it gets less scrutiny and it leads to a supply chain attack?
That other program is just a stateful firewall, aka the Linux Kernel itself. If you can't trust your kernel then nothing you do matters.

bpfrh
2 replies
23h31m

That other program is knockd, which needs to listen to all traffic and look for the specified packets.

Granted, that program is really small and could be easily audited, but that same time could have been spent on trying AppArmor/SELinux + a good VPN and 2FA.

rhaps0dy
1 replies
22h43m

I much prefer the approach I read about in https://github.com/moxie0/knockknock (use a safe language, and trust basically only the program you write and the language) to a random port daemon written in C which pulls in libpcap to sniff everything.

To some extent knockknock also trusts the Python interpreter which is not ideal (but maybe OK)

Aloisius
0 replies
20h27m

In Linux, simple knocking (fixed sequences of ports) can be done entirely in the kernel with nftables rules. Probably could even have different knock ports based on day of the week or hour or source IP.

https://wiki.nftables.org/wiki-nftables/index.php/Port_knock...
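On the client side, a fixed-sequence knock is just a handful of connection attempts made in order; a throwaway Python sketch, with host and ports as placeholders:

    # Toy port-knock client: touch each knock port in sequence, after which
    # sshd should be reachable. The knock connections are expected to fail;
    # the firewall only needs to see the SYNs arrive in the right order.
    import socket, time

    HOST = "203.0.113.10"                 # placeholder server address
    KNOCK_SEQUENCE = [7000, 8000, 9000]   # placeholder knock ports

    for port in KNOCK_SEQUENCE:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            s.connect((HOST, port))
        except OSError:
            pass                          # timeouts/refusals are expected
        finally:
            s.close()
        time.sleep(0.2)                   # let the firewall register each hit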

marcus0x62
2 replies
22h41m

Port knocking doesn't add any security benefit in the sense that there is a known, unavoidable security risk: you transmit your password (the knock sequence) in clear text over the network.

This take is bordering on not even wrong territory. The point of port knocking isn't to increase entropy of your password or authentication keys per se, it is to control who can send packets to your SSH daemon, either to limit the noise in your logs or to mitigate an RCE in the SSH daemon. The vast majority of potential attackers in the real world are off-path and aren't going to be in a position to observe someone's port-knocking sequence.

Is VPN a better solution? Maybe, but VPNs, especially commercial VPNs have their own set of challenges with regard to auditability and attack surface.

bpfrh
1 replies
22h21m

This take is bordering on not even wrong territory. The point of port knocking isn't to increase entropy of your password or authentication keys per se, it is to control who can send packets to your SSH daemon, either to limit the noise in your logs or to mitigate an RCE in the SSH daemon. The vast majority of potential attackers in the real world are off-path and aren't going to be in a position to observe someone's port-knocking sequence.

If you read my full sentence in the context it stands, I argue that authorizing access to your openssh instance is done by sending an authentication code in cleartext.

It does not matter if that authentication code is in the form of bits, TCP packets, colors or horoscopes; as long as you transmit it in clear text, it is in fact not a secure mechanism.

Yeah, but now you basically have to always run a VPN that only exists between your server and your client, because otherwise your clear text authentication code is visible; at that point, just use WireGuard and make a 1-1 tunnel only for SSH, with no known attacks even if the attacker is on the same network.

Yes, a VPN that has no known reliable attack vector is definitely better than a protocol with a known working attack vector.

marcus0x62
0 replies
2h17m

If you read my full sentence in the context it stands, I argue that authorizing access to your openssh instance is done by sending an authentication code in cleartext.

I did read your full post. Your claim that port knocking adds no security benefit is just simply incorrect. Even if you take for granted the scenario of an attacker who can recover the port knock sequence, there is still the benefit of shielding the SSH daemon from other attackers who cannot do so. Which is…a “security benefit.”

Describing the port knocking sequence as an extension of someone’s authentication key is where this goes off into not even wrong territory. A dynamically controlled firewall rule (port knocking) is not fungible with bits from an authentication token for the reasons I’ve already outlined - the benefit is limiting network access to the daemon at all.

Yes, a VPN that has no known reliable attack vector is definitely better than a protocol with a known working attack vector.

You talk about recovering someone’s port knocking sequence as if that is trivial to do or in any way reliable. It is, in fact, neither of those things. An attacker would have to either:

1) sniff network traffic in front of the server

2) sniff network traffic in front of the client

3) compromise the server

4) compromise the client

5) brute force the port knocking sequence without getting locked out.

Most attackers are going to be in a position to try brute forcing — that’s it.

Meanwhile, you may not have noticed, but commercial VPNs have suffered a steady stream of high-impact CVEs for the last few years.

Wireguard is certainly better in this regard than any commercial VPN I know of, but it does have challenges with regard to key distribution/device enrollment, and thus from a practical standpoint limits access to pre-enrolled endpoints, which is a limitation someone using port knocking does not have.

AlexCoventry
3 replies
1d

say that port knocking is pointless and security theatre

Who was saying that?

AlexCoventry
0 replies
22h59m

Thanks.

rdtsc
0 replies
23h42m

I've heard it many times in the form of "security through obscurity, lol! you obviously don't know what you're doing".

Yeah, it's a "straw man" pretending the person they are addressing was just planning on running telnet with port knocking.

nick238
0 replies
22h44m

The advantage of port knocking to me is just reducing the amount of garbage script-kiddie scans. IMHO the design of `sshd` needs to just assume it will be slammed by garbage attempts and minimize the logging. I've heard of `fail2ban`, but banning does nothing as the bots have an unlimited number of IPs.

Terr_
2 replies
1d

My gut-feel is that it rides near the line between "defense in depth" versus "security through obscurity".

mrguyorama
0 replies
1d

The reality is that security through obscurity works really well as a layer in an otherwise already good security model. You make sure to test it with some red teaming, and if they fail to get through, you give them all the details about your tricks so they can also test the actual security.

The "obscurity" part mostly serves to limit the noise of drive by and bot'd attacks, such that each attack attempt that you end up stopping and catching is a more serious signal, and more likely to be directed. It's about short circuiting much of the "chaff" in the signal such that you are less warning fatigued and more likely to seriously respond to incidents.

The obscurity is not meant to prevent targeted attacks.

Attummm
0 replies
23h41m

While 'security through obscurity' shouldn't be your only defense, it still plays a crucial role within 'defense in depth' strategies.

In the past, sensitive networks relied heavily on obscurity alone. Just dialing the right phone number could grant access to highly sensitive networks. However, there's a cautionary tale of a security researcher who took the phrase 'don't do security through obscurity' to heart and dared hackers, leading to disastrous consequences.

Obscurity, when used appropriately, complements broader security measures. Take SSH, for example. Most bots target its default port. Simply changing that port removes the easy targets, forcing attackers to use more sophisticated methods, which in turn, leaves your logs with more concerning activity.

zoeysmithe
1 replies
1d

Port knocking is a bit like a lazy person's VPN. You might as well get off your butt and install a VPN solution and use SSH via the VPN. The time and effort is almost the same nowadays anyway. The chances of both the VPN and SSH being exploited like this at the same time must be close to zero.

Worse, most corporate, public wifi, etc. networks block all sorts of ports. So at home, sure, you can open random ports, but nearly everywhere else it's just 80 and 443. Now you can't knock. But your HTTPS VPN works fine.

Also, a lot of scary stuff here about identity and code check-ins. If someone is a contributor, how do we know their creds haven't been stolen, or that they haven't been forced via blackmail or whatever to do this? Or how many contributors are actually intelligence agents? Then who is validating their code? This person's code went through just fine, and this was only caught because someone noticed a lag in logins, by which point it was already a running binary.

FOSS works on the concept of x amount of developer trust, both in code and identity. You can't verify everyone all the time (creds and certs get stolen, blackmail, etc.), nor can you audit every line of code all the time, especially if the exploit is submitted piecemeal over months or years. That trust is now being exploited, it seems. Scary times. I wonder if how FOSS works will change after this. I assume the radio silence from Theo and Linus and others means there's a lot of brainstorming to get to the root of this problem. Addressing the symptom of this one attack probably won't be enough. I imagine some very powerful people want some clarity and fixes here, and this is probably going to be a big deal.

I wouldn't be surprised if a big identity trust initiative comes out of this and some AI stuff to go over an entire submitter's history to spot any potentially malicious pattern like this that's hard for human beings to detect.

codedokode
0 replies
23h39m

nor can you audit every line of code all the time

You can if you distribute this job among volunteers or hire people to do it. There are millions of developers around the world capable of doing this. But the reality is that nobody wants to contribute time or pay for free software.

noman-land
1 replies
1d

Got any advice to easily set up port knocking?

belorn
1 replies
23h9m

Rather than port knocking, I prefer IP knocking. The server has several IP addresses and once a correct sequence of connections is made, the SSH port opens. Since so few know about IP knocking, it's much safer than port knocking.

/s

drpixie
0 replies
18h44m

Sounds like (another) good reason for IPv6 - your box can have many, very obscure addresses :)

nijave
0 replies
1d

Jump host running a different SSH server implementation or SSH over VPN seems a little more reliable.

There's a lot of solutions now where the host has an agent that reaches out instead of allowing incoming connections which can be useful (assuming you trust that proxy service/software).

One place I worked, we ran our jumphost on GCP with Identity Aware Proxy and on AWS with SSM sessions so had to authenticate to the cloud provider API and the hosts weren't directly listening for connections from the internet. Similar setup to ZeroTier/TailScale+SSH

k8svet
0 replies
1d

Over the weekend, I added this to a common bit included in all my NixOS systems:

    -networking.firewall.allowedTCPPorts = [ 22 ];
    +networking.firewall.interfaces."tailscale0".allowedTCPPorts = [ 22 ];
I probably should have done this ages ago.

Dwedit
0 replies
1d

Going in through the main Internet connection might not be the only way in. Someone surfing on Wifi who visits the wrong website can also become a second way into your internal network.

declan_roberts
32 replies
1d

One thing I notice about state-level espionage and backdoors. The USA seems to have an affinity for hardware interdiction as opposed to software backdoors. Hardware backdoors make sense since much of it passes through the USA.

Other countries such as Israel are playing the long-con with very well engineered, multi-year software backdoors. A much harder game to play.

bawolff
10 replies
21h58m

Other countries such as Israel are playing the long-con with very well engineered, multi-year software backdoors

What is this in reference to?

Voultapher
5 replies
21h28m

NSO Group

They are an Israel-based company that sells zero-click RCEs for phones and more. Such malicious software was reportedly used to surveil associates of the journalist Jamal Khashoggi around the time of his murder.

Their exploits, developed in-house as well as presumably partially bought on the black market, are some of the most sophisticated exploits found in the wild, e.g. https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...

JBIG2 doesn't have scripting capabilities, but when combined with a vulnerability, it does have the ability to emulate circuits of arbitrary logic gates operating on arbitrary memory. So why not just use that to build your own computer architecture and script that!? That's exactly what this exploit does. Using over 70,000 segment commands defining logical bit operations, they define a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations. It's not as fast as Javascript, but it's fundamentally computationally equivalent.

bawolff
4 replies
20h31m

that sells zero-click RCEs

Exactly my point. They do not sell backdoors.

Don't get me wrong, still icky, but definitely not "very well engineered, multi-year software backdoors".

xvector
1 replies
20h28m

They implant backdoors and sell zero-click RCEs that exploit those backdoors.

bawolff
0 replies
17h38m

That's a pretty strong claim. Do you have any evidence they have done this?

And i mean, i'm genuinely curious. I don't think they would be above doing so if they had the opportunity, i just haven't heard of any instances where they were caught doing so.

db48x
1 replies
14h13m

I’m not sure why you’re making that point; the exploit we are talking about _is_ an RCE.

bawolff
0 replies
13h49m

The person I was replying to claimed that countries like the USA rely on hardware modification while countries like Israel rely on backdoors. This does not seem to be the case. Israel doesn't make backdoors (or at least hasn't been caught making them), while the USA actually has (e.g. Dual_EC).

RCE is a broad category and includes many things. That said, while technically the liblzma thing is an RCE, supply chain attacks aren't really what most people mean when they talk about an RCE vulnerability.

ginko
1 replies
21h39m

Probably Stuxnet.

bawolff
0 replies
20h31m

Stuxnet was not a backdoor.

markus92
0 replies
21h40m

Stuxnet?

hammock
0 replies
21h37m

Robert Maxwell (Ghislaine's father) sold backdoored software to corporations and governments all around the world, including US targets, on behalf of Israel's Mossad.

"The Maxwell-Mossad team steals spy software PROMIS from the United States, Mossad puts an undetectable trap door in it so Mossad can track the activities of anyone using it, then Maxwell sells it around the world (including back to the U.S. -- with the trap door)."

https://kclibrary.bibliocommons.com/v2/record/S120C257904

croemer
7 replies
19h15m

Jia Tan's GitHub activity was mostly 10-18@UTC, consistent with Europe/Israel/Russia

klabb3
2 replies
13h35m

Thanks, I was wondering about this since day 0 and was too lazy to look it up. Yes, it can be spoofed, but I imagine a good chunk of day-to-day work is semi-interactive, which would make it preferable to have the attacker be in the same tz as the victims. Anyone know what tz Lasse is in? If not the same (e.g. he's in the US), then I'd say Occam's razor says the attacker is working those UTC 10-18 office hours without extra steps. Tz proves nothing, and for a 3y low-intensity operation I'd just assume the attacker won't introduce that much friction only to mislead. I'm sure there are much stronger signals in the investigation work that's going on now. Unfortunately, given the hush-hush-by-default nature of our beloved intel agencies, we'll probably never know.

croemer
0 replies
5h10m

I don't think GitHub activity logs can be spoofed - of course activity can consciously be done in a certain time zone, but that's different from spoofing timestamps in git commits. See https://news.ycombinator.com/edit?id=39905376 for the full histogram; it shows a rather narrow time distribution between 12-16 UTC - not really natural at all if you ask me.
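For anyone who wants to reproduce this kind of analysis on a local clone, a rough Python sketch follows; it trusts the committer timestamps, which, as discussed above, the commit author fully controls:

    # Hour-of-day (UTC) histogram of commits by one author.
    # Run inside a clone of the repository being analyzed.
    import collections, subprocess
    from datetime import datetime, timezone

    stamps = subprocess.run(
        ["git", "log", "--author=Jia Tan", "--pretty=%cI"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    hours = [datetime.fromisoformat(ts).astimezone(timezone.utc).hour for ts in stamps]
    hist = collections.Counter(hours)
    for hour in sorted(hist):
        print(f"{hour:02d}:00 UTC  {'#' * hist[hour]}")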

jeroenhd
2 replies
18h7m

It's rather trivial to fake git commits.

Basing this stuff on email times (especially replies to emails sent that same day) would be more relevant. However, if this operation was pulled off with the precision and opsec that it seems to have been, I wouldn't be too surprised if whatever group is behind this attack had sent over and funded a developer somewhere in a time zone of their choice.

No doubt any nation state is able to station someone in Russia/Israel/one of the n-eyes countries.

I doubt OSINT will figure out who is really behind this, but I'm sure governments will, soon enough. Whether or not they report the truth of their findings, and if one should trust the official reporting, is yet another tough question.

db48x
0 replies
14h14m

The git timestamps in the commits probably were faked, but they implicate China. There were a handful of apparent slip-ups though, and if we take those timestamps to be correct then it looks like they were in eastern Europe (or possibly Finland, the Baltics, or the Mediterranean coast all the way down to Egypt). And if we instead assume that these timestamps are a second-level misdirection then there is a huge complexity penalty; my money is on the simpler answer (but not all my money; it's not a sure bet).

Culonavirus
0 replies
17h46m

I like to think that the performance issue in the exploit was actually an exploit of an exploit, a counterintelligence act, by some "good samaritan" :D

wolverine876
4 replies
1d

The USA seems to have an affinity for hardware interdiction as opposed to software backdoors.

What are some examples?

acid__
1 replies
19h32m

Be sure to check out the mentioned catalog. [1]

The NSA's capabilities back in 2008 were pretty astonishing: "RAGEMASTER", a $30 device that taps a VGA cable and transmits the contents of your screen to the NSA van sitting outside! Crazy stuff. Makes you wonder what they've built in the last 15 years.

[1] https://en.wikipedia.org/wiki/ANT_catalog

masklinn
0 replies
13h18m

Today you've got openly available USB cables which can stream your shit out over WiFi; no doubt the NSA has worse.

gowld
4 replies
22h16m

This goes back to WWII. USA solves problems with manufacturing and money. Europeans relatively lack both, so they solve problems with their brains.

hybridtupel
3 replies
22h5m

Israel is not in Europe

Alezz
2 replies
21h15m

The perfect definition of Europe is if they took part in the Eurovision Song Contest.

nmat
1 replies
20h55m

Australia participates in the Eurovision Song Contest.

willsmith72
0 replies
18h51m

and now i get to identify as european in heated internet debates

fpgaminer
1 replies
21h15m

My completely unexpert opinion, informed by listening to all the episodes of Darknet Diaries, agrees with this. US intelligence likes to just bully/bribe/blackmail the supply chain. They've got crypto chops, but I don't recall any terribly sophisticated implants like this one (except Stuxnet, which was likely Israel's work funded/assisted by the US). NK isn't terribly sophisticated, and their goal is money, so that doesn't match either. Russia is all over the place in terms of targets/sophistication/etc because of their laws (AFAIK it's legal for any citizen to wage cyberwarfare on anything and everything except domestically), but this feels a bit beyond anything I recall them accomplishing at a state level. Israeli organizations have a long history of highly sophisticated cyberwarfare (Stuxnet, NSO group, etc), and they're good about protecting their access to exploits. That seems to fit the best. That said, saying "Israeli organization" casts a wide net since it's such a boiling hotspot for cybersecurity professionals. Could be the work of the government, could be co-sponsored by the US, or could just be a group of smart people building another NSO group.

greggsy
0 replies
21h20m

I mean, they’re just the high profile ones. China makes and ships a lot of hardware, and the US makes and ships a lot of software.

gghffguhvc
29 replies
1d

Is there anything actually illegal here? Like is it a plausible “business” model for talented and morally compromised developers to do this and then sell the private key to state actors without actually breaking in themselves or allowing anyone else to break in.

Edit: MIT license provides a pretty broad disclaimer to say it isn’t fit for any purpose implied or otherwise.

kstrauser
12 replies
1d

Yes. This would surely be prosecutable under the CFAA.

Honestly, if I were involved in this, I'd hope that it was, say, the FBI that caught me. I think that'd be the best chance of staying out of the Guantanamo Bay Hilton, laws be damned.

gghffguhvc
8 replies
1d

Even with the MIT disclaimer, and the author not being the distributor or having any relationship with the distributor? Publishing vulnerable open source software to GitHub with a disclaimer that says it isn't fit for any purpose seems to me more like an oversight in how distros use MIT-licensed software.

bawolff
5 replies
21h46m

That is not how disclaimers work. You cannot disclaim liability for intentionally harming someone.

You also cannot avoid criminal charges for a crime simply by shouting "don't blame me"

kstrauser
2 replies
21h0m

That's exactly right. Imagine a license that said "...and I can come to your house and kill you if I want to." Even if someone signed it in ink and mailed a copy back, the licensor still can't go to their house and kill them even though the agreement says they can.

I can imagine the case of maybe a "King of the Hill"-type game played on bare hardware, where you're actively trying to hack into and destroy other players' systems. Such a thing might have a license saying "you agree we may wipe your drive after downloading all your data", and that might be acceptable in that specific situation. You knew you were signing up for a risky endeavor that might harm your system. If/when it happens, you'd have a hard time complaining about it doing the thing it advertised that it would do.

Maybe. Get a jury involved and who knows?

But somewhere between those 2 examples is the xz case. There's no way a user of xz could think that it was designed to hack their system, and no amount of licensing can just wave that away.

For a real world analogy, if you go skydiving, and you sign an injury waiver, and you get hurt out of pure dumb luck and not negligence, good luck suing anyone for that. You jumped out of a plane. What did you think might happen? But if you walk into a McDonald's and fall through the floor into a basement and break your leg, no number of "not responsible for accidents" signs on the walls would keep them from being liable.

bawolff
1 replies
20h33m

For a real world analogy, if you go skydiving, and you sign an injury against waiver, and you get hurt out of pure dumb luck and not negligence, good luck suing anyone for that. You jumped out of a plane. What did you think might happen? But if you walk into a McDonald's and fall through the floor into a basement and break your leg, no number of "not responsible for accidents" signs on the walls would keep them from being liable.

Even this is a bad example, since it is just gross negligence and not intentional. A better analogy would be if McDonald's shoots you.

kstrauser
0 replies
20h1m

I used to go to the In-N-Out in Oakland that just closed. That was a possibility, believe me.

seattle_spring
0 replies
13h22m

There's a great South Park episode about this, titled "Human CentiPad." Not for the squeamish.

gghffguhvc
0 replies
11h52m

I did set up the question in a way that the developer doesn't harm anyone themselves, but sells it to a state actor. I.e., an extremely similar outcome to finding a zero day and selling it to a state actor, except it is "more" secure - you need the private key.

The point about MIT is that they are saying to the world when publishing: "as is", folks. Not claiming I haven't backdoored it for Uncle Sam; in fact I'm not claiming anything, use at your own risk.

It used to effectively be the law to do this implicitly, via weak encryption for exports.

gowld
0 replies
22h11m

A software license has never been a protection against malicious criminal activity. They'd have to prove that the "feature" had a legitimate non-nefarious purpose, or was accidental, neither of which apply here.

gghffguhvc
0 replies
23h56m

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

shp0ngle
2 replies
21h36m

CIA didn't put anyone new into gitmo for years.

The 30 remaining gitmo prisoners are all W Bush holdovers that all subsequent administrations forgot.

kstrauser
0 replies
21h19m

Conspiracy theorist: That's what they want you to believe.

And in fairness, the whole nature of their secrecy means there's no way to know for sure. It might be just a boogeyman kept around as a useful tool for scaring people into not breaking national security-level laws. I mean, it's not as though I want to go around hacking the planet, but the idea of ending up at a CIA "black site", assuming such things even exist, would be enough to keep me from trying it.

aftbit
0 replies
20h53m

Sure but that's because the world knows about Gitmo now. What about the other quieter black sites?

vikramkr
4 replies
16h26m

Is there some weird loophole or something I'm missing here? Otherwise, I'm not exactly sure how hacking wouldn't be illegal.

greyface-
3 replies
16h14m

Hacking statutes cover unauthorized access, and there's no evidence that unauthorized access has occurred, unless we see someone actually making use of the backdoor in the wild. Accessing a system without authorization is a crime, but distributing code that contains a backdoor is not a crime. The attacker was authorized to publish changes to xz.

vikramkr
2 replies
11h28m

Do you have a source for the claim that this isn't a crime because it didn't succeed? As a sanity check, checking the plain text of the CFAA, it covers attempted hacking as you'd expect. So distributing a system with a backdoor, introducing a backdoor, etc. all seem quite comfortably covered by existing statutes. Which makes sense, since, like, I don't want to say 'duh' because I think that probably wouldn't meet HN guidelines for discourse, but I'm not sure what else to say. Even American politicians, as notoriously incompetent as our leaders can be sometimes, wouldn't miss a crime that obvious. https://www.law.cornell.edu/uscode/text/18/1030

greyface-
1 replies
10h23m

I'm not claiming the attempt didn't succeed. I'm claiming that the attempt didn't occur, and that distributing the backdoor more widely would only have created the preconditions for an attempt. We don't know who the intended target was, or what the intended payload was.

Yes, subsection (b) of the CFAA covers attempts at acts described in subsection (a) of the CFAA. Which specific act under subsection (a) do you claim has been attempted?

vikramkr
0 replies
4h3m

All of them? It's a backdoor into Linux to gain unauthorized access to computers, one that even specifically only works with that attacker's private key. We have trial by jury, where I guess you could argue that this carefully crafted backdoor was just some sort of weird accident, but we also have prosecutors to make the obvious counterargument and investigate what these folks were going for. Though frankly they're probably state actors that we're never going to catch. But literally all of part (a). I still don't understand if I'm actually just missing something obvious about our criminal justice system that would mean the US has no ability to prosecute even moderately complex crimes by slightly sophisticated actors that didn't reach fruition, since by your reasoning we'd also never be able to prosecute essentially any organized crime anywhere as long as they keep their targets a secret. If the attempt didn't occur, then did someone just trip and fall on their keyboard over a period of months, accidentally carrying out sophisticated social engineering and writing a carefully hidden backdoor into a package targeted at hijacking widely used operating systems?

pama
2 replies
20h21m

You are talking about the greatest exposed hack of the computer supply chain so far by a big margin. Laws can be made retroactively for this type of thing. It has implications that are beyond the legal system, as the threat level is way beyond what is typically required as a sniff test for justifying military actions. This was not an RCE based on identifying negligent code; this was a carefully designed trap that could reverse the power dynamics during military conflict.

KeplerBoy
0 replies
18h45m

Meh, these are potentially the kind of crimes where laws don't apply.

db48x
2 replies
14h28m

I’m no lawyer, but it is at minimum tortuous interference.

denton-scratch
1 replies
7h51m

Indeed. I think a lawyer would have written "tortious", there being no pain involved.

db48x
0 replies
27m

Only the pain of spelling.

jcranmer
1 replies
21h44m

There are things you can't contractually waive away, especially in form contracts that the other side has no ability to negotiate (which is what software licenses amount to).

One of those things is going to be fraud: if the maintainer is intentionally installing backdoors into their software and not telling the user, there's going to be some fraud-like statute that they'll be liable for.

dannyw
0 replies
20h23m

That said, if you’re doing this for your jurisdiction’s security agency, you’ll certainly be protected.

vikramkr
0 replies
11h20m

As a sanity check, attempted crime is still crime, so not a lawyer but part B isn't exactly ambiguous here: https://www.law.cornell.edu/uscode/text/18/1030

In terms of a plausible business model - I mean that's conspiracy to commit crime, like of course that's illegal? And depending on the state actor and the target system - literally treason if it's a cyber attack as a part of a war? You can put whatever you want into a license or contract but obviously that doesn't let you override the actual law

alfanick
0 replies
1d

Legality depends on jurisdiction, which may or may not depend on:

* where were the authors of the code,
* where the code is stored,
* who is attacked,
* where are their servers,
* who is attacking,
* where are they based,
* where did they attack from,
* ...

IANAL but it seems very complicated from law perspective (we, humanity, don't have a global law)

Edit2: making a bullet point list is hard

jobs_throwaway
27 replies
1d1h

Imagine how frustrating it has to be for the attacker to meticulously plan and execute this and get foiled so late in the game, and so publicly

Alifatisk
16 replies
1d1h

Must be punching the air right now

TechDebtDevin
9 replies
1d1h

Likely a team of people at a three letter agency.

TheBlight
7 replies
1d

Everyone keeps saying this but it seems unlikely to me that they'd do this for a relatively short window of opportunity and leave their methods for all to see.

avidiax
4 replies
1d

You are judging this by the outcome, as though it were pre-ordained, and also assuming that this is the only method this agency has.

It is much more likely that this backdoor would have gone unnoticed for months or years. The access this backdoor provides would be used only once per system, to install other APTs (advanced persistent threats), probably layers of them. Use a typical software RAT or rootkit as the first layer. If that is discovered, fall back to the private keys you stole, or to social engineering with the company directory you copied. If that fails, rely on the firmware rootkit that only runs if its timer hasn't been reset in 6 months. Failing that, re-use this backdoor if it's still available.

TheBlight
3 replies
22h46m

It was found in a few weeks so why is it more likely it wouldn't have been noticed for months/years with more people running the backdoored version of the code?

ufo
2 replies
22h9m

We were lucky that the backdoor called attention to itself, because it impacted the performance of ssh and introduced valgrind warnings.

TheBlight
1 replies
19h44m

Doesn't that further suggest non-state actor(s)?

ufo
0 replies
16h51m

I've heard that it was only detected because the developer that found it was using different compiler flags than the default. Under default settings, the backdoor was stealthier.

AtNightWeCode
1 replies
23h47m

My guess is that a ransomware group is behind this. Even if the backdoor had gone into production servers it would have been found fairly quickly if used at some scale.

TheBlight
0 replies
22h48m

My guess is that a ransomware group is behind this.

My bet would be that they were after a crypto exchange(s) where they've already compromised some level of access and want to get deeper into the backend.

Even if the backdoor had gone into production servers it would have been found fairly quickly if used at some scale.

I agree. Yes it's possible the backdoor could've gone unnoticed for months/years but I think the perp would've had to assume not.

morkalork
0 replies
1d

I wonder if they have OKRs too.

cellis
4 replies
1d1h

Or...falling back on less noticed contingency plans...

Ekaros
3 replies
1d

My pet theory is that this was just one project they have been running for years. They are likely doing many more at the same time, slowly inserting parts into various projects and getting their contributors inside the projects.

SAI_Peregrinus
0 replies
20h48m

If it's an intelligence agency exploit, this is nearly certain. Getting agents hired as employees of foreign companies to provide intelligence is an ancient practice. Getting agents to be open source maintainers is a continuation of the same thing.

0cf8612b2e1e
0 replies
1d

That seems like a safe bet. If you are planning a multi year operation, it would be silly to do it all under a single account. Best to minimize the blast radius if any single exploit gets discovered.

noobermin
0 replies
1d1h

Or they must be strapped into a chair having their teeth pulled out by whoever directed them.

pixl97
8 replies
1d1h

So who's started scanning open source lib test cases for more stuff like this?

slaymaker1907
6 replies
21h52m

I think it would help to try and secure signing infrastructure as much as possible. First of all, try to have 3 devs for any given project (this is the most difficult step). Make sure logging into or changing any of the signing stuff requires at least 2 of the devs, one to initiate the JIT (just-in-time) access and one to monitor.

Additionally, take supply chain steps like requiring independent approval for PRs and requiring the signing process to only work with automated builds. Don't allow for signing bits that were built on a dev machine.

Finally, I think it would help to start implementing signing executables and scripts which get checked at runtime. Windows obviously does executable signing, but it largely doesn't sign stuff like Python scripts. JS in the browser is kind of signed given that the whole site is signed via https. It's not perfect, but it would help in preventing one program from modifying another for very sensitive contexts like web servers.
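As a rough sketch of what "signed scripts checked at runtime" could look like, here is the verification step in Python using an Ed25519 signature. The key pair is generated inline only so the sketch runs end to end; in practice the public key would be pinned ahead of time and the signing would happen at release time:

    # Verify a detached Ed25519 signature over a script's bytes before
    # agreeing to execute it. Requires the 'cryptography' package.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    script = b"print('hello from a signed script')"  # stand-in for a real file's contents

    signing_key = Ed25519PrivateKey.generate()       # in reality: done once, offline
    signature = signing_key.sign(script)             # shipped alongside the script
    pinned_public_key = signing_key.public_key()     # in reality: distributed with policy

    try:
        pinned_public_key.verify(signature, script)  # raises InvalidSignature on mismatch
    except InvalidSignature:
        raise SystemExit("refusing to run: bad signature")
    print("signature ok, safe to execute")

The hard parts this glosses over are exactly the ones raised above: who holds the signing key, how it is protected, and how the pinned public key is distributed and updated.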

bawolff
4 replies
21h48m

I do not think any of that would have prevented this situation.

slaymaker1907
3 replies
20h30m

Upon further reading, I think you might be correct. I initially thought a good signing process would be sufficient since it sounded like this malicious blob was secretly being included in the tarball by the build server, but it instead seems to be the case that the malicious binary was included in the repo as a test file.

You could probably still protect against this sort of attack using signing, but it would be much more laborious and annoying to get working. The idea is that you would somehow declare that OpenSSH binaries must be signed by a *particular* key/authority, that VS Code is signed by Microsoft, Chrome signed by Google, etc. Additionally, the config declaring all of this obviously needs to be secured so you'd need to lock down access to those files (changing key associations would need to require more permissions than just updating old software or installing new software to be useful).

Aloisius
2 replies
19h45m

It was both. The binary object file was obfuscated in two "test" xz files in the repo itself. The code to extract the object file and inject it into the build process was only in the GitHub release tarball.

The code in the tarball could have been prevented if only automated tarballs were permitted (for instance, GitHub's branch/tag source tarballs) or caught after the fact by verifying the file hashes in the tarball against those in the repo.
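A rough sketch of that after-the-fact check in Python, comparing every file in a downloaded release tarball against a git checkout of the corresponding tag (the paths below are placeholders). Autotools release tarballs legitimately contain generated files that are not in git, so the output needs human review rather than a hard fail, but a modified build-to-host.m4 is exactly the kind of difference it would surface:

    # Compare a release tarball's contents against a checked-out git tag.
    # Files that differ, or that exist only in the tarball, deserve scrutiny.
    import hashlib, os, tarfile

    TARBALL = "xz-5.6.1.tar.gz"   # placeholder: the downloaded release tarball
    CHECKOUT = "xz-git"           # placeholder: a clone checked out at the v5.6.1 tag

    def sha256(data):
        return hashlib.sha256(data).hexdigest()

    with tarfile.open(TARBALL) as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue
            rel = member.name.split("/", 1)[1]   # strip the leading xz-5.6.1/ prefix
            repo_path = os.path.join(CHECKOUT, rel)
            if not os.path.exists(repo_path):
                print("only in tarball:", rel)
            elif sha256(open(repo_path, "rb").read()) != sha256(tar.extractfile(member).read()):
                print("differs:        ", rel)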

bawolff
1 replies
17h40m

The code to extract the object file and inject it into the build process was only in the GitHub release tarball.

I thought it was in both, however the backdoor wasn't inserted with the default build time options, so it ended up in the tarball, but not if you just did make unless you set things up the same as the build server.

Aloisius
0 replies
16h33m

The release tarball contained code that can't be generated from the repo, no matter what settings you use.

JiaT75 created the release on GitHub which involves creating a description and uploading tarballs. Their PGP key has been used to sign them since May 2023.

1. The repo contained two test xz files which contained the obfuscated exploit .o file

2. JiaT75 created the release on GitHub which allows you to upload your own tarball for the release.

3. The release tarball contained a modified build-to-host.m4 that extracted the exploit .o so it would be linked into the library if building on a Linux x86-64 machine.

4. Fedora/Debian/whoever pulled the release source tarball from GitHub and ran their builds for x86-64 Linux which caused the exploit .o file to be linked into the binary they then packaged up for release

j-krieger
0 replies
20h2m

All this riffraff and OSS maintainers basically work for free.

beefnugs
0 replies
22h29m

Like everything else in this world, it's utterly and completely, unfixably broken: there is no way you can manage huge, complex dependencies properly.

The best, most paranoid thing is to have multiple networking layers, hoping an attacker has to exploit multiple things at once, and to whitelist networking for only exactly what you are expecting to happen. (Which is completely incompatible with the idea of SSL; we would need a new type of firewall that sits between ALL applications and encryption, like a firewall between applications and the crypto library itself, which breaks a bunch of other things people want to do.)

zoeysmithe
0 replies
23h56m

This actor, and others like them, may have dozens or hundreds of these things out there. We don't know. This was only found accidentally, not via code review.

bilekas
18 replies
13h23m

This whole thing has been consuming me over the whole weekend. The mechanisms are interesting and a collection of great obfuscations, the social engineering is a story that’s shamefully all too familiar for open source maintainers.

I find it most interesting how they chose their attack vector of using BAD test data; it makes the rest of the steps incredibly easier when you take a good archive, manipulate it in a structured way (this should show up on a graph of the binary pattern, by the way, for future reference), then use it as a fuzzing bad-data test. It's great.

The rest of the techniques are banal enough, except the most brilliant move seems to be that they could add "patches" or even whole new backdoors using the same pattern on a different test file, without being noticed.

Really really interesting, GitHub shouldn’t have hidden and removed the repo though. It’s not helpful at all to work through this whole drama.

Edit: I don’t mean to say this is banal in any way, but once the payload was decided and achieved through a super clever idea, the rest was just great obfuscation.

throw156754228
6 replies
9h1m

They removed the repo so only the attackers had access to the code and know-how.

andybak
5 replies
8h26m

No. They obviously didn't do that so you're just being sarcastic but not actually making any point of your own in addition to that.

throw156754228
4 replies
6h22m

Intended levity, gee you sound miserable. Try some exercise in the mornings?

andybak
3 replies
5h6m

Read the room... This ain't Reddit.

throw156754228
2 replies
4h6m

Well this very reply you made is very much like something I would see on Reddit.

byteknight
1 replies
3h42m

The door is that way, kind sir.

throw156754228
0 replies
2h57m

Looking at your comment history, as dumb as the other guy.

mondrian
6 replies
13h16m

A main culprit seems to be the addition of binary files to the repo, to be used as test inputs. Especially if these files are “binary garbage” to prove a test fails. Seems like an obvious place to hide malicious stuff.

bilekas
5 replies
13h8m

It is an obvious place for sure, but it also would have been picked up if the builds were a bit more transparent. That batch build script should have been questioned before approval.

j16sdiz
4 replies
11h19m

By whom? The attacker has assumed the maintainer role. Nobody is reviewing.

NarcissistDev
3 replies
5h41m

There’s a severe and dangerous lack of paranoia in this dev space.

If I was letting someone maintain my codebase, I would 1 Billion percent be reviewing everything… if there was a binary added, I’d be building it myself and comparing the checksums.

Trust absolutely no-one. If you can give a close friend or a loved one a loan of money and it is so easy for them to never pay you back, it should be a reminder that devs you’ve never met are even more likely to scam you.

waihtis
2 replies
4h12m

Pretty unhealthy attitude to live by, tbh. Almost better to be burned by a malicious payload once every 10 years than to live in perpetual fear of being constantly scammed.

daymanstep
1 replies
3h29m

I think that depends on the context. If you're maintaining one of the most widely used packages that is directly linked to by libsystemd and is included by pretty much every Linux distro as part of the base system? Then maybe some measure of paranoia is justified.

I think the OpenBSD developers are right to be as paranoid as they are. Anyone who is maintaining a security critical system should be on guard against these kinds of attacks.

waihtis
0 replies
3h22m

Oh absolutely true for things like OpenBSD and such

withinboredom
3 replies
11h59m

It’s got me suspicious of build-time dependency we have in an open source tool, where the dependency goes out of its way to prefer xz and we even discovered that it installs xz on the host machine if it isn’t already installed — as a convenience. Kinda weird because it didn’t do that for any other dependencies.

These long-games are kinda scary and until whatever “evil” is actually done you have no idea what is actually malicious or just weird.

ToneWashed
2 replies
10h51m

It’s got me suspicious of build-time dependency we have in an open source tool, where the dependency goes out of its way to prefer xz and we even discovered that it installs xz on the host machine if it isn’t already installed — as a convenience. Kinda weird because it didn’t do that for any other dependencies.

Have you considered reaching out to the maintainers of that project and (politely) asking them to explain? In light of recent events I don't think anyone would blame you; in fact, you might even suggest they explain such an oddly specific side effect in a README or such.

withinboredom
1 replies
10h46m

Have you considered reaching out to the maintainers of that project and (politely) asking them to explain?

That's kind of a catch-22, right? They'd explain with a seemingly good answer if they did it for actual reasons. They'll still explain with a seemingly good answer if they did it for nefarious reasons.

I don't have a good answer to this except to monitor this dependency and its changes.

wholinator2
0 replies
7h56m

Is it possible that it was added by a bad actor, and that most of the individuals involved will not have signed off on it? I mean, this whole thing started with an interested party digging deeper than anyone else had; you could trigger that for someone else. At the very least, what do you have to lose? If the best they can do is show that they're fine, then every other possible outcome is rooting out a frighteningly well-hidden infiltrator.

miduil
13 replies
1d2h

Super impressed how quickly the community, and in particular amlweems, were able to implement and document a POC. If the cryptographic or payload-loading functionality has no further vulnerabilities, this would at least not have introduced a security flaw exploitable by all the other attackers until the key is broken or something.

Edit: I think what's next for anyone is to figure out a way to probe for vulnerable deployments (which seems non-trivial) and also perhaps upstreaming a way to monitor whether someone actively probes ssh servers with the hardcoded key.

Kudos!

rst
9 replies
1d2h

Well, it's a POC against a re-keyed version of the exploit; a POC against the original version would require the attacker's private key, which is undisclosed.

miduil
6 replies
1d2h

It's a POC nevertheless, it's a complete implementation of the RCE minus obviously the private key.

nindalf
5 replies
1d1h

It doesn't matter. The people with the private key already knew all of this because they implemented it. The script kiddies without the private key can't do anything without it. A POC doesn't help them in any way.

A way to check if servers are vulnerable is probably by querying the package manager for the installed version of xz. Not very sophisticated, but it'll work.
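A quick local check along those lines, assuming a dpkg- or rpm-based system and the two release versions known to have shipped the backdoor (package names vary slightly between distributions):

    # Report whether the installed liblzma comes from a known-backdoored release.
    import shutil, subprocess

    BAD_VERSIONS = ("5.6.0", "5.6.1")

    def installed_version():
        if shutil.which("dpkg-query"):
            cmd = ["dpkg-query", "-W", "-f=${Version}", "liblzma5"]
        elif shutil.which("rpm"):
            cmd = ["rpm", "-q", "--qf", "%{VERSION}", "xz-libs"]
        else:
            return None
        result = subprocess.run(cmd, capture_output=True, text=True)
        return result.stdout.strip() if result.returncode == 0 else None

    version = installed_version()
    if version is None:
        print("could not determine the liblzma version")
    elif any(version.startswith(bad) for bad in BAD_VERSIONS):
        print(f"liblzma {version}: matches a known-backdoored release")
    else:
        print(f"liblzma {version}: not a known-backdoored version")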

doakes
2 replies
1d1h

Are you saying POCs are pointless unless a script kiddie can use it?

nindalf
1 replies
1d

The context of the conversation, which you seem to have missed, is that now that we have a POC, we need a way to check for vulnerable servers. The link being that a POC makes it easier for script kiddies to use it, meaning we're in a race against them. But we aren't, because only one group in the whole world can use this exploit.

miduil
0 replies
1d

is that now that we have a POC, we need a way to check for vulnerable servers.

You misunderstand me; the "need to check for vulnerable servers" has nothing to do with the PoC in itself. You want to know whether you're vulnerable to this mysterious unknown attacker that went through all the hoops for a sophisticated supply chain attack. I never said that we need a way to detect it because there is a POC out, and at least I didn't mean to imply that either.

script kiddies to use it, meaning we're in a race against them

This is something you and the other person were suddenly coming up with; I never said this in the first place.

miduil
1 replies
1d1h

It doesn't matter.

To understand the exact behavior and extent of the backdoor, this does matter. An end-to-end proof of how it works is exactly what was needed.

A way to check if servers are vulnerable is probably by querying the package manager

Yes, this has been known since the initial report, plus the later discovery of exactly what strings are present in the payload.

https://github.com/Neo23x0/signature-base/blob/master/yara/b...

Not very sophisticated, but it'll work.

Unfortunately, we live in a world with closed-servers and appliances - being able, as a customer or pen tester, to rule out a certain class of security issues without having the source/insights available is usually desirable.

nindalf
0 replies
1d

we live in a world with closed-servers and appliances

Yeah but these servers and appliances aren't running Debian unstable are they? I'd understand if it affected LTS versions of distros, but these were people living on the bleeding edge anyway. Folks managing such servers are going to be fine running `apt-get update`.

We got lucky with this one, tbh.

misswaterfairy
1 replies
20h19m

Could the provided honeypot print out keys used in successful and unsuccessful attempts?

bheadmaster
0 replies
6h31m

I don't think any (sane) client would send its private key on login. The private key only serves as a "solver" of a puzzle created using the public key.

cjbprime
2 replies
1d1h

Probing for vulnerable deployments over the network (without the attacker's private key) seems impossible, not non-trivial.

The best one could do is more micro-benchmarking, but for an arbitrary Internet host you aren't going to know whether it's slow because it's vulnerable, or because it's far away, or because the computer's slow in general -- you don't have access to how long connection attempts to that host took historically. (And of course, there are also routing fluctuations.)

anonymous-panda
1 replies
20h51m

Should be able to do it by having the scanner take multiple samples. As long as you don't need a valid login and the performance issue is still observable, you should be able to scan for it with minimal cost.

cjbprime
0 replies
12h46m

Looks like the slowdown is actually at sshd process startup time, not authentication time. So it's back to being completely impossible to network-probe for.

MuffinFlavored
13 replies
1d

Instead of needing the honeypot openssh.patch at compile-time https://github.com/amlweems/xzbot/blob/main/openssh.patch

How did the exploit do this at runtime?

I know the chain was:

opensshd -> libsystemd for notifications -> xz (liblzma) included as a transitive dependency

How did liblzma.so.5.6.1 hook/patch all the way back to openssh_RSA_verify when it was loaded into memory?

tadfisher
11 replies
1d

When liblzma is loaded, the backdoor patches the ELF GOT (global offset table) with the address of the malicious code. In case liblzma is loaded before libcrypto, it registers a symbol audit handler (a glibc-specific feature, IIUC) to get notified when libcrypto's symbols are resolved, so it can defer patching the GOT.

MuffinFlavored
10 replies
1d

When loading liblzma, it patches the ELF GOT (global offset table) with the address of the malicious code.

How was this part obfuscated/undetected?

bewaretheirs
9 replies
1d

It was part of the binary malware payload hidden in a binary blob of "test data".

In a compression/decompression test suite, a subtly broken allegedly compressed binary blob is not out of place.

This suggests we need to audit information flow during builds: the shipping production binary package should be reproducibly buildable without reading test data or test code.
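
One crude way to audit that flow is to wrap the build in strace and flag any test-data reads. Everything here, including the `tests/` path convention and the `make` invocation, is an assumption about the project layout:

    #!/usr/bin/env python3
    """Report any file under tests/ that the build actually opened.

    Run the build as, e.g.:
        strace -f -e trace=openat -o build-opens.log make
    then point this script at the resulting log.
    """
    import re
    import sys

    OPENAT = re.compile(r'openat\([^,]+, "([^"]+)"')

    def test_files_read(log_path: str) -> set[str]:
        hits = set()
        with open(log_path) as log:
            for line in log:
                m = OPENAT.search(line)
                if m and "/tests/" in m.group(1):
                    hits.add(m.group(1))
        return hits

    if __name__ == "__main__":
        for path in sorted(test_files_read(sys.argv[1])):
            print("build read test data:", path)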

MuffinFlavored
8 replies
1d

How/why did the test data get bundled into the final library output?

MuffinFlavored
5 replies
22h44m

build-to-host.m4

I wasn't aware that the rogue maintainer was able to commit, without any PR review (or by sneaking it through PR review), rogue steps in the build process that went unnoticed, so that he could bundle decompressed `xz` streams from the test data and patch the output .so files well enough to add hooking code to them.

How many "process failures" are described in that process that exist in every OSS repo with volunteer unknown untrusted maintainers?

Aloisius
1 replies
19h13m

The changes to build-to-host.m4 weren't in the source repo, so there was no commit.

The attacker had permissions to create GitHub releases, so they simply added it to the GitHub release tarball.

MuffinFlavored
0 replies
4h19m

How would this have made it into Debian? Is part of the Debian build to pull down a release tarball (and then build from that) rather than to `git clone` the repo at a specific tag and build from source?
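
One way a distro (or anyone) could catch this kind of discrepancy is to diff the uploaded release tarball against `git archive` of the matching tag. A rough sketch, with the tarball path, repo path, and tag name all illustrative:

    #!/usr/bin/env python3
    """Compare a release tarball against `git archive` of the matching tag."""
    import hashlib
    import io
    import subprocess
    import sys
    import tarfile

    def digests(tar, strip_top=False):
        out = {}
        for member in tar.getmembers():
            if not member.isfile():
                continue
            name = member.name.split("/", 1)[-1] if strip_top else member.name
            out[name] = hashlib.sha256(tar.extractfile(member).read()).hexdigest()
        return out

    if __name__ == "__main__":
        tarball_path, repo, tag = sys.argv[1:4]     # e.g. xz-5.6.1.tar.gz /path/to/xz v5.6.1
        with tarfile.open(tarball_path) as tar:
            release = digests(tar, strip_top=True)  # tarballs ship a top-level project-X.Y.Z/ dir
        archive = subprocess.run(
            ["git", "-C", repo, "archive", "--format=tar", tag],
            capture_output=True, check=True,
        ).stdout
        with tarfile.open(fileobj=io.BytesIO(archive)) as tar:
            tagged = digests(tar)
        for name in sorted(set(release) | set(tagged)):
            if release.get(name) != tagged.get(name):
                print("differs or only on one side:", name)

Note that for autotools projects the release tarball legitimately contains generated files (configure, m4 macros) that aren't in git, which is exactly the gap the malicious build-to-host.m4 hid in, so the diff is noisy but at least reviewable.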

belthesar
0 replies
22h4m

That's kind of the rub here.

volunteer

That's the majority of OSS. Only a handful of the projects we use today as a part of the core set of systems in the OSS world actually have corporate sponsorship by virtue of maintainers/contributors on the payroll.

unknown

The actor built up a positive reputation by assisting with maintaining the repo at a time when the lead dev was unable to take an active role. In this sense, although we did not have some kind of full chain of authentication that "Jia Tan" was a real human that existed, that's about as good as it gets, and there's plenty of real world examples of espionage in both the open and closed source software world that can tell us that identity verification may not have prevented anything.

untrusted

The actor gained trust. The barrier to gaining trust may have been low due to the mental health of the lead maintainer, but trust was earned and received. The lead maintainer communicated to distros that they should be added.

That's the rub here. It's _really easy_ to say this is a process problem. It's not. This was a social engineering attack first and foremost before anything else. It unlocked the way forward for the threat actor to take many actions unilaterally.

bawolff
0 replies
21h52m

How many "process failures" are described in that process that exist in every OSS repo with volunteer unknown untrusted maintainers?

What process failures actually happened here? What changes in process do you think would have stopped this?

acdha
0 replies
20h41m

This guy was pretty trusted after a couple of years of working on the project, so I think it's a category error to say process improvements could have fixed it. The use of autoconf detritus was a canny move, since I'd bet long odds that even if your process said three other people had to review every commit, they would have skimmed right over that to the "important" changes.

jeffrallen
0 replies
1d

ifunc

yodsanklai
12 replies
1d1h

Do we know who the attacker is?

toasteros
8 replies
1d

The name that keeps coming up is Jia Tan (https://github.com/JiaT75/) but we have no way of knowing if this is a real name, pseudonym, or even a collective of people.

pphysch
3 replies
1d

Given the sophistication of this attack it would indeed be downright negligent to presume that it's the attackers' legal name and that they have zero OPSEC.

xvector
2 replies
20h19m

He used ProtonMail. I wonder if ProtonMail can pull IP logs for this guy and share them.

eklitzke
1 replies
19h40m

It might be worth looking into, but:

1) Probably by design protonmail doesn't keep these kinds of logs around for very long

2) Hacking groups pretty much always proxy their connection through multiple layers of machines they've rooted, making it very difficult or impossible to actually trace back to the original IP

ajross
2 replies
23h24m

It's also worth pointing out, given the almost two years of seemingly valuable contribution, that this could be a real person who was compromised or coerced into pushing the exploit.

stefan_
1 replies
20h44m

It’s also worth pointing out that parts of the RCE were prepared almost two years ago which makes this entirely implausible.

ajross
0 replies
19h48m

Were they? The attacker has had commit rights for 1.5 years or so, but my understanding is that all the exploit components were recent commits. Is that wrong?

agentdrek
0 replies
2h50m

Interesting to look through the "starred" repos of that account ... seems like a hit list: https://github.com/JiaT75?tab=stars

cellis
0 replies
23h55m

I'd say Ned Stark Associates, Fancy Bear, etc.

herpderperator
12 replies
1d1h

Note: successful exploitation does not generate any log entries.

Does this mean, had this exploit gone unnoticed, the attacker could have executed arbitrary commands as root without even a single sshd log entry on the compromised host regarding the 'connection'?

gitfan86
7 replies
1d1h

Yeah, but then you would have ssh traffic without a matching login.

Wonder if any anomaly detection would work on that

FergusArgyll
4 replies
23h53m

Interesting... Though you can edit whatever log file you want

jmb99
3 replies
21h56m

Any log that root on that box has write access to. It’s theoretically possible to have an anomaly detection service running on a vulnerable machine dumping all of its data to an append-only service on some other non-compromised box. In that case, (in this ideal world) the attacker would not be able to disable the detection service before it had logged the anomalous traffic, and wouldn’t be able to purge those logs since they were on another machine.

I’m not aware of any services that a) work like this, or b) would be able to detect this class of attack earlier than last week. If someone does though, please share.

fubar9463
1 replies
20h36m

You would be sending logs to a log collector (a SIEM, in security terms), and then you could join your firewall logs against your SSH auth logs.

This kind of anomaly detection is possible. Not sure how common it is; I doubt it's common.
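
A toy version of that join, assuming both log sources have already been parsed into (timestamp, source IP) tuples; the sample data, field layout, and 30-second correlation window are all made up for illustration:

    #!/usr/bin/env python3
    """Flag inbound connections to port 22 that never produced an sshd auth log event."""
    from datetime import datetime, timedelta

    # In reality these would be parsed out of your SIEM / firewall logs / auth.log.
    firewall_conns = [
        (datetime(2024, 4, 1, 12, 0, 5), "203.0.113.7"),
        (datetime(2024, 4, 1, 12, 3, 41), "198.51.100.9"),
    ]
    auth_events = [
        (datetime(2024, 4, 1, 12, 0, 6), "203.0.113.7"),  # normal login attempt
    ]

    WINDOW = timedelta(seconds=30)

    def unmatched(conns, auths):
        for conn_ts, conn_ip in conns:
            if not any(ip == conn_ip and abs(ts - conn_ts) <= WINDOW for ts, ip in auths):
                yield conn_ts, conn_ip

    if __name__ == "__main__":
        for ts, ip in unmatched(firewall_conns, auth_events):
            print(f"{ts.isoformat()} connection from {ip} with no matching sshd auth event")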

fubar9463
0 replies
19h35m

In any case the ROI for correlating SSH logs against network traffic is potentially error prone and may be more noisy than useful (can you differentiate in logs between SSH logins from a private IP and a public one?).

An EDR tool would be much better to look for an attacker’s next steps. But if you’re trying to catch a nation state they probably already have a plan for hiding their tracks.

juitpykyk
0 replies
19h35m

You can do it on a single machine if you use the TPM to create log hashes which can't be rolled back.
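
The simplest software analogue of that idea is a hash chain over the log entries; the TPM part (extending a PCR with each chain value so it can't be rewound) is only hinted at in the comments, and the tooling named there is an assumption:

    #!/usr/bin/env python3
    """Append-only log integrity via a hash chain (TPM anchoring left as a comment)."""
    import hashlib

    def chain(entries, seed=b"\x00" * 32):
        """Yield (entry, chain_value) where each value commits to all prior entries."""
        state = seed
        for entry in entries:
            state = hashlib.sha256(state + entry.encode()).digest()
            # On real hardware you would also extend a TPM PCR with `state` here
            # (e.g. via tpm2_pcrextend), so a root-level attacker can't rewrite history.
            yield entry, state.hex()

    if __name__ == "__main__":
        for entry, value in chain(["sshd started", "accepted publickey for root"]):
            print(value[:16], entry)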

skykooler
1 replies
17h10m

That would look the same as a random failed ssh login, which happens all the time. The connection isn't maintained past that point (unless the payload chooses to do so).

gitfan86
0 replies
5h16m

It would be similar but the payload is going to be abnormally large compared to other failed login attempts.

sureglymop
3 replies
1d1h

Yes. The RCE happens at the connection stage, before anything is logged.

udev4096
2 replies
23h54m

That's insane. How exactly does this happen? Is there no EDR/IDS that can detect an RCE at the connection stage?

gus_
0 replies
9h13m

An EDR would have detected an inbound connection to port 22. Then it'd have detected the attacker's activity (opened files, executed commands, etc)

If the EDR is capable of intercepting fork(), clone(), execve(), open(), etc., then you can follow the traces. If it's able to deny certain activity based on rules, like modifying /etc/ld.so.preload or downloading files with curl/wget, it'd have made the attacker's life a bit more difficult.

If the attacker loaded a rootkit, then probably you'd have lost visibility of what the attacker did after that. Also not all the EDRs hook all the functions, or they have bugs, so many times you are not able to follow a trace (without pain/guessing).

This telemetry usually is sent to a remote server, so the attacker could not have deleted it.

bawolff
0 replies
21h50m

An IDS may detect something, depending on what it is looking for. The grandparent is saying that sshd doesn't log anything, which is not that surprising since sshd is attacker-controlled.

loeg
7 replies
23h47m

The ciphertext is encrypted with chacha20 using the first 32 bytes of the ED448 public key as a symmetric key. As a result, we can decrypt any exploit attempt using the following key:

Isn't this wild? Shouldn't the ciphertext be encrypted with an ephemeral symmetric key signed by the privkey? I guess anyone with the public key can still read any payload, so what's the point?
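
For reference, decrypting a captured payload really is that cheap. A sketch using the Python `cryptography` package; the all-zero nonce is an assumption, and `ED448_PUBKEY` is a placeholder standing in for the attacker's 57-byte public key published alongside xzbot:

    #!/usr/bin/env python3
    """Decrypt a captured exploit payload using only public information."""
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    # Placeholder: substitute the attacker's 57-byte Ed448 public key from the xzbot repo.
    ED448_PUBKEY = bytes(57)

    def chacha20_xor(data: bytes) -> bytes:
        key = ED448_PUBKEY[:32]   # first 32 bytes of the *public* key as the symmetric key
        nonce = b"\x00" * 16      # 4-byte counter || 12-byte nonce; all-zero is an assumption
        cipher = Cipher(algorithms.ChaCha20(key, nonce), mode=None)
        return cipher.decryptor().update(data)

    if __name__ == "__main__":
        # ChaCha20 is a stream cipher, so the same call both encrypts and decrypts.
        assert chacha20_xor(chacha20_xor(b"example payload")) == b"example payload"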

thenewwazoo
3 replies
20h57m

This is a NOBUS attack - Nobody But Us.

By tying it to a particular key owned by the attacker, no other party can trigger the exploit.

loeg
2 replies
20h42m

I don't think this is responsive to my comment.

jhugo
1 replies
19h36m

I think it is? They were not trying to hide the content, but rather to ensure that nobody else could encrypt valid payloads.

loeg
0 replies
18h19m

The signing accomplishes that. The chacha20 encryption with part of a public key, which is what I'm discussing above, is just obfuscation.

kevincox
1 replies
20h25m

Encrypting the payload will allow you to get by more scanners and in general make the traffic harder to notice. Since the publicly available server code needs to be able to decrypt the payload there is no way to make it completely secure, so this seems like a good tradeoff that prevents passive naive monitoring from triggering while not being more complicated than necessary.

The only real improvement that I can see being made would be adding perfect forward secrecy so that a logged session couldn't be decrypted after the fact. But that would likely add a lot of complexity (I think you need bidirectional communication?)

loeg
0 replies
18h18m

Seems like single-byte XOR would be almost as good for obfuscation purposes, but I guess using chacha20 from the public key as a pseudo-OTP doesn't hurt.

The only real improvement that I can see being made would be adding perfect forward secrecy so that a logged session couldn't be decrypted after the fact. But that would likely add a lot of complexity (I think you need bidirectional communication?)

Yeah.

rcaught
0 replies
22h55m

That action would create extra noise.

faxmeyourcode
7 replies
1d

Edit: I misunderstood what I was reading in the link below, my original comment is here for posterity. :)

From down in the same mail thread: it looks like the individual who committed the backdoor has made some recent contributions to the kernel as well... Ouch.

https://www.openwall.com/lists/oss-security/2024/03/29/10

The OP is such great analysis, I love reading this kind of stuff!

davikr
3 replies
1d

Lasse Collin is not Jia Tan until proven otherwise.

verytrivial
0 replies
8h46m

Speaking only hypothetically, but two points:

1) No-one has been proven to "be" anyone in this case. Reputation in OSS is built upon behaviour only, not identity. "Jia Tan" managed to tip the scales by also being helpful. That identity is 99% likely to be a confection.

2) People can do terrible things when strongly encouraged or, worse, coerced. Including dissolving identity boundaries.

The first problem can be 'solved' by using real identities and web of trust but that will NEVER fly in OSS for a multitude of technical and social reasons. The second problem will simply never be solved in any context, OSS or otherwise. Bad actors be bad, yo.

robocat
0 replies
21h17m

Passive aggressive accusation.

This style of fake doubt is really not appropriate anywhere.

pavon
0 replies
17h27m

No, he likely is not. But the patch series includes commits co-developed by Jia Tan, and lists Jia Tan as a maintainer of the kernel module.

wezdog1
0 replies
18h8m

Also, it may be a coincidence, but JiaT75 looks a lot like transponder 7500, which in aviation means hijacked...

ibotty
0 replies
1d

No, that patch series is from Lasse. He said himself that it's not urgent in any way and it won't be merged this merge window, but nobody (sane) is accusing Lasse of being the bad actor.

Denvercoder9
0 replies
1d

The referenced patch series had not made it into the kernel yet.

acdha
7 replies
1d1h

Has anyone tried the PoC against one of the anomalous process behavior tools? (Carbon Black, AWS GuardDuty, SysDig, etc.) I’m curious how likely it is that someone would have noticed relatively quickly had this rolled forward and this seems like a perfect test case for that product category.

knoxa2511
4 replies
1d1h

Sysdig released a blog post on Friday: "For runtime detection, one way to go about it is to watch for the loading of the malicious library by SSHD. These shared libraries often include the version in their filename."

The blog has the actual rule content, which I haven't seen from other security vendors.

https://sysdig.com/blog/cve-2024-3094-detecting-the-sshd-bac...
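
A rough standalone equivalent of that runtime check for anyone without Sysdig/Falco, run as root on the host; the version strings to look for are an assumption based on the advisories:

    #!/usr/bin/env python3
    """Check whether any running sshd has a backdoored liblzma mapped into it."""
    import glob

    SUSPECT = ("liblzma.so.5.6.0", "liblzma.so.5.6.1")

    def sshd_pids():
        for comm_path in glob.glob("/proc/[0-9]*/comm"):
            try:
                with open(comm_path) as f:
                    if f.read().strip() == "sshd":
                        yield comm_path.split("/")[2]
            except OSError:
                pass  # process exited while we were scanning

    if __name__ == "__main__":
        for pid in sshd_pids():
            try:
                with open(f"/proc/{pid}/maps") as maps:
                    mapped = maps.read()
            except OSError:
                continue
            hits = [s for s in SUSPECT if s in mapped]
            if hits:
                print(f"sshd pid {pid} has {hits} mapped -- investigate")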

acdha
2 replies
23h55m

Thanks! That’s a little disappointing since I would have thought that the way it hooked those functions could’ve been caught by a generic heuristic but perhaps that’s more common than I thought.

acid__
1 replies
17h34m

My experience from working in the security space is that all the tech is pretty un-sexy (with very good sales pitches), and none of it will save you from a nation-state attacker.

acdha
0 replies
6h13m

Same. I was hoping to be wrong in my cynicism but…

RamRodification
0 replies
22h45m

That relies on knowing what to look for. I.e. "the malicious library". The question is whether any of these solutions could catch it without knowing about it beforehand and having a detection rule specifically made for it.

dogman144
1 replies
21h50m

Depends how closely the exploit mirrors and/or masks itself within normal compression behavior imo.

I don't think GuardDuty would catch it, as it doesn't look at processes like an EDR does (CrowdStrike, Carbon Black), and I don't think Sysdig would catch it, as it looks at containers and cloud infra. Handwaving some complexity here, as GD and Sysdig could prob catch something odd via privileges gained and follow-on efforts by the threat actor via this exploit.

So imo only EDRs (monitoring processes on endpoints) or software supply chain evaluations (monitoring sec problems in upstream FOSS) are likely to catch the exploit itself.

This leads into another fairly large security theme, interestingly: dev teams can dislike putting EDRs on boxes bc of the hit on compute and the UX issues if a containment happens, and can dislike policies and limits around FOSS use. So this exploit hits at the heart of an org-driven "vulnerability" that has a lot of logic to stay exposed to or to fix, depending on where you sit. Security industry's problem set in a nutshell.

acdha
0 replies
20h32m

GuardDuty does have some process-level monitoring with some recent additions: https://aws.amazon.com/blogs/aws/amazon-guardduty-ec2-runtim...

The main thing I was thinking is that the audit hooking, and especially the runtime patching across modules (liblzma5 patching functions in the main sshd code block), seems like the kind of thing a generic behavioral profile could catch, especially one driven by the fact that sshd does not do any of that normally.

And, yes, performance and reliability issues are a big problem here. When CarbonBlack takes down production again, you probably end up with a bunch of exclusions which mean an actual attacker might be missed.

wolverine876
6 replies
1d

Have the heads of the targeted projects - including xz (Lasse Collin?), OpenSSH (Theo?), and Linux (Linus) - commented on it?

I'm especially interested in how such exploits can be prevented in the future.

gowld
4 replies
22h15m

OpenSSH and Linux were not targeted/affected.

xz and the Debian distribution of OpenSSH were targeted.

pama
2 replies
20h29m

The core source of the vulnerability (symbol lookup order allowing a dependency to preempt a function) might theoretically be fixed at the Linux+OpenSSH level.

wolverine876
0 replies
20h14m

It's in their ecosystem; they should be concerned about other similar attacks and about addressing the fears of many users, developers, etc.

cpach
0 replies
21h6m

Fedora too.

cgh
4 replies
14h40m

Comment I saw on Ars:

Interestingly enough, "Jia Tan" is very close to 加蛋 in Mandarin, meaning "to add an egg". Unlikely to be a real name or a coincidence.

picture
2 replies
13h40m

Jia Tan could mean literally anything as pinyin since it doesn't have diacritics for tone. 嘉坛 (Jiā tán) could be a totally plausible name, and so could 佳檀 (also Jiā tán). Chinese names can largely use any character that doesn't come with a really bad meaning

AnotherGoodName
1 replies
12h44m

Yeah a quick Google brings up lots of real Jia Tans who clearly aren't behind this and hopefully aren't being wrongly accused here. But it is clearly a real and common name.

shp0ngle
0 replies
12h36m

As the other poster said, Chinese has diacritics that usually get removed when put onto English websites.

shp0ngle
0 replies
12h36m

There are many real Jia Tans on LinkedIn

arnaudsm
4 replies
1d1h

Have we seen exploitation in the wild yet?

rwmj
2 replies
1d1h

If it hadn't been discovered for another month or so, then it would have appeared in stable Fedora 40, Ubuntu 24.04 and Debian, and then it definitely would have been exploited. In another year it would have been in RHEL 10. Very lucky escape.

bclemens
1 replies
15h41m

Nope, it wouldn't have been in RHEL 10 or any of the rebuilds. CentOS Stream 10 already branched from Fedora / ELN. The closest it would have gotten is a Fedora ELN compose, and it's doubtful it would have remained undiscovered long enough to end up in CentOS Stream 11.

rwmj
0 replies
11h3m

We likely would have backported the change. I'm already planning a big rebase of packages that missed the Fedora 40 / C10S branch (related to RISC-V in that case).

winkelmann
0 replies
1d1h

I assume the operation has most likely been called off. Their goal was probably to wait until it got into stable distros. I doubt there is a large number of unstable Debian or Fedora Rawhide servers with open SSH in the wild.

aborsy
3 replies
23h18m

Why ED448?

It's almost never recommended, in favor of Curve25519.

throitallaway
1 replies
22h42m

As I understand it Ed448 was only recently added to openssh, so maybe it was chosen in order to evade detection by analysis tools that scan for keys (if such a thing is possible.)

juitpykyk
0 replies
19h30m

Some tools scan for crypto algorithms, typically by searching for the magic numbers, Ed448 is so new many tools probably don't recognize it.

Malware authors frequently change the crypto magic numbers to prevent detection.

stock_toaster
0 replies
22h44m

I believe ed25519 offers 128 bits of security, while ed448 offers 224 bits of security. ed448 has larger key sizes too, which does seem like an odd choice in this case. Maybe it was chosen for obscurity's sake (being less commonly used)?

EasyMark
3 replies
11h48m

This whole thing makes me wonder if AI could detect "anomalies" like the human who found the actual hack did: observe a system (lots of systems), use that data to spot anomalous behavior when "new" versions of packages are added, and throw up a red flag along the lines of "this really doesn't act like it did in the past, because these parameters are unusual given previous versions".

lpapez
1 replies
11h15m

Anomaly, fraud, outlier detection...

AI has been trying to solve that for several decades, and in my experience these systems in production usually end up being a set of expert-written rules saying "if X is greater than Y, notify someone to check".
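
For what it's worth, even the xz case maps onto that kind of rule: "latency for this service jumped well above its historical baseline". A toy version, with the threshold, baseline figures, and the roughly half-second slowdown all used purely for illustration:

    #!/usr/bin/env python3
    """'Expert rule' style anomaly check: flag a metric far above its baseline."""
    import statistics

    def is_anomalous(history_ms, current_ms, sigmas=4.0):
        """True if current_ms exceeds the historical mean by `sigmas` standard deviations."""
        mean = statistics.mean(history_ms)
        stdev = statistics.pstdev(history_ms) or 1.0  # avoid divide-by-zero on flat data
        return current_ms > mean + sigmas * stdev

    if __name__ == "__main__":
        baseline = [9.8, 10.1, 10.4, 9.9, 10.0]  # made-up sshd login latency before the upgrade, ms
        print(is_anomalous(baseline, 510.0))      # roughly the half-second slowdown that was noticed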

mjburgess
0 replies
4h9m

The issue is that fraud is anti-inductive: when it's discovered, it changes.

Statistical AI systems are just naive induction, and so largely useless.

pixelfarmer
0 replies
11h36m

It just needs a fine-grained enough set of parameters that are observed on a system, aka "monitoring", and a way to flag anything that sticks out. No need for "AI" unless the parameters that are monitored have more of a (complex) pattern to them than some value +/- deviation or similar. "AI" is, in an abstract way, a pattern matcher and replacer, but also a big cannon that is often not needed; simpler heuristics work faster and better (and are usually deterministic).

mrob
2 replies
1d1h

Do we know if this exploit only did something if an SSH connection was made? There's a list of strings from it on GitHub that includes "DISPLAY" and "WAYLAND_DISPLAY":

https://gist.github.com/q3k/af3d93b6a1f399de28fe194add452d01

These don't have any obvious connection to SSH, so maybe it did things even if there was no connection. This could be important to people who ran the code but never exposed their SSH server to the Internet, which some people seem to be assuming was safe.

rdtsc
0 replies
1d

Those are probably kill switches to prevent the exploit from working if there is a terminal open or it runs in a GUI session; in other words, someone trying to detect, reproduce or debug it.

cma
0 replies
1d1h

Could that be related to X11 session forwarding (a common security hole on the connector's side if they don't turn it off when connecting to an untrusted machine)?

dxthrwy856
2 replies
12h30m

The parallels between this one and the Audacity event a couple of years back are ridiculous.

Cookie guy claimed that he got stabbed and that the federal police were involved in the case, which kind of hints that the events were connected to much bigger actors than just 4chan. At the time a lot of people thought it was just Muse Group that was involved, but maybe it was a (Russian) state actor?

Because before that he claimed that Audacity had lots of telemetry/backdoors, which were the reason he forked it and which he removed in his first commits. Maybe Audacity is backdoored after all?

Have to check the Audacity source code now.

cookiengineer
0 replies
11h37m

Careful, APT28 is pretty dangerous. They are merging their ops with APT29 these days, and I wouldn't wake the cozy bear if I were you.

CommitSyn
0 replies
12h6m

Cookie guy?

supriyo-biswas
1 replies
1d2h

All these efforts are appreciated, but I don't think the attacker is going to leak their payload after the disclosure and remediation of the vuln.

chonkerz
0 replies
1d1h

These efforts have deterrence value

heeen2
1 replies
15h45m

I wonder if separating the test files out into their own repo, so that they would not have been available at build time, could have made this harder. The reasoning being that anything available, and thus potentially involved in the build, should be human readable.

Zuiii
0 replies
15h12m

Anything available, and thus potentially involved in the build, should be human readable.

That's actually a good principle to adopt overall.

We should treat this attack like an airplane accident and adopt new rules that mitigate the chances of it being successfully carried out again. We might not be able to vet every single person who contributes, but we should be able to easily separate out noisy test data.

duckyoufan
1 replies
10h53m

So many important points in dead and downvoted comments. Who the hell runs this site?

otterley
0 replies
3h43m

I’d recommend upvoting and commenting on the important points and adding your own useful comments.

nix0n
0 replies
1d

"Yo dawg, I heard you like exploits, so I exploited your exploit" -Xzibit

mkeymaker
0 replies
15h53m

These things are why assuming compromise should be the default. Stop focusing all the effort on identifying the exploit; instead, focus on the behaviour of the attacker after the exploit.

Joel_Mckay
0 replies
1d1h

Too bad, for a minute I thought it was something useful like adding a rule to clamav to find the private key-pair signature ID of the servers/users that up-streamed the exploit.

"Play stupid games, win stupid prizes"

AlexanderTheGr8
0 replies
14h39m

Is there any progress on identifying the attacker? This would make it much easier to find out if this was really a state-sponsored attack.

If this backdoor can be classified as a crime, GitHub logs can identify the IP/location/other details of the attacker, which is more than enough to identify them, unless their OPSEC is perfect, which it almost never is (e.g. Ross Ulbricht).