A brief history of the U.S. trying to add backdoors into encrypted data (2016)

coppsilgold
35 replies
21h47m

    FBI director James Comey has publicly lobbied for the insertion of cryptographic “backdoors” into software and hardware to allow law enforcement agencies to bypass authentication and access a suspect’s data surreptitiously. Cybersecurity experts have unanimously condemned the idea, pointing out that such backdoors would fundamentally undermine encryption and could be exploited by criminals, among other issues.
"could be exploited by criminals" is sadly a disingenuous claim. A cryptographic backdoor is presumably a "Sealed Box"[1] type construct (KEM + symmetric-cipher-encrypted package). As long as the government can keep a private key secure only they could make use of it.

There are plenty of reasons not to tolerate such a backdoor, but using false claims only provides potential ammunition to the opposition.

[1] <https://libsodium.gitbook.io/doc/public-key_cryptography/sea...>
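For readers unfamiliar with the construct: a sealed box is essentially a KEM (key encapsulation) plus a symmetric cipher over the payload. A minimal toy sketch of that shape in Python using only the standard library (the ElGamal-style group here is deliberately tiny and the XOR keystream is a stand-in for a real cipher, so this is illustrative only, not real cryptography):

```python
import hashlib
import secrets

# Toy group parameters: a Mersenne prime, FAR too small for real use.
P = 2**127 - 1
G = 3

def _xor_stream(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256 counter-mode keystream (toy symmetric cipher)."""
    out = b""
    ctr = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

def _derive(shared: int) -> bytes:
    return hashlib.sha256(shared.to_bytes(32, "big")).digest()

# Recipient (the escrow holder) generates a long-term key pair once;
# only the public key y ever leaves the secure facility.
x = secrets.randbelow(P - 2) + 1   # private key
y = pow(G, x, P)                   # public key

def seal(pub: int, msg: bytes) -> tuple[int, bytes]:
    """Anyone can seal to the public key: KEM value + encrypted package."""
    r = secrets.randbelow(P - 2) + 1
    kem = pow(G, r, P)             # the small (<100 byte) KEM value
    return kem, _xor_stream(_derive(pow(pub, r, P)), msg)

def unseal(priv: int, kem: int, ct: bytes) -> bytes:
    """Only the private-key holder can recover the symmetric key."""
    return _xor_stream(_derive(pow(kem, priv, P)), ct)

kem, ct = seal(y, b"escrowed device key material")
assert unseal(x, kem, ct) == b"escrowed device key material"
```

The point of the shape: the sealing side needs only the public key, so leaking it costs nothing; everything rides on the one private key staying secret.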

2OEH8eoCRo0
12 replies
21h45m

And Apple has a backdoor that only Apple can use. Why don't criminals exploit Apple's backdoor?

catlifeonmars
5 replies
21h31m

Source/reference? I’m not aware of such a backdoor

adrian_b
3 replies
20h52m

See the posting above about the Ars Technica article.

During the last days of 2023 there was a big discussion, also on HN, after it was revealed that all recent Apple devices had a hardware backdoor that allowed bypassing all memory access protections claimed to exist by Apple.

It is likely that the backdoor consisted of some cache memory test registers used during production, but it is absolutely incomprehensible how it was possible, for many years, that those test registers were not disabled at the end of the manufacturing process but remained accessible to attackers who knew Apple's secrets. For instance, any iPhone could be completely controlled remotely after being sent an invisible iMessage.

sylware
0 replies
18h22m

"Convenient software/hardware bugs"... but "they are not backdoors, I swear!"

rightbyte
0 replies
17h51m

Can't Apple just push a software update with some:

    if (user_id == "adrian_b")
        pwn();

?

_kbh_
0 replies
17h44m

It is likely that the backdoor consisted of some cache memory test registers used during production, but it is absolutely incomprehensible how it was possible, for many years, that those test registers were not disabled at the end of the manufacturing process but remained accessible to attackers who knew Apple's secrets.

I think we are nearly certain that the bug is because of an MMIO-accessible register that allows you to write into the CPU's cache (it's nearly certain this is related to the GPU's coherent L2 cache).

But I don't think it's 'incomprehensible' that such a bug could exist unintentionally. Modern computers, and even more so high-end mobile devices, are a huge basket of complexity, with so many interactions and coprocessors all over the place that I think it's very likely a similar bug exists undiscovered and unmitigated.

For instance, any iPhone could be completely controlled remotely after being sent an invisible iMessage.

I don't think the iMessage was invisible; I think it deleted itself once the exploit had run. It's also worth noting just how complicated the attack chain was, and that the attacker _needed_ a hardware bug just to patch the kernel whilst already having kernel code execution.

2OEH8eoCRo0
0 replies
2h28m

How is their update path not considered a backdoor? They can sign and serve you any update that they want.

quickslowdown
3 replies
21h33m

Which backdoor do you mean? I'm not an Apple expert by any means, but I thought they encrypted customer data in a way that even they can't get to it? Wasn't that the crux of this case, that Apple couldn't help the FBI due to security measures, prompting the agency to ask for a backdoor?

2OEH8eoCRo0
2 replies
21h32m

What's an update? They can sign and push any code they want remotely.

dataangel
1 replies
21h26m

IIRC the question is when the phone is totally locked, e.g. if you turn it off then turn it back on and haven't entered the PIN yet. In this state even apple can't get an update to run, the secure hardware won't do it unless you wipe the phone first. And your data is encrypted until you unlock the phone.

In practice though most people are screwed b/c it's all already in iCloud.

fragmede
0 replies
20h16m

With Advanced Data Protection, it's encrypted before it hits iCloud, so neither Apple nor the feds can get at it.

frickinLasers
0 replies
21h33m

https://arstechnica.com/security/2023/12/exploit-used-in-mas...

Looks like criminals were using it for four years undetected.

catlifeonmars
0 replies
21h30m

FWIW this is a fair and valid argument. Generally, no one entity should have that much power. Doesn’t really matter if it’s USG or a tech giant.

devwastaken
5 replies
21h42m

It's not a false claim, assuming the feds will keep such a key "secure" is not backed by evidence. Top secret materials are leaked all the time. Private keys from well secured systems are extracted from hacks. The FBI having such a key would make them a very profitable target for the various corps that specialize in hacking for hire. For example, NSO group.

If the power doesn't exist, nobody can exploit it.

coppsilgold
4 replies
21h37m

Do military cryptographic keys leak often? Do nuclear codes leak?

The times highly valuable cryptographic keys have leaked from various cryptocurrency exchanges, it has generally, if not always, been due to gross negligence.

Such a key would be highly sensitive and it would also require very little traffic to use. You would just need to send the secure system a KEM ciphertext (<100 bytes) and it would respond with the symmetric key used for the protected package.

I don't doubt they could secure it. Can even split the key into shares and require multiple parties to be present in the secure location.
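The splitting idea can be sketched with a trivial n-of-n XOR scheme in Python (a real deployment would likely use Shamir's secret sharing to get k-of-n thresholds; this toy requires every share to be present):

```python
import secrets
from functools import reduce

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(key: bytes, n: int) -> list[bytes]:
    """n-of-n split: n-1 random shares, plus one share that XORs to the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(_xor, shares, key))
    return shares

def combine(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the key."""
    return reduce(_xor, shares)

key = secrets.token_bytes(32)
shares = split(key, 3)
assert combine(shares) == key
# Any proper subset of shares is uniformly random -- it reveals nothing.
```

Each party holds one share, so recovering the master key requires everyone in the room at once.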

some_furry
0 replies
21h23m

Do nuclear codes leak?

For many years, the code was 00000000.

https://arstechnica.com/tech-policy/2013/12/launch-code-for-...

jliptzin
0 replies
21h7m

What are you going to do with a nuclear code without access or authority to launch the nukes?

dvngnt_
0 replies
21h35m

nuclear codes are probably not used as much as phone backdoors would be. local police want access too, as do other governments, so I do believe it would leak

devwastaken
0 replies
20h12m

You're creating so many assumptions that nothing you've stated could be concluded to be an honest reflection of reality.

Nobody has to know the rate of leaks, it's irrelevant. Gross negligence is not necessary, how would you even know? Leaks by definition are rarely exposed, we only see some of them.

A "highly sensitive" key doesn't mean anything. Assigning more words to it doesn't somehow change the nature of it. Humans are bad at securing things, that's why the best security is to not have a system that requires it.

Whatever hypothetical solution you have would be crushed under the weight of government committees and office politics until your security measures are bogus.

bayindirh
3 replies
20h19m

Let’s see:

Mercedes recently forgot a token in a public repository which grants access to everything.

Microsoft forgot its “Golden Key” in the open, allowing all kinds of activation and secure boot shenanigans.

Microsoft’s JWT private key was also stolen, making the login page a decoration.

Somebody stole Realtek’s driver signing keys for the Stuxnet attack.

HDMI master key is broken.

BluRay master key is broken.

DVD CSS master key is broken.

TSA master keys are in all 3D printing repositories now.

Staying in the physical realm, somebody made an automated tool to profile, interpret and print key blanks for locks with "restricted keyways" which have no blanks available.

These are the ones I remember just off the top of my head.

So yes, any digital or physical secret key is secure until it isn’t.

It’s not a question of if, but when. So, no escrows or back doors. Thanks.

wkat4242
1 replies
16h37m

I've been waiting for those Widevine keys to leak, which would finally let me choose what to play my stuff on. But it still hasn't happened. They are getting better at secrecy, sadly.

bayindirh
0 replies
1h51m

Since Widevine L3 is implemented completely in software, there are tools you can use, but L2 and L1 have hardware components, and secure enclaves are hard to break. Up-to-par ones have self-destruction mechanisms which trigger when you bugger them too much.

On the other hand, there are 4K, 10bit HDR + multichannel versions everywhere, so there must be some secret sauce somewhere.

This is not a rabbit hole I want to enter, though.

piperswe
0 replies
13h42m

It's apparently now trivial to brute-force the private key used for Windows XP-era Microsoft Product Activation, as another example. (That's where UMSKT and the like get their private keys from.)

whatshisface
1 replies
21h35m

As long as the government can keep a private key secure only they could make use of it.

Your devices would be secure as long as a private key that happened to be the most valuable intelligence asset in the United States, accessed thousands of times per day, by police spread across the entire nation, was never copied or stolen.

dylan604
0 replies
18h14m

Well, it's a good thing that we don't have to worry about corrupt police /s

sowbug
0 replies
21h6m

You assume a perfect implementation of the backdoor. Even if the cryptographic part were well-implemented, someone will accidentally ship a release build with a poorly safeguarded test key, or with a disabled safety that they normally use to test it.

It's an unnecessary moving part that can break, except that this particular part breaking defeats the whole purpose of the system.

salawat
0 replies
18h33m

The same government that failed to keep all of its Top Secret clearance paperwork secure? How soon we forget the OPM hack...

nonrandomstring
0 replies
21h28m

false claims

As Pauli said, "That's not even wrong". It cannot even meet the basic criteria for truth or falsehood.

It's simply naked hubris.

mnw21cam
0 replies
20h4m

As long as the government can keep a private key secure...

Which government? Software crosses borders.

You can bet that if the US mandated a back door to be inserted into software that was being exported to another country, that country would want to either have the master key for that back door, or a different version of the software with a different back door or without the back door. A software user could choose the version of the software that they wanted to use according to which country (if any) could snoop on them. It's unworkable.

eviks
0 replies
9h18m

As long as the government can keep a private key secure only they could make use of it.

That's a disingenuous claim since it's known they can't

catlifeonmars
0 replies
21h32m

As long as the government can keep a private key secure only they could make use of it.

Not disingenuous. Keys are stolen or leaked all the time. And the blast radius of such a master key would be extremely large.

buffet_overflow
0 replies
20h50m

As long as the government can keep a private key secure only they could make use of it.

Well, keep in mind they would have to keep it secure in perpetuity. Any leak over the lifetime of any of that hardware would be devastating to the owners. Blue Team/defensive security is often described as needing to be lucky every time, whereas Red Team/attackers just have to get lucky once.

This attack vector is in addition to just exploiting the implementation in some way, which I don't think can be handwaved away.

Rebelgecko
0 replies
20h44m

As long as the government can keep a private key secure only they could make use of it.

That's a big "if". Look at how the government has protected physical keys...

Ever since the TSA accidentally leaked them, you can buy a set of keys on Amazon for $5 that opens 99% of "TSA approved" locks

Hikikomori
0 replies
18h44m

Are they lobbying for this because they can't access stuff today and "need" it, or is it just a psyop so we believe that they cannot access it today?

Geisterde
0 replies
7h36m

I'll take "what is Vault 7" for $500.

progbits
20 replies
21h26m

As this is from 2016 it doesn't include this new fun revelation:

On 11 February 2020, The Washington Post, ZDF and SRF revealed that Crypto AG was secretly owned by the CIA in a highly classified partnership with West German intelligence, and the spy agencies could easily break the codes used to send encrypted messages.

https://en.m.wikipedia.org/wiki/Crypto_AG

1337biz
5 replies
15h23m

It would be interesting to know which similar companies are (at least in part) most likely agency fronts.

My guess would be quite a few in the soft privacy-selling business, such as VPN or email providers.

Goodroo
4 replies
14h10m

Proton mail is a CIA front email provider

ykonstant
0 replies
6h37m

Hmm... should I choose a provider with a history of spying on everyone and destabilization, or Google? ...OK, I'll go with the CIA.

hairyplanner
0 replies
9h54m

I actually wish this was true. I want an email service that would last forever and is secure enough from my threats, namely security breaches of the email host and account takeover from non state actors.

Gmail is close enough, but I want an alternative. An email service run by the nsa or the cia would be great.

(No sarcasm is intended)

OneLeggedCat
0 replies
3h59m

Proton Mail's extremely bureaucratic operational deafness, and their glacial pace of product features and open-sourcing, would certainly lend support to that idea.

AB1908
0 replies
12h41m

It is impossible to tell if this is satire or not.

p-e-w
4 replies
16h10m

The company had about 230 employees, had offices in Abidjan, Abu Dhabi, Buenos Aires, Kuala Lumpur, Muscat, Selsdon and Steinhausen, and did business throughout the world.

That's a... really strange list of office locations, especially considering the relatively small number of employees.

The owners of Crypto AG were unknown, supposedly even to the managers of the firm, and they held their ownership through bearer shares.

How does this work in practice? If management doesn't know who owns the company, how can the owners exercise influence on company business?

quasse
2 replies
13h26m

Via lawyer / legal representative if I had to hazard a guess.

p-e-w
1 replies
11h23m

How does that representative prove that they really represent the owners, if the owners aren't known to management? How can they authorize someone without revealing identifying information?

eviks
0 replies
9h20m

Where would this need to really prove anything arise from? The intermediaries just hire and pay the managers, that's enough

anticensor
0 replies
7h0m

Codify all the management policy in the main charter, leaving nothing else to the board to decide?

EthanHeilman
3 replies
14h52m

I wrote a blog entry on this subject with a very similar name [0] which covers the Crypto AG story in more detail. It doesn't have the 2020 news.

[0]: A Brief History of NSA Backdoors (2013), https://www.ethanheilman.com/x/12/index.html

samstave
1 replies
14h38m

This is an epically cool blog post! - submit it to HN on its own merits.

This was of particular interest to me:

>>"...1986 Reagan tipped off the Libyans that the US could decrypt their communications by talking about information he could only get through Libya decrypts on TV [15]. In 1991 the Iranians learned that the NSA could break their diplomatic communications when transcripts of Iranian diplomatic communications ended up in a French court case..."

Because 1986 is effectively when a lot of the phreaking and social engineering was at a peak: Cyberpunk was moving from imagination --> zeitgeist --> reality.

Social engineering and line-printer litter recovery were yielding backdoors into the telecom switching system. BBSs were raging [0].

So when you get a gaffe-level look into infosec via slip-ups like these, it reinforces in my mind that the 80s were some really wild times all around, as technology tsunami'd from people's minds into business and reality.

[0] BBS Docu - https://www.imdb.com/title/tt0460402/

[1] phreaking - https://en.wikipedia.org/wiki/Phreaking

[2] history of phreaking - https://www.youtube.com/watch?v=8PmkUPBhL4U

EthanHeilman
0 replies
14h15m

Thanks, just submitted

_kbh_
0 replies
11h12m

I wrote a blog entry on this subject with a very similar name [0] which covers the Crypto AG story in more detail. It doesn't have the 2020 news. [0]: A Brief History of NSA Backdoors (2013), https://www.ethanheilman.com/x/12/index.html

Wow, this is super interesting. I noticed this paragraph in the text:

2013, Enabling for Encryption Chips: In the NSA's budget request documents released by Edward Snowden, one of the goals of the NSA's SIGINT project is to fully backdoor or "enable" certain encryption chips by the end of 2013 [11]. It is not publicly known to which encryption chips they are referring.

From what I know, Cavium is one of these "SIGINT enabled" chip manufacturers.

https://www.electrospaces.net/2023/09/some-new-snippets-from...

> "While working on documents in the Snowden archive the thesis author learned that an American fabless semiconductor CPU vendor named Cavium is listed as a successful SIGINT "enabled" CPU vendor. By chance this was the same CPU present in the thesis author's Internet router (UniFi USG3). The entire Snowden archive should be open for academic researchers to better understand more of the history of such behavior." (page 71, note 21)

https://www.computerweekly.com/news/366552520/New-revelation...

Unfortunately the relevant text for the second is pretty long, so I don't want to quote it.

pgeorgi
2 replies
18h45m

The CIA/BND connection wasn't known, but the collusion with certain agencies was known to different degrees for decades: https://en.wikipedia.org/w/index.php?title=Crypto_AG&oldid=7...

lcnPylGDnU4H9OF
0 replies
18h27m

Considering that I remember reading the CIA’s own historical document on this operation, I would guess its usefulness had run its course. If I’m not mistaken, it was the CIA who released the document to journalists; it seemed like bragging.

_kbh_
0 replies
18h2m

To add another dimension to this, personally I think that the Crypto AG relationship is what is referred to as "HISTORY" in this leaked NSA ECI codenames list.

https://robert.sesek.com/2014/10/nsa_s_eci_compartments.html

HISTORY HST NCSC (TS//SI//NF) Protects NSA and certain commercial cryptologic equipment manufacturer relationships.
treflop
0 replies
16h36m

The guy who founded Crypto AG was really good friends with a guy who became a top dog at the NSA.

mmaunder
20 replies
19h9m

Honorable mention for the ITAR regs that prevented Phil Zimmermann from exporting PGP's 128-bit encryption, until Zimmermann and MIT Press printed the source as a book protected by the First Amendment and exported it, which enabled others to OCR it and recompile it offshore.

Also, ITAR enabled Thawte in South Africa (where I’m from) to completely dominate sales of 128-bit SSL certs outside the US. Thawte was eventually acquired by VeriSign for about $600 million, and founder Mark Shuttleworth used the cash to become an astronaut and then founded Ubuntu.

p-e-w
6 replies
16h2m

I never understood the story about the book-printed PGP source code. Isn't source code protected speech under the first amendment anyway, regardless of the form in which it is transmitted? All kinds of media receive first amendment protection, including things like monetary donations, corporate speech, art, etc. I've never heard of there being a requirement for the printed form. Did the interpretation of the first amendment change recently in this regard?

int_19h
4 replies
15h45m

The idea was to make it blatantly clear that it's not a "munition".

p-e-w
3 replies
15h34m

Why would that matter, if source code is protected speech anyway?

And why is it more "clear" with a printed book vs. an emailed text file?

rfw300
0 replies
14h42m

I think if you’re looking for a logical answer from first principles, you won’t find one. It’s more that the legal system runs on precedent, and a book fits far more squarely in the fact patterns of previous First Amendment cases. Likely the source code case would end up with the same outcome, but it doesn’t hurt to make it more obvious.

int_19h
0 replies
12h59m

Legally speaking, it didn't really matter. But symbolically, having the feds argue that a book constitutes "munitions" would be bad optics for them in a way that is more understandable to the average American, compared to more arcane legal arguments about software having 1A protections.

__MatrixMan__
0 replies
15h17m

This was a time when we were happy when they managed to get congresspeople to understand that the internet is not like a truck, but more like a series of tubes.

I think anchoring it to something old school like a book was a good call.

femto
0 replies
13h23m

The book was published in 1995 [1,2]. Free speech protection for source code under US law wasn't decided until 1996, in Bernstein v. United States [3].

[1] https://en.wikipedia.org/wiki/Phil_Zimmermann#Arms_Export_Co...

[2] https://www.eff.org/deeplinks/2015/04/remembering-case-estab...

[3] https://en.wikipedia.org/wiki/Bernstein_v._United_States

CWuestefeld
5 replies
16h44m

At the time, I had a t-shirt that said "this t-shirt is a munition", because it also had on it the RSA public key algorithm encoded as a barcode.

a-dub
1 replies
9h48m

i seem to have a vague recollection of a variant of this shirt that had a perl script on it? or was the perl script for decoding the barcode?

T3OU-736
0 replies
4h3m

Memory claims: it was RSA in Perl, in the text shape of a dolphin, but I may be wrong.

mmaunder
0 replies
16h30m

Haha I remember that. OG fist bump!

e40
0 replies
16h37m

Came to say the same thing. It sparked a lot of interesting conversations over the years.

dramm
0 replies
14h12m

Had the same t-shirt with the barcode-readable source code on it. I think it was prompted by seeing Greg Rose wear one; I may have gotten it from him or mutual friends. As a foreign citizen I was never brave enough to wear it through a US entry airport.

tjoff
3 replies
11h11m

[...] and this enabled others to OCR it, and recompile it offshore.

Did it? Or did it just give them plausible deniability?

I remember playing with OCR as a kid and all the software I could get my hands on gave horrendous results, even if the input was as perfect as one could hope for.

And even today I sometimes run tesseract on perfect screenshots and it still makes weird mistakes.

Would be interesting to know if the book had any extra OCR-enabling features. I'm sure the recipients would get access to proper tools and software but OCRing source-code still seems like a nightmare back then.

consp
1 replies
9h24m

Banks used it on cheques for ages, why would it be that difficult? You do need compatible typesetting though.

ryanackley
0 replies
6h33m

Where did you get your information? MICR lines, the line of numbers at the bottom of the check, use magnetic ink. The acronym stands for Magnetic Ink Character Recognition. So for ages, they didn't use optics at all.

In the modern day, cheque OCR is monopolized by one company, Mitek. They may use tesseract somewhere in their stack but I've never read that anywhere.

MereInterest
0 replies
6h4m

Not so much plausible deniability, as a clear First Amendment argument. If you’re forbidden from exporting computer code, that was some new-fangled magic thing that nobody could possibly understand. If you’re banned from exporting a book, well that has some clear and obvious precedent as a restriction of free speech.

inquirerGeneral
0 replies
17h17m

I didn't know any of this. Thanks.

"The U.S. Munitions List changes over time. Until 1996–1997, ITAR classified strong cryptography as arms and prohibited their export from the U.S.[5]

Another change occurred as a result of Space Systems/Loral's conduct after the February 1996 failed launch of the Intelsat 708 satellite. The Department of State charged Space Systems/Loral with violating the Arms Export Control Act and the ITAR.[6][7]

As a result, technology pertaining to satellites and launch vehicles became more carefully protected." https://en.wikipedia.org/wiki/International_Traffic_in_Arms_....

bouncycastle
0 replies
8h55m

RSA = Republic of South Africa

T3OU-736
0 replies
4h1m

Thawte happily sold certs to US entities like universities (I was a sysadmin at one of those universities with such a cert :) )

loughnane
18 replies
21h50m

This topic still comes up a bunch. Someone please correct me, but as I understand it, anyone using new chips with Intel ME (or AMD's equivalent) has a gaping hole in their security that no OS can patch.

I know puri.sm[0] takes some steps to try to plug the hole, but haven't read up to see if it's effective or no.

[0] https://puri.sm/learn/intel-me/

bri3d
5 replies
21h7m

anyone using new chips that use Intel ME (or AMD's equivalent) have a gaping hole in their security that no OS can patch

Not really; anyone using chips with Intel ME or AMD PSP has an additional large binary blob running on their system which may or may not contain bugs or backdoors (realizing, of course, that a sufficiently bad bug is indistinguishable from a backdoor).

There are tens to hundreds of such blobs running on almost any modern system, and these are just one example. I would argue that ME and PSP are not the worst blobs on many systems; they have unsupported but almost certainly effective (MEcleaner / ME code removal), supported and almost certainly effective (HAP bit), and supported and likely effective (ME / PSP disable command) mechanisms to disable their functionality, and they are comparatively well-documented versus the firmware that runs on every other peripheral (networking, GPU, etc.) and comparatively hardened versus EFI.

loughnane
4 replies
19h39m

Yeah, this lives in the back of my mind too. I run Debian on 11th-gen Intel, but with the non-free blobs included to make life easier. I've been meaning to try it without them, but it's too tempting to just get things 'up' instead of hacking on it.

matheusmoreira
2 replies
11h22m

There's little we can do about it short of running ancient libreboot computers. We'll never be truly free until we have the technology to manufacture free computer chips at home, just like we can make free software at home.

voldacar
0 replies
10h18m

There's the Talos II, if you can afford it.

tonetegeatinst
0 replies
1h15m

ASML fabs in every basement when? I think RISC-V is as close to an open-source CPU as we have at the moment; unfortunately, most RISC-V CPUs rely on the company having protected IP, like the CPU layout or the core architecture, as far as I understand modern CPU design.

RISC-V has been a great step forward and I'd love to see it succeed, but I'm also aware of the lack of open-source architectures for GPUs or AI accelerators.

mistrial9
0 replies
18h43m

Debian has been hacked by Intel's blobs from my point of view

dylan604
4 replies
18h20m

Are these blob-type attacks accessible after boot? Essentially, are they only accessible if you have physical access? And at that point, isn't it game over anyway?

bri3d
3 replies
17h48m

Intel ME allows intentional remote access through the ME in some enterprise scenarios (vPro). The driver support matrix is quite small and this is a massively overblown concern IMO, but it’s the root of a lot of the hand wringing.

However, onboard firmware based attacks are absolutely accessible remotely and after boot in many scenarios. It’s certainly plausible in theory that an exploit in ME firmware could, for example, allow an attacker to escape a VM or bypass various types of memory protection. Unfortunately the actual role of the ME is rather opaque (it’s known, for example, to manage peripherals in s0ix sleep).

Ditto for any other blob. Maybe a specially crafted packet can exploit a WiFi firmware. Maybe a video frame can compromise the GPU.

These are also good persistence vectors - gain access remotely to the NOR flash containing EFI, and you have a huge attack surface area to install an early boot implant. (or if secure boot isn’t enabled, it’s just game over anyway). On Linux, it’s often just hanging out in /dev as a block device; otherwise, once an attacker has access to the full address space, it’s not too hard to bitbang.

These are all fairly esoteric attacks compared to the more likely ways to get owned (supply chain, browser bugs, misconfiguration), but they’re definitely real things.

The closed-sourceness is only a tiny part of the problem, too - a lot of the worst attacks so far are actually in open source based EFI firmware, which is riddled with bugs.

Which takes me back to my original response to “isn’t everyone backdoored by ME” - sure, maybe, but if you’re looking for practical holes and back doors, ME is hardly your largest problem.

aspenmayer
2 replies
17h9m

The closed-sourceness is only a tiny part of the problem, too - a lot of the worst attacks so far are actually in open source based EFI firmware, which is riddled with bugs.

Can you elaborate and/or provide context/links?

aspenmayer
0 replies
15h40m

Makes sense given the context. When you said bugs in open source EFI implementations, I thought you meant bugs in things like rEFI/rEFInd/rEFIt.

wtallis
2 replies
20h45m

Most consumer products (as opposed to some of those marketed to businesses) don't have enough of the components in place for the ME to accomplish anything, good or bad.

loughnane
1 replies
19h40m

What do you mean? What sort of components?

wtallis
0 replies
18h53m

For starters, few consumer systems have the ME wired up to a supported Intel NIC to provide the remote access functionality that is usually seen as the scariest ME-related feature. The processors are usually not vPro-enabled models, so the firmware will refuse to enable those features due to Intel's product segmentation strategy. And even if all the right hardware is in place, I think a system still needs to be provisioned by someone with physical access to turn on those features.

For most consumers, the main valid complaint about the ME is that it's a huge pile of unnecessary complexity operating at low levels of their system with minimal documentation. Anything fitting that description is a bit of a security risk, but the ME is merely one of many of those closed firmware blobs.

Onavo
1 replies
19h0m

Does Apple have a warrant canary? How do we know that the M series of chips haven't been compromised?

adamomada
0 replies
17h52m

marcan of the Asahi Linux project got into a discussion on reddit about this, and says that when it comes to hardware, you just can’t know.

I can't prove the absence of a silicon backdoor on any machine, but I can say that given everything we know about AS systems (and we know quite a bit), there is no known place a significant backdoor could hide that could completely compromise my system. And there are several such places on pretty much every x86 system

(Long) thread starts here, show hidden comments for the full discussion https://old.reddit.com/r/AsahiLinux/comments/13voeey/what_is...

I highly recommend reading this if you’re interested https://github.com/AsahiLinux/docs/wiki/Introduction-to-Appl...

sweetjuly
0 replies
16h49m

People always complain about ME/PSP but it misses the point: there is no alternative to trusting your SoC manufacturer. If they wanted to implement a backdoor, they could do so in a much more powerful and secretive way.

charcircuit
0 replies
19h25m

but as I understand it anyone using new chips that use Intel ME (or AMD's equivalent) have a gaping hole in their security that no OS can patch.

The existence of security coprocessors is not a security hole and firmware updates to these processors can be released if a security issue was found.

xyst
17 replies
20h42m

for a long time, the US considered cryptography algos as a munition. Needed some arms license to export.

Also, the US tried to convince the world that only 56 bits of encryption was sufficient. As SSL (I don’t think TLS was a thing back then) was becoming more mainstream, the US govt only permitted banks and other entities to use DES [1] to “secure” their communications. Exporting anything stronger than 56 bits was considered illegal.

https://en.m.wikipedia.org/wiki/Data_Encryption_Standard

londons_explore
13 replies
19h45m

Even now, if you join a discussion on crypto and say something like "Why don't we double the key length" or "Why not stack two encryption algorithms on top of one another because then if either is broken the data is still secure", you'll immediately get a bunch of negative replies from anonymous accounts saying it's unnecessary and that current crypto is plenty secure.

Geisterde
5 replies
16h25m

Well, I think that would severely inhibit future development. Scaling on Bitcoin has been a delicate game of optimizing every bit that gets recorded while also supporting future developments that don't even exist yet, and there is no undo button. New signature schemes and clever cryptography tricks can do quite a bit, but when you slap another layer of cryptography on, you will inevitably make things worse in the long run.

History's biggest bug bounty is sitting on the Bitcoin blockchain; if it were even theoretically plausible to crack SHA-256 like that, we would probably know, and many have tried.

monero-xmr
2 replies
12h36m

The real security of Bitcoin is the choice of secp256k1. Basically unused before Bitcoin, but chosen specifically because Satoshi was more confident it wasn’t backdoored.

https://bitcoin.stackexchange.com/a/83623

crotchfire
1 replies
10h30m

And ed25519 was out of the question, since -- being brand new -- its use would have given away the fact that DJB was among the group of people who presented themselves as Satoshi Nakamoto.

voldacar
0 replies
10h12m

Evidence?

londons_explore
1 replies
7h53m

If you reveal you have broken sha-256, then your bug bounty becomes worthless. The smart move is to steal and drain a few wallets slowly.

And that's exactly what we see - and every time it happens, the bitcoin community just laughs that someone must have been bad at key management or used a weak random number generator.

Geisterde
0 replies
7h19m

management or used a weak random number generator.

Except that has been the case in every instance thus far. The dev that lost his bitcoin last year was using arcane software; after a post-mortem they found the library being used only had like 64 bits of entropy.

paulpauper
1 replies
17h18m

Two encryption algorithms will mean needing two completely unrelated, unique passwords. This can be impractical and increases the odds of being locked out forever.

ranger_danger
0 replies
13h57m

no it doesn't mean that at all

KennyBlanken
1 replies
10h38m

I'll do you one better.

The head of security for Go, a Google employee, was also part of the TLS 1.3 working group, and in Go it's impossible by design to disable specific cipher suites for TLS 1.3.

The prick actually had the nerve to assert that TLS 1.3's security is so good this should never be necessary, and that even if it were, they'll just patch it and everyone can upgrade.

So someone releases a 0-day exploit for a specific TLS cipher. Now you have to wait until a patch is released and upgrade your production environment to fix it - all the while your pants are down. That's assuming you're running a current version in production and you don't have to test for bugs or performance issues upgrading to a current release.

Heaven fucking forbid you hear a cipher is exploitable and be able to push a config change within minutes of your team hearing about it.

I'd place 50/50 odds on it being a bribe by the NSA vs sheer ego.

londons_explore
0 replies
7h47m

Seems like a stupid design, if only for the fact that some uses of TLS, where a very specific client is connecting, you might want to enable precisely the one cypher suite you expect that client to use.

Then all your performance tests can rely on the encryption and key exchange will always use the same amount of CPU time etc.

AnotherGoodName
1 replies
16h13m

The best is the claim that multiple encryption makes things weaker, or that the combination is only as strong as the weaker of the two. If that were true, we could break encryption just by encrypting the ciphertext once more with a weaker algo.

piperswe
0 replies
13h46m

The invalidity of that claim is a bit more nuanced. Having an inner, less secure algorithm may expose timing attacks and the like. There are feasible scenarios where layered encryption (with an inner weak algo and outer strong algo) can be less secure than just the outer strong algorithm on its own.

ahazred8ta
0 replies
17h29m

EVERYTHING IS FINE. WOULD YOU LIKE A BRAIN SLUG?

meroes
2 replies
18h59m

Do you have more on the legality aspect? I knew the NSA pressured for a weaker key, but what aspect could be made illegal? I had to write an undergrad paper on the original DES and I never saw an outright illegality aspect, but I wouldn't be surprised. They also put in their own substitution boxes, and I surprisingly never found much info on how exactly the NSA could use them. So much speculation, but why no detailed post-mortems in the modern age?

ahazred8ta
0 replies
17h20m

In the US, since the 1950s, you need a permit to export any product which has encryption. There are fines if you don't file the right paperwork. In the 1970s and 80s they would only approve keys of 40 bits or less.

https://en.wikipedia.org/wiki/Export_of_cryptography_from_th...

MattPalmer1086
0 replies
16h46m

It seems that they changed the S-boxes to make them more resistant to differential cryptanalysis (which they knew about but the public didn't). So this is actually a case of them secretly strengthening the crypto.

Presumably this is because they didn't want adversaries to be able to decrypt stuff due to a fundamental flaw. I guess it's possible they also weakened it in another way, but if so nobody has managed to find it.

keepamovin
12 replies
16h3m

I was so curious about the origins of the SHA algorithms that I made a FOIA request to the NSA about SHA-0^0, as I wanted to understand how it was developed, and requested all internal communications, diagrams, papers and so on responsive to that.

Interestingly I found that after I got a reply (rough summary: you are a corporate requester, this is overly broad, it will be very expensive) I could no longer access the NSA website. Some kind of fingerprint block. The block persisted across IP addresses, browsers, incognito tabs, and devices so it can't be based on cookies / storage.

Still in place today:

  Access Denied

  You don't have permission to access "http://nsa.gov/serve-from-netstorage/" on this server.
0: https://en.wikipedia.org/wiki/SHA-1#Development

p-e-w
7 replies
15h54m

Some kind of fingerprint block. The block persisted across IP addresses, browsers, incognito tabs, and devices so it can't be based on cookies / storage.

Then what is it based on, if it happens across different devices and different IP addresses?

I find it very surprising that the NSA would go to such technologically advanced lengths to block FOIA requesters from their website (which, needless to say, doesn't contain any sensitive information).

keepamovin
5 replies
15h45m

Yeah weird, right? Highly surprising, high entropy, highly informative bit of signal possibly. Obvious way to admit SHA-0 is a pressure point maybe.

Idk, maybe you can figure out the block, I think it's beyond me. Here's a picture if that helps haha! :)

https://imgur.com/a/rNIjrB2

Highly unlikely to be a coincidence but I took it to mean: Don't make these requests ... OK ... haha! :)

lcnPylGDnU4H9OF
1 replies
13h18m

This honestly seems kinda fun. If one was really dedicated: buy new device with cash; purchased and used outside city of residence; don’t drive there, non-electric bike or walk; only use device to connect to the website from public wifi; never connect to own wifi; don’t use same VPN service as usual. Not sure if I missed anything. Probably did.

numpad0
0 replies
12h4m

Or walk into an internet cafe. Cafe membership systems, if any, probably aren't yet connected enough to keep you from seeing the raw Internet for the first few minutes, at least for a few more years. Everyone who's vocal online should try this once in a while. Even Google search results noticeably change depending on the social class inferred from your location and whatnot.

MichaelDickens
1 replies
12h42m

This seems like a good way to learn what information your system is leaking that it shouldn't be leaking, eg if you use a VPN and they still block you, your VPN is probably not doing what it claims to be doing. (AFAIK a correctly implemented VPN would not send any of your computer or browser information to nsa.gov.)

themoonisachees
0 replies
8h53m

VPNs do not do what you seem to think they do. A VPN is a privacy tool as much as restarting your router to get a new IP lease is a privacy tool.

rvnx
0 replies
12h48m

It’s just Akamai being overzealous against bots.

It could simply be you read more pages and it may have triggered anti-scraping rules.

I cannot access many .gov websites either, and maybe it was after 5 pages or so.

ranger_danger
0 replies
13h58m

there are MANY different ways to fingerprint something or someone, see e.g. https://abrahamjuliot.github.io/creepjs/ or https://scrapeops.io/web-scraping-playbook/how-to-bypass-clo....

also fun fact, even Tor Browser can't hide the real OS you're running when a site uses javascript-based OS queries.

Anon4Now
1 replies
11h38m

I'm curious as to why the NSA still has http:// urls.

pseudocomposer
0 replies
4h13m

It redirects to HTTPS.

justinclift
0 replies
12h22m

That url (http://nsa.gov/serve-from-netstorage/) works via Tor, so maybe try that? ;)

AtlasBarfed
0 replies
10h31m

They probably have someone specifically assigned to crack every device you use.

jonathanyc
4 replies
17h23m

Now the argument coming from civil society for backdoors is based on CSAM:

Heat Initiative is led by Sarah Gardner, former vice president of external affairs for the nonprofit Thorn, which works to use new technologies to combat child exploitation online and sex trafficking. In 2021, Thorn lauded Apple's plan to develop an iCloud CSAM scanning feature. Gardner said in an email to CEO Tim Cook on Wednesday, August 30, which Apple also shared with WIRED, that Heat Initiative found Apple's decision to kill the feature “disappointing.”

“Apple is one of the most successful companies in the world with an army of world-class engineers,” Gardner wrote in a statement to WIRED. “It is their responsibility to design a safe, privacy-forward environment that allows for the detection of known child sexual abuse images and videos. For as long as people can still share and store a known image of a child being raped in iCloud we will demand that they do better.”

https://www.wired.com/story/apple-csam-scanning-heat-initiat...

rpmisms
1 replies
16h34m

CSAM is evil, and I personally believe we should execute those who distribute it.

I have an even stronger belief in the right to privacy, and those in the government who want to break it should be executed from their positions (fired and publicly shamed).

smolder
0 replies
17m

In some cases artwork where no child was harmed counts as CSAM. Is that execution worthy?

matheusmoreira
0 replies
11h11m

The perfect political weapon. Anyone who opposes is automatically labeled a pedophile and child abuser. Their reputations are destroyed and they will never oppose again.

int_19h
0 replies
15h35m

This isn't even a recent thing anymore. "iPhone will become the phone of choice for the pedophile" was said by a senior official in 2014, when full device encryption was starting to become common.

jarjar2_
4 replies
17h53m

One of my favorite comics about cryptography. https://xkcd.com/538/

Government routinely posits a desperate need for backdoors in crypto and crypto secured products, but almost universally they get the data they want without needing a manufacturer provided backdoor. So why they insist on continuing to do that is beyond me. It's almost security theater.

If they really want your protected information they will be able to get it, either through a wrench or a legal wrench. In lieu of that, they can use the practically unlimited resources at their disposal, from the people they employ (or contract out to) to the one axis along which most secured devices eventually succumb: time.

My personal threat model isn't to defeat the government. They will get the data eventually. My personal threat model is corporations that want to know literally everything about me and bad faith private actors (scammers, cybercrime and thieves) that do too.

Ultimately it will take strict legislation and compliance measurement, along with penalties, to keep the government from overstepping the bounds it already promises not to step over, let alone new ones. It will take even stricter legislation to stop corporations from doing it. There are significant financial and political incentives for our ruling bodies not to do that, unfortunately.

I mean honestly, when you have this kind of ability at your disposal...

https://www.npr.org/2021/06/08/1004332551/drug-rings-platfor...

paulpauper
2 replies
17h11m

A backdoor, which works anywhere and way better than the wrench option.

jarjar2_
1 replies
17h7m

They don't need it, which was my point. They have all the tools they need right now to get what they want. Why should anyone grant them more?

lazide
0 replies
14h50m

Why would they not try to get a magic back door that makes their lives easier, even if they don’t need it?

dennis_jeeves2
0 replies
13h41m

Ultimately it will take strict legislation and compliance measurement along with penalties to protect the government from overstepping the bounds they promise not to step over already, let alone new ones.

They will find ways to not comply, often blatantly. They have no scruples.

hunglee2
4 replies
18h21m

the biggest threat to a citizens privacy is always your own government.

BLKNSLVR
2 replies
17h10m

Yep. I use Chinese brand phones because if they're snooping all my shit, they're much further away from me than my own government and not likely to have sharing arrangements.

aspenmayer
1 replies
17h4m

Wouldn’t Chinese branded phones be a higher priority target by foreign agencies in the first place?

BLKNSLVR
0 replies
13h42m

It's likely an additional data point in some kind of 'suspicious' rating.

I think I hit quite a few of those 'suspicious' check-boxes that law enforcement would consider important, whilst actually technically knowledgeable people wouldn't even blink at them. Refer: https://news.ycombinator.com/item?id=39050898

AtlasBarfed
0 replies
10h24m

I kinda disagree, because the government, even now, can be shamed and outraged.

Corporations however? They are, by design, utterly amoral.

So the modern state is that corporations are hoovering all your data they can for "ad research and optimization". I think I read recently that facebook has thousands of companies involved in the customer data supply chain?

And if those companies have your data, it's not that YOUR government has it guaranteed. It's that ALL governments have your data.

quasarj
2 replies
17h59m

I like that typo in the image label - the Chipper Clip lol

xrd
0 replies
6h42m

That's the backdoor in the rot13 visible to the naked eye.

bemusedthrow75
0 replies
15h51m

We've had enough of chipper clips to last a lifetime!

  It looks like you're writing an article about encryption. Would you like help?

  (o) Insert a joke about Apple forcing a U2 album on us

  (o) Let me write the joke myself

  [x] Don't show me this tip again

Joel_Mckay
2 replies
16h34m

Encryption is meaningless with CPU-level side-channel memory key dumps active on most modern platforms. The reality is that if you have been targeted for financial or technological reasons, then any government will eventually get what they are after.

One can't take it personally, as all despotic movements also started with sycophantic idealism.

Have a great day, =)

https://xkcd.com/538/

keepamovin
1 replies
14h47m

Agree with this. Makes me think that the code-breakers themselves must be using specialized hardware to protect their own side-channels. But for this to be feasible you need to have big chipmakers in on it. Fascinating to consider

Joel_Mckay
0 replies
14h3m

No need, data collection is a different function than exploitation. People that are turned into assets are often not even aware how they are being used.

I once insisted I could be bribed to avoid the escalation of coercion as a joke, that was funny until someone actually offered $80k for my company workstation one day.

It is a cultural phenomenon, as in some places it is considered standard acceptable practice.

My advice is to be as boring as possible, legally proactive, and keep corporate financial profit modes quiet.

Good luck =)

mouzogu
1 replies
10h55m

how likely is it that whatsapp or telegram are backdoored?

i wonder what tools do guerilla armies or drug lords use to communicate..

or maybe its better to hide in plain sight.

just use some kind of double speak that gives plausible deniability.

KennyBlanken
0 replies
10h37m

Telegram is a poster-child for sketchy security.

hn8305823
1 replies
21h47m

In case anyone is wondering about the context for this 2016 article, it was right after the 2015 San Bernardino attack and the FBI was trying to get into one of the attackers' phones. Apple resisted the request primarily because the FBI wanted signed code that would allow installing rogue firmware/apps/OS on any iPhone, not just the attacker's.

https://en.wikipedia.org/wiki/2015_San_Bernardino_attack

KennyBlanken
0 replies
10h9m

The FBI has used damn near every major incident to push for nerfing encryption, and in between they bray about child porn.

whatever3
0 replies
19h25m

I posted this because of the Enigma/Crypto AG mixup in the article, but it doesn't seem that anyone noticed. Seemed relevant considering the post about fabricated Atlas Obscura stories a few days ago.

schmudde
0 replies
21h22m

Sharing this seems like an appropriate way of commemorating David Kahn's passing (https://news.ycombinator.com/item?id=39233855). <3

qubex
0 replies
6h7m

The image isn’t of an Enigma but of a Lorenz teleprinter encryption system. It would now be referred to as a “stream cipher”.

nimbius
0 replies
19h30m

Everyone forgets the Speck and Simon ciphers the NSA wanted in the Linux kernel, which were ultimately removed entirely after a lot of well-deserved criticism from heavy hitters like Schneier.

https://en.m.wikipedia.org/wiki/Speck_(cipher)

mstef
0 replies
5h55m

I believe the Cryptomuseum has a more extensive list than the one in the link: https://www.cryptomuseum.com/intel/nsa/backdoor.htm One in particular is interesting, as I have reverse engineered it and proven its existence: https://www.cryptomuseum.com/crypto/philips/px1000/nsa.htm

mannyv
0 replies
19h57m

There's talk that the NSA put its own magic numbers into elliptic curve seeds. Does that count?

https://www.bleepingcomputer.com/news/security/bounty-offere...

KennyBlanken
0 replies
10h10m

This leaves out at least one other proven case - the NSA worked to weaken an early encrypted telephone system that was sold to numerous other governments and allowed them to listen in on conversations.

Then there's this: https://www.cnet.com/tech/tech-industry/nsa-secret-backdoor-...

And then there was the Tailored Access Operations group that backdoored hundreds if not thousands of computers and networking gear https://en.wikipedia.org/wiki/Tailored_Access_Operations

And then there's Bullrun where they partnered with commercial software and hardware companies to insert backdoors, specifically in many commercial VPN systems https://en.wikipedia.org/wiki/Bullrun_(decryption_program)

Let's also not forget the backdooring of Windows NT: https://en.wikipedia.org/wiki/NSAKEY

...and Lotus Notes was backdoored as well.

ETH_start
0 replies
18h45m

For financial encryption, so essential is warrantless surveillance to their control of finance, that they've successfully argued that a neutral and immutable protocol instantiating open source code on a distributed public blockchain is property of a sanctionable entity, and thus within their authority to prohibit Americans from using:

https://cases.justia.com/federal/district-courts/texas/txwdc...