Operation Triangulation: What you get when attack iPhones of researchers

mike_hearn
38 replies
23h44m

That's pretty astonishing. The MMIO abuse implies that either the attackers have truly phenomenal research capabilities, or that they hacked Apple and obtained internal hardware documentation (more likely).

I was willing to believe that maybe it was just a massive NSA-scale research team up until the part about the custom hash function sbox. Apple appears to have known that the feature in question was dangerous and deliberately both hid it, whatever it is, and then gone further and protected it with a sort of (fairly weak) digital signing feature.

As the blog post points out, there's no obvious way you could find the right magic knock to operate this feature short of doing a full silicon teardown and reverse engineering (impractical at these nodes). That leaves hacking the developers to steal their internal documentation.

The way it uses a long chain of high-effort zero days only to launch an invisible Safari that then starts from scratch, loading a web page that uses a completely different chain of exploits to re-hack the device, is also indicative of a massive organization with truly abysmal levels of internal siloing.

Given that the researchers in question are Russians at Kaspersky, this pretty much has to be the work of the NSA or maybe GCHQ.

Edit: misc other interesting bits from the talk: the malware can enable ad tracking, and can also detect the cloud iPhone hosting services often used by security researchers. The iOS/macOS malware platform seems to have been in development for over a decade, and it actually runs ML on the device to do object recognition and OCR on photos, to avoid uploading image bytes: they only upload the ML-generated labels. They truly went to a lot of effort, but all that was no match for a bunch of smart Russian students.

I'm not sure I agree with the speaker that security through obscurity doesn't work, however. This platform has been in the wild for ten years and nobody knows how long they've been exploiting this hidden hardware "feature". If the hardware feature was openly documented it'd have been found much, much sooner.

sampa
18 replies
23h41m

or Apple just implemented this "API" for them, because they asked nicely

chatmasta
15 replies
23h33m

Or they have assets working at Apple... or they hired an ex-Apple employee... etc.

That's the problem with this sort of security through obscurity; it's only secure as long as the people who know about it can keep it secret.

mike_hearn
14 replies
23h22m

I don't think hiring an ex-Apple dev would let you get the needed sbox unless they stole technical documentation as they left.

So it either has to be stolen technical docs, or a feature that was put there specifically for their usage. The fact that the ranges didn't appear in the DeviceTree is indeed a bit suspicious, and the fact that the description after being added is just 'DENY' is also suspicious. Why is it OK to describe every range except that one?

But the really suspicious thing is the hash. What kind of hardware interface does arbitrary DMA protected by a secret but weak hash function? Is there any legitimate usage for such a thing? I've never heard of such an interface before.

If it's a genuine backdoor and not a weird debugging feature, then it should be rather difficult to add one that looks like this without other people in Apple realizing it's there. Chips are written in source code using version control, just like software. You'd have to have a way to modify the source without anyone noticing or sounding the alarm, or to modify it before synthesis is performed. That'd imply either a very deep penetration of Apple's internal network, sufficient to inject backdoors into hardware, or one or more agents on the inside.

This really shows how dangerous it is to intel agencies when they decide to attack security professionals. Attacking Kaspersky has led directly to them burning numerous zero days, including several that might have taken fairly extreme efforts to set up. It makes you wonder what is on these guys' iPhones that's considered so valuable. Presumably, they were after emails describing more zero days in other programs.

justinclift
3 replies
8h51m

I don't think hiring an ex-Apple dev would let you get the needed sbox

That'd probably depend on which team the dev worked in. If they were in the right team, then it might.

mike_hearn
2 replies
8h12m

What I mean is that (assuming the sbox values are actually random) you couldn't memorize it short of intensive study and practice of memory techniques. If the "sbox" is in reality some easily memorizable function then maybe, but even then, how many people can remember long hex values from their old jobs?

raverbashing
0 replies
7h45m

But having a predictably generated sequence of numbers is what cryptographers prefer:

https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number

justinclift
0 replies
7h50m

Two points:

a) If a person is using those values daily for years (or even a couple of months), then it's very likely they'd have memorized them

b) Sometimes just knowing the concept exists for sure is good enough, as you can then go and brute force things until you've worked out the values

malfist
2 replies
22h8m

But the really suspicious thing is the hash. What kind of hardware interface does arbitrary DMA protected by a secret but weak hash function? Is there any legitimate usage for such a thing? I've never heard of such an interface before.

Never attribute to malice that which can be attributed to incompetence. There are plenty of examples in the wild of going halfway with strong security, but halfway still leaves the barn door open.

joe_the_user
1 replies
20h35m

Never attribute to malice that which can be attributed to incompetence. There are plenty of examples in the wild of going halfway with strong security, but halfway still leaves the barn door open.

That rule should only be applied in the normal world. In the world of security, where you know bad actors are out there trying to do stuff, it doesn't apply. And there are examples of spy types injecting plans to go halfway with security for their own purposes - not that this proves the origin of any given plan; incompetence is still one possibility. It just comes back to the original point: this stuff is mysterious.

chatmasta
0 replies
19h25m

As a defender, you should treat malice and incompetence as functionally equivalent. Save the attribution for the post-mortem (or better yet, don't let it come to that).

Veserv
2 replies
19h22m

It should be very easy to add one without somebody noticing. This is the same Apple which, only a few years ago, shipped a version of macOS for months that let you log in to root with any password.

Their review processes are so incompetent that even one of the most security-critical components, root login, let a totally basic “fail your security 101 class” bug through. It is absolutely inexcusable to have a process that bad, and it is indicative of their overall approach. As they say, “one cockroach means an infestation”.

mike_hearn
0 replies
8h13m

Mistakes happen, but Apple's reputation for strong security is well deserved. They invest heavily, and the complexity of this exploit chain is evidence of that. Linux has had its fair share of trivial root login exploits that somehow got through code review.

henriquez
0 replies
14h12m

I’m not trying to defend Apple, but I think that line of thinking is pretty cynical and could be used to condemn basically any company or open source project that attracts enough interest from attackers.

mrandish
1 replies
15h35m

What kind of hardware interface does arbitrary DMA protected by a secret but weak hash function?

I agree. This appears likely to be an intentional backdoor injected at the hardware level during design. At such a low level, I think it could have been accomplished with only a handful of employees in on it. There would have been no need to subvert Apple from the top down with large numbers of people at many levels being privy.

In early silicon there can be a bunch of registers and functions implemented for testing which are later pulled out. Except maybe one set of registers doesn't get pulled but instead a door knock is added with a weak hash function, making the registers invisible to testers and fuzzing.

It seems a little too convenient that the door knock hash was weak. After all, strong hash functions aren't unknown or hard. The reason it had to be a weak hash function was to create "plausible deniability". If it was a strong hash then once any exploitation was discovered there would be no denying the vuln was intentionally placed. If it really was just a test DMA function that someone supposedly 'forgot' to remove before production silicon, I can't think of a reason to have it behind any kind of door knock in the first place.

I read that it was patched by adding these addresses to the "access denied" list. While I don't know anything about Apple security, I'm stunned that any such low-level access list isn't 'opt-in' instead of 'opt-out'. If it were 'opt-in', any such 'undocumented' register addresses would be denied by default. And if they were on the 'opt-in' list, yet remained undocumented, then it would be obvious to anyone looking at the security docs that something was amiss.

codedokode
0 replies
7h39m

It reminds me of the Linux backdoor attempt that was also made to look like a mistake (== replaced with =) [1].

[1] https://freedom-to-tinker.com/2013/10/09/the-linux-backdoor-...

contingencies
1 replies
22h25m

APTs probably routinely identify and target such developers. With multi-million dollar payouts for single bugs and high state level actor attention, employee profiling is clearly a known attack vector and internal security teams probably now brief on relevant opsec. FWIW the only Apple kernel developer I knew has somewhat recently totally removed themselves from LinkedIn.

saagarjha
0 replies
12h20m

People who work on the kernel are not hard to find.

markus_zhang
0 replies
17h48m

I wouldn’t be surprised if one or two very senior people in large tech companies are agency agents, willingly or not.

I don’t really have any proof, but considering the massive gain, it shouldn’t surprise anyone. The agencies might not even need to pay large sums of $$$ if said assets have vulnerabilities.

WhackyIdeas
0 replies
22h57m

I think the way it’s done is that the code is presented to them to use; Apple probably doesn’t even code those parts themselves.

supriyo-biswas
4 replies
12h59m

So much misinformation in this thread. It’s a Hamming ECC, as described here[1].

[1] https://social.treehouse.systems/@marcan/111655847458820583

codedokode
1 replies
7h45m

Why do you need an error-correcting code for a debugging feature, though? I would not protect debug registers with a hash.

gorkish
0 replies
1h50m

Because you are DMA-ing the raw bits into cache with the GPU, but the CPU is going to check those ECC codes on read, as the caches on Apple SoCs are ECC-native. It's an integrity 'protection', not a security 'protection'.

uncle-betty
0 replies
5h52m

More evidence for an ECC, obtained by looking at how the 10 output bits of the function depend on its 256 input bits:

Each of the 10 parity bits output by the function is the xor of exactly 104 of the 256 input bits.

Each of the 256 input bits contributes to (= is xor-ed into) either 3 or 5 of the 10 parity bits.

This is in line with the SEC-DED (single error correction, double error detection) ECC construction from the following paper:

https://people.eecs.berkeley.edu/~culler/cs252-s02/papers/hs...

Translating the above observations about the function into properties of the H matrix in the paper:

Each row of the matrix contains an identical number of ones (104).

Each column of the matrix contains an odd number of ones (3 or 5).
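
For concreteness, here is a rough Python sketch of those two checks. The masks are only placeholders for the recovered table (one 256-bit mask per parity bit, each listing which input bits that parity bit covers); the checking logic is the point, not the values.

  def parity_bits(data, masks):
      # Each output bit is the XOR (i.e. parity) of the input bits
      # selected by that parity bit's 256-bit mask.
      return [bin(data & m).count("1") & 1 for m in masks]

  def secded_shape(masks, nbits=256):
      # Row weights: how many input bits each parity bit covers
      # (observed: exactly 104 for all ten rows).
      rows = {bin(m).count("1") for m in masks}
      # Column weights: how many parity bits each input bit feeds
      # (observed: 3 or 5 -- odd, per the Hsiao SEC-DED construction).
      cols = {sum((m >> i) & 1 for m in masks) for i in range(nbits)}
      return rows, cols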

mike_hearn
0 replies
7h59m

Very interesting, thanks. Summarizing that thread:

- The "hash" is probably an error correcting code fed into GPU cache debug registers which will be stored in the cacheline itself, you're expected to compute the ECC because it's so low level. That is, the goal isn't to protect the DMA interface. (but this isn't 100% certain, it's just an educated guess)

- The "sbox" is similar to but not the same as a regular ECC as commonly used in hardware.

- Martin argues that the existence of such registers and the code table could have been guessed or brute forced, even though a compromise or info leak from Apple seems more likely. Or possibly even from the old PowerVR days. But if it's the NSA then who knows, maybe they are literally fuzzing hidden MMIO ranges to discover these interfaces.

- This is possible because the GPU has full DMA access without an IOMMU for performance reasons, so it's fertile ground for such exploits. Probably more will be discovered.

So that's all reassuring.

malaya_zemlya
4 replies
18h26m

also is indicative of a massive organization with truly abysmal levels of internal siloing.

Or a joint project between several organizations.

LargeTomato
3 replies
18h5m

Or, like, they have a rootkit and it works, so why reinvent the wheel? They have an attack payload, so why reinvent the wheel? Just plug and play all the packages you need until you can compromise your target device.

computerfriend
1 replies
7h59m

But there is a very good reason to reinvent the wheel here: to not burn more zero-days than you have to.

baobabKoodaa
0 replies
3h2m

Exactly! This is the part of the story that mystifies me completely and I would love to see some explanation.

mike_hearn
0 replies
7h57m

The attack payload should not be so tied to an exact installation path that you can't just install it via a different exploit chain.

throwaway2037
1 replies
13h1m

truly phenomenal research capabilities

Maybe a nation state, e.g., APT?

rst
0 replies
5h11m

Being able to put together tooling with these capabilities makes the attacker an APT by definition. These are generally assumed to be national intelligence services, though that is an assumption. (Among other things, there are multiple countries where the lines between intelligence agencies and their contractors are... fuzzy.)

And while Kaspersky is refusing to speculate at all about attribution, the Russian government has claimed (without giving specific evidence) that it's NSA.

jsjohnst
1 replies
19h56m

A compromise on the GPU or ARM side seems an equally possible route.

stephen_g
0 replies
14h6m

What do you mean? Both the GPU and CPU designs are proprietary to Apple. They used to use regular ARM-designed cores, but the last of those before they switched to their own core design was around the A5 days (from memory). It uses the ARM instruction set but isn’t actually designed by ARM at all.

Similar for the GPU too. They may have started with HDL licensed from others (I think their GPU might actually have been directly based on the PowerVR ones they used to use, though I believe the CPU core is basically from scratch), but this vulnerability seems unlikely to have existed since then…

eastof
1 replies
11h5m

Maybe more likely they just have people inside Apple?

newsclues
0 replies
6h36m

This is likely, given the scale of Apple and the determination of state actors.

henriquez
0 replies
14h19m

This is a fairly incredible attack, and I agree with your analysis. The hidden Safari tab portion where they “re-hack” the device could be bad organizational siloing, as you mentioned, or indicative of a “build your own virus” approach like script kiddies used in the 90s. It could be a modular design for rapid adaptation, i.e. perhaps less targeted.

black_puppydog
0 replies
19h58m

If the hardware feature was openly documented it'd have been found much, much sooner.

Well, the point of Kerckhoffs's principle is that it should have been openly documented, and then anyone looking at the docs, even pre-publication, would have said "we can't ship it like that, that feature needs to go."

aberoham
0 replies
20h36m

Also note the IoC script, which lets you scan iTunes backups for indicators of compromise by Operation Triangulation: https://github.com/KasperskyLab/triangle_check

pushcx
16 replies
1d

It’s quite unfortunate that Apple doesn’t allow users to uninstall iMessage; it seems to be the infection vector for advanced threats like this, NSO Group, etc. Presumably it’s to avoid the support burden, but they could gate it behind having Lockdown Mode enabled for a week or something, to shake out the vast majority of mistaken activations.

munk-a
5 replies
1d

They gotta, gotta, have those blue bubbles. Some teenagers fight to get an overpriced phone solely to avoid the deep deep shame of having a green bubble when chatting.

If Apple were forced to stop making iMessage the exclusive option and to ship some pure SMS application, they might see a sudden, noticeable drop in market share.

vultour
2 replies
23h22m

Teenagers wanting blue bubbles and people looking to uninstall iMessage because it's a threat vector are two completely disjoint sets of people.

paulmd
1 replies
20h0m

"Blue bubbles bad" syndrome. Gotta bring it up whenever humanly possible.

Nvidia has a very similar "green man bad" syndrome going on too. As an HN discussion on Nvidia grows longer, the probability of someone mentioning that Linus said “fuck you, Nvidia” approaches 1, even though it’s irrelevant to the topic, and even though he's a mercurial asshole who's said a whole lot of things.

The casual fanboyism disrupts all discourse on these topics because there’s a large minority of users who have adopted what PG describes as “hater-ism” and allowed it to dominate their thinking on a topic. Negative parasocial attachment is the same process as positive parasocial attachment and just as problematic, but largely never called out.

http://www.paulgraham.com/fh.html

In short: lotta fanboys on these topics who don't even realize they're fanboys/adopting fanboy frames, because they don't realize that anti-fanboys are still parasocially attached too. And we've casually accepted the low level of discourse on these topics, and it pollutes the whole discussion of a lot of interesting topics because of who's doing them.

matkoniecz
0 replies
18h3m

Can you explain how disliking Nvidia for having been systematically problematic at some point (and maybe still being problematic) is fanboyism or parasocial attachment?

Neil44
1 replies
1d

They knew exactly what they were doing when they chose that nice blue and that cheap looking green.

sadjad
0 replies
23h52m

Never forget the icon they used for Windows servers: https://i.stack.imgur.com/5rYVr.png

I_Am_Nous
4 replies
21h30m

In the face of this kind of threat, it's pretty obvious why Apple treated Beeper as a security risk and took appropriate measures to secure iMessage.

avidiax
1 replies
5h33m

Beeper is the user's choice. And Apple is preventing other companies from providing a more secure iMessage alternative, e.g. one that doesn't even parse messages from people not in the contact list, or doesn't even parse anything without a click, etc.

Apple has had so many zero-click exploits in iMessage, yet they insist that you have to use Lockdown mode to do anything about it, and then proceed to bundle Lockdown mode with lots of potentially unwanted behavior.

I don't think there's any way to claim that Apple is just doing what's in the customer's best security interest.

I_Am_Nous
0 replies
2h3m

Beeper is the user's choice.

Me deciding to ride the subway to work for free is a user's choice, but that doesn't mean it's right. Using infrastructure for free because I feel like it is certainly my choice but I can't justify anger when someone makes me pay to use it since I should have paid in the first place. Currently Apple doesn't run iMessage as an open standard so it runs in "authorized riders only" mode.

I don't think there's any way to claim that Apple is just doing whats in the customer's best security interest.

This isn't what I claimed. I claimed Apple treated unauthorized 3rd party access to their infrastructure as a security risk and worked to shore up that risk. As you pointed out, there have been plenty of zero-click exploits in iMessage. Limiting the devices sending iMessages increases security. I believe Apple doesn't allow iOS VMs in general for the same reason.

saagarjha
0 replies
12h24m

The security model is basically orthogonal.

madeofpalk
0 replies
19h3m

I don’t think that’s clear at all. I imagine it’s still trivial for attackers to send specially crafted one-off payloads.

teruakohatu
2 replies
22h42m

Can someone explain to me why we can load vast quantities of untrusted code and a wide variety of image formats in our browsers all day long and be mostly safe today, but somehow even first-party messenger apps seem to be relatively easily compromised? Why can't messenger apps be sandboxed as well as browsers?

saagarjha
0 replies
21h49m

Note that the second half of this exploit chain involves going around and exploiting the web browser.

riwsky
0 replies
21h44m

this exploit chain involved a browser vulnerability; your premise is flawed

stefan_
0 replies
1d

I remember people were very passionately arguing iMessage can only be secure if the only client is the Apple sanctioned one

the unknown attackers kept their campaign alive simply by sending devices a new malicious iMessage text shortly after devices were restarted.

Almondsetat
0 replies
1d

what does "uninstall iMessage" mean? you can disable iMessage right in the settings so you only receive SMSs

Despegar
13 replies
22h29m

I'm curious to know from experts if there's anything Apple can do to create a step-change in terms of security of iPhones? Like if the going rate for a zero day is $1 million, is there anything Apple can do that can drive that up to $2 or $3 million? Or is it just going to be a perpetual cat and mouse game with no real "progress"?

maldev
4 replies
19h34m

It's already $2-3 million+. Apple has amazing security, especially for the iPhone, and continuously monitors it and dishes out silent patches. For a REALLY high-level example, it restricts system calls per process and requires all calls to be signed with an Apple key, AND it restricts who you can make the system call to; these restrictions are continuously monitored and updated. Not only this, but persistence on iPhone is effectively dead, meaning you have to reinfect the device after every reboot. One of the big things you notice in the article is the use of ROP; Apple requires every executable page to be signed by them, hence these assfisting ROP chains.

Veserv
3 replies
19h8m

2-3 million dollars is not “amazing”. That is less than the cost to open a McDonald's. You can get a small business loan in the US for more than that. There are literally tens of millions of people in the world who can afford that. That is 1/5 the cost of a tank.

2-3 million dollars is pocket lint to people conducting serious business, let alone governments. It is at best okay if you are conducting minor personal business. And this ignores the fact that attacks in the 2-3 million dollar range are trivially wormable: if you had actual cause to hack every phone, you would only be incurring marginal cents per attack. Even a relatively minor attack targeting 10,000 people costs less than the price of one phone per target.

MuffinFlavored
2 replies
15h18m

2-3 million dollars is not “amazing”.

I don't know. $2-3m for reading code in Ghidra and throwing stuff at a wall until something sticks? Maybe some fuzzing, etc.

I get that you could theoretically find an exploit that, for example, you send to 100 known wealthy people, and with it steal saved cookies + device IDs from financial apps and then try to transfer their funds/assets to an account you control, but...

Could you really pull that off 100 times before Apple catches on?

I guess you could... easily... now that I think about it.

sweetjuly
1 replies
11h39m

this has the (un)fortunate consequence of being illegal. Writing exploits and selling them to a friendly government, on the other hand, is totally legal. Plus, then you can sell support contracts for that sweet recurring revenue!

dmichulke
0 replies
9h24m

This also makes you a target for enemy services (for enabling the "friendly government") and for friendly services (for being a potential whistleblower).

Quite the cost in my eyes...

nvm0n2
3 replies
20h0m

Sure. Rewrite sensitive parts of their stack in memory-safe languages. They have Swift, after all. A lot of the iOS security improvements over time have really been more like mitigations that try to contain the damage when the giant pile of decades-old C gets exploited.

saagarjha
1 replies
12h32m

They’re working on it, but a memory-safe language doesn’t help you in some of the surface that the attackers exploited here.

nvm0n2
0 replies
8h38m

I think memory safety + integer overflow checking by default would have blocked many of these. Not the hardware mitigation bypass, but getting to the point where that matters required some safety problems that better languages can exclude.

stephen_g
0 replies
14h20m

That is exactly their plan. Swift could always link into C applications, and they have recently come out with C++ interoperability [1], so things like WebKit can start having parts rewritten, or new parts written from the start, in Swift. That way they can gradually replace C and C++ codebases instead of trying to rewrite everything (which sucks because, even for things much, much less complex than WebKit, you can have a team work for three years on a replacement and it’ll have fewer features than the original had when you started).

They’re even working on an embedded subset, so that microcontrollers for things like battery management, the Secure Enclave, etc. can run it.

1. https://m.youtube.com/watch?v=lgivCGdmFrw

develatio
2 replies
22h13m

I am by no means a security expert whatsoever. Period. But reading the article carefully, there is a step in the chain of exploits (CVE-2023-32435) which depends on exploiting Safari. Apple implemented a "Lockdown mode" (https://support.apple.com/en-us/105120) which might have handled this (?).

Answering your question more broadly, the "step-change" you're asking for is precisely the "Lockdown Mode" on iOS devices. It disables most of the features in order to reduce the attack surface of the device.

hn_throwaway_99
0 replies
16h44m

The Safari vulnerability wasn't necessary (the device was completely owned before that), and was really just a "nice to have" - it allowed verification of the targeted user and, presumably, customizable malware delivery. From the article, if you look at the bullet points under the Kaspersky diagram of the exploit chain:

After exploiting all the vulnerabilities, the JavaScript exploit can do whatever it wants to the device and run spyware, but attackers chose to: a) launch the imagent process and inject a payload that cleans the exploitation artifacts from the device; b) run the Safari process in invisible mode and forward it to the web page with the next stage.

In other words, looking at the diagram, https://cdn.arstechnica.net/wp-content/uploads/2023/12/trian... , it's completely "game over" once you get to the skull icon in the lower left corner, and the Safari exploit comes after that.

codedokode
0 replies
20h25m

If you read a better article with technical details [1], you'll see that Apple SoCs contain a "feature" (resembling a debugging tool) that allows bypassing memory protection by writing into undocumented and unused GPU registers. Apple locks down kernel memory to stop exploits, but these registers allow bypassing the lock.

This vulnerability is the key vulnerability without which the whole exploit chain would be useless.

[1] https://securelist.com/operation-triangulation-the-last-hard...

rmbyrro
0 replies
2h48m

We'd need to scrap decades of work in hardware and software for that.

Modern software sits on a foundation that was designed for a different era. Its designers didn't have today's challenges of security and scale in mind.

LanzVonL
8 replies
21h54m

Isn't the most obvious answer that Apple, like other US tech firms such as Google, simply creates these wild backdoors for the NSA/GCHQ directly? Every time one's patched, three more pop up. We already know Apple and Google cooperate with the spy agencies very eagerly.

jsjohnst
6 replies
21h15m

We already know Apple and Google cooperate with the spy agencies very eagerly.

The evidence clearly indicates otherwise…

Aerbil313
3 replies
19h58m

Ahem, Snowden, PRISM anyone?

jsjohnst
2 replies
19h45m

Ahem, you mean you have a single example, from a decade ago, one where Apple was hardly a key player (which is why Apple didn’t sign onto PRISM until half a decade after Yahoo, Microsoft, Google, et al.), as conclusive evidence of “eagerness to partner with spy agencies”, despite numerous public cases where they’ve done the opposite… got it!

smallnix
1 replies
17h26m

That makes sense. Would you agree to the revised statement:

"We already know Apple cooperated with the spy agencies behind the public's back"?

jsjohnst
0 replies
17h12m

No, I won’t agree to context-free blanket statements which are specifically worded to imply something that is simply not provably true, especially given evidence to the opposite. If you knew anything about PRISM at all, even the technical details publicly available with minimal effort on your part, you wouldn’t be asking.

freeflight
1 replies
19h44m

How so? Any competent intelligence service will not just depend on the goodwill of a corporation to secure access to assets and intelligence.

If they cooperate that's good and convenient, but that does not mean the intelligence service will not set in place contingencies for if the other side suddenly decides not to play ball anymore.

jsjohnst
0 replies
18h43m

I said nothing about anything you stated; that’s all clearly possible. I specifically refuted the unsupported claim that Apple “eagerly cooperates with spy agencies”, where there’s ample evidence to support the opposite claim.

freeflight
0 replies
19h46m

I consider that plausible with Google due to Google's funding history [0], but Apple is afaik way less "influenced" and the way this pwn was pulled off could also have been done by compromising Apple's hardware supply chain and not Apple itself.

Particularly considering how, in the past, Apple has been very willing to be on the receiving end of negative headlines for not giving US agencies decrypted access to the iCloud accounts of terrorist suspects. I don't remember Google ever having been the target of such a controversy, which suggests they willingly oblige all incoming requests.

[0] https://qz.com/1145669/googles-true-origin-partly-lies-in-ci...

transpute
7 replies
23h21m

iMessage can be disabled by local MDM for supervised devices, via free Apple Configurator in macOS app store, https://support.apple.com/guide/deployment/restrictions-for-...

  For Wi-Fi–only devices, the Messages app is hidden. 
  For devices with Wi-Fi and cellular, the Messages app is still available, but only the SMS/MMS service can be used.
SMS/MMS messages and non-emergency cellular radio traffic can be disabled by a SIM PIN, e.g. when using the device for an extended period via WiFi.

fishywang
6 replies
22h58m

We purchased an iPad with cellular, with the plan to put my home country's SIM card in it so I could still receive SMS (as most of the banks there still require SMS verification when you log in), and it turns out that an iPad with cellular does not really show you SMSes that aren't from the carrier of the SIM card.

transpute
3 replies
22h39m

> iPad with cellular does not really show you SMS's that's not from the carrier of the sim card.

Does iPad support SMS? The cellular line is usually only for data, https://www.howtogeek.com/710767/how-to-send-sms-text-messag...

  iPads can't send SMS text messages through Apple's Messages app. Even if you have an iPad with a cellular data plan for mobile internet on the go, you still can't send SMS text messages.

fishywang
2 replies
21h57m

Apple's own user guide (https://web.archive.org/web/20201223140550/https://support.a...) suggests otherwise:

In the Messages app , you can send text messages as SMS/MMS messages through your cellular service, or ...

Also, my own experience is that it can at least receive SMS text messages; it just won't show them to you if they're not from your carrier (if they're from your carrier, it shows you via a popup window or something; I can't really remember, as that was several years ago).

transpute
0 replies
21h51m

No direct experience to share, but that sentence may be referencing Continuity via iCloud, which is optional:

  With Continuity, you can send and receive SMS/MMS messages on iPad using the cellular connection on your iPhone.
> if it's from your carrier, it shows you via a popup window

If it's not shown in Apple's Messages app, maybe it was a carrier-specific app?

artdigital
0 replies
4h29m

The iPad can neither send nor receive SMS. The only way it can is through a nearby iPhone, or iMessage.

fsckboy
1 replies
17h43m

I've never understood why iPads can't be used as phones with an ordinary cellphone SIM. Is it simply because Apple doesn't want to pay a Qualcomm licensing fee or some equivalent? Who is it in the chain/ecosystem that does not want tablets being used as full phones, the carriers? Apple?

jrockway
0 replies
12h30m

I'm guessing it doesn't fit well with the carriers' price structure. Adding a tablet / smart watch / etc. is cheaper than adding another phone to your account. I wouldn't have a cellular iPad if it was a lot extra per month, but I think I pay $10 for both the tablet and the watch, which is fine with me.

cedws
7 replies
21h56m

This attachment exploits vulnerability CVE-2023-41990 in the undocumented, Apple-only TrueType font instruction ADJUST for a remote code execution. This instruction existed since the early 90’s and the patch removed it.

This is getting ridiculous. How many iMessage exploits have there now been via attachments? Why aren't Apple locking down the available codecs? Why isn't BlastDoor doing its job?

This is really disappointing to see time and time again. If a simple app to send and receive messages is this hard to get right, I have very little hope left for software.

nvm0n2
4 replies
20h4m

iOS has a reputation for having the best security, but how many times have Android/WhatsApp had these sorts of silent-instant-root exploits via invisible messages? I don't remember it happening. Maybe the strategy of writing lots of stuff in Java is paying off there.

azinman2
1 replies
19h30m

nvm0n2
0 replies
8h38m

Yes, but that wasn't a zero day. WhatsApp's own team found that, and it wasn't a zero-click exploit, you had to be in a video call with the attacker.

Brybry
1 replies
18h27m

Android has had zero click exploits. For example, Stagefright [1]

And even better, there are plenty of old Android phones out there which will be vulnerable to various exploits because of weak OTA update support policies.

[1] https://en.wikipedia.org/wiki/Stagefright_(bug)

kernal
0 replies
14h48m

Sigh…there has never been an 0day Stagefright exploit in the wild. And even if there was it wouldn’t have worked on all Android devices due to the OS differences among OEMs.

Also, there are plenty of old iPhones that do not receive updates anymore and are just as vulnerable so I’m not sure why you needed to get that in.

twobitshifter
0 replies
18h25m

I wonder why attachments would ever be loaded from unknown contacts.

akira2501
0 replies
18h53m

If I've read the rest of the documentation correctly, the exploit is actually triggered from an attached ".watchface" file, which, of course, has the font vulnerability in it.

I'd like to meet the person who suggested even sending .watchface files as iMessage attachments in the first place. What were you thinking? Did you not have a large enough attack surface already?

nothercastle
6 replies
1d

A state actor attacking another state actor. Incredibly sophisticated, and it just goes to show that this basically can't be defended against.

whartung
4 replies
1d

It can be defended against. The detail is that the only way to harden those defenses is to toss it out in the world and let folks poke holes in it.

This was an extremely complex exploit. It was complex because of all of the defenses put in place by Apple and others. It required State level resources to pull it off.

We also don't know what, if any, external skullduggery was involved in the exploit. Did someone penetrate Apple/ARM and get internal documentation? Compromise an employee? Did Apple/ARM participate? Maybe they just dissolved a CPU cover, and reverse engineered it.

But that cat is now out of the bag, and it's been patched.

Progress.

As many folks say, when it comes to dealing with security, consider the threat model. Being under the lens of an advanced state is different from keeping your younger brother out of your WoW account.

This exploit wasn't done by a bunch of scammers selling "PC Support". That's the good news.

When stuff like this happens, I always go back to Stuxnet, where not only did they breach an air gap, they went in and did a sneak and peek into some other company to get the private signing keys so that their corrupted payload was trusted. There's a difference between an intelligence operation and a "hack".

Making stuff like this very expensive is part of the defensive posture of the platform.

hnburnsy
2 replies
23h56m

It was complex because of all of the defenses put in place by Apple and others.

I don't know jack about hardware, but it would seem obvious that when one designs a chip, you make sure it does not have 'unknown hardware registers' or unknown anything when you get it back from the manufacturer.

This makes everything written on this page worthless...

Prevent anyone except you from using your devices and accessing your information. https://www.apple.com/privacy/control/

tuetuopay
0 replies
23h37m

I don't know jack about hardware, but it would seem obvious that when one designs a chip, you make sure it does not have 'unknown hardware registers' or unknown anything when you get it back from the manufacturer.

Well, you are in trouble then. All modern hardware has such hidden parts, most of the time referred to as "undocumented" rather than "unknown". I know this seems pedantic, but to the public eye, anything undocumented is unknown. What makes these special, however, is that they are not used at all by public software, and are thus truly unknown: one can only guess at their use, or even their mere existence.

tedunangst
0 replies
22h42m

I don't know jack about hardware

Could have stopped writing right there.

ogurechny
0 replies
23h17m

Compromise an employee?

An official visits the headquarters and informs them that certain employees need to be hired into certain departments “to help with national security”. End of story.

What even makes people think that executives, whose job is to deal with everyone in order to “do business”, are their long-distance friends, or some kind of punks who'd jump on the table and flip birdies into the faces of people making such an offer?

kornhole
0 replies
1d

However, this one seems to have been coordinated with Apple. A nonprofit, nonaligned, independently managed project could be more immune to pressure from the national security apparatus. I think it is incredibly naïve to think that the largest US corporation does not cooperate. This is why I keep donating to GrapheneOS.

londons_explore
6 replies
19h43m

CoreSight is not some backdoor - it's a debug feature of all ARM CPUs. This looks like a necessary extension of CoreSight to work with Apple's memory protection stuff.

Even though no public documentation exists, I'm sure thousands of Apple engineers have access to a modded gdb or other tooling to make use of it.

smallnix
4 replies
17h32m

That does not explain the weird hashing.

duskwuff
3 replies
14h12m

As explained by marcan: it's not "hashing", it's an error-correcting code. Much more understandable in that light.

https://social.treehouse.systems/@marcan/111655847458820583

adrian_b
2 replies
8h3m

That the secret registers are in fact cache test registers, as explained at that link, is a very plausible explanation for their existence.

Nevertheless, this does not explain at all the astonishing fact that they were mapped by default in the accessible memory space, unless listed and explicitly denied in the system configuration files.

No amount of incompetence seems enough to explain such a default policy, so the supposition of an intentional backdoor still seems more likely.

rst
1 replies
5h1m

Apple's mitigation was in fact to alter boot-configured memory mappings to deny access. (And as to the mappings... if they were in the middle of a range of documented registers, or close to one, sloppiness and poor internal communication are at least plausible...)

londons_explore
0 replies
48m

I would hope that all memory outside the known ranges is denied by default... Apple should know all the memory mapped hardware in the SoC, so why would they allow IO to something that doesn't exist?

It's just a few lines of code to parse the memory map and deny all undefined regions. As well as being good for security, it also helps find out-of-bounds write bugs, and ensures you can't accidentally ship an out of date memory map.
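
Something like this minimal sketch, say (the ranges are made up, not Apple's actual memory map): walk the sorted list of documented MMIO ranges and emit a deny entry for every hole, so anything undefined is unreachable by construction.

  def deny_gaps(known_ranges, space_start, space_end):
      # Deny every hole between documented MMIO ranges.
      denied, cursor = [], space_start
      for start, end in sorted(known_ranges):
          if cursor < start:
              denied.append((cursor, start))   # gap before this range
          cursor = max(cursor, end)
      if cursor < space_end:
          denied.append((cursor, space_end))   # trailing gap
      return denied

  # Toy example:
  print(deny_gaps([(0x1000, 0x2000), (0x4000, 0x5000)], 0x0, 0x8000))
  # -> [(0, 4096), (8192, 16384), (20480, 32768)]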

repiret
0 replies
17h14m

One person's debug tool is another's backdoor.

WalterBright
6 replies
1d

The extra hardware registers might have been discovered by examining the chip itself. One could find where the registers were on it, and notice some extra registers, then do some experimenting to see what they did.

codedokode
3 replies
20h15m

Isn't it easier to just pay one of the hundreds of employees with access to the chip design? Or even to get it without paying, by appealing to patriotism?

sangnoir
1 replies
19h27m

How many ex-Apple employees work(ed) at NSA? It may just have been the right person doing their regular 9-5 job, with no subterfuge. The list of employers for Hardware security folks is likely a couple of dozen companies, and Apple and NSA are among the most prestigious of them. I expect some employees to move in both directions.

cryu
0 replies
17h54m

I know of two, one from my team. Don't know how long they stayed there, though.

WitCanStain
0 replies
19h5m

Or just covertly tell Apple to hand over its documentation / to knowingly leave gaps in the defenses for NSA to exploit.

mhh__
0 replies
1d

Maybe, but chips already have vast, vast quantities of physical registers in a big blob.

Assuming it wasn't a lucky guess, timing attacks are often used to find this stuff.

gusfoo
0 replies
22h36m

The extra hardware registers might have been discovered by examining the chip itself.

Perhaps. But it's easier to phone the technical librarian and say "Hi! I'm Bob from the password inspection department. Can you verify your current password for me?"

cf1241290841
5 replies
17h30m

As it's about a 37C3 presentation, here's a comment from Fefe¹ in German: https://blog.fefe.de/?ts=9b729398

According to him, the exploit chain was likely worth somewhere in the region of an 8-digit dollar figure.

¹ https://en.wikipedia.org/wiki/Felix_von_Leitner

I guess somebody is going to get fired.

saagarjha
4 replies
12h31m

Why? Having exploits “burned” is part of the business.

cf1241290841
3 replies
12h26m

Exploits, yes.

Decade-old backdoors, no.

_kbh_
2 replies
12h6m

Decade-old backdoors, no.

I really doubt it's a backdoor. After reading the blog post and this thread from a prolific M1 MacBook hacker (marcan), I think it was just an unused or very rarely used feature that was left enabled by accident.

https://social.treehouse.systems/@marcan/111655847458820583

Some choice quotes.

First, yeah, the dbgwrap stuff makes perfect sense. I knew about it for the main CPUs, makes perfect sense it'd exist for the ASCs too. Someone had a lightbulb moment. We might even be able to use some of those tricks for debugging stuff ourselves :)

Second, that "hash" is almost certainly not a hash. It's an ECC code*. I bet this is a cache RAM debug register, and it's writing directly to the raw cache memory array, including the ECC bits, so it has to manually calculate them (yes, caches in Apple SoCs have ECC, I know at least AMCC does and there's no reason to think GPU/ASC caches wouldn't too). The "sbox" is just the order of the input bits to the ECC generator, and the algorithm is a textbook ECC code. I don't know why it's somewhat interestingly shuffled like that, but I bet there's a hardware reason (I think for some of these things they'll even let the hardware synthesis shuffle the bits to whatever happens to be physically optimal, and that's why you won't find the same table anywhere else).

cf1241290841
1 replies
10h59m

I really doubt it's a backdoor. After reading the blog post and this thread from a prolific M1 MacBook hacker (marcan), I think it was just an unused or very rarely used feature that was left enabled by accident.

Why? Apple isn't exactly a small family business, and this is quite a drastic "feature" to be left enabled by accident.

How would one look from your perspective?

Hackbraten
0 replies
4h38m

Why?

Because 1. it helps with debugging at development time; 2. it may take unreasonable effort to disable, possibly from a hardware team's point of view with no direct security background; 3. it may be worth keeping around for future patching flexibility.

Source: [0]

Apple isn't exactly a small family business, and this is quite a drastic "feature" to be left enabled by accident.

No matter how large and hierarchical a company is, there will always be teams making hundreds of small, apparently localized decisions in their own line of work, without consulting anyone outside their team, and without seriously considering ramifications. It's humans all the way down.

How would one look from your perspective?

A feature where you poke a seemingly random 64-bit value into an apparently arbitrary memory address, which grants you access to something you wouldn't normally have. That'd be a backdoor to me.

In the case at hand, the feature neither has a hidden MMIO address (it's discoverable in a small-ish search space), nor does it require a secret knock (but instead apparently just a somewhat convoluted checksum).

[0]: https://social.treehouse.systems/@marcan/111656703871982875

soupdiver
4 replies
23h51m

contingencies
2 replies
23h38m

Begins @ 27:21

In addition to the contents of the presentation, in terms of timeline...

2018 (September): First undocumented MMIO-present CPU launched, Apple A12 Bionic SOC.

2021 (December): Early exploit chain infrastructure backuprabbit.com created 2021-12-15T18:33:19Z, cloudsponcer.com created 2021-12-17T16:33:50Z.

2022 (April): Later exploit chain infrastructure snoweeanalytics.com created 2022-04-20T15:09:17Z suggesting exploit weaponized by this date.

2023 (December): Approximate date of capture (working back from the "half year" quoted analysis period + mid-2023 Apple reports).

The presenters also state that signs within the code reportedly suggest the originating APT group has used the same attack codebase for "10 years" (i.e. since ~2013) and also uses it to attack macOS laptops (with antivirus circumvention). The presenters note that the very "backdoor-like" signed debug functionality may have been included in the chips without Apple's knowledge, e.g. by the GPU developer.

So... in less than 3.5 years since the first vulnerable chip hit the market, a series of undocumented debug MMIOs in the Apple CoreSight GPU requiring knowledge of a lengthy secret were successfully weaponized and exploited by an established APT group with a 10+ year history. Kaspersky are "not speculating" but IMHO this is unlikely to be anything but a major state actor.

Theory: I guess since Apple was handed ample evidence of ~40 self-doxxed APT-related AppleIDs, we can judge the identity using any follow-up national security type announcements from the US. If all is quiet it's probably the NSA.

mike_hearn
1 replies
23h31m

It's really a pity they explain all the mistakes that helped the malware be detected.

halJordan
0 replies
22h21m

It's not, it really isn't. Honestly, just apply this mentality to one other scenario to test the waters: should we stop publishing YARA rules because it tips our hand to the malware makers? It's nonsense to even say.

mb4nck
0 replies
21h19m

The (first?) version of the real recording is now up: https://media.ccc.de/v/37c3-11859-operation_triangulation_wh...

codedokode
4 replies
19h32m

I see that one of the steps in the exploit was to use GPU registers to bypass kernel memory protection. Does that mean the vulnerability cannot be fixed by an update, and existing devices will stay vulnerable?

flakiness
1 replies
18h30m

I don't think there is any JIT on the GPU, and all the code has to go through a host-side kernel call, so it should be able to protect the register, I guess?

saagarjha
0 replies
12h28m

The kernel cannot protect against this, in fact the attackers have full read/write control and code execution capabilities to mount this attack. The fix is blocking this range from being mapped using features that are more powerful than the kernel.

transpute
0 replies
18h43m

https://x.com/alfiecg_dev/status/1740025569600020708

  It’s a hardware exploit, using undocumented registers. It can only be mitigated against, but not fully patched.

ipython
0 replies
18h12m

The mitigation is that the MMIO range in question has been marked as unwritable in the device trees on recent versions of iOS.

xvector
3 replies
23h1m

Does Lockdown Mode protect against this?

542458
1 replies
22h28m

I think Lockdown drops most iMessage features, so I would suspect the answer is yes. But as far as I can tell, Lockdown prevents the use of MDM, so it might be a net negative for security… instead, using the MDM policy that disables iMessage might be preferable.

Obscurity4340
0 replies
5h22m

You can still supervise, which allows for that all the same, IIRC.

halJordan
0 replies
22h18m

It likely does. Lockdown Mode stops most iOS auto-processing of message attachments, and this was delivered via a message attachment.

londons_explore
3 replies
20h59m

Notice that the hash value for a data write of all zeros is zero...

And for a single bit, the hash value is a single value from the sbox table. That means this hash algorithm could reasonably have been reverse engineered without internal documentation.
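
To spell out why those two observations matter: if the function is linear over GF(2) -- which hash(0) == 0 and the single-bit behaviour suggest -- the entire table falls out of 256 single-bit probes, and any message's hash is then just the XOR of the entries for its set bits. A sketch, where hash_oracle is a hypothetical stand-in for however you observe the accepted value on real hardware:

  def recover_sbox(hash_oracle, nbits=256):
      # One probe per input bit: for a linear code, the response to a
      # single set bit is exactly that bit's sbox entry.
      return [hash_oracle(1 << i) for i in range(nbits)]

  def linear_hash(data, sbox):
      # XOR together the recovered entries for all set input bits.
      h = 0
      for i, s in enumerate(sbox):
          if (data >> i) & 1:
              h ^= s
      return h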

londons_explore
2 replies
20h26m

This 'smells' like a typical way to prevent memory writes to random addresses accidentally triggering this hardware. Doesn't look like it was intended as a security feature.

In fact, this is how I'd implement it if someone said to me it was important that bugs couldn't lead to random writes. This implementation also effectively prevents someone using this feature whilst giving a buffer address they don't know the contents of.

10 bits of security is probably enough for that, as long as you reboot the system whenever the hash value is wrong. The CoreSight debug functionality can totally reboot the system if it wants to.

the-rc
0 replies
16h16m

On the Amiga, you had to write to a blitter control register (BLTSIZE?) twice with the same value or it wouldn't do anything. This might be the same, only a lot more paranoid.

But it might also be a backdoor, intended or not.

tedunangst
0 replies
19h46m

Like a CRC? I'm reminded of the Broadcom compression algorithm that required tedious reverse engineering, or a look at the Wikipedia page with sample code.

hnburnsy
3 replies
1d

The resulting shellcode, in turn, went on to once again exploit CVE-2023-32434 and CVE-2023-38606 to finally achieve the root access required to install the last spyware payload.

Why isn't Apple detecting the spyware/malware payload? If only apps approved by Apple are allowed on an iPhone, detection should be trivial.

And why has no one bothered to ask Apple or ARM about this 'unknown hardware'?

If we try to describe this feature and how the attackers took advantage of it, it all comes down to this: they are able to write data to a certain physical address while bypassing the hardware-based memory protection by writing the data, destination address, and data hash to unknown hardware registers of the chip unused by the firmware.

And finally does Lockdown mode mitigate any of this?

saagarjha
2 replies
22h57m

This chain isn’t delivered via an app, it is sent through iMessage. The checks for “only apps approved by Apple” are not relevant if you exploit your way past them.

hnburnsy
1 replies
18h31m

Thanks, I did see that the researchers posted how the malware gets into memory, but I still feel like, since Apple tightly controls the environment, it should be able to detect anything running there that should not be.

saagarjha
0 replies
12h36m

It is very difficult to do this in general, especially for these kinds of exploits.

I_Am_Nous
3 replies
21h35m

Although infections didn’t survive a reboot

Reminder to reboot your iPhone at least weekly if you are concerned about this kind of attack.

x1sec
0 replies
19h28m

In a week, a lot of data can be exfiltrated. Then after you have rebooted, the threat actor reinfects your device.

Frequently rebooting the device can’t hurt but it likely isn’t going to prevent a threat actor from achieving their objectives.

The best mitigation we have is to enable lockdown mode.

transpute
0 replies
20h40m

> reboot your iPhone at least weekly

with the Hard Reset key sequence, https://www.wikihow.com/Hard-Reset-an-iPhone

HumanOstrich
0 replies
15h30m

No, they could monitor when devices rebooted and re-infect them immediately, as the article states.

luke-stanley
2 replies
20h15m

I didn't hear anyone mention fuzzing once. I guess there was probably very specific insider knowledge being made use of, and they wanted to point a finger, which is fair enough, I guess. I'm just a bit surprised it hasn't been mentioned so far in the discussion. Anyhow, it seems that an allow-list approach by Apple would have been better than a deny-list approach! They were literally not checking accesses outside the expected bounds!

Alex3917
1 replies
20h4m

If they were using a deny list, that sounds like an intentional backdoor.

luke-stanley
0 replies
10h6m

It might just be that they couldn't think of another way to code it though.

londons_explore
2 replies
22h8m

What are the chances this MMIO register could have been discovered by brute force probing every register address?

Mere differences in timing could have indicated the address was a valid address, and then the hash could perhaps have been brute forced too since it is effectively a 20 bit hash.
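
Back-of-envelope, assuming each guess of an n-bit check is accepted with probability 2^-n:

  def expected_attempts(nbits):
      # On average you succeed halfway through the search space.
      return 2 ** (nbits - 1)

  print(expected_attempts(20))  # 524288 -- feasible if a failed guess is cheap

If every failure forces a reboot, as speculated elsewhere in the thread, this gets far slower, but still not obviously out of reach for a patient attacker.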

londons_explore
0 replies
21h8m

Looking at that sbox implementation, I can't believe it was implemented as a lookup table in the hardware of the chip - there must be some condensed Boolean expression that gives the same result.

The fact the attackers didn't know that Boolean expression suggests they reverse engineered it rather than had documentation.

chatmasta
0 replies
16h49m

It looks like the registers could have been identified fairly easily via brute force. They're physically close to documented GPU registers, and accessing them triggers a GPU panic, which is how the researchers attributed them to the GPU component. The attackers could have used that same test to identify the existence of the registers.

The part that's less easily explained is how they were able to reconstruct a custom sbox table to execute the debug code. That's where the "insider threat" insinuations are strongest, but personally I'm not convinced that it precludes any number of other plausible explanations. For example, the attackers could have extracted the sbox from: older firmwares, OTA update patches, pre-release development devices (probably purchasable on ebay at some points), iOS beta releases, or a bunch of other leaky vectors.

The researcher basically says "I couldn't find this sbox table in any other binary where I looked for it." Well, that's not necessarily surprising since it appears to be Apple specific and thus there are a limited number of binaries where it might have appeared. And as the researcher notes, this includes now unpublished binaries that might have been mistakenly released. It's totally plausible that the attackers got lucky at some point while they were systematically sniffing for this sort of leak, and that the researcher is unlikely to have the same luck any time soon.
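
For what it's worth, the probing half is mechanically simple. A hedged sketch, where probe() is a hypothetical stand-in for a kernel-level access primitive and the oracle is any observable side effect (GPU panic, fault, timing difference):

  def scan_for_registers(base, span, stride, probe):
      # Walk candidate addresses near documented GPU registers; any
      # observable reaction suggests something decodes there.
      hits = []
      for addr in range(base, base + span, stride):
          if probe(addr):
              hits.append(addr)
      return hits

The catch is cost: if each hit is a panic, every positive costs a reboot, so the search window and stride matter a lot.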

kornhole
2 replies
1d

Who had the motive to target Russian government officials, knowledge of the attack vectors, a history of doing so, and the technical and logistical ability to pull it off? That question leads Kaspersky, and me, to the only rational conclusion: Apple cooperated with the NSA on this exploit. I assume they only use, and potentially burn, these valuable methods in rare and perhaps desperate instances. I expect the Russian and Chinese governments' bans on the use of iPhones will not be lifted and will expand to other governments. Similarly to how the sanctions have backfired, this tactic will also backfire by reducing trust in Apple, which is the core of their value proposition.

hedora
1 replies
23h20m

This looks like a typical modern security hole. There’s a giant stack of layers of unnecessary complexity, and all of them are garbage. The composition is also garbage.

All the NSA needs to launch attacks like this is to get a bunch of mediocre engineers to layer complexity atop complexity. They don’t need Apple to know about the attack.

Honestly, they probably didn’t actually have to do anything to get Apple (or any other large company) to pwn itself by hiring and promoting engineers and project managers for adding features, but not for improving product stability or software correctness, or for deleting forgotten legacy cruft.

Anyway, the most effective approach to sabotage is to be indistinguishable from incompetence, so it’s hard to say if the people responsible for the vulnerability chain were working with the NSA or not.

kornhole
0 replies
21h5m

You make a good point that a team of mediocre engineers could be responsible for the vulnerabilities. Those doing code review and change control would also need to be mediocre, though. It could be a combination of compromised and mediocre people, coordinated by a manager in service of the apparatus. Knowledge of the operation had better not go all the way up the ranks, to keep it quiet.

throwaway81523
1 replies
14h7m

A while back, Philip Zimmermann was working on a secure phone product called the Blackphone. I tried to convince him that a secure phone should not contain any microphones of any kind. That sounds a bit weird for a phone, but it's fine: if you want to make a voice call, just plug a headset into it for the duration of the call. He wasn't convinced, but this iPhone exploit makes me believe it more than ever.

x1sec
0 replies
13h51m

Perhaps a physical switch that connects or disconnects the electrical signal from the microphone to the handset could be a more convenient approach.

There is a photo of Mark Zuckerberg with a cut-off 3.5mm jack plugged into his laptop - likely to achieve a similar outcome.

patrickhogan1
1 replies
22h49m

It would be helpful to know more about the exfiltration component that sends data to a remote server. According to the article, it's sending large microphone audio recordings. I would assume a company like Kaspersky explicitly denies all outgoing network connections by default and then approves them one by one.

hnburnsy
0 replies
22h43m

There is a series of posts on this including one that details the malware payload...

https://securelist.com/trng-2023/

kevinwang
1 replies
22h5m

Wow, that's amazing. I wonder if attackers like this feel unappreciated, since they can't take credit for their work.

belter
0 replies
20h40m

Public-key cryptography was developed in the 1970s at GCHQ, but that work was classified.

jeffreygoesto
1 replies
23h9m

Some agencies will be very sad now...

barryrandall
0 replies
20h4m

Those will be the most delicious tears wept in all of 2023.

apienx
1 replies
19h33m

Reminder that Lockdown Mode helps reduce the attack surface of your iPhone. It also helps tremendously with detection. https://support.apple.com/en-us/105120

chatmasta
0 replies
19h29m

I've had Lockdown Mode enabled for a few months. It's great, and not much of an annoyance at all. You do need to be fairly tech-savvy and remember that it's enabled, because sometimes something silently breaks and you need to opt out of it (which you can do for a specific website, or for WebViews within a specific app). And it won't auto-join "insecure" WiFi, which can be annoying at a hotel, but frankly it's probably for the best. Also, you won't receive texts with attachments in them, which is usually desirable but breaks workflows like activating a new SIM card while traveling (it's possible this was broken for me due to some other setting that excludes texts from unknown numbers).

The most noticeable difference is that SVG elements (?) are replaced with emojis. I'm not sure how that fallback works but it's funny to see buttons have seemingly random emojis embedded in them. (Does anyone know the details of how this replacement is done? Is it actually glyph fonts being replaced, not SVG?)

ThinkBeat
1 replies
21h31m

Attack by CIA/NSA?

They have the best possible insight into the hardware and software at all stages, I should think.

dannyw
0 replies
21h11m

It targeted Russian embassy officials, and given this level of sophistication, it's quite obviously the NSA or a peer agency.

Muehe
1 replies
23h47m

For those interested in the talk by the Kaspersky researchers: the cleaned video isn't uploaded yet, but you can find a stream replay here:

https://streaming.media.ccc.de/37c3/relive/a91c6e01-49cf-422...

(talk starts at minute 26:20)

Sweepi
0 replies
11h45m

trustingtrust
0 replies
14h51m

> Hardware security very often relies on “security through obscurity”, and it is much more difficult to reverse-engineer than software, but this is a flawed approach, because sooner or later, all secrets are revealed.

The “later” part works when you are not as big as Apple. When you are as big as Apple, you are a very hot target for attackers. There is always effort vs. reward when it comes to exploiting vulnerabilities: the effort that went into all this is worth thousands of dollars, even if someone is doing it just for research. If I were doing this for some random AliExpress board it would be worth nothing, and security by obscurity would mean no one really cares, so the “later” part holds there. But I wonder what Apple is thinking when they rely on obscurity, because people must start working on exploiting new hardware from day 1 - you can literally get one on every corner in a city these days. Hardware security by obscurity would, for example, be fine for cards sold by someone like Nvidia to only some cloud customers, which are assumed obsolete in a few years, so even if someone gets one on eBay the reward is very low. iPhones, on the other hand, are very much a consumer device, and people hang on to their devices for a very long time.

sweis
0 replies
20h32m

The video of the talk is online now too: https://www.youtube.com/watch?v=7VWNUUldBEE

stefan_
0 replies
22h53m

Maybe I'm too dumb to find it on this page, but if you're looking for the actual recording instead of a calendar entry in the past, it's here (a stream dump for now; fast-forward to 27 mins):

https://streaming.media.ccc.de/37c3/relive/11859

neilv
0 replies
23h43m

> If we try to describe this feature and how the attackers took advantage of it, it all comes down to this: they are able to write data to a certain physical address while bypassing the hardware-based memory protection by writing the data, destination address, and data hash to unknown hardware registers of the chip unused by the firmware.
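
In rough Python, the primitive described there amounts to the following (every offset, and the hash stand-in, is invented for illustration - the real values are in the Securelist write-up):

    # Abstract model of the quoted write primitive; nothing here is real.
    def custom_hash(dest, data):
        return (dest ^ data) & 0xFFFFF          # stand-in for the sbox-based hash

    def dma_write(mmio, dest_pa, data):
        mmio[0x40] = dest_pa                    # destination physical address
        mmio[0x48] = data                       # payload to plant there
        mmio[0x50] = custom_hash(dest_pa, data) # weak "signature" over both
        mmio[0x00] = 1                          # trigger; the page-protection
                                                # hardware never sees the write

    regs = {}                                   # stand-in for the MMIO block
    dma_write(regs, 0x80000DEAD, 0x41414141)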

Did the systems software developers know about these registers?

mb4nck
0 replies
21h20m

At least the first version of the recording is now up: https://media.ccc.de/v/37c3-11859-operation_triangulation_wh...

kristofferR
0 replies
5h27m

Why would the attackers target Kaspersky employees? Seems like a great way to get your exploit chain exposed.

jacooper
0 replies
20h5m

This really looks like the NSA just flexing their muscles and their vulnerability arsenal.

hcarrega
0 replies
1d

There's a talk at CCC today.

haecceity
0 replies
19h4m

This wouldn't be zero-click if iMessage didn't parse attachments without user consent.

guwop
0 replies
21h29m

Crazy!

g-b-r
0 replies
15h44m

Are hashes of the data ever used in known chip debugging features?

Since they're supposed to be disabled in production, what would be their point?

I'm no electronic engineer, but isn't it best for such features to be fast and simple, to reduce the chance that they themselves cause interference?

And isn't it highly unlikely that an attacker in the supply chain (TSMC??) would be able to reliably plant this in all Apple chips from the A12 to the A16 and the M1?

dang
0 replies
14h42m

Related:

4-year campaign backdoored iPhones using advanced exploit - https://news.ycombinator.com/item?id=38784073

(We moved the comments hither, but the article might still be of interest)

codedokode
0 replies
7h4m

Now I am thinking Kaspersky should not have published this information. What a wrong decision. Instead, they should have sold it to the Russian government, which I am sure could find a lot of interesting uses for these "debugging features" and offer a good reward.

cf1241290841
0 replies
17h6m

Years ago i argued about the danger of pdfs with another account and was told not to be a paranoid nutjob.

Told you so.

edit: The fact that this obvious statement gets upvoted above the Apple backdoor at 22:40 of the talk also says a lot.

edit1: https://imgur.com/a/82JV7I9

anotherhue
0 replies
1d

More important than getting their newly found exploits, you get to know which of yours might be compromised. Prevents counterintelligence.

amai
0 replies
2h27m

See also the article from Ars Technica in June 2023: https://arstechnica.com/information-technology/2023/06/click...

WhackyIdeas
0 replies
23h54m

It’s kind of simple, in my opinion. Apple is an American company, and after Jobs died, Apple quickly signed up to working with the NSA and enrolled in the PRISM programme.

Apple, like any other US company, has to abide by the law and do what it is told. If that means hardware backdoors, software backdoors, or giving the NSA a heads-up about a vulnerability for the time it takes to fix it (so the NSA can make good use of it in the meantime), then it will.

Only someone with great sway (like Jobs) could have resisted something like this without fear of the US government coming after him. His successor either didn’t have that passion for privacy or didn’t have the courage to resist working with the NSA.

Anyone, anywhere with an iPhone will be vulnerable to the NSA breaking into their phone anytime it pleases, thanks to Apple. And with Apple now making its own silicon, the hardware itself will be even more of a backdoor.

Almost every single staff member at Apple will be none the wiser about this, obviously, and unable to do anything about it even if they knew - and their phones will be just as fair game to tap whenever the spies want.

I am speculating. But in my mind, it’s really quite obvious. Just like the argument I had with a die-hard Apple fan who thought Apple would protect privacy at all costs… six months later, Snowden came along with PRISM and won me that argument.

Luc
0 replies
4h44m

This made me laugh: "Upon execution, it decrypts (using a custom algorithm derived from GTA IV hashing) its configuration [...]"

From https://securelist.com/triangulation-validators-modules/1108...

DantesKite
0 replies
20h36m

Steve Weis on Twitter described it best:

“This iMessage exploit is crazy. TrueType vulnerability that has existed since the 90s, 2 kernel exploits, a browser exploit, and an undocumented hardware feature that was not used in shipped software”

https://x.com/sweis/status/1740092722487361809?s=46&t=E3U2EI...