That's pretty astonishing. The MMIO abuse implies either that the attackers have truly phenomenal research capabilities, or that they hacked Apple and obtained internal hardware documentation (more likely).
I was willing to believe it was just a massive NSA-scale research team, right up until the part about the custom hash function sbox. Apple appears to have known that the feature in question was dangerous: they deliberately hid it, whatever it is, and then went further and protected it with a sort of (fairly weak) digital-signing scheme.
As the blog post points out, there's no obvious way you could find the right magic knock to operate this feature short of doing a full silicon teardown and reverse engineering (impractical at these nodes). That leaves hacking the developers to steal their internal documentation.
The way it uses a long chain of high-effort zero-days only to launch an invisible Safari that then starts from scratch, loading a web page that uses a completely different chain of exploits to re-hack the device, is also indicative of a massive organization with truly abysmal levels of internal siloing.
Given that the researchers in question are Russians at Kaspersky, this pretty much has to be the work of the NSA or maybe GCHQ.
Edit: misc other interesting bits from the talk: the malware can enable ad tracking, and it can detect the cloud iPhone hosting services often used by security researchers. The iOS/macOS malware platform seems to have been in development for over a decade, and it actually runs ML on the device to do object recognition and OCR on photos, so it never uploads image bytes: it only uploads the ML-generated labels. They truly went to a lot of effort, but all of it was no match for a bunch of smart Russian students.
I'm not sure I agree with the speaker that security through obscurity doesn't work, however. This platform has been in the wild for ten years and nobody knows how long they've been exploiting this hidden hardware "feature". If the hardware feature was openly documented it'd have been found much, much sooner.
or Apple just implemented this "API" for them, because they asked nicely
Or they have assets working at Apple... or they hired an ex-Apple employee... etc.
That's the problem with this sort of security through obscurity; it's only secure as long as the people who know about it can keep it secret.
I don't think hiring an ex-Apple dev would let you get the needed sbox unless they stole technical documentation as they left.
So it either has to be stolen technical docs, or a feature that was put there specifically for their usage. The fact that the ranges didn't appear in the DeviceTree is indeed a bit suspicious, and the fact that the description after they were added is just 'DENY' is also suspicious. Why is it OK to describe every range except that one?
But the really suspicious thing is the hash. What kind of hardware interface does arbitrary DMA protected by a secret but weak hash function? Is there any legitimate usage for such a thing? I've never heard of such an interface before.
If it's a genuine backdoor and not a weird debugging feature, then it should be rather difficult to add one that looks like this without other people at Apple realizing it's there. Chips are written in source code using version control, just like software. You'd have to have a way to modify the source without anyone noticing or sounding the alarm, or to modify it before synthesis is performed. That'd imply either a very deep penetration of Apple's internal network, sufficient to inject backdoors into hardware, or one or more agents on the inside.
This really shows how dangerous it is for intel agencies to attack security professionals. Attacking Kaspersky has led directly to them burning numerous zero-days, including several that might have taken fairly extreme efforts to set up. It makes you wonder what is on these guys' iPhones that's considered so valuable. Presumably they were after emails describing more zero-days in other programs.
That'd probably depend on which team the dev worked in. If they were in the right team, then it might.
What I mean is that (assuming the sbox values are actually random) you couldn't memorize them short of intensive study and practice of memory techniques. If the "sbox" is in reality some easily memorizable function, then maybe, but even then, how many people can remember long hex values from their old jobs?
But having a predictably generated sequence of numbers is what cryptographers prefer:
https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number
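For example, SHA-256's initial hash values are just the first 32 bits of the fractional parts of the square roots of the first eight primes, chosen precisely so nobody suspects hidden structure. A quick Python check:

    import math

    # SHA-256's initial hash words are the first 32 bits of the fractional
    # parts of the square roots of the first 8 primes.
    for p in [2, 3, 5, 7, 11, 13, 17, 19]:
        frac = math.sqrt(p) % 1.0                 # fractional part
        print(f"sqrt({p:2}) -> 0x{int(frac * 2**32):08x}")

    # First line printed is 0x6a09e667, exactly SHA-256's h0 constant.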
Two points:
a) If a person is using those values daily for years (or even a couple of months), then it's very likely they'd have memorized them
b) Sometimes just knowing the concept exists for sure is good enough, as you can then go and brute force things until you've worked out the values
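A toy sketch of point (b), assuming the knock is a 10-bit check value per write; `try_write` here is a hypothetical oracle standing in for poking the MMIO interface:

    # Knowing the knock exists is enough: 10 bits leave only 1024
    # candidates per write, so exhaustive search recovers the value.
    def brute_force_knock(try_write, addr, data):
        for guess in range(1 << 10):
            if try_write(addr, data, guess):      # True if the write took
                return guess
        return None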
Never attribute to malice that which can be attributed to incompetence. There are plenty of examples in the wild of going halfway with strong security, but halfway still leaves the barn door open.
That rule should only be applied in the normal world. In the world of security, where you know bad actors are out there trying to do stuff, it doesn't apply. And there are examples of spy types injecting plans to go halfway with security for their own purposes. Not that this proves the origin of a given plan; incompetence is still one possibility. It just returns to the original point: this stuff is mysterious.
As a defender, you should treat malice and incompetence as functionally equivalent. Save the attribution for the post-mortem (or better yet, don't let it come to that).
It should be very easy to add one without somebody noticing. This is the same Apple that, only a few years ago, shipped a version of macOS for months in which you could log in as root with a blank password.
Their review processes are so incompetent that even one of the most security-critical components, root login, let a totally basic "fail your Security 101 class" bug through. It is absolutely inexcusable to have a process that bad, and it's indicative of their overall approach. As they say, "one cockroach means an infestation".
Mistakes happen but Apple's reputation for strong security is well deserved. They invest heavily and the complexity of this exploit chain is evidence of that. Linux has had its fair share of trivial root login exploits that somehow got through code review.
I’m not trying to defend Apple but I think that line of thinking is pretty cynical and could be used to condemn basically any company or open source project that attracts enough interest for attackers.
I agree. This appears likely to be an intentional backdoor injected at the hardware level during design. At such a low level, I think it could have been accomplished with only a handful of employees in on it. There would have been no need to subvert Apple from the top down, with large numbers of people at many levels being privy.
In early silicon there can be a bunch of registers and functions implemented for testing which are later pulled out. Except maybe one set of registers doesn't get pulled; instead, a door knock with a weak hash function is added, making the registers invisible to testers and fuzzing.
It seems a little too convenient that the door-knock hash was weak. After all, strong hash functions aren't unknown or hard. The reason it had to be a weak hash function was to create "plausible deniability": if it were a strong hash, then once any exploitation was discovered, there would be no denying the vuln was intentionally placed. If it really was just a test DMA function that someone supposedly 'forgot' to remove before production silicon, I can't think of a reason to have it behind any kind of door knock in the first place.
I read that it was patched by adding these addresses to the "access denied" list. While I don't know anything about Apple security, I'm stunned that any such low-level access list is 'opt-out' instead of 'opt-in'. If it were 'opt-in', any such undocumented register addresses would be denied by default. And if they were on the 'opt-in' list yet remained undocumented, it would be obvious to anyone looking at the security docs that something was amiss.
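A minimal sketch of the difference, with made-up ranges:

    # Opt-out (what shipped): anything not explicitly denied is allowed,
    # so an undocumented range stays reachable until someone lists it.
    DENIED = [(0xDEAD0000, 0xDEADFFFF)]            # hypothetical range

    def allowed_opt_out(addr):
        return not any(lo <= addr <= hi for lo, hi in DENIED)

    # Opt-in: only documented ranges are reachable. A hidden register
    # block is either denied by default or has to appear on the allow
    # list, where its missing documentation would stick out.
    DOCUMENTED = [(0x20000000, 0x2000FFFF)]        # hypothetical range

    def allowed_opt_in(addr):
        return any(lo <= addr <= hi for lo, hi in DOCUMENTED)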
It reminds me of the Linux backdoor attempt that was also made to look like a mistake (== replaced with =) [1].
[1] https://freedom-to-tinker.com/2013/10/09/the-linux-backdoor-...
APTs probably routinely identify and target such developers. With multi-million-dollar payouts for single bugs and attention from high-level state actors, employee profiling is clearly a known attack vector, and internal security teams probably now brief on the relevant opsec. FWIW, the only Apple kernel developer I knew somewhat recently removed themselves from LinkedIn entirely.
People who work on the kernel are not hard to find.
I wouldn’t be surprised if one or two very senior people in large tech companies are agency agents, willingly or not.
I don’t really have any proof, but considering the massive gain, it shouldn’t surprise anyone. The agencies might not even need to pay large sums of $$$ if said assets have vulnerabilities.
I think the way it’s done is that the code is presented to them to use; Apple probably doesn’t even code those parts themselves.
So much misinformation in this thread. It’s a Hamming ECC, as described here [1].
[1] https://social.treehouse.systems/@marcan/111655847458820583
Why do you need an error-correcting code for a debugging feature, though? I would not protect debug registers with a hash.
Because you are DMA-ing the raw bits into cache with the GPU, and the CPU is going to check those ECC codes on read, as the caches on Apple SoCs are ECC-native. It's an integrity 'protection', not a security 'protection'.
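A toy model of that arrangement, as I understand the linked thread; the xor-fold below is just a stand-in, not the real function:

    def ecc10(data):
        # Stand-in for the real 256-bit -> 10-bit code: xor-fold the data
        # down to 10 bits. Like a real parity code, each output bit is an
        # xor of input bits, but the actual masks differ.
        check = 0
        while data:
            check ^= data & 0x3FF
            data >>= 10
        return check

    line = {"data": 0, "ecc": 0}

    def gpu_debug_write(data, check):
        # The hidden interface stores raw bits, check bits included, so
        # nothing validates `check` at write time...
        line["data"], line["ecc"] = data, check

    def cpu_read():
        # ...but a normal read recomputes the code and faults on mismatch,
        # which is why the attackers had to work out the function.
        if ecc10(line["data"]) != line["ecc"]:
            raise RuntimeError("machine check: ECC mismatch")
        return line["data"]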
More evidence for an ECC, obtained by looking at how the 10 output bits of the function depend on its 256 input bits:
Each of the 10 parity bits output by the function is the xor of exactly 104 of the 256 input bits.
Each of the 256 input bits contributes to (= is xor-ed into) either 3 or 5 of the 10 parity bits.
This is in line with the SEC-DED (single error correction, double error detection) ECC construction from the following paper:
https://people.eecs.berkeley.edu/~culler/cs252-s02/papers/hs...
Translating the above observations about the function into properties of the H matrix in the paper:
Each row of the matrix contains an identical number of ones (104).
Each column of the matrix contains an odd number of ones (3 or 5).
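For anyone who wants to reproduce this kind of analysis, a sketch: because each parity bit is an xor of input bits, the function is linear over GF(2), so probing it with the 256 single-bit inputs recovers the whole H matrix, and the weights can then be checked directly (`f` stands for the reverse-engineered function, which isn't reproduced here):

    # f(e_i) is the i-th column of H for a GF(2)-linear f, so 256 probes
    # with single-bit inputs recover the whole matrix.
    def recover_H(f):
        cols = [f(1 << i) for i in range(256)]
        return [sum(((c >> r) & 1) << i for i, c in enumerate(cols))
                for r in range(10)]               # row r as a 256-bit mask

    def check_secded(rows):
        # Row weight: each parity bit covers exactly 104 input bits.
        assert all(bin(r).count("1") == 104 for r in rows)
        # Column weight: each input bit feeds an odd number (3 or 5) of
        # parity bits, per the odd-weight-column SEC-DED construction.
        weights = [sum((r >> i) & 1 for r in rows) for i in range(256)]
        assert all(w in (3, 5) for w in weights)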
Very interesting, thanks. Summarizing that thread:
- The "hash" is probably an error correcting code fed into GPU cache debug registers which will be stored in the cacheline itself, you're expected to compute the ECC because it's so low level. That is, the goal isn't to protect the DMA interface. (but this isn't 100% certain, it's just an educated guess)
- The "sbox" is similar to but not the same as a regular ECC as commonly used in hardware.
- Martin argues that the existence of such registers and the code table could have been guessed or brute-forced, even though a compromise or info leak from Apple seems more likely. It could possibly even date from the old PowerVR days. But if it's the NSA, then who knows; maybe they are literally fuzzing hidden MMIO ranges to discover these interfaces.
- This is possible because the GPU has full DMA access without an IOMMU for performance reasons, so it's fertile ground for such exploits. Probably more will be discovered.
So that's all reassuring.
Or a joint project between several organizations.
Or, like, they have a rootkit and it works, so why reinvent the wheel? They have an attack payload, so why reinvent the wheel? Just plug and play all the packages you need until you can compromise your target device.
But there is a very good reason to reinvent the wheel here: to not burn more zero-days than you have to.
Exactly! This is the part of the story that mystifies me completely and I would love to see some explanation.
The attack payload should not be so tied to an exact installation path that you can't just install it via a different exploit chain.
Maybe a nation state, e.g., APT?
Being able to put together tooling with these capabilities makes the attacker an APT by definition. These are generally assumed to be national intelligence services, though that is an assumption. (Among other things, there are multiple countries where the lines between intelligence agencies and their contractors are... fuzzy.)
And while Kaspersky is refusing to speculate at all about attribution, the Russian government has claimed (without giving specific evidence) that it's NSA.
A compromise on the GPU or ARM side seems like an equally possible route.
What do you mean? Both the GPU and CPU designs are proprietary to Apple. They used to use regular ARM-designed cores, but the last of those, before the switch to their own core design, was around the A5 days (from memory). It uses the ARM instruction set but isn’t actually designed by ARM at all.
Similar for the GPU. They may have started with HDL licensed from others (I think their GPU might actually be directly based on the PowerVR ones they used to use, whereas I believe the CPU core is basically from scratch), but this vulnerability seems unlikely to have existed since then…
Maybe more likely they just have people inside Apple?
At the scale of Apple, and given the determination of state actors, that's likely.
This is a fairly incredible attack, and I agree with your analysis. The hidden Safari tab portion, where they “re-hack” the device, could be bad organizational siloing as you mentioned, or indicative of the “build your own virus” approach that script kiddies used in the 90s. It could be a modular design for rapid adaptation, i.e., perhaps less targeted.
Well, the point of Kerckhoffs's principle is that it should have been openly documented, and then anyone looking at the docs, even pre-publication, would have said "we can't ship it like that; that feature needs to go."
Also note the IoC script, which lets you scan iTunes backups for indicators of compromise by Operation Triangulation: https://github.com/KasperskyLab/triangle_check