The thing with cloud and with anything related to it, anything that connects to the internet somehow... is that, unless it's open source and the servers decentralized, you are always trusting SOMEONE. Sure, Apple might do their best to ensure nobody – but them – has access to your data... but Apple controls all the endpoints. It controls the updates your iPhone receives, it controls the servers where this happens. There are so many opportunities for them to find out what you are doing. It reminds me of this article, "Web-based cryptography is always snake oil":
https://www.devever.net/~hl/webcrypto
And to be fair, this doesn't apply only to this case. Even the data you have stored locally, Apple could access if they wanted; they surely have the power to do it if they so wish or were ordered to by the government. They might have done it already and just didn't tell anyone, for obvious reasons. So I would argue the best you could say is that it's private in the sense that only Apple knows/can know what you are doing, rather than a larger number of entities.
Which, you could argue, is a win when the alternatives will leak your data to many more parties... But it's still far from being the unbreakable cryptography it's portrayed to be.
I don’t think that’s completely fair. It basically puts Apple in the same bucket as Google or OpenAI. Google obviously tracks everything you do for ads, recommendations, AI, you name it. They don’t even hide it, it’s a core part of their business model.
Apple, on the other hand, has made a pretty serious effort to ensure that no employee can access your data on these AI systems. That's hugely different! They're going as far as severely restricting logging and observability, designing and building their own chips and operating systems, and ensuring that clients will refuse to talk to non-audited systems.
Yes, we can’t take Apple’s word for it. But I think the third-party audits are a huge part of how we trust, and also verify, that this system will be private. I don’t think it’s fair to claim that “Apple knows what you’re doing.” That implies that someone, at some level at Apple, can at some point access the data sent from your device to this private cloud. That does not seem to be true.
I think another facet of trust here is that a rather big part of Apple’s business model is privacy. They’ve been very successful financially by creating products that generate money in other ways, and it’s very much not necessary or even a sound business idea for them to do something else.
While I think it’s fair to be skeptical about the claims without 3rd-party verification, I don’t think it’s fair to say that Apple’s approach isn’t better for your data and privacy than OpenAI’s or Google’s. (Which I think is the broad implication — OpenAI tracks prompts for its own model training, not to resell, so it’s also “only OpenAI knows what you’re doing.”)
What makes you think that internal access control at Apple is any better than Google's, Microsoft's or OpenAI's? Google employees have long reported that you can't access user data with standard credentials, for example.
Also, what makes you think that Apple's investment in chip design and OS work is superior to Google's? Google is known for OpenTitan and other in-house silicon projects. It's also been working on secure enclave tech (https://news.ycombinator.com/item?id=20265625), which has been open source for years.
You're making unverifiable claims about Apple's actual implementation of the technical systems and policies it is marketing. Apple also sells ads (App Store, but other surfaces as well) and you don't have evidence that your AI data is not being used to target you. Conversely, not all user data is used by Google for ad targeting.
It’s not about technology. It’s about their business.
Apple generally engineers their business so that there isn’t an incentive to violate those access controls or principles. That’s not where the money is for them.
Behavior is always shaped by rewards and punishments. Positive reinforcement is always stronger.
One hundred percent this.
All these conversations always end up boiling down to someone thinking they’re being clever for pointing out you have to trust a company at the end of the day when it comes to security and privacy.
Yes. Valid. So if you have to trust someone, doesn’t it make sense for it to be someone who has built protecting privacy into their core value proposition, versus a company that has baked violating your privacy into their value prop?
That's a false dichotomy. You may have to trust someone, but that someone could be something other than an opaque for-profit company.
Give me some examples of benevolent non profits that provide anywhere near the level of consumer services as a company like Apple.
I'll do better, here's a benevolent nonprofit that goes beyond what Apple provides to ensure top-notch consumer service: https://grapheneos.org/
They're not trying to be clever, they're trying to point out the very important philosophy of maximizing self-reliance that so many people like you eschew.
How do you distinguish between a company who 'has built protecting privacy into their core value proposition' and one who just says they've done so?
What are you going to do if a major privacy scandal comes out with Apple at the center? If you wouldn't jump ship from Apple after a major privacy scandal then why does your input on this matter at all?
Some people feel that is inevitable so it's best to just rip that bandaid off now.
I'm taking aim at the Google bros who try to raise these arguments to muddy the waters into a sort of false equivalence between Apple and Google.
If you're already using a dumb phone and eschewing modern software services, then I'm not really talking to you. Roll on brother/sister, you are living your ideals.
The business incentives. Apple's brand and market valuation depend to some extent on being the secure, privacy-oriented company you and your family can trust, while Google's valuation and profit depend almost entirely on exploiting as much of your personal data as they can possibly get away with. The business models speak for themselves.
Does this guarantee privacy and security? Does Apple have a perfect track record here? No of course not, but again if these are my two smartphone choices it seems fairly clear to me.
It's not about being clever, it's about being perceptive. Apple's cloud commitment has a history of being sketchy, whether it's their government alliance in China, the FIVE-EYES/PRISM membership in America, or their obsession with creating "private" experiences that rely on the benefit of the doubt.
Apple doesn't care about you, the individual. Your value as a singular customer is worthless. They do care about the whole; a whole that governments can threaten to exclude them from if they don't cooperate with domestic surveillance demands. How far off do you really think American iCloud is from China? If Apple is willing to backdoor one server, what's stopping them from backdooring them all? If they're willing to lie about notification security, what's stopping them from lying about server integrity too?
And worst of all, Apple markets security. That's it; you can't go verify their veracity outside the dinky little whitepapers they publish. You can't know for sure whether they have privacy violations baked into their system because you can't actually verify anything. You simply have to guess, and the best guess you can make is based on whatever Apple markets as "true" to you. In reality, we can do better with security and should probably expect more from one of the largest consumer technology brands in the world. Simply assuming that they aren't violating user privacy is an absurd thing to gamble your security on.
That's becoming less the case. As Apple's advertising and services revenue grows and hardware sales slow, they have an increasing incentive to mine your data the same as any company does. They already use quite a bit of data on the location and content-personalization front. I would argue that Apple perhaps cares more about protecting your data from malicious third parties (again, like any company should; it's never good for FAANG when data leaks or is abused), but they are better at it (and definitely better at marketing it).
There are multiple verified stories on the lengths Apple goes internally to keep things secret.
I saw a talk years ago about (I think) booting up some bits of the iCloud infrastructure, which required two different USB keys, each holding a different key, to boot up. Then both keys were destroyed, so that nobody knows the encryption keys and nobody can decrypt the contents.
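For flavor, here's a minimal sketch of the idea behind such a ceremony, assuming a simple two-share XOR split (the real setup presumably used proper key-management hardware, and all names here are mine, not Apple's): both shares are needed to reconstruct the key, and destroying either one destroys the key for good.

    import secrets

    def split_key(master_key: bytes) -> tuple[bytes, bytes]:
        # XOR split: either share alone is indistinguishable from random noise.
        share_a = secrets.token_bytes(len(master_key))
        share_b = bytes(x ^ y for x, y in zip(master_key, share_a))
        return share_a, share_b

    def recover_key(share_a: bytes, share_b: bytes) -> bytes:
        # Both shares are required; XOR them back together.
        return bytes(x ^ y for x, y in zip(share_a, share_b))

    master = secrets.token_bytes(32)
    a, b = split_key(master)
    assert recover_key(a, b) == master
    # After the one-time boot ceremony, every copy of both shares is
    # destroyed, leaving the master key unrecoverable by anyone.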
The stories about Apple keeping things secret are usually about protecting their business secrets from ordinary people, up to and including probably illegal actions.
Using deniable, one-time keys etc. is... not that unusual. In fact I'd say I'm more worried about the use of random USB keys there instead of a proper KMS.
(There are similar stories from Google about how a cold start can be difficult when you end up with a loop in your access controls: a fortunately simulated cold start showed that they couldn't physically access the KMS needed to bootstrap the system... because the access controls depended, many layers down, on the very system being cold-started.)
They used smartcards, not USB keys.
Which probably were just key transport devices from offline secured KMSes
Destroyed? Where? In all places where they were stored? Or just in some of them? How can you tell? You still need to trust that they didn't copy them somewhere.
It's impossible to use any technology if you don't trust anyone.
Any piece of technology MAY have a backdoor or secondary function you don't know of and can't find out without breaking said device.
What's funny is that, in all these orgs, it's the low-tech vulns that compromise you in the end: physical access, social engineering, etc. Still, I'm really impressed by the technical lengths Apple goes to. The key-burning thing reminds me of ICANN's Root KSK Ceremonies.
That's not even getting to the fact that Apple is also running a display ads business: https://searchads.apple.com/
It's completely fair, because regardless of third-party audits, chips, etc., there are backdoors all along the line that are going to provide Apple and the government with secret legal access to your data. They can simply go to a secret court, receive a secret judgment, and be authorised to secretly view your data. Does anyone really think this is not already the case? There is no transparency. A licensed third-party auditor would not be able to tell you this. We have to operate with the awareness that all data online is already not private; there's no need to pretend/imagine that Apple's marketing is actually true and that it is possible to buy online privacy utopia.
The best protection against "secret orders" is to use mathematics.
Build your system so that it can't be decrypted, don't log anything etc. Mullvad has been doing this with VPNs and law enforcement has tested it - there's nothing for them to get.
The same was proven when Apple refused to open an iPhone for the FBI, because it would have set a precedent. Future iPhone versions were made so that it's literally impossible even for Apple to open a locked iPhone.
There's no reason why they wouldn't go to same lengths on their private cloud compute. It's the one thing they can do that Google can't.
Right, but I have no reason to think that this isn't a marketing ploy either, just another story. There is simply no way that Apple is as big as it is, without providing whatever data the government requires. Corporations and governments are not your friend.
Apple will obey government orders to give data they have and can access.
No government order short of targeting a specific backdoored update to a specific person will allow them to give data they can't access.
And if you're doing something that can make a TLA force Apple to create a targeted iOS update just for you, it's not something regular people can or should worry about.
Apple keeps normal people safe from mass surveillance. Being protected from the CIA/NSA required going full Snowden, and that's not a technological problem: you need to change the way you live.
Do you not remember Edward Snowden? E.g. this sort of info:
https://www.bbc.com/news/world-us-canada-23123964
You seem to think that 10 years on, under cover of secret orders, this is NOT still going on. Not Apple!
People's lovely trusting natures in corporations and government never ceases to amaze me.
"telephone data" != "contents of every phone call"
You and I have no idea.
Now you can't debug anything.
Mullvad do not need to store any data at all. In fact, any data they store is a risk. Minimising the data stored minimises their risk. The only thing they need to store is keys.
Look, if you want to ask an AI service if a photo has a dog in it, that's simple and requires no state other than the photo. If you want to ask whether it has your dog in it, that's a whole 'nother kettle of fish. How do you communicate the descriptors that describe your dog? How do you generate them? On device? That'll drain your battery in short order.
Because they didn't follow process.
They don't need to, just hack the iCloud backup. Plus it's not impossible, it's just difficult. If you own the key authority then it's less hard.
If you’re presenting a conspiracy theory, you have to at least poke holes in the claims you consider false.
Under the system described in the linked paper, your scenario is not possible. In fact, the whole thing looks to be designed to prevent exactly that scenario.
Where do you see the weakness? How could a secret order result in undetectable data capture?
No. The information is all out there - secret courts, secret judgements - it's all been put out there. I don't need to dissect any technical information to recognise that I cannot know what I do not know.
In case anyone was uncertain about whether to trust what we are told: we learned from the Snowden revelations that the US government was tapping millions of phone records.
So, we are told there are secrets, and we are told that there are mechanisms in place to prevent this information from being made public.
You are also free to believe that the revelations are no longer relevant... I'd like to hear the reason.
IMO the reverse is the case: you need to show why Apple has now become trustworthy. Why would Apple not be subject to secret judgements?
I know there is a lot of marketing spin about Apple's privacy - but do you really think that they would actually confront the government system, in a way that isn't some further publicity stunt? Can one confront the government and retain a license to operate, do you think? Is it not probable that the reality is that Apple have huge support from the government?
Perhaps this kind of idea is hard to understand - that one can make a big noise about privacy, and how one is doing this or that to prevent access, all the while ensuring that access is provided to authorised parties. Corporations can say this sort of thing with a straight face - it's not a privacy issue to provide information, it's a (secret) legal issue!
Sorry, but secret courts and secret judgements, along with the existing disclosure that millions were being spied upon, mean one needs to expect the worst.
Fair, go ahead and expect the worst, and handwave away any attempts to mitigate.
But I'm not sure where that leaves you. Is it just a nihilistic "no security matters, it's all a show" viewpoint?
It is fair, I don't accept attempts to mitigate. The trust is gone, and nothing can recover it. The idea of trusting government and corporations was ridiculous in the first place as these entities are not your friends.
You wouldn't expect a repeat abuser to stop abusing just because of 'time' or a marketing campaign. And yet this is the case here. People keep looking to their tormentors for solutions.
Not expecting healing from those also inflicting the trauma, i.e. changing one's expectations, seems like a minimum of effort/engagement in my view, but it's somehow inconceivable.
I don't think this is already the case, and I think the article is an example of safeguards being put into place (in this particular scenario) to prevent it.
On the basis of not having information, cos all this occurs out of sight, you believe this is not the case. Ok.
"ensuring that clients will refuse to talk to non-audited systems."
I'm trying to understand if this is really possible. I know they claim so but is there any info on how this would prevent Apple from executing different code to what is presented for audit?
Unless they pass all keys authorized by the system to third parties that ensure appropriate auditing, none.
And at least after my experiences with T2 chip, I consider Apple devices to be always owned by Apple first...
The servers provide a hash of their environment to clients, who can compare it to the published list of audited environments.
So the question is: could the hash be falsified? That’s why they’re publishing the source code to firmware and bootloader, so researchers can audit the secure boot foundations.
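To make that flow concrete, here's a minimal sketch of the client-side rule (hypothetical names and placeholder digests; the real protocol involves signed attestations and a transparency log, not a bare set lookup): the client simply refuses any node whose attested measurement isn't on the published audited list.

    import hashlib

    # Hypothetical published log of audited release measurements
    # (placeholder digests, for illustration only).
    AUDITED_RELEASES = {
        hashlib.sha256(b"pcc-release-1").hexdigest(),
        hashlib.sha256(b"pcc-release-2").hexdigest(),
    }

    def client_accepts(attested_digest: str) -> bool:
        # Client rule: only talk to nodes whose attested software
        # measurement appears in the published audited list.
        return attested_digest in AUDITED_RELEASES

    assert client_accepts(hashlib.sha256(b"pcc-release-1").hexdigest())
    assert not client_accepts(hashlib.sha256(b"modified-build").hexdigest())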
I am sure there is some way that a completely malevolent Apple could design a weakness into this system so they could spend a fortune on the trappings while still being able to access user information they could never use without exposing the lie and being crushed under class actions and regulatory assault.
But I reject the idea that that remote possibility means the whole system offers no benefit users should consider in purchasing decisions.
"I think another facet of trust here is that a rather big part of Apple's business model is privacy. They've been very successful financially by creating products that generate money in other ways, and it's very much not necessary or even a sound business idea for them to do something else."
If a third party wants that data, whether the third party is an online criminal, government law enforcement or a "business partner", this idea that Apple's "business model" will somehow negate the downsides of "cloud computing" and online advertising for internet privacy is futile. Moreover, it is a myth. Apple is spending more and more on ad services; we can see this in its SEC filings. Before he died, Steve Jobs was named on an Apple patent application for showing ads during boot. The company uses "privacy" as a marketing tactic. There is no evidence of an ideological or actual effort to avoid the so-called "tech" company "business model". Apple follows what these companies do. It considers them competitors. Apple collects a motherlode of user data and metadata. A company that was serious about privacy would not do this. It's a cop-out, not a trade-off.
To truly avoid the risks of cloud computing, online advertising and associated privacy issues, choosing Apple instead of Google is a half-baked effort. Anyone who was serious about it would choose neither.
Of course, do what is necessary, trust whomever; no one is faulting anyone for making practical choices. But let's not pretend choosing Apple and trusting it solves the problems introduced by its so-called "tech" company competitors. Apple pursues online advertising, cloud computing and data collection, all at the expense of privacy. With billions in cash on hand, it is one of the wealthiest companies on Earth; does it really need to do that?
In the good old days, we could call Apple a hardware company. The boundaries were clear. Those days are long gone. Connect an Apple computer to a network and watch what goes over the wire with zero user input, destined for servers controlled by the mothership. There is nothing "private" about that design.
Yeah. I feel like the conversation needs some guard rails like, "Within the realm of big tech, which has discovered that one of its most profitable models is to make you the product, Apple is really quite privacy friendly!"
I think it’s pretty fair. This example isn’t about Apple but about Microsoft: we’ve had a decade-long period where Microsoft has easily been the best IT business partner for enterprise organisations. I’ve never been much of a fan of Microsoft personally, but it’s hard to deny just how good they are at building relationships with enterprise. I can’t think of any other tech company that knows enterprise the way Microsoft does, but I think you get the point… anyway, they too are beginning to “snoop” around.
Every Teams meeting we have is now transcribed by AI, and while it’s something we want, it’s also a lot of data in the hands of a company where we don’t fully know what happens with it. Maybe they keep it safe and only really share it with the NSA or whichever sneaky American agency listens in on our traffic. Which isn’t particularly tin-foil-hat: we semi-recently had a spy scandal where it was somewhat incidentally revealed (this wasn’t the scandal itself) that our own government basically lets the US snoop on every internet exit node our country has. It is what it is when you’re basically a form of vassal state to the US. Anyway, with the increased AI monitoring tools built directly into Microsoft products, we’re now handing over more data than ever.
To get to the point, we’re currently seeing some debate on whether Chromebooks and Google education/workspace accounts should be allowed in schools. Which is a good debate. Or at least it would be if the alternative weren’t Microsoft… Because does it really matter whether it’s Google or Microsoft that invades your privacy?
Apple is increasingly joining this trend. Only recently it was revealed that new Apple devices have some sort of radio built into them, even though it’s not on their spec sheets. In other words, Apple has now joined the trend of devices that can form their own network by being near other Apple devices, similar to how Samsung and most car manufacturers have operated for years now.
And again, it sort of leads to… does it really matter whether it’s Google or Apple that intrudes on your privacy? To some degree it does, of course; I’d personally rather have Microsoft or Apple spy on me, but I would frankly prefer that no one spied on me.
Specifically, open-source and self-hostable. Open source doesn't save you if people can't run their own servers, because you never know whether what's in the public repo is the exact same thing that's running on the cloud servers.
You can, by having an attestation of the signed software components up from the secure boot process, having the client device validate that said attestation corresponds to the known public version of each component, and randomizing client connections across the infrastructure.
Other than the obvious "open source software isn't perfectly secure" attack scenarios, this would require a non-targeted hardware attack, where the entire infrastructure would need to misreport the software or misrepresent the chain of custody.
I believe this is one of the protections Apple is attempting to implement here.
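To show what "attestation up from the secure boot process" means mechanically, here's a toy sketch of measured boot in the TPM style (the general mechanism, not Apple's actual implementation; the component names are made up): each stage folds the next component's hash into a running measurement before handing off, so the final value commits to the whole chain.

    import hashlib

    def extend(measurement: bytes, component: bytes) -> bytes:
        # TPM-style "extend": fold the next component's hash into the
        # running measurement; both order and content matter.
        return hashlib.sha256(
            measurement + hashlib.sha256(component).digest()
        ).digest()

    # Hypothetical boot chain, measured stage by stage.
    m = b"\x00" * 32
    for component in [b"boot-rom", b"bootloader", b"kernel", b"pcc-app"]:
        m = extend(m, component)

    # The client compares m against the published, audited measurement;
    # swap out any single component and m no longer matches.
    print(m.hex())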
Usually this is done the other way around: servers verifying client devices using a chip the manufacturer put in them and fully trusts. They can trust it because it's virtually impossible for you (the user) to modify the behavior of this chip. But you can't put your own chip in Apple's servers. So if you don't trust Apple, this improves the trust by... 0%.
Their device says it's been attested. Has it? Who knows? They control the hardware, so can just make the server attest whatever they want, even if it's not true. It'd be trivial to just use a fake hash for the system volume data. You didn't build the attestation chip. You will never find out.
Happy to be proven wrong here, but at first glance the whole idea seems like a sham. This is security theater. It does nothing.
If it is all a lie, Apple will lose so much money from class action lawsuits and regulatory penalties.
You have to go deeper to support this. Apple is publishing source code to firmware and bootloader, and the software above that is available to researchers.
The volume hash is computed way up in the stack, subject to the chain of trust from these components.
Are you suggesting that Apple will actually use totally different firmware and bootloaders, just to be able to run different system images that report fake hashes, and do so perfectly so differences between actual execution environment and attested environment cannot be detected, all while none of the executives, architects, developers, or operators involved in the sham ever leaks? And the nefarious use of the data is never noticed?
At some point this crosses over into “maybe I’m just a software simulation and the entire world and everyone in it are just constructs” territory.
I don't know if they will. It is highly unlikely. But theoretically, it is possible, and very well within their technical capabilities to do so.
It's also not as complicated as you make it sound here. Because Apple controls the hardware, and thus also the data passing into attestation, they can freely attest whatever they want - no need to truly run the whole stack.
It is as complicated as I make it sound. Technically, it's trivial, of course.
But operationally it is incredibly complicated to deliver and operate this kind of false attestation at massive scale.
Usually attestation systems operate on neither side having everything needed to compute a result that will satisfy the attestation requirements, and thus require that both a server-side and a client-side secret be involved in the attestation process.
The big issue with Apple is that their attestation infrastructure is wholly private to them; you can't self-host. (Android is a bit similar in that applications using Google's attestation system have the same limitation, but you can in theory set up your own.)
Attestation requires a root of trust, i.e. if data hashes are involved in the computation, you have to be able to trust that the hardware is actually using the real data here. Apple has this for your device, because they built it. You don't have it for their server, making the whole thing meaningless. The maximum information you can get out of this is "Apple trusts Apple".
Under the assumption that Apple is telling the truth about what the server hardware is doing, this could protect against unauthorized modifications to the server software by third parties.
If however, we assume Apple itself is untrustworthy (such as, because the US government secretly ordered them to run a different system image with their spyware installed) then this will not help you at all to detect that.
What runs on the servers isn't actually very important. Why? Because even if you could somehow know with 100% certainty that what a server runs is the same code you can see, any provider is still subject to all kinds of court orders.
What matters is the client code. If you can audit the client code (or better yet, build your own compatible client based on API specs) then you know for sure what the server side sees. If everything is encrypted locally with keys only you control, it doesn't matter what runs on the server.
But in this use case of AI in the cloud, I suppose it's not possible to send encrypted data which only you have the keys to, as that makes the data useless and thus no AI processing in the cloud can be done. So the whole point of AI in the cloud vs. AI on device goes away.
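For contrast, here's what "keys only you control" looks like in the simplest case (a sketch using the third-party cryptography package; any AEAD scheme would do): the server can store or relay the ciphertext, but it can't read it, which is exactly why it also can't run a model over it.

    # Requires the 'cryptography' package (pip install cryptography).
    from cryptography.fernet import Fernet, InvalidToken

    # Key generated and held only on the client device; never uploaded.
    key = Fernet.generate_key()
    client = Fernet(key)

    ciphertext = client.encrypt(b"photo bytes go here")

    # A server (or anyone) without the key gets nothing useful out.
    try:
        Fernet(Fernet.generate_key()).decrypt(ciphertext)
    except InvalidToken:
        print("wrong key: content stays opaque to the server")

    # Only the key holder can recover the plaintext.
    assert client.decrypt(ciphertext) == b"photo bytes go here"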
This is what the “attestation” bit is supposed to take care of—if it works, which I’m assuming it will, because they’re open sourcing it for security auditing.
This isn't right.
If you trust math you can prove the software is what they say it is.
Yes it is work to do this, but this is a big step forward.
The only thing the math tells you is that the server software gave you a correct key.
It does not tell you how it got that key. A compromised server would send you the key all the same.
You still have to trust the security infrastructure: trust that Apple is running the hardware it says it is, and trust that Apple is running the software it says it is.
Security audits help build that trust, but it is not and never will be proof. A three-letter agency of choice can still walk in and demand they change things without telling anyone. (And while that particular risk is irrelevant to most users, various countries are still opposed to the US having that power over such critical user data.)
No, this really isn't right.
To quote:
verifiable transparency, goes one step further and does away with the hypothetical: security researchers must be able to verify the security and privacy guarantees of Private Cloud Compute, and they must be able to verify that the software that’s running in the PCC production environment is the same as the software they inspected when verifying the guarantees.
So how does this work?
But why can't a 3-letter agency bypass this?
So your data will not be sent to nodes that are not cryptographically attested by third parties.
These are pretty strong guarantees, and really make it difficult for Apple to bypass.
It's like end-to-end encryption using the Signal protocol: relatively easy to verify it is doing what is claimed, and extraordinarily hard to bypass.
No, this is secure attestation. See for example https://courses.cs.washington.edu/courses/csep590/06wi/final... which explains it quite well.
The weakness of attestation is that you don't know what the root of trust is. But Apple strengthens this by their public inspection and public transparency logs, as well as the target diffusion technique which forces an attack to be very widespread to target a single user.
These aren't simple things for a 3LA to work around.
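A loose sketch of the target-diffusion idea (toy code, not Apple's actual routing, which as I understand it also involves OHTTP-style relaying to hide client identity): each request goes to a randomly chosen attested node, so capturing one specific user's traffic would mean compromising a large fraction of the fleet rather than a single machine.

    import secrets

    # Hypothetical pool of nodes that already passed the attestation check.
    ATTESTED_NODES = [f"node-{i}" for i in range(10_000)]

    def pick_node() -> str:
        # Uniform random choice: no stable mapping from user to node,
        # so a single-node backdoor only sees a random sliver of traffic.
        return secrets.choice(ATTESTED_NODES)

    print(pick_node())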
It's not fully homomorphic encryption. The compute is happening in the plain on the other side, and given the scale of the models they are running, it's not likely that all of the computation over your data happens inside a single instance of particularly secure and hardened hardware. I don't think it's reasonable for most individuals to expect to be protected from nation-state actors or something, but their claims seem a little too absolute to me.
Unless you personally validate hardware designs, manufacturing processes, and all software, even when running locally you're trusting many, many people.
Or if someone compels them to
They already have your private pictures. What difference does it make that it's now running AI?
If one has to use tech one has to trust someone. Apple has focused on the individual using computers since inception. They have maintained a consistent message and have a good track record.
I will trust them because the alternatives I see are scattered and unfocused.