
Apple memory-holed its broken promise for an OCSP opt-out

saagarjha
22 replies
22h29m

It's really quite unfortunate how much of Apple software is designed around "privacy is when you trust Apple" :/

flarex
17 replies
22h7m

Not really; they are moving into homomorphic encryption, where the entire query and its processing are encrypted and Apple has no knowledge of what you actually requested.

rpdillon
10 replies
20h36m

Completely unclear how much they're moving into homomorphic encryption. The only resource I've been able to find about it is an announcement from 30 July saying that they can now do caller ID lookup using homomorphic encryption, and they've announced an SDK that developers can use to leverage it. But the announcement is so vague that it's entirely unclear how much this can actually be used for practical workloads. And the idea that they're going to go all in on homomorphic encryption is speculative based on what Apple has revealed so far.

That's notable, as we're discussing a case where Apple said they would do something, and then not only didn't do it, but went out of their way to pretend that they never said they would.

flarex
8 replies
20h12m

I'm not aware of any other company of Apple's size (or anywhere approaching it) that has been as committed to privacy tech. Of course they are not perfect and sometimes get it wrong, but they constantly release new technologies that further our privacy. Who else does it better?

makeitdouble
5 replies
19h47m

It comes down to what you identify as privacy. Apple is committed to not giving your data to any other company and keeping it protected in their ecosystem. They'll sell access to you for ads, but only expose your cohort to the advertiser.

From that lens, Google is also committed to never giving your personal data (think Gmail content, Maps behaviour, pins, etc.) to other companies and keeping it all in their ecosystem, for themselves only. Your data is their key advantage, the base of the ad empire, and they won't let another company run away with it.

If we call Apple privacy focused, Google also fits the bill; the question just comes down to whether we see Apple or Google as part of our intimate circle, within our private life. I assume you do for Apple but not for Google.

flarex
4 replies
18h31m

There is no serious person who could think that Google is a privacy-focused company. Their entire business is founded on knowing everything about their users. It's an ad company. They need user data to function and they will never release tech that compromises their business. Just look at the direction of ad blocking and Chrome to see where they are headed.

makeitdouble
3 replies
18h14m

The Apple side is similar: their entire current business is to middleman your relationship with other companies. You buy Apple products, purchase and subscribe to apps and services from the App Store, use iCloud, etc.

They need you in their ecosystem, the same way Google needs you in theirs.

And I totally agree with you: I wouldn't call Google privacy focused, and I don't call Apple privacy focused either, even as they market it harder than anyone else.

flarex
2 replies
17h52m

Google is a privacy antagonist. Apple is privacy focused because it suits their business. Apple has been privacy focused for years and has built several technologies to prove it. It's not hollow marketing to build privacy software.

talldayo
0 replies
15h55m

Google is a "privacy antagonist" with an Open Source OS you can build locally and modify to your heart's content? And Apple's been privacy focused, suing security researchers for copyright violation when they try to analyze iOS?

Methinks you're holding a double standard. Compared to Android and Linux, Apple's "promise" is no better than the one Microsoft offers BitLocker customers.

makeitdouble
0 replies
13h21m

I don't define "privacy" as "only a single company has access to all my stuff", so to me Apple's claims are just marketing. I'd buy an argument about good security and some protection against other companies, just not "privacy".

7jjjjjjj
1 replies
19h57m

These companies shouldn't be graded on a curve. Everyone knows Microsoft is crap for privacy. But Apple has their reality distortion field, and it's important to show people that their privacy promises are BS.

flarex
0 replies
18h41m

Okay, but in an evolutionary sense, which company should we be supporting? The one that is somewhat moving towards privacy, or the ten others that don't give a shit? Which one should survive? Would you like to see companies copy Apple's privacy approach, or Facebook's "dumb fucks" approach?

csande17
0 replies
20h8m

Also, the homomorphic encryption is a requirement for third-party caller ID providers, not Apple themselves. Apple's first-party "Contact Photos" caller ID feature operates primarily on the "trust Apple" security model AFAIK.

TimSchumann
4 replies
21h4m

I was unaware there exists a fully homomorphic encryption scheme with the right trade-offs between security and computational effort to make this economically viable for even small to moderate workloads.

I’ve always thought it was either far too time or far too space intensive to be practical.

Do you have sources on this, either from Apple or academic papers of the scheme they’re planning on using?

rpdillon
2 replies
20h35m

I posted about this above, a little after you did. Reading the article, I'm unable to determine whether this has any practical utility outside of niche applications or whether it has the potential to be broadly useful. Has anyone reviewed the SDK who can render an opinion?

flarex
1 replies
20h17m

Homomorphic encryption is broadly useful and in fact should be ubiquitous for remote computation that would otherwise leak private data (not to comment specifically on Apple's implementation). They did open source it, though, which gives you an idea that they want others to follow.
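To make the property concrete, here is a toy sketch using the Paillier cryptosystem, which is only additively homomorphic; it is not the lattice-based BFV scheme Apple's open-sourced library implements, and the key sizes below are deliberately tiny and wildly insecure:

```python
# Toy Paillier cryptosystem: multiplying ciphertexts adds the underlying
# plaintexts, so a server can compute on data it cannot read.
import secrets
from math import gcd

p, q = 293, 433          # demo primes; real keys use ~1536-bit primes
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # inverse of L(g^lam mod n^2)

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 1) + 1
    while gcd(r, n) != 1:                     # r must be invertible mod n
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (pow(c, lam, n2) - 1) // n * mu % n

# The "server" multiplies two ciphertexts without ever seeing 17 or 25,
# yet the result decrypts to their sum.
c_sum = (encrypt(17) * encrypt(25)) % n2
print(decrypt(c_sum))  # 42
```

Fully homomorphic schemes like BFV additionally support multiplication on ciphertexts, which is what makes encrypted lookups (such as the caller ID use case) possible.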

saagarjha
0 replies
8h55m

Can you point to other ways this is used or is intended to be used?

walterbell
0 replies
21h8m

Following through on a public privacy promise does not require R&D.

api
1 replies
19h28m

Some of it is self-serving, and some is explainable by the deep and pervasive tension between security/privacy/autonomy and usability.

saagarjha
0 replies
8h57m

I generally do not impute malice to it, but in many cases it is the lazy way out.

gilgoomesh
0 replies
19h0m

Ultimately, whoever controls operating system updates always has control over your privacy. Even if Apple did offer perfect privacy, there's no reason an update couldn't completely change that.

Bluntly: if you don't trust your OS vendor, then you can't use OS updates. There are people in this category but it's a lot of work.

Much easier to trust your OS vendor (at least to this extent).

cageface
0 replies
17h22m

Exactly. What does privacy even mean when your entire digital existence is owned by and visible to a single entity?

cyberpunk
15 replies
23h1m

How does one go about creating a Little Snitch rule to prevent these connections?

lapcat
5 replies
22h36m

That's obsolete. macOS now uses ocsp2.apple.com.

The process is "trustd".

jiripospisil
4 replies
22h28m

Thanks. Does that depend on macOS version? Better to block both I guess.

phi0
2 replies
22h17m

I've checked my DNS logs and there hasn't been a single hit against ocsp.apple.com over the last year, but around 20-30 hits for ocsp2.apple.com per day per device (iPhone, Mac mini, MacBook).

Just blocking ocsp2.apple.com is probably fine if you're running anything recent-ish.

Wowfunhappy
0 replies
20h53m

Does Little Snitch support regex? Perhaps it should be `ocsp\d*\.apple\.com`
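A quick sanity check of that pattern (Python's re dialect; whether Little Snitch anchors regexes to the whole hostname the same way is an assumption worth verifying):

```python
import re

# Candidate pattern from the comment above.
pattern = re.compile(r"ocsp\d*\.apple\.com")

hosts = [
    "ocsp.apple.com",        # legacy responder
    "ocsp2.apple.com",       # current responder
    "ocsp3.apple.com",       # hypothetical future responder
    "gsp-ssl.ls.apple.com",  # unrelated Apple host: must not match
]
matched = [h for h in hosts if pattern.fullmatch(h)]
print(matched)  # the first three hosts; the unrelated one is excluded
```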

JBiserkov
0 replies
21h8m

Block ocsp3.apple.com while you're at it.

And ocsp4. And 5. Block them all!

lapcat
0 replies
22h19m

> Does that depend on macOS version?

Yes, though I couldn't say offhand when exactly it changed.

FireBeyond
4 replies
19h49m

Well, let's not forget the little dance Apple did to make their programs and the system be able to bypass packet filters like Little Snitch...

... and then claim it was necessary for "updates and upgrades". Not sure why TextEdit.app needed a kernel network extension for "updates and upgrades".

They also denied it was possible until it was provably demonstrated that they did.

I like Apple stuff; everything I use is Apple. But too many see them as infallible or resort to whataboutism for missteps like these.

flemhans
3 replies
18h3m

Oddly enough, UTM VMs also bypass Little Snitch, making me wonder if you could always just bypass LS by running a VM in your evil app.

wpm
2 replies
17h44m

Will it bypass it regardless of the virtualization engine, or only if you're using the Hypervisor framework for macOS VMs?

jiripospisil
1 replies
9h55m

Depends on the network adapter in use. E.g. a bridged adapter will bypass it; an emulated VLAN won't.

lloeki
0 replies
9h26m

Ah that clarifies things, I was confused for a moment.

Well, the goal of a bridged network adapter for VMs is to make the VM behave as if it were physically plugged directly into the network, independently of the host, so it makes sense for it not to be affected by a host firewall.

> running a VM in your evil app.

IIRC back then, creating a bridged network adapter required a special entitlement (special as in you can't get it without explicitly asking Apple for it, and the dev cert for it has additional entries, like for kernel extensions). Dunno if that's still the case.

ProllyInfamous
1 replies
1h38m

Little Snitch WAS NEVER A GOOD PRODUCT if you're worried about DNS leaks.

Fact: Little Snitch resolves the IP address (i.e. does the domain-to-IP lookup) before the Deny/Allow dialog ever appears onscreen. Only by pressing "Allow" does Little Snitch then allow/initiate connections to the already-resolved IP ADDRESS.

maeil
0 replies
42m

Then do you have a suggested alternative?

Spivak
7 replies
22h25m

Honestly, I can imagine the preference being axed when OCSP is the macOS antivirus, and I'm pretty sure I know what the first thing any malicious software is gonna do is if it's able to be turned off.

macOS preferences aren't magically locked away from the rest of the system: regular users can change their own user preferences, and root can change system preferences. An antivirus still has to work against an attacker who has root. It's why you can't block certain apps/domains from the firewall either.

You could put the preference in recovery mode along with the SIP toggle, and I think that would accomplish everyone's goals.

lapcat
6 replies
22h14m

> OCSP is the macOS antivirus

It's not. There are multiple layers of security, including notarization and XProtect.

> I'm pretty sure I know what the first thing any malicious software is gonna do.

What?

> macOS preferences aren't magically locked away from the rest of system, regular users can change their own user preferences, and root can change system preferences. An antivirus has to still work against an attacker who has root.

You sound confused about admin vs. root. Anyway, if you have a local attacker running on your Mac, then it's already too late for OCSP.

Spivak
5 replies
21h56m

Those aren't so much layers as different parts, and OCSP is how notarization actually provides security; the developer cert is what's being checked to run the app.

This scheme isn't really designed to prevent attacks so to speak; it's to stop malicious software on everyone's systems all at once. You can say that theoretically it's too late because the software is already doing what it's doing and you should consider the machine compromised. But the rubber meets the road with regular users, who aren't going to wipe their machines in response to malware, and so this lets you purge it.

I don't think I'm confused about admin vs root. I'm talking about the system administrator user, the "oops, I ran malware with sudo" user, uid 0, the user who is only constrained by the kernel via SIP. But yeah, admin is the more common case for regular users, and I'm not sure of the point you're making: malware can get admin, and if you put the preference somewhere changeable by admin, then welp. You could put the preference in recovery mode, same as SIP; I think that would be fine.

lapcat
4 replies
21h52m

> Those aren't so much layers as the different parts

Mkay.

> OCSP is how notarization actually works, that's what's being checked to validate the notarization.

No, you are misinformed. OCSP is checked by the trustd process on ocsp2.apple.com, whereas notarization is checked by the syspolicyd process on api.apple-cloudkit.com.

OCSP is simply checking whether the Developer ID certificate has been revoked. Notarization, on the other hand, requires uploading a build to Apple and receiving a special notarization ticket. The notarization ticket is either "stapled" to the app or downloaded from Apple when the app is first launched.

Spivak
3 replies
21h33m

> Mkay

Well, they're not. What would you call it? Windows Defender and Microsoft's code signing requirements aren't super related. You could purge discovered malware with a signature/scan, but that's not impossible to get around.

I'm not sure I really grok the difference from a security perspective when the main thing with notarization is ensuring it's signed with your developer cert.

I guess the Venn diagram isn't technically a circle, but is it not the case that the actual security of notarization is provided by OCSP? I suppose I could have phrased that bit better.

Is there a case where a hypothetical notarization process that excludes that bit provides any real security? Because Apple "scanning it for malware" isn't going to be that different from XProtect.

I'm really not sure what I did to get such an, idk hostile? response.

lapcat
1 replies
21h28m

> the main thing with notarization is ensuring it's signed with your developer cert.

It's not the main thing.

> is it not that the actual security of notarization is provided by OCSP?

No.

I tried to explain the difference in my previous reply, but I'm not going to sit here and write an entire essay on the subject (though I could). The information is out there, for example on developer.apple.com. Or even on my own website. Inform yourself, or at least stop spouting falsehoods.

sroussey
0 replies
16h47m

Oh god, don't send people to developer.apple.com. It's Apple's worst product.

I’d rather you shill your own blog posts. Even without reading them, I know they are better. ;)

Vegenoid
0 replies
20h5m

> is it not that the actual security of notarization is provided by OCSP?

The security of notarization is provided by Apple's signature over the hashes of the executables in the app [0]. The hashes and signature are put into a "ticket". This ticket is stored on Apple's servers, and can also be "stapled" to the app. Gatekeeper (one of the macOS security systems) will prefer to fetch the ticket from Apple if possible, and fall back to the stapled ticket if available. Notarization is meant to guarantee that the code was sent to Apple and checked for malicious code.

OCSP checks that the Apple Developer ID certificate used to sign the app hasn't been revoked.

They are two separate checks done by the Gatekeeper system, which is meant to ensure that only trusted software runs on macOS. I believe it makes sense to call the OCSP check part of the Gatekeeper system, but this may be incorrect.

[0]: https://forums.developer.apple.com/forums/thread/710738
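The separation described above can be modeled as a toy sketch (illustrative only: real tickets are CMS blobs signed with Apple's asymmetric keys and real revocation is an X.509/OCSP exchange, so every name here is made up):

```python
import hashlib
import hmac

# Stand-in for Apple's signing key; real tickets use asymmetric signatures.
NOTARY_KEY = b"hypothetical-notary-key"

def issue_ticket(code_hashes: list) -> str:
    """Notarization (toy): sign the app's executable hashes into a 'ticket'."""
    return hmac.new(NOTARY_KEY, b"".join(sorted(code_hashes)),
                    hashlib.sha256).hexdigest()

def gatekeeper_allows(code_hashes: list, ticket: str,
                      cert_revoked: bool) -> bool:
    """Two independent checks, as described above: a valid notarization
    ticket AND a non-revoked signing certificate (the OCSP part)."""
    valid_ticket = hmac.compare_digest(issue_ticket(code_hashes), ticket)
    return valid_ticket and not cert_revoked

app = [hashlib.sha256(b"main-binary").digest()]
ticket = issue_ticket(app)
print(gatekeeper_allows(app, ticket, cert_revoked=False))  # True
print(gatekeeper_allows(app, ticket, cert_revoked=True))   # False: revoked cert
print(gatekeeper_allows([hashlib.sha256(b"tampered").digest()],
                        ticket, cert_revoked=False))       # False: bad ticket
```

The point of the model is that either check failing blocks the launch, so neither one alone "provides" notarization's security.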

hansvm
6 replies
22h7m

Can you get around that nonsense by turning off wireless radios before launching apps?

wkat4242
5 replies
18h36m

Yes, you can. If there's no network, it will skip the check.

It's not very practical, though. Better to block the address with Little Snitch or a hosts file.
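For the hosts-file route, a minimal sketch; note this null-routes the responders system-wide, which disables certificate revocation checks, and (as noted elsewhere in the thread) some processes may ignore /etc/hosts entirely:

```
# /etc/hosts additions (sketch)
0.0.0.0 ocsp.apple.com
0.0.0.0 ocsp2.apple.com
```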

lightedman
4 replies
17h22m

The HOSTS file hasn't worked very well for a long time; the OS often ignores it.

latexr
2 replies
9h10m

Do you have a source? I regularly use /etc/hosts and never saw any inkling of it being ignored, but do see plenty of cases that confirm it is not.

shakna
1 replies
8h13m

Safari, when using "iCloud Private Relay", does ignore the hosts file. Not obvious to everyone, but it does kinda make sense.

latexr
0 replies
7h30m

I just tested this, and it’s even worse than your description. Safari ignores /etc/hosts even when Private Relay is off.

wkat4242
0 replies
14h16m

Oh ok thanks! I didn't know that, I've used it for other stuff but mainly third party software. I've never tried to block this.

katzinsky
5 replies
16h30m

This kind of stuff is a major reason I completely cut all Apple stuff out of my life.

When your network is all Linux, everything actually does "just work", more so than it ever did with OS X. Everything is just an SSH away; it's really pretty amazing.

boffinAudio
2 replies
7h51m

... except the ssh binary is also tracked with OCSP ...

kjkjadksj
1 replies
1h50m

Can’t you disable it though or block the connection?

katzinsky
0 replies
1h9m

But you don't have to on Linux.

Like, there's just so much crap you have to do to make these non-free OSes pleasant or private, and it's always blowing up. Linux mostly "just works" OOTB.

ProllyInfamous
0 replies
1h40m

Be careful doing this without enabling your own key/certificate validation. We deployed SSH to a few dozen OS X machines in a lab (for backend maintenance) without certificates for the handshake (i.e. password-based), and the next morning several machines had been compromised.

We were using complex passwords; this was when 10.6 was the latest OS, so perhaps security is better now with simple password-based SSH, but I would never take such a risk again.

jesprenj
3 replies
18h0m

Regarding OCSP, off-topic for Apple: Firefox enables OCSP by default. This means that for every TLS connection, a plaintext HTTP OCSP request is made to the certificate authority that signed the certificate of the website the browser is connecting to. So the certificate authority receives precisely timestamped information about the exact domains you are visiting, if you are using Firefox, don't disable OCSP, and the website you're connecting to does not use OCSP stapling (most don't). Note that disabling OCSP will leave Firefox unable to get certificate revocation information (maybe it still uses the system's revocation store, I'm not sure about that, but it certainly does not use the more privacy-preserving CRLs).
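For reference, the relevant Firefox preferences (settable via about:config or a user.js; the values shown are the defaults as I understand them, so verify them against your own build):

```js
// user.js — OCSP-related Firefox preferences
user_pref("security.OCSP.enabled", 1);     // 1 = query the CA's responder; 0 disables live OCSP
user_pref("security.OCSP.require", false); // true = hard-fail when the responder is unreachable
```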

lol768
1 replies
7h45m

It's a trade-off though, isn't it? Everyone seems upset about OCSP as of late, but domains being sent in plaintext is not exactly limited to OCSP - we've had this with SNI and DNS.

The advantages of OCSP are that you get a real-time view of a certificate's status and you don't need to download large CRLs, which go stale very quickly. If you set security.OCSP.require appropriately, you don't have any risk of the browser failing open, either.

It seems to me like the people who most dislike OCSP are CAs who have to maintain the infrastructure capable of responding to queries. I have really limited sympathy, that should be part of running a CA.

The privacy concerns could be solved by mandating OCSP stapling, and you could then operate the OCSP responders purely for web-servers and folks doing research.

Unfortunately the ship has sailed with ballot SC63 now, and we are where we are. I don't necessarily agree that OCSP as a concept was unfixable, though.

jesprenj
0 replies
4h19m

> Everyone seems upset about OCSP as of late, but the domains being sent in plaintext is not exactly limited to just OCSP - we've had this with SNI and DNS.

My main privacy related concern isn't about domains being sent in plaintext but that they are sent to the CA and they can then theoretically do analytics on this data and profile web users.

But maybe this concern doesn't really make sense as we have strict personal-data regulations now.

mananaysiempre
0 replies
17h14m

For intermediates, Firefox distributes OneCRL[1] based on CCADB[2] data and the Mozilla Root Program’s own trust decisions. For leaf certificates, they announced the CRLite[3,4] experiment a while ago but it’s not clear how far along it is.

[1] https://blog.mozilla.org/security/2015/03/03/revoking-interm...

[2] https://www.ccadb.org/

[3] https://blog.mozilla.org/security/2020/01/09/crlite-part-1-a...

[4] https://bugzilla.mozilla.org/show_bug.cgi?id=crlite

minkles
2 replies
22h3m

OK, I understand the technical considerations here. But really, what is the risk surface for me as a dumb end user who uses apps from the store, a few things off Homebrew, and not a lot else? I mean, I've got a large pile of Apple crap sitting here. Is this even remotely worrisome enough to shift it and move to something else? The CSAM thing probably was. This? I don't know.

(I could probably do everything I need to do on Linux - I just don't want to)

odo1242
1 replies
19h12m

tl;dr: Through OCSP, Apple can see which apps (or at least which developers' apps) are installed at a specific IP address. The check used to be unencrypted and therefore visible to anyone on the network path; it is now visible only to Apple. Apple said they don't keep logs, but theoretically they could if they wanted to.
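Concretely, the metadata such a responder could log per check looks something like this (hypothetical field and value names throughout; the real request identifies the developer's signing certificate by issuer and serial number, not the app by name):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OcspCheckRecord:
    client_ip: str     # visible to the responder from the TCP connection
    issuer_hash: str   # which CA issued the certificate being checked
    cert_serial: str   # identifies the Developer ID certificate, not the app
    seen_at: datetime  # a precise timestamp for each check

record = OcspCheckRecord(
    client_ip="203.0.113.7",       # example address from the RFC 5737 range
    issuer_hash="sha256:ab12cd34", # made-up value
    cert_serial="0x5d3a9f",        # made-up value
    seen_at=datetime.now(timezone.utc),
)
print(record.cert_serial)  # 0x5d3a9f
```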

chatmasta
0 replies
17h40m

IIRC this request happened every time you open the app, not just at install time. So Apple has a log of (IP, AppID, TimeOpened).

IndySun
1 replies
21h2m

What are Apple up to? Very long term?

slackfan
0 replies
1h4m

Selling that data.

keleftheriou
0 replies
22h49m

Shameful behavior by Apple.

boffinAudio
0 replies
7h40m

I feel the same way about this as I do about the whole NSA clusterfuck: if I had access to my own data and could do what I wanted with it, I'd be fine with it.