
Meta's Onavo VPN removed SSL encryption of competitors' analytics traffic

wordhydrogen
33 replies
11h48m

Documents and testimony show that this “man-in-the-middle” approach—which relied on technology known as a server-side SSL bump performed on Facebook’s Onavo servers—was in fact implemented, at scale, between June 2016 and early 2019.

Facebook’s SSL bump technology was deployed against Snapchat starting in 2016, then against YouTube in 2017-2018, and eventually against Amazon in 2018.

The goal of Facebook’s SSL bump technology was the company’s acquisition, decryption, transfer, and use in competitive decision making of private, encrypted in-app analytics from the Snapchat, YouTube, and Amazon apps, which were supposed to be transmitted over a secure connection between those respective apps and secure servers (sc-analytics.appspot.com for Snapchat, s.youtube.com and youtubei.googleapis.com for YouTube, and *.amazon.com for Amazon).

This code, which included a client-side “kit” that installed a “root” certificate on Snapchat users’ (and later, YouTube and Amazon users’) mobile devices, see PX 414 at 6, PX 26 (PALM-011683732)(“we install a root CA on the device and MITM all SSL traffic”), also included custom server-side code based on “squid” (an open-source web proxy) through which Facebook’s servers created fake digital certificates to impersonate trusted Snapchat, YouTube, and Amazon analytics servers to redirect and decrypt secure traffic from those apps for Facebook’s strategic analysis, see PX 26 at 3-4 (Sep. 12, 2018: “Today we are using the Onavo vpn-proxy stack to deploy squid with ssl bump the stack runs in edge on our own hosts (onavopp and onavolb) with a really old version of squid (3.1).”); see generally http://wiki.squid-cache.org/Features/SslBump
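
For readers unfamiliar with the mechanics: the combination described above (a root CA installed on the device plus an SSL-bumping proxy) works roughly like the Python sketch below, which uses the `cryptography` library to mint a per-hostname leaf certificate. This is only an illustrative sketch; the function name, key size, and validity window are invented, and it is not Facebook's or squid's actual code.

    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    def forge_leaf_cert(hostname: str, ca_cert: x509.Certificate, ca_key):
        """Issue a leaf certificate for `hostname`, signed by the rogue root CA
        that the victim device has been made to trust (hypothetical helper)."""
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        now = datetime.datetime.utcnow()
        cert = (
            x509.CertificateBuilder()
            .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
            .issuer_name(ca_cert.subject)        # chains up to the installed root CA
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=90))
            .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                           critical=False)
            .sign(ca_key, hashes.SHA256())
        )
        return cert, key

    # A device that trusts the rogue CA will accept such a cert for e.g.
    # "sc-analytics.appspot.com", so the proxy can terminate TLS, read the
    # analytics payload, and re-encrypt it toward the real server.

Tools like squid's SslBump (linked above) or mitmproxy automate exactly this per-connection certificate forging.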

Malware Bytes Article: https://www.malwarebytes.com/blog/news/2024/03/facebook-spie...

leononame
27 replies
9h32m

That is insane and I would be inclined to not believe it if someone had told me this. This is such an immense breach of trust that even for me, who has a very low opinion of Meta, it is unexpected. I hope this will blow up as much as it should

RowanH
10 replies
9h13m

So this one time, I had a bug report at a client site. The business was largely staffed by members of _______ religion. Our images wouldn't load in the app, but did on the website. How odd, I thought, that doesn't make sense! Luckily I was able to be physically present, so I hopped down with laptop in tow, ssh'd into the server and started tailing logs....

Sure enough all the API requests for data were coming through, but whenever a request for image happened - nothing would hit the servers.

What the heck I thought to myself?

I said to the client "that can't be, that's almost impossible... the only way that's possible is if the SSL traffic is decrypted, inspected, and images blocked from being requested, which is a MITM attack".

He redirected me to his IT provider. I phoned them up, and explained the situation.

"Ahh so they're _____"

Me: "So what does that have to do with the price of fish?"

Them : "Content filtering..., you need to talk to ____"

Sure as the day is long, the content filter was a VPN all members of ____ had to have on their mobile devices (I don't know how widespread this is, whether it was just this business, or the entire ____ )

I applied to have our system approved, it was, and just like magic the next day photos started coming through.

I'm guessing it basically detected any .jpg/.mp4 etc URLs in https requests, flagged them, and blocked them from being requested. You can be sure on those devices the VPN would have been somehow locked in with device management, and there's no way on God's green earth they were getting at Facebook/Insta etc.

So, it's not just meta. That really hammered home how seamless it can be to end users that they really can't trust what's actually happening on their devices.

leononame
7 replies
9h7m

Not that I'm a fan of it, but in corps it's pretty standard practice to have a custom root cert installed on all devices and to enforce VPN connections on devices outside the network, to be able to MITM all requests and do stuff like content filtering (e.g. NSFW, swearwords and obviously malware). It's the company's device and they give it to you for a work-specific purpose; you shouldn't use it for personal stuff. I don't think it compares to an app that shadily installs its own root cert on an end user's device to spy on them.

RowanH
3 replies
8h23m

It's not corporate-level, it was/is religious-group level (this particular org, I'm guessing, largely employed staff from that religion). They are well known within our country to be quite insular.

It certainly seemed, for all intents and purposes, that if you were a member of _____ group (wider than the company) you had the VPN on your device, and it was filtering content. I've found other reports in other countries of that happening with the same group.

So it's not corporate content filtering, it's personal content filtering and our app got caught up in it (and approved).

It certainly made my skin crawl for anyone in that religion. That means the central filtering service could be reading messages. Not sure if they're that sophisticated but certainly they didn't want people to see random images/videos.

ndriscoll
0 replies
7m

This is one reason I think ECH (Encrypted Client Hello) is probably, on net, a bad idea. Content filtering is a legitimate use case for lots of users/networks, and if traffic is completely opaque to all networks, you end up needing things like root-level processes, full MITM, or laws requiring ID for websites instead of more privacy-preserving inspection of basic metadata (like SNI) at the network level.
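
To make "inspection of basic metadata (like SNI) at the network level" concrete: without ECH, the requested hostname travels in cleartext in the TLS ClientHello, so a filter can read it with no root CA and no decryption. A rough, best-effort Python sketch (not hardened parsing; it ignores record fragmentation and GREASE):

    import struct

    def extract_sni(record: bytes) -> str | None:
        """Best-effort SNI extraction from a raw TLS ClientHello record."""
        if len(record) < 43 or record[0] != 0x16 or record[5] != 0x01:
            return None                    # not a handshake / not a ClientHello
        pos = 9                            # record header (5) + handshake header (4)
        pos += 2 + 32                      # client_version + random
        pos += 1 + record[pos]             # session_id
        pos += 2 + struct.unpack("!H", record[pos:pos + 2])[0]   # cipher_suites
        pos += 1 + record[pos]             # compression_methods
        end = pos + 2 + struct.unpack("!H", record[pos:pos + 2])[0]
        pos += 2
        while pos + 4 <= end:              # walk the extensions
            ext_type, ext_len = struct.unpack("!HH", record[pos:pos + 4])
            pos += 4
            if ext_type == 0:              # 0 = server_name extension
                name_len = struct.unpack("!H", record[pos + 3:pos + 5])[0]
                return record[pos + 5:pos + 5 + name_len].decode("ascii", "replace")
            pos += ext_len
        return None

A network box that only does this can block or allow by hostname, which is far less invasive than bumping the TLS session itself.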

leononame
0 replies
7h54m

Is it, like, required by their religious leadership to install this? That is incredible, and I only now understand your comment to its full extent. That is brutal.

FergusArgyll
0 replies
7h39m

Yes, this exists. There's more than one company you can choose. It's not 'forced' but strongly recommended. Also, my love for hacking started with getting around it...

felsokning
2 replies
8h43m

From what the commenter implied, I think they were referring to an app on a mobile device and not the device itself.

It also sounds like their issue was at the ISP level as well, which takes the business out of the loop of being the data controller/owner (of the collected data) at that point.

Note: I'm not saying that your comment doesn't have merit, I just don't think that the points that you made apply - specifically - in this case?

leononame
1 replies
8h22m

After re-reading the comment I think you're right. I had a hard time grokking it, it seems. But since the issue was apparently a VPN app installed on the phone, I don't know whether this was the ISP or maybe their IT service provider that did content filtering on behalf of the company (like an outsourced IT department?)

RowanH
0 replies
8h14m

The VPN (much like Meta's) is doing some root cert trickery to filter content that is deemed inappropriate or potentially inappropriate. This appeared to be controlled by a Company A in another country that was undoubtedly contracted by religion Y to be its central point of content filtering globally.

So, member of the church? You get this VPN on your phone (not sure whether the phone was supplied by the church, but certainly this VPN was on it), and the VPN is effectively content filtering and blocking content.

I had our app whitelisted by that central company (literally raised a ticket with them, next day magically fixed).

bratwurst3000
0 replies
3h44m

Holy shit they can brainwash their peers even better. Those are evil geniuses….

Sorry, I meant they optimize the content for their peers and shield them from harmful content for the better of humanity // irony

Tijdreiziger
0 replies
3h20m

There are even ‘safe’ (filtered) ISPs aimed at religious communities.

FLT8
9 replies
8h17m

I also hope that any ethically minded engineers inside Meta take a stand against this BS. The only way stuff like this happens is because engineers working on these projects decide that they can set aside whatever morals they may have had for the price of a big fat FAANG pay cheque. It's about time our profession adopted a code of ethics, like that of the ACM[1]. To the engineers who _have_ walked away despite the obvious pressures, I salute you.

1. https://www.acm.org/code-of-ethics

reaperman
3 replies
6h47m

Wouldn’t Meta simply hire unlicensed “engineers”?

FLT8
2 replies
6h3m

You simply legislate that if a company is building anything that will be used regularly by more than e.g. a few thousand people, then the work must be designed and/or signed off by a licensed engineer, who will a) be subject to a code of ethics and b) be professionally liable for any failures causing loss or damage to the public.

We seem to be able to manage this with bridges, planes, electrical & hydro installations etc. No reason it shouldn't be the same for critical software infrastructure.

pixl97
0 replies
4h35m

I mean, with a thing like a plane you can say "that's not allowed in our state/country"; with software that starts to get a whole lot more problematic. Soon you'll see politicians, who seemingly get a pile of cash from groups like Microsoft and Meta, pushing laws that say things like "because people are running dangerous software from outside the country, we demand that only signed software can run on our phones/computers, and devices here must enforce it".

andsoitis
0 replies
4h40m

No reason it shouldn't be the same for critical software infrastructure.

Why do you think Meta's work is critical software infrastructure?

trvz
1 replies
7h16m

Ethically minded engineers don't go work for Facebook in the first place.

beagle3
0 replies
3h35m

This was news … 5 years ago, I think; I don't know why it blew up again. But context matters:

Onavo provided a compression + VPN service for people traveling; they let users use little or no data while roaming, and still get internet access. I do not know what their original business plan was, but Facebook bought them for the ability to spy on users.

Their MITM was, in fact, the raison d’etre of Onavo. And then, they were bought by Facebook. And then there was just some more analytics added. At no point, as I understand it, was it built explicitly for evil - and I suspect very few employees were in on the real reasons.

Plausible deniability works for many things.

toyg
1 replies
6h54m

You expect all people to have morals in the first place. That is an erroneous assumption.

FLT8
0 replies
5h35m

Nah, I've met enough amoral people over the course of my career to know that's not the case. However, the overwhelming majority of people I've worked with are people who do have morals and do care about the outcomes they're creating, and that gives me great hope.

bertil
0 replies
2h53m

I was directly involved in this.

I am happy to answer any questions you have about questioning or ethics at the time. Assuming that people's reaction to this was wrong, while not knowing what that reaction was, or having less than 5% of the context, isn’t going to help much.

Short answer: No, there were strong arguments for it. I reached out for institutional support to answer some questions, groups that I expected to be a lot more supportive than the ACM, but I found the reaction seriously lacking. Your intuition that groups like the ACM should offer assistance is sensible but completely overlooks many problems: geopolitics, different types of security, and individual capacities, among others. Each institution has its priorities; those are not always compatible, and it’s unclear who should have precedence. The ACM won’t help you if the argument is the kind of compromise with the devil that spy agencies often make or if problematic tools are used in efforts to dismantle large criminal groups.

hulitu
2 replies
6h9m

This is such an immense breach of trust

Why do you trust it? Do you think that others (Google, Microsoft, Apple) are not doing, or would not do, such a thing? SSL is only as secure as its certificates.

exitheone
1 replies
5h17m

Honestly, yes, I don't think Microsoft, Google, and Apple would do something like this.

ethbr1
0 replies
3h31m

Imho, the correct way to evaluate potential corporate trust is on self-interest.

In Microsoft, Google, and Apple's cases, they all have substantial enterprise business that would shit a brick if they were caught doing this.

Ergo, it's not in their best interest to do it.

Safer to rely on a company's desire to make money than any sense of "good".

redder23
0 replies
7h2m

Here is what is going to happen:

1. Nobody will care in 10 days.
2. They will get a slap on the wrist at best.

Reminds me of Google driving around in StreetView cars, hacking and capturing all wifi traffic they could get their hands on. Did anything happen? Of course not!

https://www.theguardian.com/technology/2010/may/15/google-ad... https://www.wired.com/2012/05/google-wifi-fcc-investigation/

The Guardian says "open" networks, but in 2010 many networks were not secured by default anyway. WEP was still a thing and easily cracked, and I would not be surprised if they were actually wardriving, on the largest scale ever.

kevin_thibedeau
0 replies
4h28m

It's a criminal CFAA violation.

justahuman74
0 replies
4h17m

I'm somewhat surprised it's taken this long to come out. It was something of an open secret, within at least the infra/release org back in the 2016 era, that Onavo was somehow spying on Snapchat traffic.

bcye
2 replies
3h30m

Can someone explain how exactly they were able to decrypt the SSL traffic? Is it possible to install a root CA without huge warnings from the OS?

_joel
1 replies
2h50m

By using MITM, basically "pretending" you're the site the victim wants to connect to while transparently connecting to the actual upstream site: decrypting the traffic locally for inspection before sending it back out. https://en.wikipedia.org/wiki/Man-in-the-middle_attack. You don't need a root CA, you just need to poison the DNS to point to the MITM server and present any old valid cert for the domain so it doesn't trigger a self-signed warning or whatever.

bcye
0 replies
1h39m

How can you take any old valid cert though? I presume they have some sort of private key you don't have access to and it would still trigger an expired cert warning?

liuliu
1 replies
10h37m

That's appalling, to say the least. But Snapchat has implemented certificate pinning since 2015. Does that mean either the analytics endpoint was not covered, or somehow the certificate pinning was circumvented in this case?

KomoD
0 replies
10h31m

analytics endpoint was not covered

This sounds most likely

tigrezno
16 replies
9h53m

Why do people pay for 3rd-party VPNs? It's far more secure to create your own WireGuard/OpenVPN/whatever with a cheap VPS.

byyll
6 replies
9h20m

My €5 VPN allows me to use 500 IPs in 50 countries. Give me the VPS that allows me to do that.

k8sToGo
5 replies
9h12m

Which service do you use? Mullvad?

defrost
3 replies
9h4m

That could be either Mullvad or ProtonVPN.

Both are Swiss zero-log. Mullvad has a flat 5 euro/month charge that has held since they started and will (they say) hold forever - you can send them cash in an envelope for the next twenty years with a generated account number and you're away.

ProtonVPN has plans - the two year streaming sign up is 4.99 euro/month.

xvector
1 replies
7h26m

Ah, good ol' trustworthy Swiss companies! Like Crypto AG! [1]

Realistically, all VPNs are compromised. But for most people's threat model, that's irrelevant anyways.

Proton for instance revealed the location of a climate activist leading to his arrest[2], with the inspiring message from the CEO that "privacy protections can be suspended", silently on a per-user basis at any time.

Haven't seen anything like that for Mullvad, but it's probably the same. At least the company takes crypto. But these things are always just surface level obscurity at best.

[1]: https://en.m.wikipedia.org/wiki/Crypto_AG

[2]: https://techcrunch.com/2021/09/06/protonmail-logged-ip-addre...

andsoitis
0 replies
4h33m

Proton for instance revealed the location of a climate activist leading to his arrest[2], with the inspiring message from the CEO that "privacy protections can be suspended", silently on a per-user basis at any time.

That person isn't just a climate activist, they (and others who used that email account) broke French laws. Swiss authorities compelled the disclosure.

byyll
0 replies
3h55m

Mullvad is not Swiss, it's Swedish.

byyll
0 replies
3h56m

I am not here to advertise, the point is the same for most commercial VPNs.

orthoxerox
2 replies
9h39m

Not everyone is savvy enough to do it, even though the process has been simplified, with many hosting providers offering preconfigured VPN servers.

And it doesn't anonymize you that well. When you post a message that draws the attention of law enforcement, the IP will lead them to a VPN provider that hopefully doesn't keep any logs.

But if it leads them to a specific server, the hosting provider will disclose your account and payment data, since it is linked to your private server. Unless they accept fully pseudonymous accounts and let you pay for your VPS in cash, Monero or tumbled Bitcoins, finding you is much easier now.

thegrim000
1 replies
9h31m

I find it so insane that people think the major VPN providers aren't all completely compromised one way or the other. As if you're really going to be able to just pass your traffic through such a business and they're going to actually keep no logs, not have secret deals with intelligence agencies, and not be unknowingly infiltrated/compromised by intelligence agencies. As if you can just push your traffic through a major VPN and intelligence agencies would just go "well shucks, oh man, they sure got us, we'll never know who it was, foiled again".

dewey
0 replies
9h22m

I find it so insane that people think the major VPN providers aren't all completely compromised one way or the other.

For 99.9% of people a VPN is just something they use to access something in another country or because some YouTube ad scared them into believing you need a VPN as soon as you step into a coffee shop.

The threat model of most people does not include state actors or intelligence actors and they just don’t care.

maple3142
0 replies
9h14m

It really depends on what you are trying to do. It is not easy (or just impossible) to get the same number of IPs as those $5/mo or $10/mo VPN services by renting your own VPS at the same price.

felsokning
0 replies
8h6m

Why do people pay for 3rd-party VPNs? It's far more secure to create your own WireGuard/OpenVPN/whatever with a cheap VPS.

Your comment seems to imply that you're unable to empathize with people who might think/understand differently than you. It also seems to ignore that you avail of other services/non-self-controlled processes without worrying about the threat models there.

Just hand-waving with a "Why don't people just do 'x'?" is ironic - in the sense of "Why don't you do your own medical care?" or "Why don't you grow your own food and slaughter your own animals?" or "Why don't you manufacture your own phone, its operating system - oh, and the cellular tower closest to you?".

Threat models exist _everywhere_, and it's impossible for someone to build all of the pieces themselves to prevent all threat models at every possible avenue/point.

In other words, past a certain point, doing _everything_ yourself is untenable, and that's precisely why services in society exist today (that and ease of access, use, required foreknowledge, and - most notably - cost).

almostnormal
0 replies
9h38m

Oftentimes, it's not about security but about circumventing censorship. A cheap VPS comes with a fixed IP located in one fixed part of the world. Many VPN providers allow switching.

account-5
0 replies
9h19m

You have to trust someone somewhere. You're simply placing your trust in the VPS provider instead of a third party VPN provider.

Not to mention the other stuff the VPN providers give you as standard which you'd have to implement and maintain yourself.

MissTake
0 replies
9h41m

Because most people are not techies.

Compared to the rest of the world, the number of people who even know what a VPS is is microscopically small.

And even those that do, the number of them with the time, desire, or skill, to do as you suggest, is even smaller.

I myself was into this sort of thing just 10 years ago. Now, as I start looking at hitting the big 6-0 in just a few years' time, I'm already working on divesting myself of all this complexity.

dddddaviddddd
10 replies
10h17m

If an individual had somehow done this, I expect that the Computer Fraud and Abuse Act would be used against them. With Meta, we'll see.

xvector
9 replies
7h44m

I heard about this a few years ago. The trial participants were informed, consented, and paid. If you consent to a root cert being installed and analytics being proxied, well, that's that.

itopaloglu83
7 replies
6h2m

Two issues. 1) Did Snapchat consent to this? And 2) did the users know what they were consenting to?

Saying we’re going to do “traffic monitoring” doesn’t carry the weight of “we are going to listen to your private conversations”.

UncleMeat
6 replies
5h5m

Why would Snapchat need to consent? It's my traffic.

I'd wager that most participants don't know the full details of the program, but "company pays you for your usage information" is a very old thing. You could (maybe you still can) get paid to install a box on your TV that recorded all of your viewing statistics to be used for market research.

To me, the biggest concern is that this is only really viable because Facebook had nontrivial market penetration with a more-or-less unrelated product to their main offering. This isn't something that Snapchat could have easily done to get market research on Facebook usage, for example. This feels (to me) more like an anticompetition concern than a privacy concern.

ddol
2 replies
3h13m

That box would have been a Nielsen box, which sat on your TV and was connected to your landline. It didn’t collect anything automatically: every time you turned the TV on you were contractually obligated to press a button every 20 minutes to have the box call Nielsen and log a datapoint.

Those boxes have been phased out in favour of “Personal People Meters”[0], which are basically a pager with a SIM card that you wear which has a microphone listening 24/7 for TV broadcasts. You must keep it on you, listening at all times.

Nielsen will pay you $250/year (less than a dollar a day) for the data you provide.

[0] https://en.wikipedia.org/wiki/Portable_People_Meter

_joel
1 replies
2h45m

Had them here in the UK, used to get a free TV license for the inconvenience. My mate always pressed the same button despite what channel we were watching though, so there is that...

Lammy
0 replies
1h29m

My mate always pressed the same button despite what channel we were watching though

“They like Itchy, they like Scratchy, one kid seems to love the Speedo man… what more do they want?"

figassis
1 replies
1h56m

They would because the communications involve 2 parties. Your consent to someone snooping on my calls with you should not be enough, because for example, you still need my consent to record calls I have with you.

Now, Meta decides to MITM the communications that I intentionally encrypted so that it can gain a competitive advantage… well, remember when Meta kicked out researchers who had obtained consent from users to perform research on its platform? That was not even illegal. This is.

ndriscoll
0 replies
15m

At least in the US, most states are single-party consent.

The whole thing's a mess, but it's funny to me that people would get indignant over a user letting another party intercept analytics data. "Hey, that's my data from spyware! Get your own!" As if their "consent" to collect the data in the first place were any less flimsy than Facebook's.

itopaloglu83
0 replies
3h26m

Here’s how I see it. This is akin to opening your USPS mail and reading your correspondence with a friend, when instead they could’ve just checked who the mail was addressed to.

If Facebook wanted to learn the protocol Snapchat uses, they only needed a single test device. If they only needed to learn usage patterns, they could’ve checked where the traffic is sent, or app usage time, etc.

Installing a root certificate is very intrusive, and their behavior shows that if they are ever given the opportunity to become a root certificate authority, they are likely to issue malicious certificates. As far as I know, no website can pin its certificates, so this takes us back to pre-HTTPS days where ISPs and network operators had a lot of fun reading user traffic.

bcye
0 replies
3h38m

Afaik only in some instances; in some they were not paid, and informed consent is in all cases quite questionable.

edit: I think this is something I wouldn't call informed consent: "Of particular concern was that users as young as 13 were allowed to participate in the program. Connecticut Senator Richard Blumenthal criticized Facebook Research, stating "wiretapping teens is not research, and it should never be permissible. This is yet another astonishing example of Facebook’s complete disregard for data privacy and eagerness to engage in anti-competitive behavior.""[1]

1: https://en.m.wikipedia.org/wiki/Onavo

vincnetas
9 replies
10h12m

So how can we be sure now that today's VPNs are not tomorrow's Onavos? :(

baby
4 replies
10h6m

First, all VPNs spy on you; don't believe their claims otherwise, because they are forced by law to do it. Second, don't use a VPN that clearly states that they're analyzing your traffic data.

byyll
2 replies
9h19m

forced by law

Which law?

xvector
1 replies
7h34m

National Security Letters.

byyll
0 replies
3h57m

Not a law.

k8sToGo
0 replies
9h9m

And your ISP will not spy on you?

mgiampapa
2 replies
10h9m

Certificate pinning and validation in apps, for one. Onavo's VPN was really clear that it collected market research data. It was as informed a consent as a click-through could be.

forgotusername6
1 replies
8h12m

Interception of encrypted communications is beyond the expectation of what most people would consider "collecting market research data"

hiatus
0 replies
3h39m

I would expect the exact nature of the collection to be spelled out in some TOS that users probably clicked through.

cpach
0 replies
4h49m

Don’t install additional root certificates.

That’s what Facebook enticed users to do here. Without that root cert they wouldn’t have been able to see as much as they did.

keikobadthebad
7 replies
10h37m

Facebook is not removable from many Android devices... does this mean Zuckerberg has been seeing all user traffic for years regardless of TLS?

1oooqooq
4 replies
10h14m

Yes and No.

For TLS traffic you need to also install Onavo.

But the app does scan your contact list every couple of minutes and send diffs to their servers, even if you have never opened the app. And on previous Android versions, your recently opened apps list too.

But again, if you install WhatsApp you must give them the contact list permission anyway, otherwise the app is intentionally broken and annoying.

14
2 replies
3h24m

I really think you are a fool if you install WhatsApp. I do think you are of higher intelligence than normal if you install Signal. When I hear friends talk about WhatsApp I cringe. The few who have Signal I regard highly.

eru
1 replies
3h4m

Real life is full of compromises. If your grandma is on WhatsApp, and you want to talk to her, it might be a good idea to install WhatsApp.

(However, if you have time on your hands and principles, you can use WhatsApp on a burner phone, I guess?)

14
0 replies
3h2m

Or educate grandma on why she should use Signal and that fools use WhatsApp, since Meta is balls deep inside the app and watching what you do.

felsokning
0 replies
8h34m

For TLS traffic you need to also install Onavo.

I'd be interested to know if it shipped as part of the Facebook SDK, as well.

eru
0 replies
10h25m

Only when they used Onavo, it seems?

https://en.wikipedia.org/wiki/Onavo is slightly more readable than the legal document submitted as the link.

douglasmoore
0 replies
10h23m

I might be wrong, but I think you need the Onavo VPN installed.

Then your YouTube and Snapchat analytics would get man-in-the-middled.

cabirum
7 replies
10h13m

What do you think Cloudflare is doing with its SSL termination/offloading?

supriyo-biswas
3 replies
9h1m

Why single out Cloudflare? They are not the only CDN or PaaS with SSL fronting.

isodev
1 replies
7h55m

I honestly can't think of one without googling. Cloudflare is kind of everywhere. Just like Google... can't really get rid of them even if you want to.

akerl_
0 replies
6h40m

You can’t think of anybody else in the CDN or DoS mitigation business other than cloudflare?

thistletame
0 replies
6h52m

They explained pretty clearly why they think that's the case. You're both right though. It's likely not the case that Cloudflare is the only company doing these types of things and cooperating with government agencies to do them. In my opinion it would be very silly to assume that.

viraptor
0 replies
6h8m

That's different. I have a lot of problems with CF, but when you sign up for a service which requires seeing the traffic and you configure it explicitly to see your traffic... what's the complaint here?

ben_w
0 replies
9h25m

Given Snowden, I have to assume Cloudflare is under the thumb of at least the NSA.

For example, all the usual arguments against backdoors are going to be used by intelligence agencies to justify "providing assistance", which isn't even merely a euphemistic excuse given how incredibly valuable it would be for normal organised crime to spy on some of the encrypted data… but it is also at least a bit of a euphemism, as I have to assume the controversies about terrorist groups using Cloudflare are only permitted to happen because someone in US intelligence knows how to squeeze secrets from those groups.

In theory, messing with SSL is one of Cloudflare's features, not a secret; in practice I suspect most end users treat all this as magic: I've directly witnessed magical thinking with the padlock icon in browsers.

LoganDark
0 replies
9h28m

The difference is that people* know and accept that CloudFlare does this. They advertise it as a feature.

*most willing customers of CloudFlare.

baby
6 replies
10h2m

There's a lot of confusion around these stories these days, which reminds me of the "Gmail is looking at your emails" stories[1].

First, this is not wiretapping, come on. There are targeted man-in-the-middle (MITM) attacks, and then there's this. This is plainly "we are using advanced powers to analyze your traffic".

This is not even Superfish[2] type of stuff, where Lenovo had preinstalled root certs onto laptops to display ads. This is "if you opt in we will analyze your data".

Every program you install on your laptop can basically do WHATEVER it wants. This is how viruses work. When you install a program, you agree to give it ALL power. This is true on computers generally, and this is true on phones when you side-load programs. The key is that when we install something we understand the type of program we're installing, and we trust that the program doesn't do more than what it _claims to be doing_.

So the question here is not "how does Onavo manage to analyze traffic that's encrypted", it's "does Onavo abuse the trust and the contract it has with its users?"

[1]: https://variety.com/2017/digital/news/google-gmail-ads-email...

[2]: https://www.virusbulletin.com/blog/2015/02/lenovo-laptops-pr...

skywhopper
3 replies
8h51m

So, your argument is that MITM/wiretapping is okay if you do it at a large enough scale?

xvector
2 replies
7h33m

If someone consents to your clear request to read their data in the plain, then it's not evil. Still not my cup of tea, but if you clearly explain and obtain consent, it's shady but fine.

cycomanic
1 replies
6h20m

So how is that relevant in the context here? FB did not clearly request to be able to read all traffic (encrypted and unencrypted), so how could they get consent? Unless you're arguing that "we will monitor your Internet usage" clearly means "we will man-in-the-middle all your connections", which would be a weird take.

hiatus
0 replies
3h41m

FB did not clearly request to be able to read all traffic (encrypted and unencrypted), so how could they get consent?

I can't find the consent page/legalese shown to users, do you have a link?

jasonvorhe
1 replies
9h29m

That might have been true in the past, but nowadays at least macOS/Android/iOS can enforce several restrictions on the apps you install, like prevent them from changing OS settings/files, limit access to only specified/opt-in directories, limit the amount of background activity, etc.

I don't know about Windows or Linux though.

felixg3
0 replies
8h29m

Windows applications can easily install TLS root certificates, which essentially all "anti-virus" tools (i.e. snake oil) do. On Linux, it's obvious: if you're installing something as root, you can add certificates. In that context, Apple is doing something right and makes it rather tedious to install root certs.

1vuio0pswjnm7
4 replies
9h25m

Direct link to PDF:

https://s3.documentcloud.org/documents/24520332/merged-fb.pd...

Here is Meta's response:

https://ia802908.us.archive.org/29/items/gov.uscourts.cand.3...

Meta denies that they violated the Wiretap Act but offers no evidence of consent. (They try, but it is a laughable attempt.) Meta is also arguing the documents are not relevant. Meta claims the VPN app intercepting communications with other companies that sell online ad services, e.g., Snap, was not anti-competitive. It was just "market research".

Why is Meta so afraid to produce documents about "market research"?

Meta does _not_ deny that they intercepted communications. From the attention this is getting on HN, MalwareBytes, etc. it seems clear no one using the VPN app would have expected Meta was conducting this interception. It is difficult to imagine how anyone could have consented to interception they would never have expected.

Additional details:

https://ia802908.us.archive.org/29/items/gov.uscourts.cand.3...

Apparently Facebook was using a "really old" version of squid.

mctt
2 replies
8h30m

Here is a quote from Facebook/Meta's legal counsel to the judge. In this document, "Advertisers" refers to Snapchat, YouTube, and Amazon.

"... the Wiretap Act provides that an interception is not unlawful if a party to the communication “has given prior consent to such interception.” 18 U.S.C. § 2511(2)(d). Advertisers conspicuously fail to mention—and apparently do not contest—that Meta obtained participants’ prior consent to participate in the Facebook Research App, and with good reason: Participants affirmatively consented to “Facebook … collecting data about [their] Internet browsing activity and app usage” to enable Facebook to “understand how [they] browse the Internet, how [they] use the features in the apps [they’ve] installed, and how people interact with the content [they] send and receive."

So users consented?

DannyBee
1 replies
7h52m

Lawyer here.

No.

They have ...'d out an important part of 2511(2)(d).

(and they probably meant (c))

First, it starts out with: "It shall not be unlawful under this chapter for a person not acting under color of law "

This basically means a state/federal official or someone acting in their capacity as one (the color of law part basically means it applies even when they act beyond their legal authority by accident)

Which they aren't. So this doesn't apply at all. (d) has an additional requirement they ...'d out at the end, but (c) does not.

So it's both a wrong cite and a dumb one.

Second, you'll note "competitive research" or anything similar is not one of the allowed usages of collecting data that Facebook got.

Third, the return argument will also be "the how matters", and users did not consent to this how, and would not have.

If I give consent to participate in collection of my internet data, it doesn't give you authorization to like, have someone live in my house and follow me around 24/7 so they can see what i do on the internet.

valicord
0 replies
3h48m

person *not* acting under color of law

Did you miss the "not" part?

skywhopper
0 replies
8h53m

I mean, sure, you could also do “market research” by breaking into people’s homes, reading their mail, and listening in on all their phone calls. I hope some actual criminal prosecution results from this disclosure, as it’s very clearly “hacking” and “wiretapping” and “unauthorized access”.

neglesaks
2 replies
6h14m

"Meta" is The Evil Online Empire at this point, it's company history is a litany is decidedly immoral if not outright evil actions.

hereme888
0 replies
2h39m

That article does not back up the claim that Meta is a state-actor.

I hate FB, but all big platforms these days will cooperate with federal agencies in cases like the one described. Doesn't make them "state actors".

imglorp
2 replies
10h29m

So, the FANGs can conduct mass psyops warfare against the populace basically with impunity -- a pesky little suit now and then is inconsequential.

But what will happen when they get caught stealing each other's surveillance booty?

motoboi
1 replies
5h36m

Bear in mind that they didn't apply this to everyone, which would be practically impossible.

They hired Snapchat users (via a testing services provider) to let Meta observe their usage of Snapchat.

Something akin to paying someone to let a meta researcher sit by your side and observe while you use the app.

This happens all the time (hiring the testing services to recruit users to use your own app and analyze the patterns with screen recordings and such).

The news here is paying for someone to “test” a competitors’ app.

I hope that the testers knew their Snapchat usage was being analyzed, and not that they were told they were only testing Onavo.

vitus
0 replies
3h21m

They hired Snapchat users (via a testing services provider) to let Meta observe their usage of Snapchat.

Something akin to paying someone to let a meta researcher sit by your side and observe while you use the app.

Onavo Extend and Onavo Protect positioned themselves as providing consumer-oriented benefits (bandwidth reduction and security, respectively).

The news here is paying for someone to “test” a competitors’ app.

Facebook acquired Onavo in 2013, so this was 100% a first-party effort to turn their first-party products into spyware.

asimpleusecase
2 replies
10h12m

Can we please see prison time for this? DCMA should apply, and it has criminal penalties including prison.

r0ks0n
0 replies
4h54m

IT'S DMCA NOT DCMA

Simon_ORourke
0 replies
9h19m

"May I direct your honor that my client is a wealthy tech billionaire who would otherwise be at risk of being slightly annoyed if they were sent to jail for intercepting private communications of competitors..."

shnkr
1 replies
3h28m

Whatever may be the end goal, MITM is called an 'attack', not 'research'.

I'd not last a single day at such a company that would ask me to do such things. I worked in IT for a national political party and left the job once I found out about its corrupt practices and scams.

If we, as engineers, collectively upheld ethics as part of our work culture, Meta wouldn't have attempted it.

fagrobot
0 replies
2h3m

Lame

quitit
0 replies
4h7m

Yes, it's old news (1), but it has come up again in numerous HN and Reddit posts for a few reasons (if you flick through HN you'll see various versions of this story holding lower ranks).

Also noteworthy is that Google was also doing something similar at the time; both were side-stepping Apple's privacy protections in iOS by using enterprise certificates that allowed the side-loading of apps without Apple's oversight. In response, Apple more thoroughly restricted how these certificates can be used.

Interestingly I've noticed in the DMA threads people suggesting that a company exploiting side-loading to dodge Apple's privacy protections was nothing more than fear mongering. As if this is a red line developers won't cross.

To me, it's wild to think that people on HN don't know about this relatively recent history and are so naive as to think that these protections were just pulled out of the air to frustrate developers, and not a reaction to an ongoing arms race against consumers' right to privacy.

(1) https://www.extremetech.com/internet/284770-apple-kills-face...

typeofhuman
0 replies
6h2m

The engineers should be criminally charged.

ramshanker
0 replies
7h14m

This seems to be a valid reason to implement certificate pinning in the application's network layer. At least then 3rd-party VPN providers don't get to intercept without defeating the pin.
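
For illustration, here is a hedged Python sketch of one common way to pin at the application layer: compare the server's SPKI (public key) hash against a value compiled into the app. The pin constant below is a placeholder, not a real hash. A device-trusted rogue CA (Onavo-style) can't satisfy this check, although a kit with enough control of the device could in principle patch the check itself.

    import hashlib
    import socket
    import ssl
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    # Placeholder pin; a real app ships the SHA-256 of the server's SubjectPublicKeyInfo.
    PINNED_SPKI_SHA256 = "0" * 64

    def connect_with_pin(host: str, port: int = 443) -> ssl.SSLSocket:
        """Open a TLS connection and refuse it unless the peer key matches the pin."""
        ctx = ssl.create_default_context()            # normal chain validation first
        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        der = sock.getpeercert(binary_form=True)      # leaf certificate, DER-encoded
        spki = x509.load_der_x509_certificate(der).public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo)
        if hashlib.sha256(spki).hexdigest() != PINNED_SPKI_SHA256:
            sock.close()
            raise ssl.SSLError(f"certificate pin mismatch for {host}")
        return sock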

chiefalchemist
0 replies
6h47m

Did they bury the lede? Sure, this is a blow against "competitors", but that is ultimately a competition for the collection of data, user data. In doing this FB has expanded its ability to hoover up more data at the individual user level, correct?

Yeah, crap move but my concern isn't those other scoundrels, it's me / us.

bobcostas55
0 replies
7h48m

Seems like a straight-forward CFAA violation, no?

bawolff
0 replies
3h19m

Was this before Google started certificate pinning their apps, or did they get around that somehow?

agaull100
0 replies
6h6m

Nice diversion in comments away from Meta...

adtac
0 replies
10h30m

Lol, the irony of publicly announcing the addition of end-to-end encryption in one app (WhatsApp) while secretly breaking TLS in another, all in the same year #Tethics

KaiserPro
0 replies
10h12m

So was the plan to just yolo this out into the wild?

Because the document says here that it was going to be given to trial participants as part of a YouGov (and others) survey, which implies that they would have been informed/paid.

If it's the former, then obviously that's unauthorised wiretapping. If it's the latter, so long as informed consent is given, that's a shittonne better than the advertising tech we have now.