Companies like Google have had access to our full email, search, location, photo-roll, video-viewing, and docs history for 10+ years. I don't think also having our LLM prompts fundamentally changes this picture.
I guess this is what the shifting-baseline argument refers to.
Having said that, I think it's rational for almost all people not to mind giving up privacy for all these (addictively) amazing tools. Most people don't do anything that needs privacy protection. Having worked in data/engineering at big tech, I can say it's not like there's a human on the other side reviewing what each user is doing. For almost all people, the data will just be used for boring purposes: building models for better marketing/ads/recommendations. A lot of the models aren't even personalised; the user is just represented by some not-human-readable feature vector that, again, nobody looks at.
Hell, I have multiple Google Home devices that are always on and listening, and the thing's internal model is so basic and not-personalized that after multiple years it still has trouble parsing me when I say "Play jazz" and "Stop", even though these are the 2 commands I exclusively use. Sometimes it starts playing acid rock, and when I say "Stop" it starts reading me stock quotes.
You choose to have Google spying devices at home. You choose to have Gmail. You may have no choice for Internet search for quite some time, but you certainly choose your browser.
Start there.
Disagree with this. Self-hosting email is notoriously difficult. Gotta give the data to somebody. Plus, your work email is going through MSFT or GOOG 99% of the time.
Yet people do it anyway. It's not the impossible task you're making it out to be.
The act of hosting postfix/dovecot is not in itself difficult. Debugging deliverability issues is, though. And it's time-consuming.
And doesn't change anything I said at all.
It kinda does, though. It's why I stopped self-hosting. I bet if you emailed my gmail address it wouldn't come through, even through no fault of your own.
You can have DMARC, SPF and DKIM all correctly configured on a clean IP and some mail server at Microsoft will still drop your mails because it's having a hard day and it feels like it.
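For anyone unfamiliar, "all correctly configured" here boils down to roughly three DNS TXT records like the following sketch (example.com, the selector name, the IP, and the key are all placeholders):

```
example.com.                      TXT "v=spf1 mx ip4:203.0.113.10 -all"
selector1._domainkey.example.com. TXT "v=DKIM1; k=rsa; p=<base64 public key>"
_dmarc.example.com.               TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

You can sanity-check each one with `dig TXT _dmarc.example.com` etc. — and, as said above, mail can still get silently dropped with all three in place.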
You can also route outbound email through an existing place like smtp2go.com, which stops all of that outbound hassle. :)
That place in particular (which I use and can recommend) even have a (permanent?) free tier. ;)
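For anyone curious what routing through a relay actually involves, here's a minimal Postfix sketch. The hostname and port are what I recall smtp2go documenting, so verify against their docs; the credentials file path is just the usual convention:

```
# /etc/postfix/main.cf -- relay all outbound mail through the provider
relayhost = [mail.smtp2go.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

/etc/postfix/sasl_passwd then holds a single line like `[mail.smtp2go.com]:587 user:password`; run `postmap /etc/postfix/sasl_passwd` and reload Postfix.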
This makes it something that the average person cannot do; you're suggesting something that already requires more time and resources than 75% of the population has access to.
If you're serious about this, then go talk to a non-tech person and tell them to self-host email and see how they do. Look at their challenges, build a solution, and then offer it.
Please don't try moving the goalposts by introducing "non-tech" people into this.
That's not at all what the conversation is about, and routing through an external place removes a whole bunch of hassle compared to setting up and maintaining outbound email.
You might not like it for some reason, but that's on you.
If you need a third party to deliver your mail on your behalf, are you really self hosting?
What's the point? At that stage you've already conceded the deliverability problem so now you're just wasting time administrating dovecot and keeping up with security patches.
The greatest trick the centralised email services ever pulled is convincing people that a faulty spam filter is the sender's problem.
What about the numerous other email providers?
The numerous other email providers are... numerous. Every discussion like this ignores, to an absurd extent, how hard it is for non-tech people to gather information on these topics and make an informed choice: which email providers care about which aspects of privacy, which aspects of privacy and information security even exist, which email providers even exist, what they are doing with your data, what parts of what they are doing is a problem...
You can't even ask tech people to make a choice for you because they all say different things.
Other domains like cars, medicine, construction, whatever have established standards because they have recognized that individuals simply _cannot_ make an informed choice, even if they want to. I'm tempted to say that only information technology likes to call the user "unwilling" and "lazy" instead, but actually individuals from other domains do that too. Luckily, the established standards are mandatory, so their opinion doesn't count.
Rounding error that doesn't matter, because the recipients of any e-mail sent from those providers are likely on mailboxes backed by Google or MSFT anyway.
That rationale sounds great (albeit dismissive/invalidating) until something you've done (and have provided ample digital evidence of) becomes illegal or is otherwise used against you.
Oh actually, what's your email password? I mean, since you're not doing anything worth keeping private, right?
You think there isn't a human reviewing the data of what each user is doing, but there absolutely could be, and there's no reason there can't be, like when Tesla employees were viewing and exfiltrating footage/imagery from customers' vehicles. Not just one or two people but apparently disparate _groups_ of employees. https://www.reuters.com/technology/tesla-workers-shared-sens...
I'm not sure there is a solution to this problem, unless we accept losing a lot of features in our products and switch to E2E services.
The only alternative I can think of is some required audit of the measures in place to prevent employees from accessing data, but I'm not sure how effective that would be.
I'm not so sure.
Based on the figures I could find for 2021, Google's ad revenue was about $60B with approx 3B users (I asked various chat bots...).
So, if you extrapolate and do some slightly dodgy maths (there were other factors, but I can't be arsed typing them), it would cost about $6-$7 per month per user for Google to stop all ads, and by extension tracking and data mining, while maintaining its current figures.
Take these figures with a big pinch of salt though...
That $ figure is across the whole of Google. So that is for every single product they have but if you just use Gmail then it might only be $2 a month.
So, if Google wanted to make the same money without tracking, ads etc, they could, but the temptation to sell your data and mine it would be strong!
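For transparency, the naive version of that arithmetic (ignoring the unstated "other factors") is a one-liner:

```python
# Rough 2021 figures quoted above; both are approximate.
ad_revenue_per_year = 60e9  # ~$60B
users = 3e9                 # ~3B

per_user_per_month = ad_revenue_per_year / users / 12
print(f"${per_user_per_month:.2f} per user per month")  # ~$1.67
```

The gap between ~$1.67 and the $6-$7 above presumably reflects those other factors, e.g. margins and the reality that only a fraction of users would actually pay.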
Anyway, my point is that it's possible to do it but would people pay for it? I pay for my email with Fastmail so there is a single data point for you :)
Likewise. I pay for Fastmail, I contribute to Signal. Hell, I'm even paying for Kagi even tho' its results are fairly useless, in the hope that paying for privacy will lead to a better service over time.
n+2
I pay for all apps and services I regularly use as a matter of principle, even if they have a fine free version. It's not even that much, a couple dozen bucks a month, so here's a second data point.
Good point about future exposure.
A counter-argument: people already do all sorts of illegal (misdemeanor?) things every day, tracked by big tech, and nothing happens. Some examples:
• speeding: your smart phone knows what road you're on, what the speed limit is, and that you're over it
• movie piracy: Chrome knows, your OS knows, your ISP knows, your VPN provider knows, and any device that is listening can tell you're watching a movie that you shouldn't be able to watch at a non-cinema GPS location
• ...
20 years ago, legal firms sending out threatening letters to people they could identify on torrent trackers was commonplace.
It's still quite commonplace in Germany in 2024. Typically, they claim around 1000€ in said letters, and refusing will have the case go to court, which usually rules in favor of said legal firms.
Have there never been cases where law enforcement requested that sort of information and used it? I don’t think we know.
I worked at Tinder and we had full access to all the messages in plaintext, and ability to look up users by phone number, email, etc.
Yep. Even NSA agents spy on their exes. So of course Microsoft employees will be spying on people.
It’s not about privacy, it’s about having control of your own destiny. Not having some shitty OS maker think they are God in a 1984 world.
This is the warped kind of stuff which happens to companies who join the NSA Prism program… give it a few years and all they care about is power and money and playing spy.
There’s probably an NSA agent rubbing the higher ups at these companies off and telling them they are God as they finish. Or taking them on tours of the office and showing their cool exploding pens and other James Bond tech. Whatever it is, Microsoft is more than just invested.
Recall? Give me a break. That is the most in your face global surveillance tech I have heard of to date.
It's a common hole in this debate to assume that privacy = hiding something bad/illegal, whether now or something deemed illegal in the future. While this can be true, it's only one aspect. Take benign examples where privacy would be useful that are unrelated to that.
Eg: was reading recently in WSJ how various insurers are using satellite and drone imagery of customers' roof conditions and using it to deny them coverage. However, this has been abused: even brand-new roofs have been marked as bad despite evidence provided and push-back. The insurers were collecting imagery and making decisions but not providing evidence on their end. According to someone working for an insurer, they expect to soon take images daily for such purposes.
Here the various wrongly affected parties have done nothing illegal/bad/wrong, but they're losing control and insight into processes that affect them in real ways. These aspects are part of what Daniel Solove outlined in his privacy taxonomy, where he broke privacy down into the different things that comprise it (information collection is distinguished from processing, etc.).
> much of the data gathered in computer databases is not particularly sensitive [...] Frequently, though not always, people’s activities would not be inhibited if others knew this information. I suggested a different metaphor to capture the problems – Franz Kafka’s The Trial, which depicts a bureaucracy with inscrutable purposes that uses people’s information to make important decisions about them, yet denies the people the ability to participate in how their information is used.
I think it’s interesting that you call out Google. It’s not that I disagree, but from the European enterprise perspective you can say that Microsoft has access to virtually everything. Banks, Healthcare, Defense, Public Services and so on, everyone is using the Office365 product line and almost everything is stored in Azure.
I don’t begrudge Microsoft, I think they are a fantastic IT-business partner from an enterprise perspective. They are one of the few tech companies in the world that actually understands how an Enterprise Organisation wants to buy IT and support, and they’ve only ever gotten better at it. As an example, when I worked for a Danish city, someone from Seattle would call us with updates on major incidents hourly. Which is something you can translate to a CTO being capable of telling their organisation that Microsoft is calling them with updates on why “e-mail” isn’t working. So I actually think Microsoft is great from that side of things.
I don’t think we should’ve put all our data into their care. We live in a post-Snowden world, and even here in Denmark we recently had a scandal where it was revealed that our government lets the NSA spy on all internet traffic leaving the country. I get that’s the way it is when you’re a pseudo-vassal state; we’ve always had government secrecy regarding the US and Greenland. It also makes me wonder how secret anything we’ve let Microsoft have access to really is.
This. The next time there's a real disagreement in trade policies, Europe is going to be fucked. Microsoft has access to literally everything, and no one even seems to understand that, because no one understands what "cloud" or even just "online vs. offline" means nowadays. It's a bit scary.
This is another big issue, but the EU does know and care about it. My current employer falls under the critical infrastructure category (we’re finance/energy) and that means we’re required to have contingency plans for how to exit Microsoft in a month. Not just theoretical plans, but actual hands on plans that are to some degree tested once in a while.
The issue is how impossible it is to exit Microsoft, and this is where I’m completely onboard with your scary part. We can exit Azure painlessly from the digitalisation perspective, well not financially painless but still. IT-operations will have fun replacing AD/EntraId though, but all our internal software can be moved to a Kubernetes cluster and be ready to accept external authorisation from Keycloak or whatever they have planned to move to.
But where is the alternative to Office365? Anyone on HN could probably mention a bunch, but where is the alternative for people who don’t really “use” computers as such? The employee who basically think a pc “is” Office365. As in we could probably switch their Windows to Linux and they might not notice if they still had Office365.
This is where the EU currently doesn’t really have an answer. We have a strategy to exit Office365, but I’m honestly not sure our business would survive it.
If those plans exist and there is even a tiny chance you can pull that off, I'm impressed. In most organizations it would be an almost impossible challenge to even upgrade all their servers to a new OS in a month. I don't think I've ever seen an organization of more than 100 employees that could reasonably migrate their cloud provider, identity source, and operating system in a month. Endpoint operating system upgrades often take a year (or more).
Most organizations do not spend any time even thinking about that, nor considering it in their decision processes, nor preparing for it. An organisation that does will have an IT architecture that limits exposure in the first place. For example, they might choose not to have any Windows servers at all. They might have a thin-client or web-oriented workflow for endpoint applications, which makes switching out Windows on employee machines easier. They might already have multiple OSes in use, to check that critical systems can be accessed without Windows. That said, it is of course a big endeavour.
This is a big deal in cybersecurity education. I'm in the UK doing it. We've a dilemma that industry is desperate for fresh new cybersecurity recruits to fill an enormous skills gap. In the UK, Microsoft is a "preferred supplier" for lots of organisations, even defence stuff, and to get our students past the gatekeepers they pretty much need "365". Regardless of whether they can recompile a Linux kernel and do protocol analysis with Wireshark... no 365, no job. Not even tier-1 support.
By contrast my last cohort of masters students worked on things like critical infrastructure, national security, long-term resilience, hybrid interoperability... everything that Microsoft is not and makes worse.
So there's a schism between academic understanding and industrial reality that makes cybersecurity really rather hard to fix.
So I have to walk into a classroom and say:
And I hope they took enough from Ross Anderson's SecEng book, and from the BSD/Linux classes and the other lectures, to go out there and start undoing the harm.

https://blog.documentfoundation.org/blog/2024/04/04/german-s...
This is exactly it. Execs want to sound in charge of situations, even if it's just a person who can be shouted at. Microsoft can employ very expensive, individualised call centre staff in expensive suits to read out to you a service status page.
I agree, but I also think it’s bigger than the ego of C-types. The fact that Microsoft calls you with updates also has a near-magical impact on organisation culture in general. It’s the “oh ok” gestalt that every employee feels, the thing that makes them resign themselves to waiting instead of being angry, and whatnot.
Sure there is ego, but a lot of C types are frankly good enough to work beyond that part of the equation.
I wasn't necessarily talking about ego, but more about how other people in the C-suite will react differently knowing that someone's calling with updates regularly.
Is that what is being discussed? The biggest issue with Microsoft's new AI announcement was that their system was going to take screenshots of your computer every second and process them with AI. That means they could have way, way more data about you than LLM prompts.
https://arstechnica.com/ai/2024/06/windows-recall-demands-an...
No, this is not what the article nor this discussion is about. Windows Recall is completely local, and hence is actually in line with the article's argument that stuff was local way before it moved to the cloud. The screenshots Recall takes pose a whole different security and privacy problem: someone (a bad actor, your employer, your partner) may access them and glean a lot of information about you they shouldn't get. Recall is not about "Microsoft having way, way more data about you". Other MS products, sure; Recall isn't the point here.
I'm pretty confident if the NSA or whatever asks MS for those screenshots, they've got a way of making them non-local. The EU is already pushing for mandatory local scanning for CSAM, do you think they wouldn't also extend this to Windows Recall snapshots once the technology is there?
I don't take Microsoft devices seriously, esp. PCs. Except for the Xbox, they never last or gain widespread adoption.
Microsoft Surface
I've never seen one irl.
I’ve seen exactly one. Got one at a dev agency I worked at as a Windows test device.
This was like a 3rd or 4th generation one, I think. I was really excited to finally get my hands on one because they were supposed to be really good.
TL;DR it was mediocre-leaning-bad judged as a laptop, and a terrible tablet. I can’t figure out how they got anything but bad press for the things.
> Windows Recall is completely local
Until it isn't, due to future changes or malicious 3rd party managing to make use of bad security decisions & bugs.
Yes, that's what this article is about. It doesn't mention user device surveillance.
Which is why I don't have a Google account.
Their search is OK, but their other services aren't that great.
What do you use for email?
I can recommend Runbox. It's a paid service, but I really think that's for the better.
I have been very satisfied with Fastmail.
GMX
Self host
Google it.
ISP's IMAP server and Thunderbird.
With LLMs looking at all this data, if you want to persecute or narrowly propagandize those who are X (X = pro-Israel, anti-Israel, pro-Trump, anti-Trump, etc.), it can be done much better than before. The "humans on the other side" will be using all this data to narrowly find people.
Narrowly?
Half of people are pro-Trump, half of people are pro-Israel/Palestine.
Stats are really misleading when boiled down to binary decisions.
While polls generally show that roughly half of US voters plan to vote for Trump, that's in the context of only being given the option of Trump or Biden. Most polls I remember seeing since 2016 show roughly 1/3 of the US really consider themselves Trump supporters.
The Israel/Palestine question has similar problems. A binary poll question sets the context that a respondent needs to be on one side or the other, and that supporting or opposing both sides isn't an option. It also puts respondents in a position to have to pick a side regardless of how much or little they may know about the situation. With no more context, a 50:50 split could mean simply that most people don't know enough to decide and randomly pick a side instead.
Nobody randomly picks a side. People who can’t make well-informed decisions simply follow the lean of whatever biases they have. Slightly hawkish or conservative? Pro-Trump. Bleeding heart? Pro-Palestine.
Yeah but you don't need to target them all. Most people are pretty much useless, politically speaking, they just do what they're told by someone else. If you can identify who is doing the telling and target them specifically, large groups of people will otherwise be docile.
Historic attempts to apply that theory have been broad-brush to say the least [0]. With LLMs and access to enough data the authoritarians can get really fine-grained about when they take people out the next time they seize enough power. Anyone attempting to do something politically uncomfortable for the incumbents will be at serious risk in a fine-grained way that has not previously been possible.
I don't think it is 50-50, more like 20-30% for Trump and I don't have a read on the Israel/Palestine stats. Trump has a dedicated core of supporters but I'd suggest a lot of the people polling for him just don't see a better option.
[0] Eg, I was reading up on https://en.wikipedia.org/wiki/Intelligenzaktion the other day
It’s not that simple - each group has a bunch of sub-groups which respond to specific propaganda tactics/buttons.
Same with abortion/anti-abortion, guns/anti-gun, and any of thousands of other topics.
Yes, for a search to be considered narrow, the resulting group should be small and specifically defined by precise criteria, not encompassing a significant portion of the population. *Notice that once criteria are stacked, groups can get small.* E.g. which young males on your street (or in your building) who like Trump but not Israel were protesting at city hall today? Big datasets let authorities/advertisers/... answer questions like that. The answer will often be a tiny, "narrow" fraction of the population (of your city/state/country/...).
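A toy sketch of why stacking matters, assuming each criterion filters independently (all fractions are illustrative guesses, not real statistics):

```python
# Each criterion keeps only a fraction of the remaining population.
population = 1_000_000  # a mid-sized city

criteria = {
    "young male": 0.15,
    "lives on one given street": 0.001,
    "likes Trump": 0.30,
    "dislikes Israel": 0.20,
    "was at city hall today": 0.01,
}

expected_matches = population
for name, fraction in criteria.items():
    expected_matches *= fraction

print(f"expected matches: {expected_matches:.2f}")  # ~0.09
```

Five broad-ish filters already single out, in expectation, less than one person in a million — i.e. the query is effectively unique.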
Normalization of abusive behavior is not OK! If one company abuses you for 10+ years, then it is not OK for other companies to abuse you!
If big data were not lucrative, they could not sell your data. If your data was not valuable, Facebook and Google would immediately remove it from their servers without hesitation.
Mantra.
- I don't care about privacy
- I don't care big tech has access to all my data
- I don't care that Google has access all my politicians data
- I don't care I am building a worse future for society
- I don't care that I am being recorded while having sex in a Tesla
- I don't care I am building surveillance state
- I don't care that my data are being sold to China, to India, to wherever the highest bidder lives
- I don't care how my data are being used. I don't care if my data are being used to train military robot dogs that will be used for wars
- I don't care that I will not receive insurance because my medical data are sold wherever
- I don't care about privacy
- I don't care about privacy
- I don't care about privacy
Right but I think his point was that we accepted this decades ago, and this latest case hasn't changed anything at all. In other words the baseline hasn't shifted.
No, you are not correct. The amount of surveillance IS rising. At first it was search data, then emails, maps, social interactions, car sensors, smart speakers.
Now literally everything you do is captured, repackaged, and sold. Corporations are also getting more creative about selling your data. They have more data, so they sell more.
Every drama is adding more to the privacy nightmare.
Sorry, but you cannot say that "nope, nothing has changed for privacy in the last 10 years".
'We' didn't accept it. The internet expanded and the mass media conversation moved on to the next shiny thing. The majority of geeks have vehemently opposed mass surveillance every time it's been revealed (Snowden, Assange). The majority of normies are completely unaware of the extent to which companies like Palantir and Clearview AI sell their most personal information to 'law enforcement', and Microsoft, Google and Facebook own and manipulate their information to fuel advertising.
"I have nothing to hide"
I hear people saying things like that occasionally, and have to wonder... why did you do that to yourself?
And why are you assuming everyone else was similarly unwise?
"Companies like Google" certainly haven't had that kind of info from me, nor from several people I know, as we've avoided the vast majority of their products.
Do you use any search engine? Then yeah, your data is funnelled to google/microsoft.
Kagi
Only the data on what is being searched, which is of course quite a lot. But it requires active effort to leak data that way. Many types of sensitive information are not natural to put there, say conversations with others or a diary. And other things can be actively avoided, like searching about illegal things, which in some countries could be simple things like gay communities etc., unfortunately.
A lot more people are vulnerable to abusive partners than you may think, and that's a threat model most of these products never consider.
Would local hosting be any better against the abusive partner threat than these products? Disclosure: I work at Google.
Local hosting/processing is a good thought, but it only helps in limited circumstances, because partners you haven't separated from yet are likely to have physical access to your devices.
It's one of the big criticisms of Microsoft Recall: the database is locally generated and encrypted at rest, but practically, any user in the same home with device access can probably access it, and bypass any efforts you've made to delete your browsing history or messages.
Remember that abusers are often controlling and suspicious, so disabling Recall, denying them access to your devices, or changing your passwords is enough to set them off because you appear to be hiding something (maybe making plans to leave or report them).
Plausible deniability can be an important feature for activists and regular people alike. You can't always predict when a relationship goes south like this, or get out of it as soon as it does, or afford and hide a burner phone.
One of my friends remarks that tech companies should have a social worker and a public defender on staff for threat modeling these things.
There doesn't need to be a human inspecting stuff, just a computer doing the filtering and reacting to it. The article's own example mentions that they found the nefarious use. Given the current trend of politics, notably in the US, it's not far-fetched to imagine a future where searching for things like, say, abortion becomes illegal, and tech companies get coerced into sharing that kind of information.
The fact that MSFT found that Copilot was being used by hackers suggests that they already have the entire set of tools to do that.
This is the weakest of the privacy arguments to me. If we are in the "evil government" hypothetical future, why do they need my real searches? They can just as easily lie, make something up, use some other piece of data to persecute who they want, or do away with all of that pretense because they don't need it.
You're imagining a fictional, united evil government acting as a single entity. In real life right now, any given government has numerous bad actors at various levels who can and do use data to persecute people and groups they dislike. Most of the time, they still need real data, but it's a messy multidimensional spectrum and sometimes they can fabricate it. Another thing you're missing is that they're using this data to find the people they want to persecute.
That may be true, but most people still benefit indirectly from the actions of the few who do have something to hide (e.g. protestors, journalists, whistleblowers).
If we want the masses to continue to benefit from the actions of those few, then we need to find a middle ground. Someplace where the masses are private enough that a truly private individual can hide among them without sticking out like a sore thumb. You don't need to hide, you just need to be able to hide.
I would say that statement comes out of people's ignorance, not out of informed knowledge. Even those who do know feel helpless in the face of industry-standard treatment of privacy.
This is literally the reason given for 100% adoption of HTTPS (because what if).
Privacy is (a) freedom.
The reason why people care about privacy is not necessarily because giving up privacy has some directly observable negative effect. But, simply, living without freedom sucks.
I don't want you to know my personal information not because you could/would do something nefarious with it. I don't want you to have it simply because it's none of your business.
I am personally starting to think that this is the framing we need for this conversation. It needs to be discussed as a freedom rather than a right, for no other reason than that 'expectation of privacy' was very successfully neutered.
Information about your preferences is weaponized in the current era. As an example that some might find relatable, if you support Trump and you work for a FAANG, you might just lose your job.
While I agree with the general sentiment, I think the problem in this specific case is lack of job security in the US. In most European countries getting fired for something like this would never fly.
You have no idea how this data can be used against people in the future. Seriously, it's a completely normal instinct never to overshare in conversation. Everyone knows this. This is the equivalent of extreme oversharing at all times.
Yeah, just the potential for gossip- and slander-type attacks makes it sane not to share everything. Our cloud services probably have the power to discredit or blackmail any user.
I don't think that's really true. Google has had pretty serious privacy guards in place, where if you don't want Google to collect info on you (for advertising), they won't.
Also, lots of people use other email services than Gmail, don't share their location information, don't share their photo roll nor share their internet history with Google. And a lot of video viewing is obviously not done on YouTube.
How do we know this? They may claim this, however their incentives as an advertising company would provide strong pressure against this.
Bringing back the same argument in favor of protecting privacy:
"Saying you don't care about privacy protection because you have nothing to hide it's the same as saying you don't care about protecting free speech because you have nothing interesting to say"
I killed my social media accounts because I disagree with my pictures being used for training AI models. My friends, who aren't tech savvy people, keep mocking me for caring about it, and when I confront them about big tech knowing much more about our private lives than ourselves, they simply respond "well, I don't care anyways, it won't make a difference in my life".
You are welcome to live in any country that doesn’t value privacy just to see what it’s like after you surrender it.
There are lots of reasons why you don't hand your personal information to everyone, why you wear clothes even though it might be warm enough not to, etc.
But the key point for me, is that knowing you are being watched, or even suspecting it, changes behaviour.
You cannot be you online. You do things differently, edit yourself. It is a form of manipulation. Which was always the point. The panopticon was conceived as the perfect prison to control others.
https://www.wikipedia.org/wiki/Panopticon
No wonder you don't care too much about privacy, but this is not normal for everyone.
Do you feel the same way about TikTok?
It’s not Google you need to worry about. It’s the state that can compel access to Google’s (and Apple’s, and Meta’s) data without a warrant. Big tech doesn’t run concentration camps, but governments can and do, including the USA.
Are you confident that the state will always be friendly to people like you? How about the people you support politically?
You really need a Google device and an internet connection to be able to say "stop" to stop your music? Wow. My old "smart" watch could do that offline xD This can't be hard to do on your own, without a Google device listening at all times...
I won't disclose why the Russian government considers me a member of three different terrorist groups. But they also tried to declare people who watch anime a terrorist organization, and they've tried that at least three times in the last ten years. Sooner or later they will succeed.
Also, wasn't there a recent problem in the USA where women were uninstalling period-tracking software because it could flag someone as pregnant over an irregular period? I know at least four causes for a missed period; do judges know that much?
We don't protect our privacy because we are imperfect; we do it because assholes and/or stupid people exist. It's like middle school: you might not be ashamed of having a crush, but you wouldn't tell everyone who it is, because some of your classmates are assholes who would care far too much.
Advertising exists to get money out of you, or to change your mind on political issues. Those are things worth protecting. Companies using your private information for such purposes is an attack.
One of the worst takes I've ever read. There is something called metadata. Even if you don't do anything worth protecting explicitly, the data about your "worthless data" lets perpetrators see the patterns of your daily life. You can reconstruct a great deal just by gathering metadata over a certain time span: when someone usually interacts with their devices, social media, etc.
I don't want everybody to be able to derive when I'm sleeping, when I'm at work, or when I'm on vacation. Hopefully it's obvious that even a simple thief could use such information (if leaked) to pick the best time to burgle your apartment.
There is a nice CCC presentation by David Kriesel called SpiegelMining, available on YouTube. It's in German, but the auto-generated subtitles are good enough to follow. He downloaded Spiegel Online articles over a period of time and was able to derive a lot of information about the authors purely from the articles' metadata (publishing timestamps, author initials, etc.).
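To make the point concrete: inferring something like a sleep schedule from interaction timestamps alone is almost embarrassingly easy. Here's a toy sketch in stdlib Python (the timestamps are made up for illustration): bucket events by hour of day and read off the longest silent stretch.

```python
from collections import Counter
from datetime import datetime

# Hypothetical "metadata only" log: timestamps of device interactions,
# no content whatsoever.
events = [
    "2024-03-01T07:12", "2024-03-01T08:30", "2024-03-01T12:05",
    "2024-03-01T18:44", "2024-03-01T22:10",
    "2024-03-02T07:05", "2024-03-02T09:01", "2024-03-02T13:20",
    "2024-03-02T19:02", "2024-03-02T22:55",
]

# Count activity per hour of day.
hours = Counter(datetime.fromisoformat(t).hour for t in events)

# The longest consecutive run of silent hours is a decent guess at
# the sleep window (wrap-around past midnight ignored for brevity).
best, run = [], []
for h in range(24):
    if hours[h] == 0:
        run.append(h)
        if len(run) > len(best):
            best = run[:]
    else:
        run = []

print(f"Likely asleep roughly {best[0]}:00-{best[-1] + 1}:00")
# → Likely asleep roughly 0:00-7:00
```

Scale this up to months of real logs and add location pings, and vacations, commutes, and work hours fall out just as easily.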
It's not some accident that Google ended up with all this data.
There is a reason they give you free stuff like video, email, chat, search, etc.
People have forgotten that the only way (in the past) to solve the information explosion that results when networks grow was to understand people's needs better. That was the intention behind data collection. But of course the story went off the rails when advertisers, marketers, and politicians found value in all that data.
But there is an upper bound to how much value is in there, just as there is an upper bound to how much milk you can extract from a cow.
Once you build huge, ever-scaling infra assuming there is no upper bound, and then an upper bound is hit, what happens?
Nothing good. Excess cows start getting slaughtered. The larger the system, the greater the overrun, the more slaughtering. Don't expect what you got for free to stay free. Expect all your data to be sold off at fire sales.
Absolutely, it's very rare for people to have things like passwords, bank accounts, confidential documents, secrets, fears, weaknesses.
And as we know, everyone applies good infosec practices; none of us has a txt file sitting in a folder with all our passwords.
So on average, there isn't a growing collection of confidential data that our models are getting trained on.
Matter of fact, if everyone who reads this were to randomly start loudly talking about "EAR-WAX REMOVAL" or "LOW LIBIDO" in close proximity to a friend's phone or smart TV, there'd be no impact. They wouldn't end up seeing some interesting and potentially embarrassing ads.
It's not like we live in a world where bad actors exist, maybe in some distant country, who try to take resources away.
——-
EDIT: I came back to edit this because I felt I was perhaps too snarky, having fun at the expense of your argument.
As for what you said: it is easy to end up in a situation where privacy or PII is treated as a "problem" to be minimized.
That position then leads to many other, far more complex failure states.
Our future selves are better served by treating the data we generate as private by default, and all private data as "heavy".
Except for people whose employer does not approve of their politics, or whose family does not approve of their religion, or whose community does not approve of their sexuality, or whistleblowers, or journalists, or people who support causes the government disapproves of, or........
That covers an awful lot of people.
Pretty much anyone living in, or with any links to anyone living in an authoritarian state too.
The flip side is false positives.
Have a scanned photo from when your grandmother bathed you as a baby? Google may identify it as child porn and shut down your account. https://timesofindia.indiatimes.com/city/ahmedabad/google-la...
"Gujarat high court has issued notice to state govt, Centre and Google India Pvt Ltd after the tech giant blocked an engineer’s account citing “explicit child abuse”. The engineer had uploaded a photo showing his grandmother giving him a bath as a two-year-old."
"... his client could not even access email and his business was suffering. The blocking was like a loss of identity for Shukla, a computer engineer, most of whose business depended on communication through the internet. Shukla had requested Google to restore his account, but in vain."
I can only describe this take as disgusting. Saying you don't care about privacy because you have nothing to hide is like saying you don't care about freedom of speech because you have nothing to say.
Privacy is a fundamental pillar of society, for without privacy there is no freedom. We already see the chilling effects across many situations, we have seen them for the past decade+ at least. It's only the beginning.
Privacy doesn't matter until someone in a vulnerable group gets killed because someone doxxed them. Hell, even some semi-private people have been killed by things like swatting. Privacy matters and we should pass laws to protect it, just like we have the 4th amendment to protect against unlawful search and seizure.
Having said that, it's easy not to care - until something significant happens to you or your family...
How many clients has my team helped when it was just too late? Many if not most.
One fundamental flaw humans have is not to care until it's too late.
Why should I care about my gutters? Until the day the basement is filled with water because of these "ridiculous" gutters...
This is blatantly wrong. If I'm a large corporation, I can use the information you think is worthless against you via first-degree price discrimination and countless other targeted mechanisms. You simply haven't thought about it enough; as soon as you do, you will realize that where no privacy exists, people will develop (and have developed) mechanisms to take advantage of and capitalize on that state of affairs.
You say that like it's a mundane and acceptable use, but this is the primary thing I want to avoid my data being used for, and is a huge part of why I get very concerned over privacy issues.
Well, but that's sort of the point of the article. You're comparing against a baseline where our privacy is already eroded. If you compare to an earlier (say, pre-web) baseline it's quite different.
This is true of many fundamental rights. Most people don't say anything worth speech protection either. The catch is that when you finally do need to say something worth saying, or do something worth privacy protection, and the right wasn't there all along, you're kind of screwed.