
Microsoft is spying on users of its AI tools

fnordpiglet
23 replies
6h5m

A more precise yet general headline: “Microsoft is spying on users”

It’s nearly impossible to install their products without opting into layers of spyware and adware cloaked in dark patterns and requiring extraordinary measures to back out of hidden spyware installs done without permission. Microsoft has always been a fairly malignant business, built on shenanigans of various sorts, and as the market perfects new shenanigan vectors they’re right there pushing the envelope.

vosper
15 replies
5h23m

What are some examples outside of the OpenAI one in Schneier's blog post?

illusive4080
14 replies
5h17m

Windows 11

fnordpiglet
8 replies
5h6m

Office 365

Zenul_Abidin
5 replies
4h51m

Outlook

throwanem
4 replies
4h46m

VS Code.

enopod_
2 replies
4h27m

Edge

fnordpiglet
1 replies
3h43m

XBox

rubicon33
0 replies
3h4m

Notepad

sleepybrett
0 replies
4h24m

github, azure...

vosper
1 replies
3h13m

What are the “layers of spyware and adware cloaked in dark patterns and requiring extraordinary measures to back out of hidden spyware installs done without permission” in Office 365?

fnordpiglet
0 replies
3h4m

To be specific, the backing-out part was about Windows 11; with Office 365 you can’t back out at all. The rest of their products offer no option to bypass the spyware and adware.

vosper
3 replies
3h14m

What are the “layers of spyware and adware cloaked in dark patterns and requiring extraordinary measures to back out of hidden spyware installs done without permission” in Windows 11?

jwitthuhn
1 replies
2h18m

Everything you type in the start menu gets sent to microsoft by default and this can only be changed with Group Policy, so anyone on home edition is stuck with that.
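
For what it's worth, the Group Policy setting ultimately just writes a registry value, so it can be flipped directly. A rough sketch in Python, assuming DisableSearchBoxSuggestions under HKCU\Software\Policies\Microsoft\Windows\Explorer is still the relevant policy value for your Windows build (verify before relying on it, and note that behaviour on Home edition may differ):

    # Sketch: write the policy value behind "Disable search box suggestions",
    # which stops Start menu searches from being sent out as web queries.
    # Assumes DisableSearchBoxSuggestions is still honoured by your build.
    import winreg

    KEY_PATH = r"Software\Policies\Microsoft\Windows\Explorer"

    with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        winreg.SetValueEx(key, "DisableSearchBoxSuggestions", 0,
                          winreg.REG_DWORD, 1)

    print("Policy value set; sign out and back in (or restart Explorer) to apply.")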

vosper
0 replies
32m

Thanks for giving me an answer, all the other posts in this chain just seem to be people naming Microsoft products (I get it, people love bashing Microsoft...)

fnordpiglet
0 replies
26m

The link below does a fairly detailed analysis of the various vectors and wraps the patching up in an enormous script plus related tools.

https://simeononsecurity.com/github/optimizing-and-hardening...

There are also extensive details of the dark patterns in the installers, where no means yes and the privacy-preserving options are hidden behind collapsed menus, etc. Even then you need to apply other patches to avoid telemetry and data collection, and even then you end up with ads surfaced in the Start menu, etc.

CoastalCoder
0 replies
4h43m

And Windows 10, IIRC.

giancarlostoro
2 replies
4h14m

I've posted it before, but when I saw that Microsoft Defender sends your files to be inspected, and has no audit history of what goes from your computer to their servers, I installed Linux.

jxramos
1 replies
2h5m

Do you have a blog post or something of the like to read up on the analysis backing that conclusion? I’m very curious here.

tredre3
0 replies
1h38m

In Windows' "Virus and threat protection" settings there is the following checkbox (defaults to enabled):

    Automatic sample submission
    Send sample files to Microsoft to help protect you and others from potential threats. 
    We'll prompt you if the file we need is likely to contain personal information.

Regarding that last sentence, it is not documented what they consider "likely to contain personal information".
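
If you would rather script it than click through Settings, Defender exposes this as SubmitSamplesConsent in its PowerShell module. A rough sketch driven from Python, on the assumption that the documented value 2 still means "never send" (check Get-MpPreference on your own machine first; it needs an elevated prompt):

    # Sketch: read and change Defender's "Automatic sample submission" setting
    # via Get-MpPreference / Set-MpPreference (SubmitSamplesConsent: 2 = never send).
    # Run from an elevated prompt; verify the value before and after.
    import subprocess

    def run_ps(command: str) -> str:
        result = subprocess.run(
            ["powershell", "-NoProfile", "-Command", command],
            capture_output=True, text=True, check=True)
        return result.stdout.strip()

    print("current:", run_ps("(Get-MpPreference).SubmitSamplesConsent"))
    run_ps("Set-MpPreference -SubmitSamplesConsent 2")
    print("updated:", run_ps("(Get-MpPreference).SubmitSamplesConsent"))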

Vetch
1 replies
4h33m

In this specific instance, I think it is necessary to make a distinction because, given the direction things are headed, being spied upon is tautological in the context of AI. AI-as-a-service requires sending private details to gain utility, and there is a real risk this is the only kind of AI that will be allowed to exist. GPT-4 was most probably the model used, given they stated this action was taken in collaboration with OpenAI, so the emphasis on Microsoft was likely engagement bait by the blog author. In truth, Microsoft probably put this out to advertise how they take steps to keep AI safe. Unfortunately, this will also be ammunition for those who seek to ban opensource AI; you can only do this kind of thing with Monitored AIs.

Even if you're not an adversarial government leaning on it as a hacking enhancement, it's only a matter of time till governments worldwide demand certain controversial conversational topics be reported to law enforcement. I suspect the Knight Paladins of Anthropic would be more than eager to further this greater cause for the Safety of mankind.

From the start, since all but a few services are using your data to improve their models there is already no notion of privacy. The ethical ones will eventually (if not already) report suspicious activity (as determined by law) to law enforcement and the less ethical ones will report all activity to advertising and insurance agencies.

Some already lobby to ban opensource AI because enhancing the learning rate and reducing the friction of gaining new information for humanity without controlling oversight will also enhance hackers' or other bad actors' ability to access sanctioned knowledge. They consider this a heresy and deem humanity at large incapable of responsibly handling such increases in cognitive ability. Only a few Adept can be trusted with administering AI. Truly, spying is among the more trivial concerns for the future of computing, given AI's compute heaviness and the amount of centralizing control it engenders by default.

miohtama
0 replies
3h29m

In truth, Microsoft probably put this out to advertise how they take steps to keep AI safe. Unfortunately, this will also be ammunition for those who seek to ban opensource AI; you can only do this kind of thing with Monitored AIs.

This is exactly how the OpenAI blog post reads between the lines: “We work with the government to make the world safe. Or else.”

https://openai.com/blog/disrupting-malicious-uses-of-ai-by-s...

The actual “bad” tasks the “bad guys” performed were no different than what one can accomplish with Google searches. But no one talks about stopping terrorists from using Google.

sonicanatidae
0 replies
3h51m

How generous of you to assume that one could opt-out of all of it, regardless of the pattern.

m463
0 replies
1h58m

The darkest pattern is that employers impose Microsoft software as your tools for employment.

This seems to violate all kinds of boundaries between your personal and professional life.

Most people as they mature learn how important boundaries are to health and well-being, as a child or teen growing up, as a partner, as a parent, and with friends and community.

People don't need this mess too.

There is no clear way to opt out of this, to create and maintain a boundary and to say no.

gpjanik
21 replies
6h25m

I love how the user made "Microsoft is spying on users of its AI tools" from "Microsoft found out that hostile governments of authoritarian countries use their AI tools to conduct illegal activities" and forgot that you legit have to click "Yes, I agree" under the terms and conditions that state they indeed do track all this stuff.

Next up, "Major television broadcasters spy on soccer players during Champions League games".

hef19898
15 replies
6h11m

Can I use those tools without clicking on "Yes"?

bilekas
14 replies
6h6m

Can you drive your own car without a key?

You don't have a 'right' to use the services. So the answer to your question is "It depends if the service lets you, so 9/10 cases, no. Rightly so"

Edit: I can see a lot of people don't like this. But like it or not, right or wrong, that is how it is. Expecting anything else is just fanciful.

arp242
7 replies
5h34m

Informed consent is key. Do they clearly inform you of this when using these tools or is it hidden in a dense Privacy Policy somewhere with language so vague and woolly most people hardly understand it, which you "accepted" when you first installed Windows?

I don't have a Windows machine and didn't check, but I'm guessing it's a lot closer to the second than the first.

In principle I don't think anything is "wrong" with keeping a record of AI chat sessions, but you do need to know about it so you can modify your behaviour as you see fit (i.e. ask different questions). If there's a camera pointed at you then some might decide that the nose is best left unpicked and those balls will have to remain unscratched.

This is really the problem with a lot of these things: not that they're doing these things as such, but that it's so hidden and obfuscated that you need to spend an hour or more trying to understand it, and even then you only know what they "may" do, not what they "actually" do – a lot of these privacy policies are so vague and full of qualified language that they "may" do almost anything.

And even in recording conversations there's nuance. Is this recorded with your full account detail, or is that information stripped? Are all conversations recorded or only some? Etc. etc.

Geisterde
3 replies
4h25m

Informed consent is key

sounds nice

arp242
2 replies
3h49m

If you want to say something then say it instead of leaving weird cryptic comments. Otherwise don't post anything.

barrysteve
1 replies
2h54m

There's nothing cryptic about his post.

He's referring to an idea that sounds nice in theory and doesn't work in practice.

It's a common English phrase.

And the phrase works well: the concept of informed consent sounds good in theory and never works in practice.

Anyone who has observed tech companies for decades knows they don't ask for your consent before they make any consequential changes. Tech companies do whatever they want until they get sued or regulated. It's always been this way.

Nullabillity
0 replies
1h23m

Read arp's comment again. The "consent" under those premises isn't valid, because it was neither informed nor freely given.

atq2119
2 replies
4h18m

Another important ingredient of informed consent is that there are realistic, viable alternatives. Those alternatives may have downsides, but if those downsides are so large as to become crippling, they stop being realistic and viable.

That is often a crucial issue in these discussions around big tech, enshittification, and so on.

arp242
1 replies
3h51m

Yes, it depends on the specifics though. "Simply don't use any AI tools" is a viable alternative, IMHO.

But "simply don't use Windows" is already a lot more tricky, because so much of the world runs on it. You and I can get away with just running Linux, but we're not normal people in this regard.

atq2119
0 replies
30m

Agreed, though I'd add that it's quite likely that "simply don't use AI tools" will become less and less viable relatively quickly.

int_19h
4 replies
5h15m

You don't. But it means that the title of the story is correct as worded: if you do use those tools, you are getting spied on. The fact that you're warned about that somewhere in the TOS doesn't change the fact, and the story is basically correct.

educaysean
3 replies
4h32m

I think the word "spy" is problematic here because one might assume that the subject being spied on must be unaware of the fact; otherwise it no longer qualifies as "being spied on".

Geisterde
2 replies
4h23m

The government doesn't spy on its citizens, because the citizens know they are being spied on?

anp
1 replies
4h3m

I hate that states do this, but yeah it’s called “surveillance” when legitimized by norms, statute, and precedent.

pbhjpbhj
0 replies
3h13m

It's called surveillance _to_ legitimise it.

OkayPhysicist
0 replies
2h52m

That's actually a fantastic example of the difference between actually owning something and not: I am absolutely allowed to modify my car in any way I please, including installing an ignition bypass.

r721
1 replies
6h16m

Not even a random "user", but Bruce Schneier!

tuwtuwtuwtuw
0 replies
6h10m

Who also helpfully linked to his pages on Facebook and Twitter. I'm sure those never "spy" on their users.

probably_satan
1 replies
5h50m

Not to mention Microsoft intentionally built their AI lab in China haha. Bananas.

bastardoperator
0 replies
3h0m

You're referring to an office/lab they built in 1998, lol. Pretty sure they bought in on this latest round...

nonrandomstring
0 replies
5h52m

Next up, "Major television broadcasters spy on soccer players during Champions League games".

I strongly agree with you that this story is a nothingburger.

But as I note your reaction, I take a different interpretation. The implication that statements like "Microsoft spies on X" are truisms is the news. We obviously all now accept as a given that these are tautologies.

In other words "Spies spy on X" -> (is to say) "Microsoft = spies", not just in this sentence/context, but in all contexts.

Terms, conditions, legality, illegality, hostility or friendliness all recede into irrelevance hereafter.

surrTurr
11 replies
6h40m

If you call this "spying", you should read the TOS of the OpenAI API again. If you, as a "hacker", still use their API, it's your own fault:

OpenAI may securely retain API inputs and outputs for up to 30 days to provide the services and to identify abuse. After 30 days, API inputs and outputs are removed from our systems, unless we are legally required to retain them. You can also request zero data retention (ZDR) for eligible endpoints if you have a qualifying use-case. For details on data handling, visit our Platform Docs page.

Source: https://openai.com/enterprise-privacy

binarymax
6 replies
6h23m

Azure's TOS are different. You can also opt-out of data retention: https://learn.microsoft.com/en-us/legal/cognitive-services/o...

blackoil
4 replies
5h56m

It also lists "30-days retention window" for Asynchronous Abuse Monitoring.

binarymax
3 replies
5h52m

Yes, but you can opt-out of the data retention and abuse monitoring if you go through a vetting process and are deemed low risk.

ftkftk
2 replies
5h48m

That is correct. The process is manual and definitely has humans in the loop. It is also required for every individual subscription in your account.

bugbuddy
0 replies
5h23m

That’s the catch: every little thing is a subscription on their platform.

Spooky23
0 replies
4h41m

It’s also not failure-evident.

I’ve managed a very large tenant with a number of carve-out T&Cs. I’m aware of at least a dozen times that settings reverted during upgrades or migrations. O365 is a huge service with a lot of humans and duct tape holding things together.

Unless you’re auditing them, you have no idea.

gpjanik
0 replies
6h22m

I'd assume Azure's TOS does not allow foreign secret services to use it for their operations. I remember the iTunes TOS mentioned you can't use it to build nuclear bombs. In that sense, there's no way that what Microsoft did was illegal.

blueblimp
2 replies
5h13m

It's still an exceptionally poor privacy policy compared to pre-LLM online services.

Compare with the Google Docs privacy policy, for example:

https://support.google.com/docs/answer/10381817?hl=en

Google respects your privacy. We access your private content only when we have your permission or are required to by law.

I think it is reasonable to expect the same from LLM API providers. The fact that they all currently do mass surveillance on users is bad.

blackoil
1 replies
4h29m

only when we have your permission or are required to by law

That can be very vague. Does the TOS give them the permission? Can the law be interpreted to require them to check for CSAM and pirated material? What about access by Iranian or North Korean agents? If you are pushing things to the cloud, these questions will be asked.

judge2020
0 replies
4h1m

Anyone who cares will heed this language and not use them in the first place. For everyone else, Google's reputation as a "secure" place to host your proprietary files, docs, and emails will be flushed away if they cooperate with (foreign, mostly) law enforcement or have such a glaring security vulnerability that allows an org's files to leak from Google servers.

infecto
0 replies
5h33m

These are not all that unique and are pretty common across all cloud services (GCP, AWS, Azure included), especially in the spectrum of their ML/AI services.

I don't think these TOS are out of this world.

DinoCoder99
5 replies
6h50m

How do they determine access is state-affiliated? Secondly why is Microsoft kowtowing to jingoist bullshit?

poszlem
2 replies
6h43m

The illusion of a borderless world has faded. The comfortable notion of "global citizenship" was a fleeting artifact of American dominance. As new realities emerge, the vital concept of state security demands far greater vigilance than in decades past.

This is not jingoism, it's a clear-eyed assessment of our world. In this new reality, we can no longer afford the naivete of sharing our most valuable assets with nations who oppose us.

int_19h
0 replies
5h13m

Yet corporations still embrace this illusion when it comes to, say, operating in those nations. It's a borderless world for them, just not for us.

15457345234
0 replies
4h24m

Warmongers will never cease to monger war, I guess. What you've written is massively jingoistic; yes certain forces seem oddly desperate to talk a war into existence - I guess all those solar panels coming out of CN are disrupting certain money flows on a permanent-looking basis - but the absence of any actual, idk, military buildup will be harder to explain away. Unless you just deepfake the war. Which is probably what will happen.

rhdunn
0 replies
6h24m

You can geolocate a user/request based on the IP address. This is common in servers. It allows a server to know which country a request is coming from.

I suspect that this is what OpenAI are doing, as Bing Translate and Google Translate do similar things.
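
Purely as an illustration of the technique (no claim this is what OpenAI actually does), a server-side country lookup against a local MaxMind GeoLite2 database with the geoip2 package looks something like this:

    # Sketch: country-level geolocation of a client IP using a local
    # MaxMind GeoLite2 database file (assumed to already be downloaded).
    # Illustrates the technique only; says nothing about OpenAI's actual setup.
    import geoip2.database  # pip install geoip2

    def country_for_ip(ip: str, db_path: str = "GeoLite2-Country.mmdb") -> str:
        with geoip2.database.Reader(db_path) as reader:
            return reader.country(ip).country.iso_code  # e.g. "US", "IR", "KP"

    print(country_for_ip("8.8.8.8"))  # a well-known US resolver, for demonstration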

BlueTemplar
0 replies
5h44m

2nd: Because Microsoft is still part of PRISM?

https://en.wikipedia.org/wiki/PRISM

P.S.: As a reminder, metadata is typically more informative than data.

sammyteee
3 replies
6h41m

In other news, water is wet.

All joking aside, and even as the article points out re: the T+Cs, is anybody surprised?

michaelt
2 replies
6h30m

Many cloud providers maintain the pretence that their employees don't have access to customer data.

For example, they like to maintain that if you run an amazon competitor hosted on AWS, amazon insiders can't look at sales numbers in your hosted database. Or if you run a google competitor with e-mail hosted on gmail, that google insiders can't view your e-mails. And so on.

Personally I've always found these promises rather hard to believe, but a lot of people have a great deal of trust in them.

nullindividual
1 replies
5h11m

I worked for a large cloud provider and that is correct: employees did not have access to customer data. The encryption key in this product is split between a SQL database and the front-end servers, with the blob data (the customer data) being stored outside of SQL. There was no way for a single employee to get access to both portions of the key to decrypt customer data.
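
That kind of split-key design is straightforward to sketch: neither store holds anything usable on its own, and the real key only exists in memory after both shares are combined. A toy illustration of the general technique (an assumption on my part, not that provider's actual scheme):

    # Toy sketch of a split-key design: the data key is never stored whole.
    # One random share lives in store A (say, the SQL database), the other in
    # store B (the front ends); only XORing them reproduces the real key.
    # Illustrative only, not any particular provider's implementation.
    import base64
    import secrets
    from cryptography.fernet import Fernet  # pip install cryptography

    key = secrets.token_bytes(32)                         # real data key, memory only
    share_a = secrets.token_bytes(32)                     # persisted in store A
    share_b = bytes(x ^ y for x, y in zip(key, share_a))  # persisted in store B

    def cipher_from_shares(a: bytes, b: bytes) -> Fernet:
        combined = bytes(x ^ y for x, y in zip(a, b))     # reconstruct the key
        return Fernet(base64.urlsafe_b64encode(combined))

    token = cipher_from_shares(share_a, share_b).encrypt(b"customer blob")
    print(cipher_from_shares(share_a, share_b).decrypt(token))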

Using AWS when you're a competitor of Amazon seems like poor business decision making.

marcosdumay
0 replies
3h29m

Who isn't a competitor of Amazon?

Or, better, who isn't a competitor and is also not at risk of becoming one?

api
3 replies
6h38m

I've seen numerous stories recently that boil down to people being surprised that cloud hosted products "spy" on you.

If you're sending the data to someone else's computer and it's not encrypted, you're.... sending the data to someone else's computer.

Do people not understand this?

__MatrixMan__
2 replies
6h15m

My wife teaches English. She uses Google Docs revision history to spy on her students' writing (and occasionally to recover it in the event of a disaster). So if it's plagiarized, she can see the moment where they pasted the plagiarized version.

They apparently have no idea.

So yeah, I think that people legit don't understand. The gap between tech literate and hacker is problematically large.

api
1 replies
5h38m

I increasingly feel like a member of some kind of high priest or scribe class in a feudal society, and I don't like it. I don't like where this is going.

It'll either end in a dark cyberpunk total surveillance and mass manipulation state or a kind of Butlerian Jihad where people reject networked cybernetic systems en masse. Maybe the first happens followed by the second.

__MatrixMan__
0 replies
5h29m

Agreed. I think we need to be a bit more conscious of whose agenda we're willing to further with our work.

I'd rather retire of unremarkable means in a functioning society than be wealthy in a dystopia of my own making.

_sword
3 replies
6h4m

FYI, the White House’s AI regulatory order requires big cloud providers to monitor for and report foreign clients (i.e. Chinese) using their services to train AI. Monitoring for malicious activity isn’t nearly as invasive as what the government is requiring.

mistrial9
1 replies
5h23m

oddly, top ranking AI research in the world right now comes from Stanford USA+Wuhan PRC .. this is trivially verified.

Intelligent commentary will need to differentiate between public cooperation layers, non-public cooperation layers, and non-public adversarial layers .. because, they are all active.. all of those.

_sword
0 replies
2h28m

The White House doesn’t differentiate! If you’re foreign and training bigger models on US cloud infra, you’re reported

https://www.whitehouse.gov/briefing-room/statements-releases...

nonethewiser
0 replies
4h35m

Sounds like the government is requiring monitoring of malicious activity.

skybrian
2 replies
3h56m

OpenAI is welcome to train on the questions I’ve been asking about how to do type-level programming in TypeScript. In other circumstances, I might ask such questions in a public forum like StackOverflow, or a discussion forum on GitHub, but ChatGPT is more convenient.

I suppose other people might ask more private questions, but I have trouble coming up with examples of private things that I’d want to ask an AI chatbot about. And there are plenty of things to ask about that aren’t personal.

copperx
1 replies
2h52m

That sounds like the old "I don't do bad things therefore I don't need privacy" pro-spying argument.

skybrian
0 replies
1h16m

I do need privacy sometimes, but not for this. Others may have different use cases.

This is true of everyone. Do you need privacy all the time? Presumably you're okay with your Hacker News comments being public.

phillipcarter
2 replies
3h44m

This is a weird post. Yes, they monitor for abuse of their TOS. No surprise there.

But also, this is really the only way to actually improve the underlying tech. How do you know patterns of abuse? Normal use? Weird but non-abusive use? By collecting data on that usage and feeding it back into development.

yjftsjthsd-h
1 replies
3h23m

But also, this is really the only way to actually improve the underlying tech. How do you know patterns of abuse? Normal use? Weird but non-abusive use? By collecting data on that usage and feeding it back into development.

You can collect that data internally or let users opt in; spying on users is absolutely not the only way to improve.

phillipcarter
0 replies
54m

It's not spying on users.

And no, it's really the only way to do it effectively. When you make it opt in or internal only, you get far too much selection bias to make the product good. That's just how it works with this tech right now.

dev1ycan
2 replies
3h44m

Microsoft is very literally an arm of the US government, I think people should already realize that.

Google is terrible, but Google refused (or so they say; we don't know how true this is) to help the US government with spying, and Microsoft stepped in.

Bill Gates gets to sit with Xi Jinping and get called a "friend" and MS has exceptions in China unlike other companies.

Microsoft wants to buy ActivisionBlizzard, gets brought to the senate on the antitrust case, and the senators start barking against Sony instead, lol.

Microsoft is very literally everything the US accuses Chinese companies of being for the Chinese government. Now, so far it doesn't seem like MS has done that much with our data, but who knows what the future holds.

coldtea
1 replies
3h33m

Google is terrible, but Google refused

Isn't "we refused" what they would have said they did even if they haven't refused?

dev1ycan
0 replies
2h34m

Yes, I know, that's why I stated I can't confirm if that's really the truth. Again, I'll cite as an example the whole ordeal with Apple claiming they are the privacy company and how they "refused" the government access to a person's iPhone back in the day. I wonder how they refused when the Patriot Act is a thing? They refused the government? lol

codeptualize
2 replies
6h35m

Yes.. they are pretty clear that they monitor for abuse. 30 days.

Not sure how this is a surprise.

wredue
1 replies
5h31m

As nothingburgers go, this is pretty tame and expected. If you use a platform that runs entirely on a hosted back end, you’re going to have everything logged and saved.

Compare that to, say, Nvidia, who sends every single titlebar value, as well as every single click you make, to their servers (this data is sent even if you opt out, as it is classified as “necessary”). Probably the most invasive spyware you have installed today.

ambichook
0 replies
20m

jfc, is this from geforce experience? if so i knew it was bad but this is something else entirely

2OEH8eoCRo0
2 replies
6h31m

No shit. The internet is a network of other people's computers.

Was the internet designed to be private?

bilvar
1 replies
6h13m

The neighbourhood is a network of other people's homes. Let's allow everybody in ours then.

2OEH8eoCRo0
0 replies
5h34m

What does the public street represent in your analogy?

living_room_pc
1 replies
4h47m

I always assumed so, and I never feed it anything sensitive.

autoexec
0 replies
4h42m

I assume it's true for anyone offering AI as a service. Anytime you hand your data over to someone else they will use it for whatever benefits them. No one should be handing them sensitive data.

VyseofArcadia
1 replies
6h7m

Could be shortened to just "Microsoft Is Spying on Users".

If you use a modern MS product, you are sending them lots of data, full stop.

berniedurfee
0 replies
1h14m

Or more generally, “Everyone is spying on users, that’s how the internet works.”

Urgo
1 replies
4h51m

Google Bard.. Gemini makes it explicitly clear that they do this right from the homepage:

"Your conversations are processed by human reviewers to improve the technologies powering Gemini Apps. Don’t enter anything you wouldn’t want reviewed or used."

"How conversations improve Gemini

Just by having a conversation with Gemini, you’re making Google services better, including the machine-learning models that power Gemini.

As part of that improvement, trained reviewers need to process your conversations.

So when using Gemini, don’t enter anything you wouldn’t want a reviewer to see or Google to use.

Your Google Workspace content, like from Gmail or Drive, is not reviewed or used to improve Gemini.

You can turn Gemini Apps Activity off. If you don’t want future conversations reviewed or used to improve machine-learning models, turn off Gemini Apps Activity"

ilc
0 replies
4h11m

I'd love to get some clarity from Google on whether they actually support not getting spied on or not.

It is only kinda 1/2 clear here.

wut-wut
0 replies
4m

Well Duh.

weare138
0 replies
3h42m

In collaboration with OpenAI, we are sharing threat intelligence showing detected state affiliated adversaries—tracked as Forest Blizzard, Emerald Sleet, Crimson Sandstorm, Charcoal Typhoon, and Salmon Typhoon

Does anyone else think this new APT naming scheme is terrible or is it just me?

tzm
0 replies
2h54m

spying === user engagement?

throwaway22032
0 replies
5h15m

When you send data to someone else they have that data, act accordingly.

Anything else is just stupid. Didn't we all learn on the playground that a secret isn't secret any more once you tell a friend?

sandworm101
0 replies
6h49m

> The only way Microsoft or OpenAI would know this would be to spy on chatbot sessions. I’m sure the terms of service—if I bothered to read them—gives them that permission. And of course it’s no surprise that Microsoft and OpenAI (and, presumably, everyone else) are spying on our usage of AI, but this confirms it.

What? The chatbot is not your therapist. The chatbot is Microsoft. They are the same entity. Everything shared with the chatbot is shared with microsoft, just as every google search is shared with google.

Related legal determination re chatbots:

https://www.washingtonpost.com/travel/2024/02/18/air-canada-...

racl101
0 replies
4h17m

SSSHHHHHHHHHHHHHHOCKER.

probably_satan
0 replies
5h52m

Why did Microsoft build their AI lab in China, then?

Sounds like Microsoft sold the US citizens out to China and are now trying to play the victim card while adding Russia to the blame.

Say what you want about conspiracy theories, but China and Bill Gates seem to be afraid of women to the point where they have chosen to try to eradicate them from society. Maybe the rumors are true about Bill Gates' attempt to make little girls infertile via vaccination.

orev
0 replies
4h46m

I don’t think this is a surprise to anyone here (I just assume anything done on any of these services is being “used to improve it” i.e. “spying”). But if Schneier is talking about it, maybe it will get more attention.

numair
0 replies
5h58m

While we are on this topic — can someone, anyone, tell me how to disable the AI autocompletion suggestions in Outlook for iOS? Getting ready to dump Microsoft and Exchange to get away from that utter mess.

Perhaps some of the people who are commenting here don’t realize that there isn’t much of a way to opt out of these data “analysis” activities.

nox101
0 replies
6h12m

Curious what the difference is between that and a search engine.

notavalleyman
0 replies
5h6m

There are real spies in this story, who are committing truly evil acts in the world, and using technology to further their aims of spreading malware and destroying international space conventions.

And yet the poster finds Microsoft the bad guys in this story?

This is a story about our international enemies using advanced western tech to develop malware, with which to attack us.

It's right to be outraged, but not at Microsoft

lm28469
0 replies
4h30m

What's new? The vast majority of online companies are glorified data miners

lenerdenator
0 replies
3h49m

Duh?

AI is built on data. The more you spy, the more data you get to train AI, the more you train AI, the better/more valuable it gets and the more people want to use it to give you data. It's a cycle.

kmeisthax
0 replies
6h22m

The whole excuse OpenAI gave for going proprietary was specifically that they could spy on you and use that to reactively adjust GPT's alignment training.

For what it's worth, the US military is also evaluating LLMs in the same way the Chinese and Russian hackers are. I imagine - or at least hope - they aren't using OpenAI's hosted ChatGPT and are getting copies of models they can run on SIPRNET or JWICS.

kelnos
0 replies
4h32m

The title/headline here is a bit flamebaity. Of course Microsoft is going to retain copies of AI chatbot sessions for a bit, and look for signs of abuse. Even not having read their ToS, it's pretty safe to assume that's going to happen.

I just don't see the harm. There are plenty of examples of privacy overreach on the internet, and Microsoft is guilty of some of them, but this particular thing seems fine to me.

And in this case, I think MS is doing the right thing! They shouldn't allow people to use these tools to help build malware. Certainly this sort of abuse-tracking should be narrow in scope, but it seems like that's exactly what it is, at least in this instance.

If you don't want a third-party reading your interactions with an LLM, build/train your own.

Eventually we may decide (legally, hopefully) that our interactions with AI tools should have strong privacy protections, but that has not happened yet, and companies are going to apply their usual privacy rules to these things: that is, not much privacy granted, at all.

I generally have a lot of respect for Schneier, but this sort of breathless, sensational "reporting" isn't doing privacy advocacy any favors.

interludead
0 replies
4h33m

There is no such thing as privacy nowadays.

gigel82
0 replies
5h12m

I mean, probably. But there's no smoking gun here, just wild speculation. Someone should dig deeper and ask Microsoft / OpenAI to disclose exactly how they obtained the "threat intelligence".

Here's the full report for reference, but they don't answer how the data was collected or correlated to the specific "threat actors": https://www.microsoft.com/en-us/security/blog/2024/02/14/sta...

drivingmenuts
0 replies
2h13m

If it’s running on their servers and it’s part of the license, why shouldn’t they? A real competitor or wanna-be should have the sense not to do that, so it seems justified that they would keep an eye on how their product is used.

cynicalsecurity
0 replies
5h8m

Finally Microsoft is doing some good things. Spying on Russian, Iranian etc spies is what you are supposed to do.

cooper_ganglia
0 replies
6h6m

I’m sure the terms of service—if I bothered to read them—gives them that permission.

I can’t tell if this article is parody or not, and that’s concerning, haha

capital_guy
0 replies
6h21m

Anyone using these tools is asking for all their data to be hoovered up. Locally powered AI is the only way to use it if you care about this kind of thing.

canadiantim
0 replies
4h8m

How much do people trust that VSCode isn't spyware?

bilekas
0 replies
6h7m

I do not like M$' behavior in general but this is almost certainly not spying.

The OP seems to think that because the user's region was recorded, it's spying on sessions, complete with a self-righteous "I didn't read the TOS but this just confirms it".

Microsoft would obviously be recording region metrics, maybe even keyword/request trends. None of this information is identifiable, and it makes complete sense to monitor it when improving a new product area.

bee_rider
0 replies
5h46m

I could see somebody being confused and not understanding that, say, a copilot tool in a local IDE, or some of their AI in the start menu stuff, is actually using online services. But it is pretty obvious that when you write something into a website that’s… going on the internet, right?

bearjaws
0 replies
5h46m

EZ rage bait.

Every one of these platforms is scraping your inputs to train their next model.

How is it not obvious they would also detect abuse?

bastard_op
0 replies
5h5m

Do they have a product that does NOT spy on users at all?

Pretty much everything they do is either cloud-connected, inherently giving them visibility into what you do, or it feeds constant telemetry back about what you do in one form or another: in the OS, Office apps, whatever of theirs you happen to use, on whatever platform you use it on, including non-Windows systems.

Really do you expect anything more of Microsoft at this point in history?

atum47
0 replies
4h28m

Not to be an as*ole but is anyone surprised by this?

add-sub-mul-div
0 replies
5h47m

Why waste time on performative outrage about how services that require a login have our data and use it in ways we'll never know? You have to accept it as a given and move on in order to start asking the right questions.

Like, are we letting ourselves be herded towards a future where it's normalized that all search must be done by authenticated users, and "classic search" is effectively deprecated ten years from now?

Zelphyr
0 replies
6h45m

I don't trust Microsoft as far as I can throw them collectively but it doesn't come as a surprise to me that they are doing this any more than it would if I were to hear that OpenAI, Google, etc... are engaging in this behavior.

That's not to excuse it, however. I think it's already time they all implemented a privacy stance where they are only allowed to view chat conversations if the user gives them explicit and revokable permission.

RIMR
0 replies
2h43m

From the comments:

OpenAI are “spying” on ChatGPT sessions the same way that email providers “spy” on your email to filter spam. This sounds like a variation on “Google is reading your email”.

Yeah, but Google is actually reading your emails. They build a profile of everything you buy from emailed receipts, and use that data to target ads at you. It's definitely not about improving the user experience the way a spam filter does...

BandButcher
0 replies
5h50m

IMO, a bad story piece to cover up the recent Microsoft hacks done by "state actors".

This has nothing to do with MS spying on user ai tool usage.

Admit it MS, you got hacked, service accounts and executive emails were leaked, along with who knows what else, and you "caught them" after damage was done.

23B1
0 replies
6h35m

You should know that if you're interacting with an AI tool through various social channels – facebook, twitter, instagram, texting – those chats are not only being actively read by the employees of that company, but also by anyone who is responsible for the 'marketing' of that company, e.g. outside vendors.

Certainly these firms are bound by contract, but the stuff I've seen people ask these chatbots – health information, sexual information, highly personal stuff you might only share with your shrink... well, it's being read by the 22-year-old marketing intern as well. Unencrypted, in the clear, copy-and-paste-able, and attributable to your username/ID.