A more precise yet general headline: “Microsoft is spying on users”
It’s nearly impossible to install their products without opting into layers of spyware and adware cloaked in dark patterns; backing out of the hidden spyware installed without permission takes extraordinary measures. Microsoft has always been a fairly malignant business, built on shenanigans of various sorts, and as the market perfects new shenanigan vectors they’re right there pushing the envelope.
What are some examples outside of the OpenAI one in Schneier's blog post?
Windows 11
Office 365
Outlook
VS Code.
Edge
Xbox
Notepad
GitHub, Azure...
What are the “layers of spyware and adware cloaked in dark patterns and requiring extraordinary measures to back out of hidden spyware installs done without permission” in Office 365?
To be specific, you can’t back out at all with Office 365; the “extraordinary measures to back out” part was about Windows 11. The rest of their products offer no option to bypass the spyware and adware either.
What are the “layers of spyware and adware cloaked in dark patterns and requiring extraordinary measures to back out of hidden spyware installs done without permission” in Windows 11?
Everything you type into the Start menu gets sent to Microsoft by default, and this can only be changed with Group Policy, so anyone on the Home edition is stuck with that.
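Group Policy isn’t available on Home, though for what it’s worth the policy registry value it writes can reportedly still be set by hand. A minimal sketch, assuming the commonly cited DisableSearchBoxSuggestions value (sign out or restart Explorer for it to take effect):

    # Sketch only: write the policy value guides commonly cite for turning off
    # web suggestions in Start menu / search; assumes current Windows 10/11 builds.
    import winreg

    key_path = r"Software\Policies\Microsoft\Windows\Explorer"
    with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, key_path) as key:
        # 1 = don't send search box typing to the web for suggestions
        winreg.SetValueEx(key, "DisableSearchBoxSuggestions", 0, winreg.REG_DWORD, 1)
    print("Start menu web suggestions disabled for the current user.")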
Thanks for giving me an answer; all the other posts in this chain just seem to be people naming Microsoft products (I get it, people love bashing Microsoft...)
This guide does a fairly detailed analysis of the various vectors and wraps the fixes up in an enormous script plus related tools:
https://simeononsecurity.com/github/optimizing-and-hardening...
It also goes into extensive detail on the dark patterns in the installers, where “no” means “yes” and the privacy-preserving option is hidden behind collapsed options, etc. Even then you need further patches to avoid telemetry and data collection, and even then you still end up with ads surfaced in the Start menu, etc.
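To give a flavor of what those “further patches” look like, here’s a tiny sketch of the kind of change such hardening scripts make, assuming the documented AllowTelemetry policy value and the DiagTrack (“Connected User Experiences and Telemetry”) service. It needs an elevated prompt, and on Home/Pro the effective floor is “Required diagnostic data” regardless:

    # Sketch only: set the diagnostic-data policy and disable the telemetry
    # service, roughly what the big debloat/hardening scripts automate.
    import subprocess
    import winreg

    with winreg.CreateKeyEx(
            winreg.HKEY_LOCAL_MACHINE,
            r"SOFTWARE\Policies\Microsoft\Windows\DataCollection") as key:
        # 0 = Security (honored only on Enterprise/Education); 1 = Required
        winreg.SetValueEx(key, "AllowTelemetry", 0, winreg.REG_DWORD, 0)

    # Stop and disable the "Connected User Experiences and Telemetry" service.
    subprocess.run(["sc", "stop", "DiagTrack"], check=False)
    subprocess.run(["sc", "config", "DiagTrack", "start=", "disabled"], check=True)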
And Windows 10, IIRC.
I've posted it before, but when I saw that Microsoft Defender sends your files to be inspected, and has no audit history of what goes from your computer to their servers, I installed Linux.
Do you have a blog post or something of the like to read up on the analysis backing that conclusion? I’m very curious here.
In Windows' "Virus and threat protection" settings there is the following checkbox (defaults to enabled):

"Automatic sample submission — Send sample files to Microsoft to help protect you and others from potential threats. We'll prompt you if the file we need is likely to contain personal information."
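For what it’s worth, the setting can at least be inspected (and, with admin rights, turned off) from code; a minimal sketch, assuming Defender’s documented Get-MpPreference / Set-MpPreference cmdlets and their SubmitSamplesConsent values:

    # Sketch only: read the current "automatic sample submission" consent level
    # by shelling out to PowerShell; 0 = always prompt, 1 = send safe samples,
    # 2 = never send, 3 = send all samples (per the Set-MpPreference docs).
    import subprocess

    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "(Get-MpPreference).SubmitSamplesConsent"],
        capture_output=True, text=True, check=True)
    print("SubmitSamplesConsent =", out.stdout.strip())
    # To opt out (elevated prompt; Tamper Protection or org policy may override):
    #   Set-MpPreference -SubmitSamplesConsent 2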
Regarding that last sentence of the setting text: it is not documented what they consider "likely to contain personal information".

In this specific instance I think it is necessary to make a distinction, because given the direction things are headed, being spied upon is tautological in the context of AI. AI-as-a-service requires sending private details to gain utility, and there is a real risk this is the only kind of AI that will be allowed to exist. GPT-4 was most probably used, given they stated this action was taken in collaboration with OpenAI, so the emphasis on Microsoft was likely engagement bait by the blog author. In truth, Microsoft probably put this out to advertise how they take steps to keep AI safe. Unfortunately, it will also be ammunition for those who seek to ban open-source AI: you can only do this kind of thing with monitored AIs.
Even if you're not an adversarial government leaning on it as a hacking enhancement, it's only a matter of time till governments worldwide demand certain controversial conversational topics be reported to law enforcement. I suspect the Knight Paladins of Anthropic would be more than eager to further this greater cause for the Safety of mankind.
From the start, since all but a few services use your data to improve their models, there is already no real notion of privacy. The ethical ones will eventually (if they don't already) report suspicious activity (as defined by law) to law enforcement, and the less ethical ones will report all activity to advertising and insurance agencies.
Some already lobby to ban open-source AI, because enhancing humanity's learning rate and reducing the friction of gaining new information without controlling oversight will also enhance hackers' or other bad actors' ability to access sanctioned knowledge. They consider this a heresy and deem humanity at large incapable of responsibly handling such increases in cognitive ability; only a few Adept can be trusted with administering AI. Truly, spying is among the more trivial concerns for the future of computing, given AI's compute heaviness and the amount of centralizing control it engenders by default.
This is exactly how the OpenAI blog post reads between the lines: “We work with the government to make the world safe. Or else.”
https://openai.com/blog/disrupting-malicious-uses-of-ai-by-s...
The actual “bad” tasks the “bad guys” performed were no different from what one can accomplish with Google searches. But no one talks about stopping terrorists from using Google.
How generous of you to assume that one could opt-out of all of it, regardless of the pattern.
The darkest pattern is that employers make Microsoft software your tools for employment.
This seems to violate all kinds of boundaries between your personal and professional life.
Most people as they mature learn how important boundaries are to health and well-being, as a child or teen growing up, as a partner, as a parent, and with friends and community.
People don't need this mess too.
There is no clear way to opt out of this, to create and maintain a boundary and to say no.