This is confusing and vague to me, which I believe is exactly the intent. It focuses on security, reiterates that security is their top priority (and we know that this is untrue). What were the security problems? They don't even allude to the existence or detection of any specific security problems.
It sounds to me like they're figuring out a new marketing approach, or they're softening the blow by "listening to users" and then rolling out more slowly, when outrage has died down and people will just accept it.
My takeaway is that Microsoft has been trying to boil the frog, but slipped and turned the temperature up too quickly. They're retreating for now, but make no mistake that Recall will slowly trickle back into Windows under another name. Every major power broker wants something like Recall to become the norm - bosses to spy on their employees, governments to spy on their citizens/enemies, and tech CEOs to collect training data for AI and target more ads at end users.
This is a very cynical take. I've not seen anything to make me think this feature is intended for surveillance as opposed to personal utility. The personal utility benefits are very clear to me - the problem is the ease with which malicious attackers might steal the data (if they can breach the system).
I do not think it is cynical to assume that Microsoft would sell this to companies as a way to do constant surveillance of their employees with OCR and LLMs used to make it easier for a manager to sift through massive amounts of data.
That's just an actual use case that their true customers would pay for, I think it's awful and should be illegal under any reasonable worker protections but why would they not advertise it this way privately to business customers?
I also don't think it's cynical to think that a manager looking for a reason to get rid of someone will have a much easier time justifying a PIP or just straight up firing someone if they can retroactively have an AI do it for them.
Why wouldn't they be able to ask the system "how much of <employee they don't like>'s time do they spend doing things on the computer that are not directly related to <company name>?"
Is it technically happening already? Sure, there's nasty nasty spyware being forced on people and it is awful and I hate that those employers are getting away with it. But integrated into the OS, on by default, with a long memory? Just imagine how easy it will be to fire anyone that tries to unionize in an effort to fight against such surveillance.
It's exactly this.
Development of a feature like this surely started during the WFH craze, where managers could no longer casually walk behind people who had to have their monitors facing outwards. A market opened up, and this is not the only tool for this sort of corporate surveillance.
Certain Software Engineers will probably get some time without it by claiming they need Admin rights and that the system messes up their graphics or slows down their system or what have you.
Or you are living in a country where worker rights prevent causeless mass surveillance of employees.
Workplace surveillance of employees became widespread in part because of sexual harassment laws, employers suddenly had to protect themselves from litigation.
See:
https://archive.nytimes.com/www.nytimes.com/books/first/r/ro...
This is a really pernicious lie. If you believe this sort of thing, explain why you think sexual harassment laws are unfair, and why corporations were so trusting of their employees before that.
Hint: They weren’t trusting. Corporate surveillance follows technology. The bosses are obsessed with watching their workers every second. This is nothing new. What’s new is that we now do most of our work on networked computers, cameras are vanishingly cheap, and data storage is abundant.
That doesn't seem plausible given that "scientific management" is quite a bit older and one of its main concepts comes from an experiment in surveillance from 1927.
https://en.wikipedia.org/wiki/Hawthorne_effect
The linked article does not support what you say in any way. If anything, it argues that invasion of privacy can actually be used against somebody by taking things out of context. It is definitely not what the link talks about, which spends 1/3 of the text on how invasion of privacy was wrongfully used during Clinton's impeachment. Maybe you meant to share something else?
It's not even only about surveillance. Microsoft also makes Github Copilot. Getting Recall onto developer machines gives them the opportunity to train their AI on how programmers actually program, rather than just using an LLM trained on code.
Eventually we'll have programmers with Recall activated by company policy on their PCs, actively training the AI models that will replace their labor.
That has to be part of the goal here. The full automation of software development. Think about how much money Microsoft would make if they did it, and how much they would save if they implemented it.
We need a new Luddite movement to protect the workers from all of this.
Typing is the least interesting part of programming. And most of the other doing parts have been automated already (compiling, testing, deploying,…) Most of my days are mostly spent reading, thinking, and waiting.
Hear! Hear!
I work in a massive data center. Manned by very few people. I often think about how many homes could be heated or cooled with the power used to prop up the internet.
It feels borderline criminal when there are homeless and hungry all over the world.
If that's the case, why don't they sell Teams activity data to companies? I mean, after you're idle for 5 minutes, Teams detects this and changes your status to "idle". Following your reasoning, they should be selling this data already.
https://learn.microsoft.com/en-us/microsoftteams/teams-analy...
As far as I can tell, this does not allow the employer to see whether employees were idle or not. It does allow tracking of how much time they spent in meetings and how many chat messages they sent.
Why are you focused on idle time? You don't think a LLM can try to answer other questions?
Do you think Microsoft will prevent their paying customers (the companies) from querying, "What is the strongest legal reason to fire <person> based on the past three years of activity on this computer?"
Their sales team would be absolute fools not to point out how much easier it makes it for a manager to see historically whether someone is performing tasks as they are specified in some formal handbook.
The difference between doing that for reasonable reasons and doing that for post-hoc justification of targeted reprisals is in the mind of the manager and nowhere else. Maybe unspoken, but incredibly obvious.
"Give me the man and I will give you the case against him."
You mean Viva Insights? (formerly Workplace Analytics)
https://www.microsoft.com/en-us/microsoft-viva/insights
Does this allow seeing how long an employee was idle, or just whether they were in meetings?
As of now, Microsoft seems to be boiling the frog slowly by marketing derived analytics. I believe they're less-than-specific on what goes into the mix.
E.g. "Employees that might be suffering burnout and need attention"
Which I can see for both PR optics and product reasons. If they retain the secret sauce and raw data, it makes it more difficult for others to go over-the-top and compete.
You’ve got a point. Presuming you are correct, what do you think happens when the team has been culled?
Union busting & screen tracking already work pretty well as is for the goals you've outlined.
We usually think about tracking/measurement as Big Brother looking over our shoulder, but all of us are living a day-to-day reality of losing context and having to invest a lot of effort and time to get it back (usually only partially).
I'm not quite sure what you mean, I see this as a long term trend that doesn't really have an end point.
There are always people that some manager wants to get rid of, for performance or unrelated reasons.
Employers are scared of getting sued for wrongful termination. Often they do it anyway, but then they need to make up a reason. They're decent at it already, but my prediction is that wrongful termination will become far more widespread and harder to detect or fight in court.
It won't stop though.
I don't think I understand your point here. It feels as if you're framing this as a binary decision/outcome. Personally I see Recall making such abuse easier. So I don't think the existence of bad acts in any way lessens the potential harm of Recall.
I also don't understand this. Do you keep notes? If the problem is quite large for you, I think you should take more notes and likely better notes (a skill in and of itself). Yes, this has a cost, but so does everything. There is no free lunch. But notes are distilled while technologies like Recall are dragnets. And at the root of your argument is the recognition that information is powerful. So you have to ask what information has power and for whom. Because information that may not be useful to you may be useful to others who wish to use power against you. And in those scenarios, I don't know about you, but I'd rather have distilled information, and more specifically be more aware of what information is being stored, than just scoop up everything.
Personally, I just don't think it is very hard to take notes.
I agree it's not cynical. But MSFT doesn't give a shit about surveilling employee computers for PIP purposes. Like, really? A 3 trillion dollar company and this is how they're going to add shareholder value?
They need data to feed their LLM / AI models. Period.
I think you underestimate the number of businesses that would love this for reasons of fear mongering. Yes, they also want it for training their crummy AI models.
As far as I know, a long while ago, the Islamic Republic of Iran asked Cisco to develop a filtering solution to stop their citizens from accessing undesirable content. Cisco said no. Then US companies started asking for filters to stop their employees watching porn at work, Cisco invented a centralised domain/packet filtering solution for their routers, and Iran went "can we buy one of those, please?".
My take is that MS did intend the feature purely for utility (and to be fair to them I can think of a lot of scenarios where it is useful). But they did this by not seriously thinking about security at all, and the wider internet has now done that thinking for them.
It reminds me of why SSL version numbers effectively start at 3. Netscape wrote version 1, their internal security team broke it, so they wrote version 2 and I believe shipped it without letting their internal security team do a full review. That got broken quickly too, so they want back and did the job properly (by the standards of the day) and shipped SSL v3, which lasted a while. (It's also been broken now, of course.)
I think Microsoft realised Recall needed more work, and is now looking at that more seriously.
When would this be useful? Microsoft's best example is that you forgot the location of a Chinese food place a friend told you about once.
I imagine MS did a lot of user studies, and found that the average user could gain a lot from being able to ask the computer questions like "where's the word document for the summer anniversary party that I worked on a couple of weeks ago" or "the photo with the waterfall from our holiday in Greece in 2015 that I sent to Mary recently". Whether Recall in 2024 will be good enough to answer queries like that remains to be seen.
From helping non-technical family members find where they've mislaid files (such as behind another file on the desktop, which can happen if you drag more than one file at a time) I am confident there is a user base for this kind of thing.
We are, after all, in a world where the youth don't seem to understand file systems and folders [1] and rely on the search feature for everything. Recall could, if done properly, be a great user experience for such people.
It was through user studies that we got both the ribbon interface (great for new users apparently, even if less so for experts) and the fact that when you open an office app it suggests a list of documents you worked on most recently. Sharepoint even takes this further in organisations and suggests documents shared by others that "might be relevant to you" based on what you worked on recently (it's not very good).
If I want to be really snarky, I could mention that UNIX had "Recall" back in the days of text-mode only consoles. It was called the `.bash_history` file, and it's genuinely useful.
[1] https://news.ycombinator.com/item?id=30253526
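To be less snarky about it, here's a minimal sketch of that `.bash_history` style of recall (assuming a stock bash setup):

    # the text-mode ancestor of Recall: everything you typed, searchable
    grep -i 'lego' ~/.bash_history

    # or interactively: press Ctrl-R and type a fragment to
    # reverse-search your history as you type

It only covers what you typed at a shell, of course, which is exactly why it's so much less dangerous than a screenshot-everything dragnet.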
Google Photos' search bar would be able to complete this search, since like 2015. Recall is completely overkill for this, like building a Death Star to swat a fly.
Only if your photos are in Google Photos. And weren't we expressing concern about sharing our personal data with giant tech companies? Does Google Photos work entirely locally these days?
Google Photos is opaque and unreliable and keeps degrading and corrupting your photos, and if I'm not misremembering, had data loss issues in the past.
OneDrive doesn't have those problems, but its search is even more unreliable than that in Google Photos.
In both cases, the companies go out of their way to remove any controls over classification, or even user agency in search. Like, how hard would it be to list all of the categories it knows for users to browse, as well as on the photo page for users to know all the buckets the photos land in? They go out of their way to not do that.
Not that there are any better alternatives. For example, Samsung gallery app is just as bad, despite running locally on your phone, and on top of that, has data loss issues that the company refuses to admit or fix. For some reason, tech companies managed to fuck up something as basic as a photo gallery.
I think this was done on purpose to disempower the user.
Easy answer. It's a built in history.
I use bash history all the time, I use my browser history all the time.
To be able to use an OS history would be amazing.
What was the name of the esoteric software I was using to program my Lego robot?
What was I working on last Thursday, so I can fill out the government-required SHRED report to get the Canadian RnD tax rebate?
What was the song I was listening to that Spotify played last Tuesday afternoon?
There are so many times I'd use a feature like this.
Which is fine because the browser has a private browsing mode, and the shell has the space trick (for example if a tool requires an SSH key as a command-line argument) as well as various "pinentry" things.
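For anyone who hasn't seen the space trick, a quick sketch (bash; `deploy-tool` is a made-up stand-in for the SSH-key example above):

    # with HISTCONTROL set to ignorespace (or ignoreboth), a command
    # that starts with a leading space is never written to ~/.bash_history
    export HISTCONTROL=ignoreboth
     deploy-tool --key "$(cat ~/.ssh/id_ed25519)"   # note the leading space

Many distros ship HISTCONTROL set that way by default.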
You'd need some API for applications to signal to Recall "the user has requested not to save this", and then every single program with a password input box would have to update to call this.
Yea it has stuff for these use cases.
https://support.microsoft.com/en-us/windows/privacy-and-cont...
All the important controls here have to be done by the user. You really think the average user is going to blacklist things in the awful settings app?
what could the OS do to "blacklist" things on its own?
How would the OS have any chance of knowing I don't want my programming session recorded if I don't tell it?
How would google chrome know to go to incognito mode if I don't tell it?
Of course the burden for this is on the user, what other way could possibly work?
I think the best unspoken use case is that Recall is basically a distributed backup of content. MS will get the idea in their head one day that they can pull dead info from people's HDs. This is a sus capability if MS decides to play info broker. It would be great if there were some system where people can access link-rotted / vanished content backed up from someone else's computer.
For example, you had some issue while developing an application, but you don't remember what parameters you used to create the bug.
Or for example, you were reading some article but did not save it and now want to recall it.
Or maybe you watched some music clip or song on some website and forgot the link to the website.
A lot of use cases.
It seems weird that Cisco wouldn't help Iran when they were indispensable in the creation of China's firewall. Do you have more details on the reasoning? Was it due to sanctions or did they genuinely not want to help Iran?
I'm afraid my source for this is a half-remembered conference talk from someone who I believe worked for the TOR foundation. My best guess technically was that they didn't want to invest R&D effort into the form of Deep Packet Inspection that came out as a result, for a project that could get them bad press or hauled before congress.
TPM was met with resistance due to privacy concerns and Microsoft quietly re-introduced it anyway. The same will happen to Recall.
Has TPM been a net positive or negative for users / enterprises / the industry?
TPM protects against two main threat models:
1. You don't trust people with physical access to the computer. For the average home user, this means you consider the hardware owner a threat.
2. You want to protect against malware that has already taken complete control over the OS at runtime, and that wants to write itself to disk or the BIOS so that it survives a reboot. At this point, the attacker has already won, so... This might make sense on a stateless appliance like a Chromebook where you do factory wipes a lot.
So TPM mostly "protects" against the hardware owner, or against malware that already has 100% access to all user data, and just wants to stick around a bit longer.
Personally, I'd go with TPM being net negative, because the primary threat model it "protects" against is the actual hardware owner.
For a mobile device, such as a laptop, lots of people other than the device owner will have physical access.
The useful use-case of a TPM to me is the ability to encrypt my disk without having to type a decryption password each time I use it.
It does require someone to steal the entire laptop rather than just the hard drive, but… I don’t think that this was an actual worry, and the security result of encrypting to a device with the key stored in the same device is much like not encrypting.
It also makes it a lot harder to bypass the login screen, even if someone takes the whole laptop.
In case you weren't aware, the ability to do a passwordless unseal can be tied to not tampering with the bootchain. It's not entirely bulletproof, but it's beyond the abilities of most thieves to bypass this (versus just popping the drive in another machine).
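As a concrete illustration of tying unseal to the boot chain, this is roughly what it looks like on a Linux machine with systemd (a sketch; the partition path is a placeholder, and BitLocker does the equivalent job on Windows):

    # seal a LUKS unlock key into the TPM, bound to PCR 7 (Secure Boot
    # state); tampering with the boot chain changes the PCR values and
    # the TPM refuses to release the key
    sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3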
I think you are missing some parts in the industrial use.
The TPM is also used for device authentication. It prevents the leakage of certificates that are used to ensure that you are using the device you claim to be using. This is highly relevant when having remote access from users and one would like to enforce tiering rules together with privileged access workstations.
Furthermore, the second example in which "the attacker already won" is missing the context. The attacker does not want to access the computer (in the industrial example); it wants to use it to escalate access within the organization. The TPM can be used for remote attestation, that is, a remote server can verify the integrity of the boot process of the device before giving access to remote resources. In other words, it can be used to check for device compliance.
It is definitely a positive for enterprise security.
Interesting perspective. While I know secure boot has some downsides, on the whole I think it’s a pretty good thing.
I guess you’re looking at it as a freedom for gramps to dual boot a homebrew OS, and I’m looking at it as taking away gramps’ freedom to install persistent malware that requires buying new hardware to get rid of.
No
Smartphone encryption uses TPMs to keep keys out of RAM and to limit thieves/police to 9 PIN attempts before wipe on failed attempt 10. If you care about your phone being encrypted you benefit. If you wipe a phone with just a few taps thanks to key destruction instead of waiting for a full TRIM run you benefit.
On the negative side requiring TPM to install Windows 11 is planned obsolescence that greatly outweighs any perceived platform level security Microsoft promises. A lot of e-waste will be generated ahead of the Oct 2025 sunset of Windows 10. Who really believes Microsoft is fighting for user security like Google did when they proactively sunset SHA-1? Platform security also means bank apps refuse to run on rooted phones. Some online games have metastasized from kernel extensions to TPM verified hardware IDs.
It's the same playbook every company uses, who want to feed us something we don't like. They'll try again and again. Maybe they'll add sugar to the medicine, maybe they'll wave the spoon around and make airplane noises, maybe they'll distract us with a toy and jam the spoon in when we aren't expecting it, maybe they'll hold us down and give it as a suppository. One way or another, the baby is going to take the medicine. That's how these companies think about their customers.
Another example comes from Facebook/Meta.
When WhatsApp forced accepting terms that affect privacy, they faced a huge backlash and many were migrating to alternatives like Signal & Telegram. In response WhatsApp didn't back out of the new policy but just removed the enforcement deadline.
Now they silently and randomly show an annoying popup asking users to agree to the new privacy terms. The dialog is strategically placed and designed to collect as many accidental clicks as possible.
Sadly, the strategy worked for them and nobody cares about the new terms any more.
and your take is quite naive.
Surveillance is absolutely the purpose, overt or not. The huge push for bossware/spyware for windows in 2020+ demonstrates that the less ethical portions of industry desperately want to spy on users workstations! Eventually there will be retention laws in certain regulated industries that mandate such technologies! Why enable this potential abuse?
Microsoft is trying to Sherlock the surveillance software industry with this!
I’d rather run North Koreas spyware Red Star Linux than Microsoft Windows.
This doesn't make sense. Screen recording is trivial. Why go to this much trouble? I don't buy the "Trojan Horse" argument in this case.
Occam's Razor, folks.
Well yeah, but doing it by default and saving the results in a searchable way for each and every one of your users is not.
Screen recording is Data.
Being able to perform text-search queries on those is Information.
Having pie charts of "what % of the time did my minions spend on work-related tasks today?" is Knowledge.
What's lacking IMHO, is the Wisdom to ask "just because you can build this technology, should you?"
Recording is trivial.
Monitoring at scale, in real time? Getting a concise "what did Bob do on his computer all day"? Those are hard.
I would suspect its much more ambitious than just peeking over your shoulder.
If you are going to try to make some new product to automate white collar jobs, a good way would be to sample what all the people are actually doing on Windows every 5 seconds and see what you really have.
Peeking over your shoulder will be a side effect you get for free.
It is amusing to me because I was actually considering getting a Windows laptop, and then they pull this shit. So standard for this evil company; I had just been lulled to sleep.
You're talking about this company:
https://learn.microsoft.com/en-us/purview/purview-compliance
https://learn.microsoft.com/en-us/purview/communication-comp...
This is disgusting.
I did not know that Microsoft offers these tools to organizations. I'm honestly shocked that this exists. They'll 100% abuse Purview to offer similar features in the future.
Over the last years/decade, they worked hard to improve their image in the tech community, and I have to admit, it worked, at least for me. They've just lost all the respect I had for them.
I can't believe I'm saying this, but in Microsoft's defense, those controls are aimed at companies working in regulated industries. They're meant to help those companies prove that they're meeting their legal and/or contractual compliance obligations.
For example, if your company works with healthcare information and is a HIPAA "covered entity", your customers will demand to see proof that you're using data loss prevention (DLP) software. Such software does things like:
- MITMing outgoing email to make sure you're not sending a spreadsheet full of social security numbers.
- The same but for posts to web forms.
- The same but for instant messengers.
...etc. Netskope is a big player in that space. Go read up on what all their stuff can do sometime. As an individual, a donor to the EFF, and a vocal advocate for user privacy, those things make me shudder. As someone responsible for making sure our employees didn't accidentally upload PHI to Facebook from a work computer, I gritted my teeth and accepted that they're a necessary evil.
There's no reminder that "your work laptop belongs to your employer" quite like working in healthtech. I'm willing to cut Microsoft some slack for offering those products to customers.
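To make "data loss prevention" concrete, here's a toy sketch of the kind of check such software performs before data leaves a host (the filename is invented, and real products like Netskope are far more sophisticated than a regex):

    # toy DLP check: refuse to send a file containing strings shaped
    # like US social security numbers
    if grep -qE '\b[0-9]{3}-[0-9]{2}-[0-9]{4}\b' outgoing_report.csv; then
        echo "possible SSNs detected: blocking upload" >&2
        exit 1
    fi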
You can enable some pretty strict policies with device management and general policies. But actually recording the screen is a big breach of information if the database is not secured.
Every enterprise communication platform provides something similar.
It’s important to realize you don’t own any of the communication on a corporate owned device.
Explain the personal utility here... Oh, I cannot find that one website I visited but I know I had found it a couple weeks back? Really. The personal utility use case looks pretty weak IMO.
I disagree. I think having an easy to search database of everything I've looked at would be very useful.
And if I ever want such a thing, I'll be happy to go and find one and install it myself. I don't want it anywhere near my computer unless I deliberately select and acquire it myself.
lol I definitely don’t want that. That is the reason I already use incognito mode for everything.
Heh! I just mean it seems like a cool thing to have the ability to turn on when you want to use it, because you want it, not as an opt-out feature.
It's a system that constantly surveils you, of course it's meant for surveillance. The only question is who gets access, is it just you, or is it you and the cops, or is it you and the cops and anyone with a checkbook.
I think the issue is more that nobody asked for it.
These tools are useful, and on a Mac if you want Rewind, you have to know you want it, go out download it, pay for it, install it yourself .. and you knew what you were getting into the whole time.
Having a tool like this planted in your device without your consent is pushing your userbase over the edge.
If they made it a separate feature you had to manually install, like Windows Sandbox or WSL .. they could have avoided shooting themselves in the foot.
I think you hit the nail on the head. The feature itself can be benign and useful if Microsoft valued being respectful of user agency. Using Windows feels increasingly like a battle against someone who can't accept "no" and tries to sneak around your intentions.
Along with Adobe recently, it seems consensual business relationships are no longer common.
What it's intended for and what it can actually be used for are two different things.
"The purpose of a system is what it does."
— Stafford Beer, 2001 (via Wikipedia: https://en.wikipedia.org/w/index.php?title=The_purpose_of_a_...)
I think that's a reasonable and insightful definition, but I don't think that's what most people are likely to think when they read the words I quoted.
But also very correct.
Now that's a very naive take.
They already use tons of telemetry to profile you for ads, snitch about you to your boss, share with partners, and so on, and they're only growing on that front. Plus all the cooperation they do with their favorite government.
But I pay for Windows! Surely, the existence of a preexisting financial contract with my benefactor means they would never sell me down the river to a suspicious partner. At least, that's the rationale I seem to hear these days from people that pay extra for peace of mind.
So you are:
- part of a captive audience
- with money to spare
- and for whom someone else has done pretty extensive KYC
Please ignore the sounds of drooling from the marketing department. We have called the cleaners.
My take is more cynical. They actually want your soul. By collecting all the information that was ever used to train the neural network between your ears, they can create a synthetic version of you, to impersonate you, and some might even argue resurrect you, inside a computer, to torture you Clockwork Orange style with an endless display of ads, predicting what the fleshy version of you wants to buy, how to preempt your real life decisions, deny you the things you desire, and more.
The fundamental energy responsible for the universe is consciousness, and the goal of consciousness is to create, to experience, to learn and to improve (or re-create), ultimately evolving to a state of lower entropy (creating order out of chaos). The pyramids on the ancient artifacts represent our consciousness. And if you take a look at the depictions of the pyramids with the eye (or sun) on top, you’ll notice that the top of the pyramid is always missing. This symbolizes the fact that the development of our consciousness is always ongoing and will likely never end — at least for as long as the universe exists. We’re on a continuous path of building our consciousness, brick by brick, slowly but surely reaching higher states of awareness.
And where are we ultimately heading? To the very top, of course; towards the sun; towards enlightenment. The ancient people used various objects in nature to symbolize certain concepts, and the sun above the pyramid represents enlightenment — the highest state of awareness, knowledge and wisdom. The idea behind this is that the bright light from the sun allows us to see our environment and when we can see our environment clearly — i.e. when we can see things as they truly are — we can start to collect valid information about it and build a good understanding of it. That’s why when you withhold knowledge from people it’s called “keeping them in the dark.” This is also why one of the well known secret societies called themselves the Illuminati; they considered themselves the illuminated ones, because they possessed knowledge others didn’t have; in other words, they were illuminated by the extra knowledge they possessed while everyone else was (relatively) in the dark.
It's published by Microsoft
tbh that's a knockdown argument. All the conversation second-guessing the intent and motives of bosses, users and third parties is moot when it runs on an OS that is controlled remotely and insecure by design. Apple are following (and I expect you'll have even less choice about that, because it's client-side scanning in disguise), and Google have always been proud of their surveillance-based business model, so I think the whole landscape of big-provider computing is changing. People are actually starting to question what they want computer devices for.
I think you're a little naive if you don't think this will become a handy tool for management. We view its potential as two-fold, from a strictly non-employee-friendly side.
The monitoring abilities will be better than what is currently available. But it's not really something a lot of organisations are going to be too interested in. Everyone already knows you're spending a few hours each week doing internet things; maybe you're even playing some digital board games with your coworkers. That's fine (again, in most organisations); in good organisations you might even be able to play a little with your managers. What would be interesting isn't the DDR-type surveillance, it would be if the tools come with automatic detection of outliers. This would help you gather information on poor performers and maybe help them get better.
The other potential is much more sinister, at least if the tools work out as we expect they will, in that everyone will basically be training their AI replacements. This isn't going to kill the office job, but it'll make the processes where we're already putting in more and more RPA smoother and more rapid. Microsoft being who they are, they will sell these tools of course, and if they keep up with their current pricing… well… let's just say that having a student worker move data is cheaper than most of Microsoft's current data automation, so…
As far as security goes, I think this is more about compliance than actual IT security. It's frankly illegal to monitor employees the way these systems are intended to in a lot of countries, and I'm not sure Microsoft really thought that through. If they roll out the current system in the EU then they are going to get a lot of attention from the big bureaucratic dragon. They probably will regardless of how they roll it out.
Poor performers can get better on their own time, after they've been separated from the company. PIPs are a formality to provide documentation that ensures wrongful-termination lawsuits don't stick.
In the future companies can have this enabled and just ask ChatGPT to fire the bottom 10% of staff.
Or they can ask Microsoft to 'train' their own company AI based on worker interactions, then fire the workers once the AI can mimic the work well enough. (This is likely the goal.)
Worse, they can pick whistleblowers, people who attempt to unionize, people who have harassment claims against the company, and ask it to retroactively come up with a legal justification for firing them that would pass muster if challenged in court.
It would be for sure a nightmare if it's automating the thing some companies do where they constantly fire their "worst performers" -- but they're doing that anyway with manual labor. The worse thing is that it makes it much more possible to justify firing someone for deceptive reasons in order to avoid anti-discrimination or harassment claims.
This enables much more, because screenshots to comb through for dirt exist where they otherwise would not.
This is not the first time they've done this—have you forgotten the "Xbox One-Eighty," when they initially announced the Xbox One as having mandatory Kinect functionality, only to similarly realize they boiled the proverbial frog too quickly and renege?
If "this" is temporarily backing off the surveillance frog boil because they went too fast, then the Kinect is clearly not an example. It has been over ten years since the launch of the Xbox One and they never did anything surveillancey with the consoles.
I believe that any corporate entity will eventually use any tool at their disposal to optimise profits at the cost of their customer.
Given they have performed the strategy of user-hostile rollouts time and time again, why would you think they would behave any differently?
Relatedly, do you like ads in the OS?
I agree. If this feature was developed by someone not in the ad tech surveillance business and it ran on a secure by design operating system there would be a positive reaction.
Cynicism? Mind you, what the GP described is exactly what Microsoft has been doing for the past decade. It's not cynicism, it's extensively documented fact.
# Privacy Violations
Windows 11 Update 23H2 is stealing users' IMAP credentials - https://news.ycombinator.com/item?id=38212453
I noticed some disturbing privacy defaults in Windows 10 - https://news.ycombinator.com/item?id=9976298
Even when told not to, Windows 10 doesn't stop talking to Microsoft - https://news.ycombinator.com/item?id=10053352
# User Interference and Coercion
Microsoft has removed the “use offline account” option when installing Windows - https://news.ycombinator.com/item?id=21103683
Microsoft intercepting Firefox, Chrome installation on Windows 10 Insider build - https://news.ycombinator.com/item?id=17967243
Outlook now ignores Windows' Default Browser and opens links in Edge by default - https://news.ycombinator.com/item?id=36492329
Microsoft blocks EdgeDeflector to force Windows 11 users into Edge - https://news.ycombinator.com/item?id=29251210
Microsoft has not stopped forcing Edge on Windows 11 users - https://news.ycombinator.com/item?id=37461449
Windows 11 Officially Shuts Down Firefox’s Default Browser Workaround - https://news.ycombinator.com/item?id=29579994
Last Windows 11 update changed all default browser settings to Edge - https://news.ycombinator.com/item?id=30055222
Microsoft tests Windows account menu error badge when Microsoft Account not used - https://news.ycombinator.com/item?id=35443361
Removing “Annoying” Windows 10 Features Is a DMCA Violation, Microsoft Says - https://news.ycombinator.com/item?id=23486887
# Ads
Windows Now Showing Full-Screen Ads - https://news.ycombinator.com/item?id=11167964
Why can an ad break the Windows 11 desktop and taskbar? - https://news.ycombinator.com/item?id=28404332
Windows 10 nagging users with Bing advertisements - https://news.ycombinator.com/item?id=27337382
Microsoft begins showing an anti-Firefox ad in the Windows 10 start menu - https://news.ycombinator.com/item?id=22288599
Windows 10 Tip: Turn Off File Explorer Advertising - https://news.ycombinator.com/item?id=13835733
# Unwanted Features
Windows needs to stop showing tabloid news - https://news.ycombinator.com/item?id=35323121
iMessage and iCloud weren’t designed for surveillance, but they allow the FBI to read basically every text and image sent to or from every iPhone without probable cause or a warrant.
Something doesn’t need to be designed with the intent to surveil to be used by the state for that purpose.
Microsoft already builds countless APIs and services into Windows that are there mostly to enable spying by corporate owners. If you don't think governments of all sorts are asking for this sort of functionality to be baked into all operating systems, you are being naive, especially in the face of recent reports of Microsoft's internal willingness to retain a major security hole in ADFS rather than risk a lucrative US government contract.
It’s true they also have folks internally pushing for this as a source of training data for MS AI models as well. There are countless “benefits” for Microsoft that have nothing to do with the personal utility.
The personal utility angle is just the marketing hook, which they thankfully misjudged. How else, though, could they justify recording the screen all the time?
https://answers.microsoft.com/en-us/msteams/forum/all/tracki...
Cynicism is forgivable. Smart, even, given that it implies expectations from experience. Naivete, and possibly "willful naivete", on the other hand, is not forgivable given the stakes perceived by many.
It's not cynical whatsoever to understand that features that enable surveillance are for surveillance. It's simply a realistic take.
With large corporations and governments the general rule is: assume a cynical take until proved as not.
I actually think this is a pretty healthy mindset for anything that is political.
I think you may have forgotten about Chat Control[0]. Regardless of its intent for surveillance or not, Recall would be an essential technology for making things such as Chat Control even possible.
I must stress that this can come with all good intentions. That the developers and even Nadella see this purely from the utility perspective and have zero intentions to use it for increased surveillance. But like they say, "The road to Hell is paved with good intentions." So I'm trying to distinguish between the potential harm of the technology itself and the conspiracies that are arising. Because we need to recognize that evil often arises with no malintent, and to be careful attributing malicious intentions to those who never had any. It can be incredibly hard to know.
But regardless of the intent, I think we can now look at this and see how ripe the technology is for abuse. And I think we can ask the questions about how likely it is to be abused. And don't just ask how likely __you__ are to be subjected to the abuse, but include others. Because even if others are subjected to that abuse, it is not unlikely to affect you in some form (if you need that specific motivation). I think we can all agree that the likelihood of the technology being abused in authoritarian countries like Iran, North Korea, and many others, is quite high. Maybe this isn't on your radar or maybe it isn't a concern for you because those powers will already abuse their citizens. But certainly this gives them the ability to be more abusive and more invasive.
[0] https://www.patrick-breyer.de/en/posts/chat-control/
Cynical, that's cute. The only thing that's "very clear" up to this point is that no one wants msft taking screenshots of their activity.
I agree, I think GP is overly cynical. There's a strong chance that the primary reason is for personal utility. But MS (like all big tech) are all about two-birds-one-stone wins. If you can get the personal utility, while also gaining capability that "rightsholders" and advertisers, etc will want, that's a huge win to them. Reminds me a lot of Apple's hardware DRM that is primarily about reducing the value of stolen Apple hardware, but which also serves to make third party repairs way more difficult and expensive, which is not a "con" to them.
How is this cynical? In what way have evilCorps of any name/brand shown you in the past that this is not exactly what will happen? Even Apple's CSAM back pedaling hasn't been long enough ago to see what the next attempt at it will be.
I do not trust anyone attempting to make money on AI that will not ultimately just be a data hoover for whatever model it is they are using. That's being generous about their motives. Anyone that is trying to hide their ulterior motives of outright spying would use this as the perfect cover.
So, am I an asshole for assuming everyone has nefarious intent, or are you a good sheeple for giving people the benefit of the doubt?
"cynical". That's like calling the sky blue a "cynical" take. It should be obvious to anyone that has been paying attention for a while that this is exactly what is happening. Requires absolutely zero conspiracy mindset. You are either very young or don't pay attention whatsoever. Sorry about being blunt, but I'm tired of these pollyanna naive takes that it's "cynical" to suggest that corporations and government agents want to spy on you when it's obvious to my 8 year old that they are doing it. There have been hundreds of events and leaks indicating exactly this situation that made front-page news in major publications over the last couple decades. Where have you been?
I can't fathom someone writing this and not doing so in bad faith.
Microsoft is already selling analytics on Microsoft Office employee usage statistics to companies with Office site licenses. Selling analytics based on data gathered from Recall is a very short hop from what they are already doing.
If this was released out of the blue (and not on by default) back in maybe the Windows 7 era: sure, Microsoft is just putting some new untested feature out in the wild.
But Microsoft has made loud, clear, reputation-destroying moves in the last few years by putting ads into the BASE OPERATING SYSTEM. And also forcing online account linking into the BASE OPERATING SYSTEM. They are yelling out into the world that they can no longer be trusted because they don't understand what an operating system is supposed to be anymore. What kind of deep trust is required to be that layer in a computer.
Taking screenshots of everything a user sees, running it through image recognition, and cataloging all of it in a database is surveillance no matter what Microsoft currently intends to use the data for.
If intent mattered, police could have us all wiretapped without a warrant. They wouldn't be actively surveilling us for a specific case, so there's really no problem, right?
I don’t mean this to be rude, but wake up and smell the coffee already.
The reason why Silicon Valley has got to where it is with the complete erosion of user privacy is naive individuals not being able to see far in front of them. Recall isn’t just one event, it’s an accumulation of a thousand tiny events to the point where Microsoft are so up their own arses that they assumed this would be an easy hole in one. Because it usually is.
And they will just slip it in regardless. This is just a PR thing. Mark my words, Recall will be back with a new name and slipped in with an update at some point and it will be enabled without the user even wanting it. Or coerced out of the user. Microsoft want people’s data, whether for their own greed or because they’ve been asked to by the NSA. Regardless, Recall is coming, and the public will be naive about its true intentions. Microsoft will win this in the end.
I don't think that mistrust of tech companies is cynicism, especially not after we have seen them repeatedly prioritize profits over our privacy, including literally selling our privacy on the open market.
It's hard for me to imagine that Microsoft would implement a "watches everything you do" program if they didn't want to look at what it sees.
The entire internet, all of your personal information, every written text, and every photo uploaded to social media have been absorbed into these companies' AI models, and they are all clamoring to one-up each other. They are going to acquire as much data as they can get their hands on, and this software is a clear way to do it.
Even the AI features in MS Paint will send your data to Microsoft for "content safety", even though the model runs locally. They're already setting the scene for what they plan to do with Recall.
I think it fits reality.
The previous commenter was attributing malicious intent to Microsoft and other parties, but in the long run, I'm not sure that anyone's immediate intentions are particularly relevant.
My concern is much less about how the creators of these tools currently intend for them to be used, and much more about how they will end up being used regardless. Well-intentioned people have often created things that were viciously abused by ill-intentioned others later, or created things that had negative unintended consequences.
Please explain to me, because I keep failing to understand. How would Recall help me do anything I want to do on my PC?
Isn't that already the norm, or at least very very common? It's just a 3rd party package totally focused on surveillance, not built into the OS and used for some user-accessible features.
These applications would be novel, at least on a widespread basis in Western liberal democracies.
How? We already know Google trains its AI on people's private emails and Five Eyes conducts mass surveillance on Western citizens (see: Snowden). You can be sure that the people behind the PRISM program are salivating at the thought of access to the unencrypted Recall databases, and that they'll be twisting Microsoft's arm for backdoor access.
I think you're making the mistake of interpreting this as a binary thing, which obscures the difference between, for instance, tapping phone calls and installing bugs in every room of everyone's home (a la 1984's telescreens). Or in this case, Google scanning the emails you sent/stored on their servers vs. Microsoft storing and scanning every action you take on your PC.
It would be novel because most people outside a corporate environment don't have a keylogger/screen-recorder running on their system.
Source?
There are already Recall-type products on the market; not just that, they also work in the cloud, not just locally. All Microsoft had to do was make it opt-in instead of on by default.
Yes, these existing products are generally called RATs or spousal stalkerware.
No
I can't believe that no one there anticipated the blowback. It could just have been a way for Satya to put the feature in front of their business customers. They'd likely want that feature even if consumers reject it.
Employers can collect task/business-process steps by recording the screens.
This will help train RPA bots and reduce the need for a human workforce for repetitive tasks.
Microsoft can collect this data across industries with or without informed consent and sell RPA/AI bots back to the same enterprise customers as a managed service.
Lots of commercial potential there for the taking. It just needs an innocuous enough cover story to make it a default offering that serves you, the individual customer, alone and helps you gain an edge over your peers.
Not even that. It's still coming, under the same name, just not as soon for everyone.
There's a much more mundane read:
They invested a bunch of effort into a product the market loudly rejected.
They're now withdrawing the product while they figure out what they can salvage from the effort.
Key stakeholders may have a few ideas about how to proceed (ranging from "try again later" through "repurpose it" to "forget it"), but enterprises of Microsoft size make decisions very slowly so of course it's vague about what's next. Collectively, they almost certainly don't know!
In addition to direct market reaction, they must be a bit red in the face considering that Apple just laid out a complex and well thought out implementation of "AI", which focused on privacy.
As someone who grew up near Redmond, who still has an emotional soft-spot for Microsoft for some reason, I feel truly embarrassed for their implementation.
Of all three major OS vendors on the consumer market, Microsoft is still the one that pushes the most C and C++ into production on their OS, to the detriment of .NET, despite all the security discussions.
All the efforts from other teams to have .NET reach Swift, Java, or Kotlin levels of adoption on Windows have always hit a wall against WinDev culture.
Also the 90's spirit of features over security hasn't yet gone away from WinDev, so it isn't really surprising this turned out this way.
My personal feelings aside, Microsoft is Too Big to Suck like this, regarding security and privacy. At this point, their culture is a national security liability.
We have seen some recent efforts, but how does one right such a large ship?
https://www.theregister.com/2024/06/14/brad_smith_microsoft_...
https://www.theverge.com/2024/4/3/24119787/microsoft-cloud-e...
Your post could've been written in 2004, when Microsoft was pinky swearing it was gonna refocus on security-first development, starting with XP SP2
To be a bit fair, Windows security has gone from a laughing stock in 2004, to having Windows Defender in the 2020s. I ain't no city slickin' infosec guy, but Defender appears to be state of the art end point protection today.
They can figure this stuff out sometimes, right?
How did they get from Windows/AVG/ESET to Windows Defender, and how can they make that happen on Azure?
To me this seems like a different aspect of security. The push from the WinXP service packs onwards was to make it secure by default against the network (trying to be vague because I'll probably be wrong on the details). I'm fairly sure it was XP where you could be infected before setup was complete if the network was plugged in, and where acquiring third-party AV was something you must do for anything that touches the internet or media from a source you can't 100% trust. Now with Defender this is so far in the background that most users don't need to think about it at all.
The difference with Recall is the blast radius of any unauthorized/unintended access, which still happens even if it's less common, or happens via something like clicking a bad link in an email. That's in addition to mistrust of MS or large corporations sucking up data, and how secure they are (what would an Ashley Madison-type breach look like with Recall data?)
They did improve their story, with SAL introduced precisely for XP SP2, and for many years they had one of the few C++ standard libraries with bounds checking enabled by default in debug builds.
However, that was it: WinDev fought against Longhorn, the Office folks redid the .NET ideas in COM for Vista, and so on.
It's too bad that the rest of the "90's spirit" -- consistent, well-organized UIs, users controlling their own computers, and software that runs locally without dependence on cloud servers -- seems to be receding at Microsoft, leaving everyone with the worst of both worlds.
My suspicion is that Microsoft learned of Apple's effort, thus this rushed, skunkworks implementation, pushed to be released before Apple. The effort backfired spectacularly.
I worry that it's worse. They have been working on this for years, but I think that they may have assumed that their desktop market dominance was so sound, that they just didn't care to put the effort into privacy. What are you going to do, Linux Desktop?
This seems like the general attitude that delivers lackluster solutions across many products, like Teams, SharePoint, etc.
Intelligent search for your personal data is still a feature with broad appeal, and they're bound to come back with that.
The critical blunder was in indexing that personal data by watching over your shoulder, which is both creepy and low-effort. They've got to put the work in to find a better way.
But the market did not reject it - the OS and corresponding Copilot devices literally haven't launched yet.
Per one of the Ars Technica articles, all the information collected was stored locally, completely unencrypted, and was accessible by anyone with local administrator rights.
Never mind accessible to other users - it's accessible to any 3rd-party application that the user executes. A nightmare of a security hole.
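To make that concrete, here's a hypothetical sketch; the path, table, and column names below are invented, but the point stands that reading such a store requires no privileges beyond running as the logged-in user:

    # any program the user runs could do this, since the database is
    # just an unencrypted file owned by that user
    sqlite3 "$HOME/AppData/Local/Recall/capture.db" \
      "SELECT captured_at, window_title, ocr_text FROM snapshots LIMIT 10;"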
That's already true for every desktop application though. All third party programs can spy on all other programs and documents that user has available. This has been a seemingly criminally-overlooked shortcoming of desktop systems and this approach has fallen WAY behind current mobile security practices.
This is not true on macOS.
That is why "firejail" exists.
What if it was encrypted but the key needs to be present locally anyway? A key-under-the-mat situation? PIN on the back of the card case?
You're assuming Microsoft acts as a singular, cohesive entity, which like any company it is not.
You're right. Carolina Hernandez spearheaded this initiative to take screenshots of your desktop every few seconds or minutes and transcribe the result into a regular local SQLite database.
It's convenient for corporations to have this as an excuse, but they should be assessed as singular entities. They enjoy corporate personhood also.
As the size and influence of an entity increases, it has more power in the economy and therefore should have more responsibility, not less, to act according to high standards.
A gargantuan company that is 7% of the S&P 500 getting whoopsie-daisy passes because it is so large and nobody knows what it's doing is a dystopian situation that we should have incentives in place to discourage
People should not get over this (but probably will). There was an uproar (decades ago) about GMail "reading all your email". This was overblown, but Microsoft building the infrastructure to view a history of everything on your screen is much much worse. There's a lot more private things that get displayed on a screen (and of course all of your email would be a subset) that no one has a right to see.
Security is a mindset and some people don't have it.
I used to work for a company that made a rather popular database for mobile applications. An easy API to store data on your phone and have it synced to a server with no effort on the developer's part.
Two of my co-workers spent a few weeks making a nice looking chat application which worked by syncing messages from many users to different devices, and they wanted to publish it as a demo. Until somebody else pointed out that there was no security at all. The server just accepts the latest state from the client. This was fine for most of the current use cases, but for chat basically meant that any client could rewrite the entire history and the server would just say "thanks!" on next sync and distribute the changes to everyone else. These were adult humans with degrees from respectable institutions, and this hadn't crossed their minds at all.
Basically, I think a combination of Hanlon's razor and nobody wanting to be a naysayer is a perfectly adequate explanation for this Recall thing. I think it's obvious that a lot of people would like their computer to work like that, and I can see them wanting to get it out without having listened to any internal criticism (if they even have a culture that allows that).
I would argue there really weren't any, apart from the usual disaster/lack of security that desktop systems have.
It wasn't uploaded anywhere, so the only threat would be from programs that would run locally and steal it, which is already the same for any other (even third-party) program stealing your local files, which they have always been able to do.
Currently I am still looking forward to when the Secure Future Initiative (SFI) will actually mean more .NET and Rust, and less COM and C++ love from the Windows team.
So until this changes, take with a grain of salt how secure Recall is actually going to be.
Contrast this with Apple Intelligence, where not only are most local APIs made available via Swift, they have created special hardware and a unikernel-like OS with sandboxed layers, exposing only the OS capabilities required for AI processing and cluster communication.
Versus "Thrust us, we are going to do the right thing".
Or maybe they have to figure out how to actually make it work
My recollection is that the CEO stated there was no security problem with the product; security was their utmost, first, and topmost priority, all the time and into eternity; they wouldn't dare try to release anything with security concerns.
Apparently there are security concerns after all. Did they lie before, or now, or are they just completely clueless about what a security concern is, or what? I am confused.
Arguably the product itself. Which is another reason they might be vague about it. Because to talk about those security problems would taint the entire product and they can't do that if they aren't willing to completely scrap it.
People have been talking about how the data in here is similar to what may already exist, but that's far from the truth. Yes, these companies have a lot of data on us, but this is a significant step forward in the granularity of that data. It's also worth noting that hackers previously could not get into your computer and assume that it not only has a keylogger they can access to further compromise your system (and other systems/accounts) but that it can also hand them screenshots; now they can. This increases user risk significantly and greatly reduces the requisite technical skill needed for those infiltrating machines.
Similarly, many have pointed out the potential connections to Chat Control[0] and how such systems can likely be used by many companies to be exploitative of workers. While you may trust your company/partner/significant others/government and so on, it is important to remember that not everyone has such luxuries. It is also important to remember that such things can change. Even in the US there are high risks of potential abuse: such as police obtaining a warrant to get this data to see if someone is trying to obtain abortion medication. Regardless on where you fall on that specific issue, you can replace it with any other concerning issue and I'm sure you wouldn't like that (guns, religion, gender identity, political affiliations, and so on). So even if you trust Microsoft to not give away this type of information nor to provide authorities access (which often includes authorities not in your home country), then you must ask if the benefits are worth the costs. And not just for you, but for others.[1]
I suspect this is correct and, as segasaturn suggested, they turned up the heat too fast. I also suspect that this type of data invasion can be much more easily understood by the general public, who often struggle with understanding what metadata is and how it is/can be used. Understanding that requires technical knowledge and is often non-obvious, even for people who are well above average in technical literacy (as is the average HN user).
[0] Specifically we should note here that Chat Control would force Microsoft to use this system in a much more invasive way. We lambasted Apple over their proposal for CSAM detection, including the potential risks of abuse even if hash collisions were theoretically impossible. Having Recall would require Microsoft to implement such a system, and that's why there are many conspiracies arising that Recall is specifically intended for Chat Control; because, true or not, it would likely have similar outcomes. We'll see if Apple revisits the idea, and the recent WWDC doesn't rule out such a possibility https://www.patrick-breyer.de/en/posts/chat-control/
[1] https://www.youtube.com/watch?v=goQ4ii-zBMw
The specific security problem was that their enterprise customers said no, and not in a 'no thanks' way, but a more vehement 'no fucking way', way.
They could conceivably push to SOHO users, but a) there's no revenue there (and this stuff is expensive), and b) it's really bad optics.
"We're going to offer you a feature that your workplace refused to run on their network."
I'm sure there's ways to spin that, but it'd be a challenge.
They're totally waiting for the negative press to die down, then they'll try again.
"It sounds to me like they're figuring out a new marketing approach, or they're softening the blow by "listening to users" and then rolling out more slowly, when outrage has dies down ad people will just accept it."
Of course "listening to users" really means "listening in on users". Or just "bad press".
Microsoft does not consult with users before adding code into Windows. Nor do users contact Microsoft to tell the company what code they want or don't want.
Even if they did, the company does not operate based on user suggestions.
The reaction to "Recall" by journalists, bloggers and commenters is not that they think it should be "delayed". They think it is a bad idea.
Microsoft will do as it pleases. As it always has done.