After reading the article, and especially the remarks about this engine being copy-pasted from the Xbox DRM engine, does anyone still believe that Pluton, also copy-pasted from the Xbox, is about end-user security? And not entirely about MS finally having enforceable DRM on PCs?
Oh, and by the way, Pluton is now in the latest batch of Intel laptop chips, and has been in AMD's for a while. How soon until Windows requires it?
People have been saying that for more than 10 years now, since the TPM was introduced.
Yet you can still install Linux on PCs sold with Windows, you can still install third-party software on Windows that isn't from a Store, and you can still watch pirated movies downloaded from torrents.
You can even run an unregistered/unpaid copy of Windows if you don't mind that it won't let you change the desktop background image.
Or you can recognize that app/game developers are starting to require Secure Boot enforcement if you want to continue to use their apps or play their games.
Riot requires users to enable TPM-enforced Secure Boot starting with Windows 11 to play Valorant: https://support-valorant.riotgames.com/hc/en-us/articles/100...
Let me tell you a secret: it's because the gamers are demanding that. The game companies couldn't care less if there are cheaters in the game, but it's the players who put huge pressure on the game companies to detect and ban cheaters.
Citation needed.
Who are these gamers? I certainly didn't ask for this, nor did any of the gamers I know, and I haven't seen any such demand on gaming forums.
The jump from this to "requiring TPM" is quite a long one.
Go on Steam and look at the recent reviews for older but still popular FPS games. Gamers complain about cheaters constantly and will negatively review games because of it.
They're demanding a way to handle or ban cheaters, not requiring a TPM; that's a non sequitur.
There is no technical way to prevent cheating in advance without secure boot. Gamers aren't saying they want lots of cheaters who eventually get banned; they are saying they want to play games without cheaters.
I'm not really sure I buy this. I can't give a way that guarantees no cheating, but I know that, for example, games like Genshin Impact run almost all the code (damage calculation etc.) server-side. Perhaps something that's an extension of GeForce Now might be the best "anti-cheat", technically speaking.
To run anti-cheat in that way, you need all game mechanics to be run server-side, and you need to not let the client ever know about something the player should not know - e.g. in a first-person shooter you need to run visibility and occlusion on the server too! Otherwise the cheating will take the form of seeing through walls and the like. This is going to boost the cost of the servers and probably any game subscription, and might lead to bandwidth or latency problems for players - just to avoid running any calculation that is relevant to game balance on player hardware.
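To make that concrete, here is a minimal sketch (purely illustrative, not any real engine's code) of the server-side gating described above: the server only replicates an opponent's position to a client when a crude line-of-sight test says that client could see them. Walls are modeled as circular obstacles, and the `tolerance` knob stands in for the latency slack discussed further down the thread.

```python
import math

def point_segment_distance(px, py, ax, ay, bx, by):
    """Distance from point P to segment AB."""
    abx, aby = bx - ax, by - ay
    apx, apy = px - ax, py - ay
    ab_len2 = abx * abx + aby * aby
    t = 0.0 if ab_len2 == 0 else max(0.0, min(1.0, (apx * abx + apy * aby) / ab_len2))
    cx, cy = ax + t * abx, ay + t * aby  # closest point on the segment to P
    return math.hypot(px - cx, py - cy)

def visible(viewer, target, obstacles, tolerance=0.0):
    """True if no circular obstacle blocks the viewer-target line.
    `tolerance` shrinks the effective obstacle so players near a corner
    don't pop in late -- exactly the slack a wallhack can exploit."""
    for (ox, oy, radius) in obstacles:
        if point_segment_distance(ox, oy, *viewer, *target) < radius - tolerance:
            return False
    return True

def snapshot_for(client_pos, others, obstacles):
    """Per-client update: only the positions the server is willing to reveal."""
    return [p for p in others if visible(client_pos, p, obstacles, tolerance=0.5)]

# One wall of radius 2 sits between the viewer and the enemy at (10, 0),
# so only the enemy at (0, 5) is sent to this client.
print(snapshot_for((0, 0), [(10, 0), (0, 5)], obstacles=[(5.0, 0.0, 2.0)]))
```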
Well yeah, that's the correct way to run a server: don't send information you don't want the user to get.
But as you are pointing out, forcing client-side intrusive anti-cheat is cheaper, so this has nothing to do with preventing cheating; it's about reducing cost.
The end state of your argument is the game runs entirely on hosted hardware and you pay for a license to stream the final rendered output to your monitor. This is already happening. Soon games won’t be able to be “bought” at all, you’ll just pay the server a number of dollars per hour for the privilege of them letting you use their hardware.
You will own nothing and like it.
Making occlusion calculations server-side during multiplayer has nothing to do with "owning" a game or not.
You can even do this calculation on a community-run private server.
If all surfaces are fully opaque, maybe. The second particle effects and volumetric effects and all sorts of advanced techniques play a role in actual gameplay, no. And that’s only for this one type of cheating.
It's not just about cost. Theoretically yes, you shouldn't send information that you don't want users to get and abuse. However, in the context of games, this is not always possible because most games are realtime and need to tolerate network latency. There is no perfect solution - there will always be tradeoffs.
Ideally player A shouldn't be networked to player B if there is a wall between them, but what happens when they're at the edge of the wall? You don't want them to pop in, so you need some tolerance. But having that tolerance also allows cheaters to see players through walls near edges. Or your game design might require you to hear sounds on the other side of the wall (footsteps, gunshots, etc.), which allows cheats to infer what may be behind the wall better than a person could.
Yes, and you cannot prevent this except in an in-person tournament.
Any output sent to the player, even a faint audio cue, can be analyzed and used to trigger an action or display an overlay on the screen, and no amount of kernel-level stuff will prevent that, because you can do it outside the computer running the game.
Back in my day we all played on private, community-run servers where you could easily vote to kick/ban folks, the server owner was your buddy, or you played with people you trust.
Now everything is matchmaking, private servers, and live services, and that sense of community is gone.
Why isn't it still like that? Don't players want small communities?
It's very hard to gather full teams (usually 10 people) in small communities. Public matchmaking gives you the opportunity to start a game within a minute of clicking "play", regardless of how many people you have at hand right now.
Small communities still exist, it's just that vacant places are now filled with strangers.
A lot of things happened: 6th-gen consoles started a new way of using online games (no keyboard, no third-party chat/voice, no group chat outside the game, no private servers), then the industry pivoted away from private servers to have more control over their games, then the whole F2P economy and then GaaS took any agency out of players' hands.
There's no way secure boot totally prevents cheating, either. It just moves the goalposts a little; cheating will always be possible.
The goalposts just need to be moved further than is economically interesting for cheaters in general to reach.
Perhaps secure boot by itself isn't enough, but I would imagine it would be a relatively large bump when combined with a kernel-level anti-cheat. I presume such anti-cheats would, e.g., disable debugger access to the game's memory or otherwise debugging it, as well as accessing the screen contents of the game or sending it artificial inputs.
What vectors remain? I guess at least finding bugs in the game, network traffic analysis, attempting MitM, capturing or even modifying actual data in the DRAM chips, using USB devices controlled by an external device that sees the game via a camera or HDMI capture... All of these can be plugged or require big efforts to make use of.
You cannot "prevent" cheating, you can at best mitigate it, it's a balance.
There plenty of way to mitigate cheating in game, but the game industry is focusing on the ones where they don't bear the cost and only the customer will (and this view is in part due to the model of F2P games, where banning cheater is useless as it doesn't cost them anything to create a new account).
Letting game developer having complete control and spying on the device playing the game is fine in a physical tournament were they provide the device, but it's insanity when it's the user own device in its home.
You're being disingenuous here, or just missing the point. The point being made was that gamers are demanding game developers stop cheaters... and that secure boot (and related ways to lock down the computer) is one of the primary tools they know to use to do that.
That's akin to saying that, because people want security on the street, a mandatory strip search as soon as you exit your home is fair game.
Asking for a result doesn't give a blank check for every measure taken toward that result.
I agree, but it doesn't change the fact that it's one of the primary reasons they're doing it. And "strip searches on the street" may not happen, but "Stop and Frisk" certainly is/was. And it was very much done because people were complaining about crime and safety. And it was done regardless of whether or not it was right, or effective, or even legal.
Cheating in online games (especially ones that are free) is so absurdly rampant and disruptive that you can sell gamers just about anything if it can meaningfully deter cheaters. Every now and then a Youtuber will say “kernel level anti-cheat is bad for [reasons]” and gamers will pretend to care about it until the video leaves the “For You” page.
Because a rootkit is the only way to do anti-cheat? The CS2 ban waves beg to differ.
I haven't played Valorant, so I don't know about them, but what I can say is that other anti-cheats are definitely highly ineffective (VAC being one), with blatant cheaters going years without ever being caught.
Hell, blatant cheaters literally stream themselves cheating, and their own communities do not recognize the cheating until the streamer makes a mistake and selects the wrong scene. This also means that VAC's method of sending footage to random players is ineffective, as some streamers who are very obviously cheating do so in front of tens of thousands of people, and those people do not recognize the obvious cheating happening.
We also know game companies don't care about cheating, as Activision admitted in their lawsuit that they leave cheaters on a safe list so long as the cheaters have any semblance of a streaming audience.
That is absolutely wild, and completely characteristic of Activision.
Do you have a link that I can share with my CoD-playing friends?
I personally stopped playing CS because my friends started using an alt launcher to avoid cheaters, which added a whole layer of complication that made the game undesirable. Ban waves aren't perfect, but in my limited experience cheaters weren't that rampant; in others' experience it became intolerable.
That's not the gamers asking, though. In this instance they're being taken advantage of because they have misaligned priorities, and are being sold an over-the-top solution they don't need. You can still detect process injection, memory injection, sketchy inputs, HID fuckery, DRM cracking, host emulation and input macros without ever going kernel-level.
Truth be told, if the exploiter-class of your game would even consider a kernel-level exploit, your game is fucked from the start. Seriously, go Google "valorant cheating tool" and your results page will get flooded with options. You cannot pretend like it's entirely the audience's fault when there are axiomatically better ways to do anticheat that developers actively ignore.
Gamers aren't demanding this. There are tons of ways to detect cheaters, the most effective one being human moderation. But no, companies won't do MaNuAl WoRk because it doesn't sCaLe, even though they have more than enough cash in the bank.
How do you do manual moderation on a massive fast-paced game like Valorant? It’s correct, that doesn’t scale
maybe not manual ... but ... log behavior, find outliers, make outliers play with outliers only
This absolutely happens already. The problem with finding statistical outliers is that plenty of legitimate players are outliers too. And if you're banning/segregating players for being outliers, you get a very angry player base.
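A toy sketch of that problem (all numbers invented): a plain z-score test over something like headshot rate only measures "far from average", which describes both an aimbot and an unusually good honest player, so both get flagged.

```python
# Assumed population statistics, for illustration only -- not real data.
POPULATION_MEAN_HS = 0.20   # average headshot rate across the player base
POPULATION_STDEV_HS = 0.05  # spread of that rate

players = {
    "average_joe":   0.21,
    "legit_prodigy": 0.55,  # honest, just very good
    "aimbot_user":   0.62,  # cheating
}

for name, rate in players.items():
    z = (rate - POPULATION_MEAN_HS) / POPULATION_STDEV_HS
    verdict = "flagged as outlier" if z > 3.0 else "looks normal"
    print(f"{name}: z = {z:+.1f} -> {verdict}")
```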
Riot has a pretty in-depth blog post about their anti-cheat systems; they've had years to mature them on some of the most demanding competitive gaming platforms ever made. Requiring players to install kernel anti-cheat was very far down the list of possible solutions, but that's what it came to. It was either this or stop being free to play.
The server is all-seeing: if there is no way for the server to discriminate a cheater from other players, then no player can possibly know there's a cheater on the server. So either complaining about cheating is irrational, or the server-side detection is severely flawed.
It's impossible to tell in-game if a baseball player is using steroids, yet there's a laundry list of banned substances and players who got banned for taking them because the MLB believes it gives them an unfair advantage. It's called competitive integrity.
Since it sounds like you don't play games, at least not competitively, I'll clarify that "cheating" in this case isn't the obvious stuff like "my gun does 100x damage" or "I move around at 100mph" or "I'm using custom player models with big spikes so I know everyone's location" that you would've seen on public Counter-Strike 1.6 servers in 2002. Cheating is aim assistance that nudges your cursor to compensate for spray patterns in CS, it's automatic DPs and throw breaks in Street Fighter 6 that are just at the threshold of human reaction timing, it's firing off skillshots in League of Legends with an overlay that says if it's going to kill the enemy player or not. All of this stuff is doable by a sufficiently skilled/lucky human, but not with the level of consistency you get from cheating.
That's meatspace, not a video game, but we could go there and say caffeine or Adderall use is cheating, thus making anti-cheat a little more invasive…
And there's another difference: you're referring to professional sports. I have no problem with invasive anti-cheat for professional gamers, even better if the gaming device is provided by the tournament organizers.
But we're talking about anti-cheat applied to all players, akin to asking people playing catch in their garden or playing baseball for fun at the local park to give a blood sample for a drug test.
That's the point: for the other players there's no difference between playing against a cheater and playing against a better player. Any Elo-based matchmaking will solve this; cheaters will end up playing against each other or against very skilled players.
You could argue that they could create new accounts or purposely tank their Elo rating, but that's the exact same problem as smurfing.
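For reference, the standard Elo update behind that claim (the formula itself is public; the little win-streak simulation below is only illustrative): a player who wins far more often than their rating predicts climbs quickly, so rating-based matchmaking ends up pairing them with other cheaters or very strong players rather than with average ones.

```python
def expected_score(r_a, r_b):
    """Probability the Elo model assigns to player A beating player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a, r_b, score_a, k=32):
    """score_a is 1 for a win, 0 for a loss, 0.5 for a draw."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

cheater = 1000.0
for _ in range(50):
    opponent = cheater            # matchmaking serves similarly rated opponents
    cheater = update(cheater, opponent, score_a=1.0)  # the cheater keeps winning

print(f"cheater's rating after 50 straight wins: {cheater:.0f}")  # ~1800
```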
Many games have ranked ladders now which are taken fairly seriously. Success at high levels of the ladder often translates into career opportunities, especially in League of Legends.
Well, first, you're wrong, because cheating only makes them good at one part of the game, not every part of the game. e.g. in League of Legends, a scripting Xerath or Karthus who hits every skillshot is going to win laning phase hard. However, scripting isn't going to help if they have bad macro and end up caught out in the middle of the game, causing their team to lose. Most cheaters don't end up at the top of the ladder, they end up firmly in the upper-middle.
Secondly, you're basically saying "cheating is OK because they'll end up at the top of the ladder." You don't realize how ridiculous this sounds?
Third, ranked and competition aside, playing against someone who's cheating isn't fun, even if you end up winning because they make mistakes that their cheats can't help them with.
You don't play competitive games, that's fine, but a lot of people do and they demand more competitive integrity than casual players.
One small difference: I don't play competitive games with complete strangers on company-run servers.
I've played competitively on community-based servers, with people screened by other players and the community able to regulate itself (ban or unban players).
The problem space is vastly different; you don't need intrusive ring-0 anti-cheat for this.
The whole kernel-level anti-cheat thing is a poor solution to a problem the developers made for themselves: they wanted to be the ones in charge of the game and the servers, so they needed to slash the need for human moderation. They also wanted to create a single pool of players and didn't want the community to split up and play how it wants.
There are cheaters even on consoles, which are vastly more locked down than a PC.
Those technical shenanigans clearly aren't working; be ready to be disappointed if you thought a TPM would help against cheaters. Cheaters always find a way. What those games need is proper moderation.
Yes that does cost money but that's the only known thing that works in the long run.
This seems like the old “any imperfect solution is no better than doing nothing” argument. Moderation is expensive, hard to scale, and can only address problems after other users have bad experiences.
It’s like saying seatbelts are useless because some people still get hurt, so instead of seatbelts we need a lot more ambulances and hospitals.
Like any complex system, games have a funnel. These technical measures reduce (but not to zero) the number of cheaters. Then moderation can be more effective operating against a smaller population with a lower percentage of abuse.
Since technical measures like the TPM are very heavy-handed, better evidence is needed that they reduce the number of cheaters; personally, I don't buy it.
On the other hand, all the games/servers I've seen that are successful against cheaters have very good moderation.
Just look at Valorant vs. Counter-Strike. Similar levels of popularity, similar kinds of cheat concepts. One has a kernel-level anti-cheat and has few cheaters; one doesn't and is overrun by cheaters.
Look at Counter-Strike with regular VAC-based matchmaking and then with kernel-level anti-cheat on FACEIT. One is overrun with cheaters and one isn't. It's the same game.
Isn't this the argument used against non-kernel-level anti-cheat and server-side anti-cheat in the first place?
Alternatively, it's like saying poisoning your customers is a bad way to reduce complaints, because some of them survive. Matter of perspective.
TPM security is broken on a lot of motherboards too.
Gamers don't want cheaters, but gamers also don't want malware. Some people won't care, others will care. The real problem is that publishers don't give anybody a choice on this. They sneak these invasive anti-piracy measures into their games without asking since they don't want to fragment their player base.
The reasonable, fair, common-sense pro-consumer thing to do is to split the online play in two: a non-anticheat server and an anti-cheat server. Players can opt-in to installing a rootkit/sharing their SSN/whatever if they want to play on the hardened server. This costs nothing, and makes all types of gamers happy.
But doing this has less upside for the publisher than forcing anti-cheat on everyone. The only risk is that they might get dragged through the mud by a handful of influencers peddling impotent rage to viewers who are just looking for background noise while sleepwalking on their Temu dopamine treadmill live service of the month.
This is a very good point! And I'd like to point out that there is an analogue to the problem of smurfing in online video games, and the corresponding solution, which is to require semi-unique ID to play (e.g. a phone number which can only be tied to one account at a time with a cool-off period when transferring between accounts). Valve does this for Dota 2, and smurfing is far, far less common than it is in League of Legends.
Some League players complain that they don't want to give their phone number to Riot (which is entirely reasonable given that it's a subsidiary of Tencent), but if enough people don't want that, then Riot could simply split the ranked queue into two: one where (soft, ie phone #) identity verification is required, and one where it isn't.
Riot won't do this, though, not because it wouldn't fix the problem (it would, as demonstrated by Valve), but because they profit from smurf accounts buying skins.
But it allows Windows 10 without TPM.
And why is that? It isn't for DRM (the game is free). It is for anti-cheat, and it is great.
The libertarian-maximalist, I-can-do-what-I-want-with-my-computer crowd ignores the many use cases where I want to trust something about someone else's computer, and trusted computing enables those use cases.
How is it great? Vanguard is extremely invasive; because it has kernel access, you have to relinquish your PC to this Chinese-owned company at all times (whether you're playing the game or not) and just trust in their good faith.
And for what? Cheaters are more rampant than ever, now that they have moved to DMA type cheats, which can't (and never will) be detected by Vanguard.
So you give away complete control of your PC to play a game with as many cheaters as any other game. I wouldn't call that "great".
I don’t think you can make the argument that the amount of cheaters using DMA is “just as many” as in a game with a less restrictive anti cheat, allowing cheaters to simply download a program off the internet and run it to acquire cheats. The accessibility of DMA cheats is meaningfully reduced to the point that I would guess (only conjecture here, sorry) the amount of cheaters is orders of magnitude less in an otherwise equivalent comparison.
Now, the amount of DMA cheaters may still be unacceptably high, but that’s a different statement than “the same amount as”.
So, it’s not “giving up something for nothing”, it’s giving up something for something, whether that something is adequate for the trade offs required will of course be subjective.
I don’t know, the number of cheaters appears to be non-zero and present enough in my games. Why give any random game studio kernel level access to anything? There are absolutely server-side solutions, likely cheaper solutions because the licensing fees for the anti-cheat software aren’t cheap.
We gave up something real. But it has not been proven whether we got anything. Maybe we got nothing, maybe we stopped a few of the laziest cheaters, but we still see tons of cheaters. The number of possible cheaters is based off the quality of the software. No amount of aftermarket software will magically improve the quality of your game in a way that 100% deters cheaters. I’m positive that their marketing claims they reduce cheaters by an order of magnitude, but I have not observed them successfully catching cheaters with these tools.
Yeah, valid point.
You're right, a game with no anti-cheat or a bad one will have more cheaters. But as you said, it's about the tradeoff, and that's what isn't "great". It was for a period of two years or so, since the tradeoff was "lose all control of your PC by installing a rootkit, play a game completely free of cheats", which was compelling, but now that the game isn't sterile anymore it's hardly worth it, at least for me.
Is it so radical to want to be in control of your stuff? What are these use cases where we need to have third parties in control?
I don't really buy the gaming one, in every other domain where a community of people are gathering to do a thing they enjoy together it's on the community and not the tool maker to figure out how to avoid bad behavior. If you don't wanna play with cheaters then just play with somebody else.
People who are concerned about this should realize: Microsoft will never create a situation where alternative operating systems can't be installed. They already went through the antitrust wringer on that issue. They don't even control what hardware vendors do for the most part.
This requirement will only hit multiplayer games where cheating and security threats are rampant.
Also, if you have a PC with secure boot enabled, there are popular Linux distributions like Ubuntu that ship a bootloader signed with a Microsoft-trusted key. Or you can enroll your own signing key in the firmware, depending on your hardware. And of course, most commercially available PCs will let you disable secure boot entirely.
(Most multiplayer games with anti-cheat software don’t really work on Linux anyway.)
They have shipped ARM Surfaces where alternative operating systems could not be installed, enforced with Secure Boot permanently on. Have they been through any such "antitrust wringer" in the past 10 years?
Note that there's one key MS uses for Windows and one key they use for everything else. They actually advise OEMs not to install this second key by default ("Secured-Core" PCs), and some vendors have followed the advice, such as Lenovo, resulting in yet another hoop to jump through to install non-MS OSes.
Even recently, a Windows update added a number of Linux distributions to the Secure Boot blacklist, resulting in working dual-boot systems being suddenly crippled. Of course, even _ancient_ MS OSes are never going to be blacklisted.
True, 3rd party not trusted by default is a "Secured-Core PC" requirement, but so is the BIOS option for enabling that trust [0]. On my "Secured-Core" ARM ThinkPad T14s it's a simple toggle option.
Actually they are in the process of blacklisting their currently used 2011 Windows certificate, i.e. the Microsoft cert installed on every pre-~2024 machine, also invalidating all Windows boot media not explicitly created with the new cert. It's a manually initiated process for now, with an automatic rollout coming later [1].
It'll be very interesting to watch how well that's going to work on such a massive scale. :)
[0] https://learn.microsoft.com/en-us/windows-hardware/design/de...
[1] https://support.microsoft.com/en-us/topic/kb5025885-how-to-m...
As I said, yet another increase in the number of hoops, for no reason.
Before you say anything else: until this, you could install _signed_ Linux distributions without even knowing how to enter your computer's firmware setup. Now you can't.
The trend is obviously there. First, MS forced Linux distributions to go through arbitrary "security" hoops in order to be signed. Then, MS arbitrarily altered the deal anyway. Even mjg59 ranted about this. And the only recourse MS offers to Linux distributions is to pray MS doesn't alter the deal any further.
Maybe they will never make it impossible on x86 PCs, but they just have to keep making it scary enough. And in the meantime they keep advertising how WSL fits all your Linux-desktop computing needs, while at the same time claiming they have nothing against open source.
No, they are NOT in the process, and that is precisely what I was referring to. They have not even announced when they are going to even start doing the process. All you quoted is instructions to do it manually. So I'll believe it when I see it.
And besides, just clearing the CMOS is likely to get you a nice ancient DBX containing only some GRUB hashes, and the Windows MS signature in DB. Not so much luck for the MS UEFI CA signature, as discussed above. So "recovery" will be trivial for Windows, not so much for anyone else.
You can in fact disable secure boot on the ARM Surfaces.
The problem is that nobody has really put enough effort into porting Linux to them. Some people started but haven't gotten very far:
https://github.com/orgs/linux-surface/projects/1 https://github.com/linux-surface/aarch64-firmware https://github.com/linux-surface/aarch64-packages
It was due to a bug and/or not being able to detect all manners of dual boot correctly.
The goal was not to blacklist old distros, it was to blacklist vulnerable boot managers
Microsoft's response and fixes were provided: https://learn.microsoft.com/en-us/windows/release-health/sta...
If it's software your job requires, that's one thing. But games? Just play different games, or get a different hobby. You have a choice so exercise it.
First they came for the socialists, and I did not speak out—
Because I was not a socialist.
Then they came for the trade unionists, and I did not speak out—
Because I was not a trade unionist.
Then they came for the Jews, and I did not speak out—
Because I was not a Jew.
Then they came for me—and there was no one left to speak for me.
Financially supporting games which do a thing you disapprove of is so counterproductive it defies rational explanation. You aren't "speaking out", you're joining the party and paying membership dues. How could you get so twisted around? Brain damage, that must be it.
Software doesn't require it so far because these devices are "uncommon" (for example, not on server hardware, not usually virtualized).
But guess what is happening now that MS requires a TPM for Windows? All virtualizers now have some TPM support. The time will come.
And Windows PCs are still not safe.
So either way it fails its purpose.
Most Windows PCs have Secure Boot enabled, and many have their drives encrypted with BitLocker.
What does that do for me to stop malware? BitLocker only protects an offline system.
Also consider that some keys for Secure Boot have been compromised.
So I guess then your computer does not have a form of Secure Boot enabled, and your drives are not encrypted. Makes sense, more secure.
I’m using Linux and LUKS but have never been convinced Secure Boot adds anything for me. It does sometimes add extra steps though, or block a driver from loading.
LUKS also only protects an offline system. So why are you using it?
Oh, I think I know: if you're on Windows it's bad to use BitLocker because it's made by Microsoft and it doesn't protect against malware, but if you're on Linux of course you use LUKS, it's the sensible thing to do. Got it.
Back in my retail computer technician and sales days, it wasn’t uncommon for somebody to lose their Bitlocker keys, and encryption did what it was designed to do - make the data unreadable without them. Sometimes they didn’t even understand what they enabled.
To that customer, Bitlocker itself was a threat.
In my small sample size, I’ve seen that more often than lost laptops. I’ve also seen many more malware infections.
Tying encryption to the TPM, which is the default, makes it easier to lose those keys. With LUKS I choose my own password.
It’s an important implementation difference, especially if it is going to do it by default. Warning a person “you will lose all data if you don’t write this down” in big bold red text is sometimes not enough.
Does tying those keys to your MS account fix that failure mode?
Yes. BitLocker recovery keys are escrowed to the Microsoft account. I've relied on this to recover data from a family member's PC when it failed and they had unknowingly opted in to BitLocker (a Microsoft Surface Laptop running Windows 10 in S Mode).
Which then opens the door to other attack vectors, even government.
I’d imagine most people would like some insurance in the event of loss or theft, but are not worried about government.
I’m vulnerable to the $8 wrench attack, but I enjoy knowing it’s only a VISA problem if I leave a laptop on the bus.
I'm genuinely curious to know how VISA helps (or doesn't) in your analogy - what is a 'VISA problem'?
VISA as in the credit card not a travel permit
Mostly a joke, but I swipe a card and the problem goes away. No need to worry anymore.
I mention that only because it's one avenue. I figured obviously on a place like Hacker News that malicious agents aside from government could also compromise the security of 3rd party-held keys; as always security is a matter of difficult tradeoffs and anticipated threat categories.
As opposed to just not encrypting their data at all and letting everyone who ends up with the drive have their data.
So one scenario, everyone can access the data if they get the drive. The other, the government might get Microsoft to release the encryption keys.
You are presenting a false dilemma where either Bitlocker is in use or the drive is entirely unencrypted; there are other ways to ensure data integrity in the face of physical compromise.
1. It's not a false dilemma; it's more a question of how to handle the "average Joe" user who doesn't know how to store encryption keys. I don't like how this automatic encryption is implemented, by the way, but sending the keys to MS servers is not the worst idea ever.
2. BitLocker can totally be used without an MS account, without sending keys anywhere, and without a TPM... but seeing how most people fail to RTFM, we're back to point 1.
The point is that Linux doesn't mandate useless hardware that, on top of that, could be used against the user.
Same with MS's recall feature.
A Windows PC is just C but not P anymore.
Secure Boot makes persisting malware in the kernel fairly difficult, which IMHO made sense coming from Windows 7, where driver rootkits and bootkits were trivial. With today's main threat model being encryption malware, I would agree that it doesn't add all that much for most people.
It really doesn't prevent anything like that, not even remotely. First, to do any type of persistence that would be detected by Secure Boot, you already require unencrypted, block-level access to the disk drive, possibly even to partitions outside the system drive. There are a gazillion other ways malware can persist if you already have that level of access, and none would be detected by Secure Boot. If you were able to tamper with the kernel enough to do this in the first place, you can likely do it on each boot even if launched from a "plain old" service.
More accurately, unbreakable security as enabled by hardware TPMs also enables unbreakable vendor lock-in like we have with iOS. Pick your poison.
People will keep saying it, because that ratchet only seems to go one way. Consumer access to general purpose computing is something we take for granted, but every year it seems like there's a bit less of it, and once we lose it we will never get it back.
For now. The cogs will turn slowly towards our demise.
For now. It's not ubiquitous enough yet. Games are already starting to require secure boot, the rest will follow in a few years.
I never did. The worst part is explaining it to people drinking the MS Kool-Aid. I'm an MS admin, so people at work love Win11, Intune, etc., all that max-lockdown shit. To me that's not what Windows is about; for me Windows is excellent because of the admin tools and backwards compatibility. But hey, that's just me.
Pluton will be another TPM thing: introduce it, wait 5 years, then mandate it. They have time.
Another TPM thing? What problem do you have with the TPM?
The TPM endgame is to have identity tied to the device on PCs, just like the monopolies already have on Android and iOS.
You know how Google and Apple dropped actual TOTP second factors for their own accounts and force you to sign in on another device to confirm sign-ins on new devices? Same thing.
Apple has SMS if you don’t own an Apple device. In fact, they require SMS to set up 2FA.
They probably dropped TOTP because non-technical people can’t figure it out.
SMS is not really great.
SMS is trivially exploitable. It has negative security value.
Trivially? How?
You can use FIDO2 keys as 2nd factor for Apple accounts now
It being a Win11 requirement. It failing and triggering BitLocker recovery on our machines. It's just shit :) No, I don't have another solution. Let me complain.
What garbage hardware are you running where TPM is failing?
The UEFI updates that Lenovo kept pushing through Windows Update on their shiny new X13s, with the Snapdragon and the Pluton chip in it, kept tripping BitLocker on every update.
So, uh... Lenovo?
Hundreds of millions of perfectly good PCs are going to be end-of-life because of this.
No, not end of life; end of Microsoft.
The TPM thing that got hacked the other day?
There are more of us out there!
There are literally dozens of us!
I may be naive, but I still do. Skepticism is warranted, yet outright dismissal based on conjecture is its own brand of fallacious reasoning. Can Microsoft potentially benefit? Certainly. But that doesn't negate the possibility of genuine security motivations and benefits for end users.
It's important to ask which of those motivations will allow them to lock users down and charge ongoing rent. One of the two will, and that's what will always drive the decision.