Obviously people might screw up, but the spec included a way to revoke any signed components that turned out not to be trustworthy.
"trustworthy" according to who? Remember that dystopia does not appear spontaneously, but steadily advances little-by-little.
What's the summary? Microsoft (understandably) didn't want it to be possible to attack Windows by using a vulnerable version of grub that could be tricked into executing arbitrary code and then introduce a bootkit into the Windows kernel during boot. Microsoft did this by pushing a Windows Update that updated the SBAT variable to indicate that known-vulnerable versions of grub shouldn't be allowed to boot on those systems.
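If you're curious what that looks like on a shim-based Linux system, the revocation level lives in a UEFI variable. A sketch (the variable name and GUID here are from shim's docs as I remember them, so double-check on your own machine):

    # dump the current SBAT revocation level; 605dab50-... is shim's vendor GUID
    hexdump -C /sys/firmware/efi/efivars/SbatLevel-605dab50-e046-4300-abb6-3dd810dd8b23
    # newer mokutil builds can also manage the policy, e.g. opt into the latest revocations:
    sudo mokutil --set-sbat-policy latest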
Who is Microsoft to decide what others do on their machines? Should they have the right to police and censor software they have no control of? In the spirit of Linus Torvalds: Microsoft, fuck you!
We are seeing the scenario Stallman alluded to over 2 decades ago slowly become a reality. He wasn't alone either.
https://www.gnu.org/philosophy/right-to-read.en.html
https://www.cl.cam.ac.uk/~rja14/tcpa-faq.html
Things like TPM and "secure" boot were never envisioned for the interests of the user. The fact that it incidentally protects against 3rd party attacks just happened to be a good marketing point.
"Those who give up freedom for security deserve neither."
The alternative dystopia is one where the NSA can grab your laptop, rip out the storage, write some code into the boot chain, put the storage back, leave, and you have no evidence to know who did that.
Signed code fixes this by requiring someone actually put their name to the code. If it's not someone I recognize, I don't boot. And yes, the NSA could theoretically compromise a signing key with a $5 wrench. But then they blow their cover. Signatures create a paper trail that makes plausible deniability vaporize.
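Concretely, you can look at who signed a boot binary before deciding to trust it. A sketch assuming sbsigntools is installed; the ESP path and the certificate file name are mine and will differ on your system:

    # list the signatures embedded in the boot binary on the EFI system partition
    sbverify --list /boot/efi/EFI/BOOT/BOOTX64.EFI
    # verify it against a certificate you actually recognize
    sbverify --cert some-ca-i-trust.pem /boot/efi/EFI/BOOT/BOOTX64.EFI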
I mean, can you actually protect against the NSA? After Stuxnet, I fully trust that nation-state actors can infect whatever they put their minds to. I'd rather at least have control over my machine.
If your adversary is a nation state, you've already lost.
Which gives me another opportunity to quote from my favourite Usenix paper:
"In the real world, threat models are much simpler (see Figure 1). Basically, you’re either dealing with Mossad or not-Mossad. If your adversary is not-Mossad, then you’ll probably be fine if you pick a good password and don’t respond to emails from ChEaPestPAiNPi11s@ virus-basket.biz.ru. If your adversary is the Mossad, YOU’RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone, and when you die of tumors filled with tumors, they’re going to hold a press conference and say “It wasn’t us” as they wear t-shirts that say “IT WAS DEFINITELY US,” and then they’re going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them. "
Figure 1:
Threat: Ex-girlfriend/boyfriend breaking into your email account and publicly releasing your correspondence with the My Little Pony fan club
Solution: Strong passwords
Threat: Organized criminals breaking into your email account and sending spam using your identity
Solution: Strong passwords + common sense (don’t click on unsolicited herbal Viagra ads that result in keyloggers and sorrow)
Threat: The Mossad doing Mossad things with your email account
Solution: • Magical amulets? • Fake your own death, move into a submarine? • YOU’RE STILL GONNA BE MOSSAD’ED UPON
-- https://www.usenix.org/system/files/1401_08-12_mickens.pdf
Is that why it took 10 years to find bin Laden, the most wanted man on Earth?
I get the feeling intel agencies aren't as omnipotent or competent as they want people to believe.
Most of that time he was in a series of caves located within a fairly apathetic nuclear power's borders.
He was also trained and equipped by the CIA.
So, if you're willing to live in caves where they can't easily search for you after being trained and equipped by the best of the best, sure, you might live slightly longer.
Doesn't seem like a tenable circumstance to me though.
Both of your premises are wrong.
https://www.theguardian.com/world/2011/may/03/osama-bin-lade...
You know that lies spread online more easily than facts. Why make the problem worse?
To be fair, he did lose eventually, and it took the CIA impersonating a vaccine distribution program to take blood samples to find him, which is pretty fucking omnipotent if you ask me. Although sowing distrust in vaccine distribution did have some unintended consequences...
Did you hear about Snowden?
Nitpick: this is a column written by James Mickens, not a published research paper.
It is funny, true, and wise, though.
You can at least make it very expensive.
I don’t know what country you live in, but it’s impossible to decrease your attack surface when targeted by a Nation State Actor. Even more so if you live in a country that the Nation State Actor controls through a plethora of agencies and relationships with corporations.
It is usually possible to decrease your attack surface.
Unplug all your computing devices, put them in a safe, embed the safe in concrete, drop it all in the sea.
Just try Qubes OS with Heads.
Yeah, that'll work for everybody who never ever touched any cloud service and whose friends and family never ever touched any cloud service (nobody in the real world).
There's no state actor that any of that would protect against. You, and everyone else, are already compromised at a level so deep there is no hope of digging out if that is your adversary.
What these technologies protect is market share, nothing more.
Strong assertions with no justifications.
Targeted attacks against individuals or small groups from state actors are basically impossible to protect against. Widespread compromises of all operating systems at the boot level should be fought against.
I don't really think malice explains grub being limited because of Microsoft's software at the boot level. There are conflicting objectives at play, and that will inevitably produce, well, conflicts.
Pretty sure the NSA as a government agency could make a US company do what you're suggesting for them.
Whether the government was allowed to compel a company to write and sign code was going to be determined in the "Apple-FBI encryption dispute", but the FBI withdrew the day before the hearing since they had found another way to crack the phone without Apple's help. I wonder if this will ever be re-litigated, or if the government just learned it's easier to pay someone to write an exploit than it is to compel a company to write a backdoor.
https://www.eff.org/deeplinks/2016/03/deep-dive-why-forcing-...
https://en.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption_d...
Kompromat on a staff engineer is far more effective, sustainable, and silent than a five-dollar-wrench attack.
Kompromat? Pffff, money talks. Somewhere there is someone who will take a bribe, and $300k to completely compromise every toolchain in the world is a pittance.
The game is already over for those who worry about the NSA spying on them.
Using Windows in its default form means Microsoft already has a full backdoor into your machine, authorised by none other than MS itself.
If the NSA grabbed your laptop, you've already lost. For instance, they could replace all input and output devices (keyboard, mouse, screen, audio, etc.) with ones that not only log everything you do, but also allow them to remotely control your machine as if they were physically present. They could then pretend the laptop was opened (by falsifying the Hall effect sensor which detects the lid state), power it on (by forging a press of the power button), log into your account (by replaying the password they logged earlier), and do anything they wanted, as if they were you. They could even use the camera to detect when you looked away for a second while logged into the laptop, and quickly do some input, bypassing any extra validation (like fingerprints or a smartcard) that logging into your user account might have required. No need to modify or even touch the boot chain and storage.
I guess in their defense, the same attack could be used against any other OS, so they're unintentionally protecting Linux as well, since they stated this was supposed to be a Windows-only change. You can disable Secure Boot if you don't want to be secure. And there is a way to delete the SBAT policy and keep Secure Boot if you want that, which is also insecure: disable Secure Boot, log in, run sudo mokutil --set-sbat-policy delete, reboot, and re-enable Secure Boot (spelled out below). But then you're susceptible to the attack.
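Spelled out, the workaround is roughly this (same mokutil flag as above; the firmware-menu steps vary by vendor):

    # 1. reboot into firmware setup, disable Secure Boot, boot Linux and log in
    # 2. tell shim to drop the SBAT revocation policy
    sudo mokutil --set-sbat-policy delete
    # 3. reboot once so shim applies it, then re-enable Secure Boot in firmware
    # note: you can boot again, but you're exposed to the revoked-grub attack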
Understandably, I think everyone is concerned because it felt like an affront by MS against Linux. But I don't think that was their thought process at all.
Given Microsoft's history, it's hard to really be sure. It's been a quarter century since the Halloween Documents, and Microsoft definitely gives the air of contributing to the open source ecosystem today, but giants like having a big moat to defend, and old habits die hard. And Microsoft definitely has a reputation, even if it's technically undeserved.
There was nothing to be gained in this except ill will. Hanlon's Razor suggests they were in a hurry to fix a security issue and didn't dot their i's on checking for dual boot systems.
It's a trolley problem, and it's not in Microsoft's locus of control to keep dual boot systems dual booting. So they don't try.
They have never, ever supported anything other than the Microsoft bootloader[s], and if you work around that, it's pretty trivial to blow up your data, for instance by hibernating Windows and booting into a different partition: resuming from hibernation loads the stale MFT onto the modified partition and you pretty much lose everything.
If you apply Hanlon's Razor to known malicious actors then the only one being stupid is you. In fact, it's a really bad heuristic for any corporation.
Intent, being squishy and debatable, matters far less than the outcome.
I can say that I never intended X, but in the end, X still happened. That it happened unintentionally assuages exactly no injury from X having happened.
That would be an amazing rant had it only ended with "Sent from my iPhone".
Since the Blaster worm incident two decades ago, we're in a new era where security at scale has become the forefront responsibility of the companies developing the product. That includes writing more secure code, having more verifications in place, and adopting more secure technologies, but also limiting user capabilities in order to avoid at-scale security incidents.
This isn't about Microsoft. Some of these "forced" limitations are: UAC (User Account Control)/sudo, BitLocker/full disk encryption, app sandboxing/on-demand permissions, signed firmware and boot mechanisms, signed release binaries, jailbreak protections, limitations on raw packet operations, auto-installed updates, forced security updates, closed source code, and built-in anti-malware.
When you have a billion devices running around the world, you can't say "hey, we'll let this arbitrary group of a billion people do what they think is best for them", because you then end up with the Blaster worm, and the whole Earth falls apart.
Think about the more recent CrowdStrike incident. That deployment was performed by professionals, not regular people, and yet it managed to bring the entire world to its knees. People might have died because of CrowdStrike.
CrowdStrike happened because of one of the "user-empowering" features: the ability to install kernel drivers on a machine. Now people are begging Microsoft to adopt a more isolated, user-mode-only device driver system, so this kind of incident won't happen. Yes, some users who want to install their precious kernel driver could have problems, but at least the world would keep running.
Microsoft is not to blame for this. Secure defaults are the responsibility of every product that intends to be used at scale.
If you'd like, you can disable Secure Boot, keep your data in plaintext on your hard drive, let all applications run as root, and you'd be the most powerful user in the universe. I'm all for personal freedom to disable the security features, but at scale, defaults must always prefer security over capability. That's not about Microsoft, or Google, or Apple. That's about at-scale risk management.
Blaster was Microsoft's own incompetence. CrowdStrike was CrowdStrike's own incompetence. They are free to fix the problems of their own doing. But messing with software you do not own, on machines you do not own, crosses a line and should be considered an act of aggression. What if some Linux distro releases an update that deletes any installations of Windows it finds "because Windows is insecure" (according to them)?
people are begging Microsoft to adopt a more isolated, user-mode-only device driver system, so this kind of incident won't happen
Those people are, to put it bluntly, either authoritarian idiots or corporate shills. They want to give more control to Microsoft, but it's not like M$ is all that competent either, as this article and past fiascos (like the Blaster worm you mentioned) have already shown, so they're just going to make things worse for everyone.
CrowdStrike happened because of one of the "user-empowering" features: the ability to install kernel drivers on a machine
And crimes happen because people still have freedom. Doesn't mean we should start imprisoning (or enslaving to the machine) everyone from birth.
"Freedom is not worth having if it does not include the freedom to make mistakes."
The bug is in the fact that billions of machines are running exactly the same proprietary software.
Following the "virus" metaphor, having billions of identical organisms is how you get pandemics, mass die-offs, and extinctions.
The Blaster worm did not in fact make the whole earth fall apart. Stop scaremongering.
This is exactly the point: Microsoft does NOT have those billions of devices, their users do.
CrowdStrike happened because the corporation behind it had direct control over the computers it was running on and the ability to install security updates without the user's consent. They even ignored configuration that was supposed to delay updates for critical machines. Spinning this as some kind of failure of user empowerment, instead of a consequence of the same kind of ownership inversion that secure boot and other DRM brings, is absurd.
And that's exactly how you end up in a dystopia, because the demand for increased security never ends and can be used to justify any and all loss of freedom.
At the time of opening this page this was the top-ranked comment and that is a bit depressing. If you read Matthew Garrett's blog in full, you can learn quite a lot about what went into the process of building out secure boot for Linux.
* The UEFI Forum's spec mandates nothing; it's Microsoft (never named in the spec) that requires carrying their db keys if you want to stick Windows stickers on your boxes and get WHQL certification for your hardware: https://mjg59.dreamwidth.org/9844.html
* You can take control of the process yourself and evict Microsoft's keys: https://mjg59.dreamwidth.org/16280.html The details are sort of in there, but let me summarize: by default the platform key (PK) is provided by your manufacturer; it signs a key-exchange key (KEK), which in turn signs updates to the db (what you can boot) and dbx (what won't boot even with valid signatures). As the article says, the x86 specifications explicitly require that this database be modifiable, so you can always install your own keys (see the sketch after this list). I did this for a while, and on my laptop I evicted Microsoft's keys entirely. Ultimately you can bypass all of this if you can bypass the BIOS password, simply by resetting the database or disabling secure boot, and... well, https://bios-pw.org/ .
* The whole thing was built so that you can re-sign your own kernels and other bits if you want (you could just sign your distribution's db keys with your KEK, which will make OS upgrades smoother): https://mjg59.dreamwidth.org/12368.html (the sbsign line in the sketch below covers the kernel half).
* Here is an article on secure versus restricted boot: https://mjg59.dreamwidth.org/23817.html - I said above that the x86 specifications explicitly allow the key database to be modified (Microsoft's ARM devices were the inverse).
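To make the "take control yourself" and "re-sign your own bits" points concrete, here's roughly what that looks like with efitools and sbsigntools. A sketch only: the file names are mine, and your firmware has to be in setup mode (PK cleared) before you can enroll your own platform key:

    # generate a platform key (same pattern for KEK and db)
    openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
        -subj "/CN=my platform key/" -keyout PK.key -out PK.crt
    # wrap it as an EFI signature list and sign the update with itself
    cert-to-efi-sig-list -g "$(uuidgen)" PK.crt PK.esl
    sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth
    # enroll it (firmware must be in setup mode)
    efi-updatevar -f PK.auth PK
    # once your db key is enrolled, sign your kernel with it
    sbsign --key db.key --cert db.crt --output vmlinuz.signed /boot/vmlinuz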
Now some non-Garrett points:
* To be affected by Windows Update, you need to run Windows. Tautological and true!
* If you update your firmware via, say, LVFS (https://fwupd.org/) and your distribution via its standard tools, you get updates to things like dbx all the time, all from your hardware vendor and friendly FOSS folk, no Microsoft involved (example after this list). You might even be using SBAT right now.
* Those Talos II boards people like? They also have secure boot. It is entirely optional and since Microsoft only implemented a "kinda" version of NT for PowerPC, they're definitely not involved. It is not UEFI, since there's no UEFI for POWER (there is for ARM and RISCV though). You also aren't getting anything from LVFS and barely anything from your distro, but, secure boot is there. You can turn it on.
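For instance, the LVFS flow with fwupd is just this (standard fwupdmgr commands; what you're actually offered depends on your vendor):

    # refresh metadata from LVFS and see what's pending
    fwupdmgr refresh
    fwupdmgr get-updates
    # apply updates, including UEFI dbx revocation updates where published
    fwupdmgr update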
Personally, provided I can control the keys and decide what is and is not trusted and whether I use it at all, I am fine with it. Depending on what you want to achieve, secure boot is not always unreasonable, and neither are TPMs. One small example: software exploits won't be able to successfully modify the boot chain if you have good key management (i.e. you sign elsewhere). They also have their limitations: as usual, physical access is hard to defend against, and remote attestation is a hard problem all around.
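As a small illustration of the "detect boot chain modification" point: measured boot extends a hash of each boot component into TPM PCRs, which you can read and compare against a known-good baseline. A sketch assuming tpm2-tools is installed; the PCR numbers follow the common UEFI convention (0 = firmware, 4 = boot manager, 7 = secure boot state):

    # read the PCRs that typically cover firmware, bootloader and secure boot config
    tpm2_pcrread sha256:0,4,7
    # keep a baseline and diff it after updates or anything suspicious
    tpm2_pcrread sha256:0,4,7 > pcr-now.txt
    diff pcr-baseline.txt pcr-now.txt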
I agree with everything except this:
I am successfully using TPM with coreboot and Heads, with my own keys, to protect against boot attacks on my Librem 14 with Qubes OS.