Yet again we're shown that secure boot and TPMs are security theatre designed for DRM and not to protect you, the user.
Image parsers in UEFIs from all three major IBVs are riddled with roughly a dozen critical vulnerabilities that have gone unnoticed until now.
If your code has a dozen vulnerabilities, it has more than that.
Also, every time I see a 2D image library, I assume it almost certainly has vulnerabilities. There's something magical about the task of coding 2D image libraries, such that it really calls out how bad we are at writing correct code (especially in C).
2D image library
Maybe I'm ignorant but what's a 1D image?
A line of pixels.
Ah ok. It sounds like it's important to distinguish the two by saying 2D image, but I'm sure the reason is too advanced for me.
More likely to be distinguishing 2D from 3D graphics, and just saying "image" can be ambiguous when firmware images are in play.
So, are video-files of 2D images technically 3D then?
and time-based volumetric recording of 3D video-games are 4D?
Time doesn't necessarily have to be another dimension. They're just what they are but organized in time.
You don't have to think of it as a dimension if that obfuscates the problem or isn't a useful mental model for you, but "organizing" it is what a dimension does. You can think of an image as colors or intensities "organized by" (I would call it "indexed by") width and height. If you work with videos in machine learning, you generally accept a 6 dimensional tensor (width, height, time, red, green, blue - your order may vary, time often comes first. And you may use grayscale instead of color to reduce the number of dimensions.).
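(As a concrete sketch of what "indexed by" means: one common layout stores a whole video in a single flat buffer and computes an offset from the time/row/column/channel indices. The struct and names below are made up for illustration, not tied to any framework.)

    // Sketch: a video as one flat buffer indexed by (t, y, x, c),
    // where c selects the color channel (e.g. 0/1/2 for R/G/B).
    #include <cstdint>
    #include <vector>

    struct Video {
        int frames, height, width, channels;  // channels == 3 for RGB, 1 for grayscale
        std::vector<uint8_t> data;            // frames * height * width * channels bytes

        uint8_t at(int t, int y, int x, int c) const {
            return data[((static_cast<size_t>(t) * height + y) * width + x) * channels + c];
        }
    };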
If you work with videos in machine learning, you generally accept a 6 dimensional tensor (width, height, time, red, green, blue
(Assuming you do work with ML+videos) - it's surprising to hear you say you work with RGB instead of YUV - can you briefly explain how that's the case? I'd have thought that using luma/chroma separation would be much easier to work with (not just with traditional video tooling, but ML/NNs/etc themselves would have an easier time consuming it).
To clarify, I don't work professionally with videos, I've hacked on some projects and read some books about it. My professional experience with ML models is in writing backends to integrate with them, the models I've designed/trained were for my own education (so far, at least). The answer to your question is probably, "I'm a dilettante who doesn't know better, you may well know more than me."
I get the impression that much of the time, color doesn't provide much signal and gives your model things to overfit on, so you collapse it down to grayscale. (Which is to say, most of the time you care about shape, but you don't care about color.) But I bet there are problem spaces where your intuition holds; I'm sure there's performance to be wrung out of a model by experimenting with different color spaces whose geometry might separate samples nicely.
I did something similarish a few months ago where I used LDA [1] to create a boutique grayscale model where the intensity was correlated to the classification problem at hand, rather than the luminosity of the subject. It worked better than I'd have guessed, just on its own (though I suspect it wouldn't work very well for most problems). But the idea was to preprocess the frames of the video this way and then feed them into a CNN [2]. (Why not a transformer? Because I was still wrapping my mind around simpler architectures.)
[1] https://en.wikipedia.org/wiki/Linear_discriminant_analysis
[2] https://en.wikipedia.org/wiki/Convolutional_neural_network
You can render a video file as a volume. I've looked at using that to make a video compression algorithm that operated on the volume rather than on the 2D frame stream. My hunch was that shapes in the 3D volume changed more predictably than surfaces from frame to frame because the frames describe the movement of objects in space. But they're projected onto a two dimensional surface. So you get these interesting 3D shapes that have fairly predictable qualities across larger spans of time than your average 2D encoder sees while encoding a video. But I never could get it to work more efficiently than existing algorithms.
Still, it was a fun project.
Existing algorithms do a lot of compression across frame sequences, but yeah, not quite in the same way as the imputed volume.
I wonder if your idea would work for lightfield captures, or time sequences of a lightfield.
I suspect you'd need your compression to "understand" the object relationships and camera movement to do better than frame sequences, and it'd probably still be incredibly hard because you then add a lot of extra information first in the hope they let you discard more pixel data...
But the more you understand the scene, the more you can potentially outright reconstruct, and in some contexts more loss would be entirely fine if the artifacts are plausible.
That's exactly where I ended with this: I was decomposing the scene and realized that if you had the ability to do that reliably enough you'd be recreating a model rather than an image and then re-rendering that model. But at that point I don't think you are looking at a compression algorithm any more other than in the very broadest sense of the word. Boundaries between objects would start to look fuzzy otherwise. As in: you'd no longer know exactly where the table ended and the hand started unless you modeled it precisely enough and at that point you have an object model. So you might as well use it to render the whole scene.
Note that I did this in '98 or so, when there was less of a computational budget, maybe what I couldn't hack back then is feasible today.
So, are video-files of 2D images technically 3D then?
Video files are a linked list of 2D images.
and time-based volumetric recording of 3D video-games are 4D?
Kind of, but it would be an oversimplification. Typically when we refer to the dimensionality of objects, we're referring to physical dimensions. Time is a temporal dimension. I think it would be more specific to say this is a linked list of 3-dimensional images, right?
2D images are already 3D if you consider the colors (RGBA) a dimension, or if the image contains layers, like a GIF.
Pixels are a unit of area like an acre or square meter (see [1] if skeptical). So a line of pixels is still 2D, in the same way a 1x5 unit rectangle is a 2D object with an area of 5 square units. I'm not sure there's an accepted name for a 1-dimensional picture element. Maybe lenxel, working backwards from length the way voxel works backwards from volume?
I like the sibling's suggestion about audio; if we were to adopt it, it would make a 1D element a "sample".
[1] People are often confused on this point, because in the course of everyday conversation we don't distinguish between the number of pixels on the side of a rectangle (which is a 1D quantity) and the number of pixels inside that rectangle (a 2D quantity). So if I say I have a 10 pixel by 10 pixel image, what I mean is that I have a grid with an area of 100 pixels, with sides measuring 10 pixel-widths by 10 pixel-heights (each a 1D quantity of length). If that looks awkward and tiresomely pedantic to you, well, that's why we just say pixels and let the details be implied.
If you're still skeptical, consider for instance that voxels are more clearly a unit of volume (think Minecraft blocks), and that pixels are obtained by subdividing a rectangle. Another useful way to think about it might be by replacing "pixels" with "dominos" and imagining making grids out of dominos, pixels can be tricky since you can't see their area yourself.
Still, fixed size pixels arranged in a line, where each pixel's position is described by a single coordinate, could be called a 1-dimensional arrangement of pixels. You could do the same with voxels, or corn fields of 1 ha each, or hypercubes.
Yeah, if you have an array of pixels or hypercubes, you can think of it as a 1D array and leave the details of what's contained below your barrier of abstraction. And that's a useful mental model much of the time.
But I would still argue that an array of pixels doesn't represent a 1D image. If a 2D image associates areas with color or intensity values, a 1D image would associate intervals with color or intensity values (since intervals are the measure of 1D space which is analogous to areas in 2D space). In my mind, those are different data structures, but the difference is pretty nuanced and I would understand if people felt I was splitting hairs.
Given the question, "what's a 1D image?" I'd argue this is the more complete answer. But if we were to ask that question in the context of a real world problem, yours is likely to be the more useful answer.
Pixels are a unit of area like an acre or square meter.
Some computer graphics experts don't agree with this: http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf
I've skimmed the article; it's gonna take me a while to read it so that I can respond properly. In case I don't get around to it, thanks for the article. It's an interesting perspective.
“1D image” is the technical term used in graphics programming for a single row buffer of width * bytes per pixel.
Am I so out of touch? No, it's the children who are wrong!
But in all seriousness, call it what you want; I happen to enjoy these minutiae but understand many people see them as an impediment to clear communication. If you're working in the unusual contexts where the difference matters, you probably know.
> 2D image library
> Maybe I'm ignorant but what's a 1D image?
There is no such thing ("2D image" is still useful, to distinguish from 3D.)
"2D image" is still useful, to distinguish from 3D.
No it isn't? You don't say "here's that 2D image you wanted" when you send someone a jpg.
It's still useful in contexts like the one it was used in upthread, which was not "when you send someone a jpeg".
There absolutely is such a thing as a 1D image, in many disciplines.
(+ the meaning in math as what comes out of a function....)
There are even 1D cameras - e.g. scanners, finish line cameras for races, some old barcode scanners (although most can do 2D now and even those that only support 1D barcodes might make use of a 2D camera).
I'd highly recommend reading Flatland. Not only because you had to ask this, but because it's a fun (and short) read.
LOL yes, I've read Flatland. I was trying to suss out why the parent was specifying "2D image" since that's the default interpretation of "image"...
1D arrays are usually used for 2D images
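A minimal sketch of that row-major convention (pixel_at is a made-up helper, not a standard API):

    // The pixel at (x, y) of a width-W image lives at index y * W + x.
    #include <cstdint>
    #include <vector>

    uint8_t pixel_at(const std::vector<uint8_t>& image, int width, int x, int y) {
        return image[static_cast<size_t>(y) * width + x];
    }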
Without the "2D" qualifier, it might refer to something like a disk image.
A barcode.
An audio file.
Scan lines?
It doesn’t help that the C development community is still largely in denial about the very human cognitive limitations that result in unsafe code being written time and time again.
Why do you think that? My perception was the opposite.
Go read one of the more C/C++-focused subreddits. HN is skewed towards specific groups. Read r/cpp and you will see a ton of people saying "I can write perfectly memory safe C++"; I see it often (because I love C++ and enjoy reading about it).
I'm always hoping that it's the bad subset of the language's users that are loud, but for this one you're most probably right. I've been using C++ for a while and it's definitely the language I know best, but I have to acknowledge that it's impossible to write really, truly safe code in it. Rust makes it easier, but to the user a panic() is just a segfault with a crash log -- they don't care that it's safe, it just died.
They do care about when it is unsafe, at least to some extent.
Just a note, a panic is recoverable, includes a stack trace, and is memory safe. Segfaults are none of those things.
I think your point is that it doesn't matter when the issue is still "but it died", and I can see your point.
The problem with a segfault isn’t that it crashed. The problem is that it might just as easily not have crashed but silently corrupted memory instead. A crash is (generally) safe.
You probably see this attitude a lot because of modern C++ features that do make it a lot easier to write safer code. However that is not absolute.
Just the other day I used std::sort on a std::vector. Pretty simple stuff; I figured there can't be a way to screw that up, but my app kept crashing in the sort call because it was trying to write past the end of the array. Turns out if your comparison function doesn't perfectly follow strict weak ordering, sort will just blow past the bounds of your array without checking.
From https://www.boost.org/sgi/stl/StrictWeakOrdering.html:
A Strict Weak Ordering is a Binary Predicate that compares two objects, returning true if the first precedes the second. This predicate must satisfy the standard mathematical definition of a strict weak ordering. The precise requirements are stated below, but what they roughly mean is that a Strict Weak Ordering has to behave the way that "less than" behaves: if a is less than b then b is not less than a, if a is less than b and b is less than c then a is less than c, and so on.
So if I understand correctly, std::sort will happily ticker-tape your array into the nearby dragon's lair if the sort callback doesn't return -1/0/1 correctly?
<Rueful mental note>
The range of the return value is not the issue - the comparison function [0] for std::sort returns a bool and not a tristate anyway.
What GP probably messed up is one of the guarantees of a strict weak ordering:
- Irreflexivity: comp(a, a) == false for all a
- Transitivity: if comp(a, b) == true and comp(b, c) == true then comp(a, c) == true
- Asymmetry: if comp(a, b) == true then comp(b, a) == false
std::sort can assume that all these requirements hold and does not have to care about what happens if they don't.
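A minimal sketch of the classic way to trip over this: using <= instead of <, which makes comp(a, a) == true and violates irreflexivity.

    #include <algorithm>
    #include <vector>

    int main() {
        std::vector<int> v{3, 1, 2, 2, 5, 4};
        // BAD: <= returns true for equal elements, so comp(a, a) == true.
        // That violates the strict weak ordering requirement -- undefined
        // behavior, and in practice std::sort can run off the ends of the range.
        std::sort(v.begin(), v.end(), [](int a, int b) { return a <= b; });
        // OK: a strict "less than" satisfies all three requirements above.
        std::sort(v.begin(), v.end(), [](int a, int b) { return a < b; });
    }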
I am a long-time member of /r/C_programming and it's certainly NOT the attitude there.
Maybe so. I suppose I'm biased by C++ and falsely assumed that it would be the case for both.
Bjarne Stroustrup, in his latest presentations, literally states that C++ can be used to write safe code and that developers simply use it wrong.
I think you're agreeing with him.
It doesn't help that much of the "C development community" is made of electronics engineers writing "some code to make the hardware work", instead of people with a focus on security.
It's similar to the "C++ developer community" being made of CS people writing whatever abstract thing they wish, then blaming the compiler for their code going belly up on one of the myriad of "undefined behaviors".
As easy as it seems to be to just include an image decoder, it feels like "must be a specific size bmp file" is a perfectly acceptable solution here.
...but what if the "must be a specific sized BMP file" rule is because if you were to flash the eprom with, say, a PNG file it meant you suddenly got root over Intel's Management Engine in your CPU?
BMP isn't encoded at all. It's effectively width + height followed by row-ordered pixels.
Hard code file length and fail if the width or height are different. Copying pixels to the screen is hard to mess up.
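A sketch of that kind of check, assuming a hypothetical fixed 800x600, uncompressed 24-bit logo (the 54-byte header and field offsets follow the standard BITMAPINFOHEADER layout; a little-endian host is assumed):

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    constexpr int32_t kWidth  = 800;  // assumed logo dimensions
    constexpr int32_t kHeight = 600;
    constexpr size_t  kRowBytes = ((kWidth * 3 + 3) / 4) * 4;  // 24bpp rows, padded to 4 bytes
    constexpr size_t  kExpected = 54 + kRowBytes * kHeight;    // headers + pixel data

    // Accept only the exact, known-good shape; reject everything else.
    bool logo_is_acceptable(const uint8_t* buf, size_t len) {
        if (len != kExpected || buf[0] != 'B' || buf[1] != 'M') return false;
        int32_t w, h; uint16_t bpp; uint32_t comp;
        std::memcpy(&w,    buf + 18, 4);  // biWidth
        std::memcpy(&h,    buf + 22, 4);  // biHeight
        std::memcpy(&bpp,  buf + 28, 2);  // biBitCount
        std::memcpy(&comp, buf + 30, 4);  // biCompression; 0 == BI_RGB (no RLE)
        return w == kWidth && h == kHeight && bpp == 24 && comp == 0;
    }

Note the compression check: as pointed out below, BMP does support RLE, so "uncompressed only" has to be enforced rather than assumed.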
BMP supports run length encoding.
Oh yeah it supports the most useless of compression algorithms (outside of MSPaint pictures I guess)
Pick your favorite simplistic tool and give people a conversion program
Parent is suggesting logic which refuses to parse the PNG standard, not extension/MIME-type validation.
Presumably a precision formatted png uploaded as a bmp will just render as a starry mess of color, assuming format confusion is possible within the bmp standard.
20 years ago that was, in fact, the solution.
(I wrote some tools back then to convert between 256-colour TIFF and the "AWBM" format needed by the BIOS on VIA's EPIA motherboards.)
I dunno, I think "solution" implies thought was put into this risk factor.
Assuming a security risk in writing to data read by the BIOS would be crazy 20 years ago. If you could update the image you already were inside the airlock.
This is one of the reasons I'm a big fan of wuffs[0] - it specifically targets dealing with formats like pictures, safely, and the result drops in to a C codebase to make the compat/migration story easy.
See also https://review.coreboot.org/c/coreboot/+/78271 :-)
Oh, very nice:) Great to see that!
This commit also adapts our jpeg fuzz test to work with the modified API. After limiting it to deal only with approximately screen sized inputs, it fuzzed for 25 hours CPU time without a single hang or crash. This is a notable improvement over running the test with our old decoder which crashes within a minute.
Cooool.
Also, no-one knows if it has gone unnoticed until now. Only that it's not been publicly called out until now. It could actively have been exploited for years, but those who discovered it kept their secret.
I wonder if it'll show up as previously discovered in the next big nation state leak from NSA/GCHQ/Unit 8200/whoever.
Also, no-one knows if it has gone unnoticed until now.
Or undetected. They may not be exploits but backdoors.
backdoor, bugdoor, tomayto, tomahto
Also, every time I see a 2D image library, I assume it almost certainly has vulnerabilities. There's something magical about the task of coding 2D image libraries, such that it really calls out how bad we are at writing correct code (especially in C).
If image libraries are magical, then font rendering libraries are a transcendental nexus of cosmic power...
Which largely explains the Fontations project: a new Rust implementation of the lower levels of the font stack. It's also on track to ship; it's now in Chrome Canary.
(especially in C).
I thought iMessage was written in ObjectiveC. /s
SecureBoot and UEFI are a regular pain in my ass. Just yesterday I couldn't run memtest86+ on a new computer because of them. I'm installing a bunch of Linux VMs these days, and both on real hardware and in the VM images, UEFI and SecureBoot are a constant source of stress.
And it doesn't even work? Because the firmware authors used crappy code to display marketing images?
I want my tens of hours back.
Disabling secure boot takes like five seconds, I don't see what the big deal is.
I don’t feel comfortable depending on a vendor to expose that switch. Once all the Secure Boot infrastructure is built out all it will take is one more decision for them to stop giving me an option.
At which point all hardware that currently exists which you already own will get remotely updated out from under you so you can't boot your own kernel?
I get that being allowed to run the code you want on hardware you own is paramount, but let's not live in fear of hypotheticals. There are already secured platforms like the Xbox/PS5/iPhone where we don't have that option and the world hasn't ended.
How do you expect the ecosystem to look in 20 years? You're simply not thinking far enough ahead.
"it's only a few right now don't worry about it" More happen. "okay maybe I see where you're coming from but it's still avoidable!" It continues. "I still have options I don't know what your problem is." Eventually, no choice. "How did we get here!?!?!"
We need protective regulations that ensure that general computing is accessible to every citizen. Giants of industry are not entitled to control devices that they sell after they are sold.
20 years from now, we're still going to want to run our own software. You're worried about falling down a slippery slope, and your concerns are valid, but it's called the slippery slope fallacy for a reason.
20 years from now, we're still going to want to run our own software
And you'll be able to, on hobby machines or VMs. But the general purpose machine where the owner controls the OS will fade.
Banks, popular sites and other choke points will demand attestation of an unmodified system for access. People will talk about internet access the way they do about driving in the US - a privilege, not a right.
Bet on it.
Let's have that conversation right now, then.
What about Internet access deserves to be considered a privilege and not a right? What human is less entitled to accessing the wealth of knowledge and information available on the Internet? Discriminating who can and cannot access the Internet is not something that will be popular or defensible.
If the search for knowledge itself is considered dangerous, then what of all the knowledge gathered on the public against its will?
Internet access isn't remotely comparable to driving. By driving, you're exposing yourself and others to potential mortal danger. Nothing is automatic, you must be aware of, and follow, all laws concerning how it is to be operated. You have to have a license.
The Internet's spirit will die the day you need a license to access it. The body will take a lot longer.
If you want to go dystopian, we're already seeing glimpses of such a future with Covid, where alternative "facts" are considered dangerous. It's easy to dismiss now, because sane people don't buy that there are microchips in the vaccine, but a future where information is so dangerous that you need proof of government programming in order to access the unrestricted Internet isn't too far down the slippery slope you're sliding down.
sane people don't buy that there are microchips in the vaccine
That kind of rhetoric is itself part of the dystopia. There were and still are much more rational perspectives that are deemed wrongthink and lumping them together with obvious crazies is one tactic used to suppress them.
I'm not making a normative claim - I don't want a corporate mall, either.
I'm making a prediction. The unregulated internet is a risk to some very powerful interests. They cannot tolerate that. And despite what some people thought early on (me included), IT is a power-amplifier, not an equalizer.
I'm not optimistic.
In that vein I agree, nation states are about control, and giving any control to people comes with risk. Personally, I would argue that if they can't trust their own people with basic computing power, it says a lot about the administration's character and capabilities to defend itself. Maybe don't piss off people who outnumber you and are responsible for the prosperity of the state? Royal "you", of course.
That's why I think we have to go the legal route, as little as I trust society and its policies, others do. We need to consider the personal use of property as an extension of the 1st amendment, at least in the States. If I purchase a computer, I should be able to do whatever I want with it, especially if I'm not hurting anyone or violating rights. Ownership needs to mean something, or capitalism's core tenet is lost and the veil begins to slip.
Spying nanny chips and other "safeguards" are really just obstacles to ownership. They should be considered anti-consumer and a form of military-grade espionage. The John Deere escapades are a prime example of what will happen to general computing if we don't make some sort of effort to protect and enshrine computing freedom.
Maybe it won't be an issue and we'll be 3D-printing PCBs from open or patent-expired schematics so it won't matter. Maybe e-waste will be enough of a problem that there will be enough to hobble along until something bigger coalesces.
I'd rather not go on maybes though, and would rather vote for legislation that ensures the government will punish any business that sells me something and then tries to prevent me from exerting control over it as the owner. That is such obvious fraudulent behavior, there's no good defense for it. Business already enjoys the protection of copyright, trademark, and patent. An important aspect of business is actually parting with what you are selling, and giving up control.
The control is what is being bought!
Unfortunately, I don't think anyone would frame it in that way. It will just be a matter of saying "to protect everyone's security and privacy, only known good devices will be allowed on the web/internet". Software and hardware vendors will provide the right attestations, but if you write a custom kernel, that device will not be allowed to connect by any attested device.
But, you as a person will not be denied the right to access the internet. It's just that you'll need to use a device that doesn't "risk the security of the internet" to do so. Just like if you build a custom vehicle, you aren't allowed to drive it on national roads, because it risks the safety of the road system and all its participants.
I have seen no assurance that the slope is well-gritted. There is a proven inverse correlation between corporate powers and consumer freedoms.
It's not fallacious in the slightest, you're simply enjoying the slide for now.
That's your opinion. Smartphones have already brought about the trusted computing only world you fear, for non-technical users. The biggest distinction of general computing, then, is the ability to run untrusted executables. Hardware is cheap enough these days that having a secure device dedicated to banking/whatever isn't out of the question. And with Google making virtual machines basically a primitive on Android, having an unlocked system but with trusted VMs inside seems to be the next step in allowing users to have local administrative access, if they so choose. I don't doubt that ChromeBooks and iPadOS devices will continue to be popular, but at the very least, there will always be a need for developers to have unrestricted systems to develop on, which, I believe, means that they'll exist until all software ever needed has been written.
Smartphones have already brought about the trusted computing only world you fear, for non-technical users.
What's the purpose of this statement? "You've already lost" anime-style bullshit? This is proof that it's gotten worse as time has gone on. That is, it's evidence that the slope is slippery.
Receding into a VM doesn't give people control; the host OS can still view everything happening in that VM. It has to in order to do its job virtualizing.
I agree that we will always have a need somewhere for machines to just run code. However, I do not trust that developers will be steadfast enough to resist the inevitable anti-features that make their way into products to take control from the user.
Smartphones and UEFI+secure boot enabled devices are a testament to this. It's possible to root and install your own ROM, on some models, but for how long? It's been a cat and mouse game between hackers and phone manufacturers.
Today's developer systems are already infected with nannyware, unless they're running OpenPOWER or a similarly open and unencumbered system. I'm on a Librem 14 with a mostly-neutered IME (so, still x86_64), and honestly I wonder if what Purism was able to do to isolate it was enough. AMD pushes PSP with their chips, and ARM is its own strange song and dance, and licensing is a bitch.
We need hardware that can be verified and trusted not by business, but by consumers. How do you think people will get developer systems if this culture of "no code is good unless it's corpo code" continues to prevail?
Technology ultimately can't protect you from government and corpo snooping. It's only laws that can limit what happens, at least to some extent. And those laws are better focused on the actual collection and uses of data, than minutiae about the hardware/software. It's ultimately irrelevant that the OS could listen on you if it doesn't.
Windows does listen to you. So does Android.
an unlocked system but with trusted VMs inside seems to be the next step in allowing users to have local administrative access
There's not a single example of that happening, as far as I know. What we get is "oh, you want general purpose access? here's a sandbox for you to play in", with the system itself remaining locked.
It’s called the slippery slope fallacy by people who don’t want to grapple with the slippery slope.
It's a fallacy in formal logic, but a legitimate rhetorical argument. Slippery slope is entirely about how likely it is that you fall down the slope. We have direct evidence that the same organizations calling for Secure Boot intend to use it for DRM, because they are quoted in sales documents touting it for that.
And to be clear:
Intel® CSME supports HW DRM that helps users enjoy premium services from third-party providers, with control access to copyright material https://www.intel.com/content/dam/www/public/us/en/security-...
This is 100% supported by manufacturers own documentation. I'm not sure it's even a slippery slope argument, when it's that clear. It's more like a murderer, caught red handed and having already dictated and signed a confession, saying "That was just a joke, I didn't really kill him"
We're still going to want to, and we're not going to be able to.
Some cheap laptops have forgotten to expose that switch.
Then they cannot get Windows certification. Microsoft mandates Secure Boot disable setting for all x86 computers.
For now.
Everything in life is only for now.
Doesn't mean that accepting a situation where all it takes is a policy change is a good idea.
Start installing coreboot then?
I do use coreboot on my 51nb faux-ThinkPad X210 thanks to @mjg59 :)
If your BIOS allows it. Some don't, or if you disable Secure Boot you lose other motherboard features. My home server is an ASUS motherboard that loses Intel integrated graphics without UEFI, which makes installing an OS awkward. https://www.reddit.com/r/ASUS/comments/s68b25/psa_enabling_l...
Also like a sucker I thought secure boot was doing something to help secure the boot of my computer and was worth the bother.
I thought secure boot was doing something to help secure the boot of my computer and was worth the bother.
It does, and it is. SecureBoot makes sure that the firmware your computer boots to begin the boot process is signed by a trusted authority. It does not check the firmware for bugs or somehow sense them and skip over that code or something.
SecureBoot makes sure that the firmware your computer boots to begin the boot process is signed by a trusted authority.
The passive voice in "trusted authority" is pretty much the source of discontent in "SecureBoot."
SB is definitely imperfect, but a useful tool in moving toward a trusted boot. I think we'd all agree having a trusted boot sequence is desirable, the point of contention being who gets to decide the criteria for trust. It's been a few years since I worked in the space but I think SB gets a bit of an undeserved bad rep (I'm sure because people were vocal early on). There is a SB signed uefi application that allows for enrolling other loaders based on the hash of the loader.
who gets to decide the criteria for trust
Good point. Both are important: who does the trusting and how they define trust.
The latter is the second set of concerns: remote attestation.
I recall reading someone on Twitter mentioning having remote attestation for online banking. So starts the dystopia.
But yes, having a trusted chain can be a good thing. It depends entirely on the who, the what, and the how.
I think we'd all agree having a trusted boot sequence is desirable
We don't.
That trusted authority can be you, if you choose to enroll your own signing key. While it's true that most motherboards come pre-seeded with Microsoft's keys, there is absolutely nothing to stop you from removing those keys and replacing them with ones you specifically trust.
if you choose to enroll your own signing key.
Can you re-sign the firmware and hardware checks as well? Last I knew you could not.
Of course, you can't dual boot anymore.
https://community.frame.work/t/solved-secure-boot-and-custom... seems to be a good read on it.
"If your BIOS allows it"
Which systems enforce Secure Boot out of the box and lose functionality without it? You mention something about ASUS and Intel graphics being broken without UEFI, which isn't the same thing.
Android. After rooting your phone, apps relying on the secure key store and DRM media playback are very likely to stop working.
That's nothing to do with Secure Boot though.
It does. Except they call it verified boot. Maybe rooting was the wrong term; it's more about installing a custom ROM, where this mechanism prevents your custom ROM from getting hold of DRM keys.
It's not the same technology.
That's not secure boot. It's possible your memory test only runs in BIOS/CSM mode, but that setting has little to do with secure boot. CSM disables secure boot, yes, but only as a side effect.
If you're on Windows, try launching Windows' memory test instead. That'll work both with Secure Boot and under UEFI.
Disabling Secure Boot is not the same as enabling CSM (non-UEFI boot). You can disable Secure Boot while still booting via UEFI without CSM, and that shouldn't affect your graphics at all.
Even with it disabled sometimes you're still locked into this UEFI mess that I certainly didn't order.
I love my thinkpad p1 g3 more than most laptops but I miss having a simple BIOS. Stupid thing can't boot off normal USB keys like I've been making for decades.
You can boot USB from UEFI. The tool I've found that handles it best is Rufus; I used it to clean-install my Windows through UEFI. Even Linux live USBs work. BalenaEtcher gives me problems, Rufus does it well for me.
Also I remember there is a setting in UEFI to allow booting from USB.
It's possible, I've done it, it just involved a lot more steps. Simpler when I can `fdisk && mkfs && debootstrap` to make a key. Plus then I need to keep two bootable usb keys, I think... it's just... "Who ordered this??"
Strange, I have that laptop model and booting off USB drives works just fine for me (GPT+1 giant FAT32 partition+the UEFI capable ISO just extracted onto the disk). Maybe update the firmware? It could also be some kind of weird compatibility issue with specific USB drives, I wouldn't know.
Lenovo also recently released a CVE fix, possibly even for this vulnerability, so you may want to check for firmware updates regardless!
I’ve never had a problem booting from USB with UEFI.
I haven’t done any serious admin work on Windows machines for over a decade. A few weeks ago I went to reinstall Windows on a Dell desktop… I had no idea about SecureBoot. I spent hours trying to figure out what was wrong and learning about SecureBoot and the Intel RAID drivers necessary to reinstall Windows.
SecureBoot isn’t just for windows and it’s a damn nice feature for securing workloads in the datacenter.
What is the threat model that this protects against?
Someone booting something on your computer without your consent. You can add your own keys and sign the bootloaders of your OSes/LiveCDs/thumbdrives.
Do you use it in datacenter? Sounds like a huge investment of time with no reason
E.g., we might want to protect against a datacenter employee plugging in a USB stick with Ubuntu and booting it on our server. We can probably solve that with UEFI by removing the default keys and adding our own for our OS. (I've never tried it, so I might be wrong.) We also need to set a BIOS password so that the attacker can't simply disable Secure Boot.
This works, but the attacker can just reset the BIOS password -- remember, he has physical access. That's certainly harder for him than plugging in a USB stick, but if he's already willing to risk illegally messing with a customer's server while being monitored by cameras, I think he'd do it.
Something feels off about all of this
It was clear to see that the purpose of foisting UEFI, TPM, and Microsoft SecureBoot, accompanied by GPT drive partitioning, simultaneously on consumers with the launch of Windows 8.0, was to inhibit the installation and use of alternate operating systems.
From that point, new PC buyers would not be able to simply and directly install an OS other than Windows 8 (or newer Windows) on the PC without four different unfamiliar workarounds, for those four obstacles, which were implemented differently on different motherboards. Even though UEFI was touted as ideal because there were published standards that traditional BIOS never had, we all know in practice that BIOS motherboards were more consistent among vendors and price points than UEFI motherboards anyway.
Linux was ominously on the brink of doubling its user share (which still wouldn't have been very significant), but especially it was Windows 7 which was the real threat to adoption of Windows 8.
Nothing more, nothing less.
As we now see, the touted improvement in "security" was a joke the whole time.
Definitely not as secure as a business-class non-UEFI late Windows 7 Professional PC's motherboard.
Not like there's any question.
But UEFI can have pretty graphics and mouse support, so it must be better... /s
Now seriously, TPM and GPT are improvements. Customizable SecureBoot, along with disk and RAM encryption, is also nice.
I'll agree that you can use TPM and GPT to your advantage, and even SecureBoot can fill an actual need for a small number of PC owners.
But GPT was strongly recommended by Microsoft as more secure on the grounds that a drive partitioned according to GPT has no unused sectors.
The unused sectors of a traditional MBR-partitioned drive had been identified as the preferred location for malicious rootkits that were capable of executing before the OS even had a chance to boot, were not actually on the Windows partition and therefore difficult to scan for, and were resistant to reformatting the partition, which did not delete the rootkit. To be really sure you got rid of a BIOS/MBR rootkit completely, you would have to zero the entire drive, or at least the sectors containing the rootkit. Full reinstallation of Windows, or even zeroing the entire partition, didn't help at all.
But using GPT there are usually way more unused sectors on the same drive compared to MBR partitioning. Always have been. That's just one of the original lies propagated by Microsoft, endorsing the migration away from a more well-proven traditional BIOS.
And here we have a defect in one of the supposedly true security improvements baked into UEFI, with ridiculous false-sense-of-security implications since day zero, now-confirmed and it's exactly a vector for a rootkit no differently than under good old-fashioned BIOS.
Except zeroing the entire physical drive still wouldn't get rid of a UEFI rootkit which can now be even more stealthy, enough to reside in the firmware itself. Even at this late date, how many users are scanning their firmware and what apps would they use for that anyway?
When truthiness is not a way of life, there can not be actual trust.
At least you can run VMs with traditional boot.
The latest release of memtest86+ can actually run on UEFI computers.
I'm shocked!
Firmware vendors, the last bastion of high quality memory-safe C code, solving complex problems. Code firmly audited and formally verified to the highest degree. And now you tell me these people, with the highest known software quality got "parsing image formats" wrong?
It's truly zebras all the way down https://youtu.be/fE2KDzZaxvE
Firmware and BIOS vendors, world-renowned for the quality of their software.
Of course their software is the best. Some of these vendors are subject to regulatory, contractual legal requirements.
That always produces the best software, unlike the "move fast, break things" people in other domains.
I’ve been in the move fast and break things domain. Unsurprisingly, things are often broken there too.
I've yet to run into a place where they aren't. But "run fast, fix fast" often works better than constant debate and red tape.
Currently working on a new federal report requiring us to send 7 fields to them yearly. They have provided a 1,670 page document to 'help' us send them the data.
Hehe. "We're from the government and we're here to help you!"
I'm somehow positive that the folks that fix the things that break are thrilled on the daily that someone did something quickly, without really understanding the ramifications, when they could have asked about it first. ;)
You have a valid point about red tape, but does it really amount to 2 steps forward, or is it the same one step, but 2 forward, 1 back?
I work in environments where moving fast is usually bad (Enterprise/Healthcare), so I'm honestly asking.
Genuine question: is this sarcasm? How dumb am I for not being able to tell?
You're not dumb. You simply lack the specific body of knowledge and industry experience that provides the context for you to determine this.
That's just life. :)
Yes.
Since it's always turtles^Wsubcontractors all the way down, I wouldn't be surprised in the least if some of the actual mission critical stuff turned out to be written by some unpaid intern at an underpaid outsourcing shop somewhere in Elbonia. And if it would be in fact the most solid and reliable piece of the firmware.
> Elbonia
Showing your age there, pal. And yes, the '90s were awesome.
On topic: does the open-source Coreboot fare any better here? (I hope it does; the article does not mention it.)
coreboot just initializes the hardware, the logo is something that the payload displays: https://www.coreboot.org/Payloads
The most typically used payload is u-boot: https://docs.u-boot.org/en/latest/
u-boot supports specifying splash screens via "splashfile", but it seems only bmp and maybe some raw image format are supported: https://github.com/u-boot/u-boot/blob/2f0282922b2c458eea7f85...
In other words, no support for PNG, which this exploit uses :). That said, coreboot/u-boot are still written in C, which is a language known for its vulnerabilities.
which is a language known for its vulnerabilities
C is not known for its vulnerabilities; it's known for being fast, portable, and low on abstractions. It's a perfectly fine language to write this kind of code in. There are techniques like fuzzing and tools like Valgrind that can catch invalid input handling and memory issues quite quickly.
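For example, a minimal libFuzzer harness is only a few lines (decode_bmp here is a stand-in for whatever parser is under test, not a real API); built with -fsanitize=fuzzer,address, ASan will flag out-of-bounds reads and writes immediately:

    #include <cstddef>
    #include <cstdint>

    // Hypothetical parser under test; link against the real decoder.
    int decode_bmp(const uint8_t* data, size_t size);

    // Entry point libFuzzer calls with mutated inputs.
    // Build: clang++ -fsanitize=fuzzer,address harness.cc decoder.cc
    extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
        decode_bmp(data, size);
        return 0;
    }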
Yeah that stuck out to me, too. The fucking kernel is written in C. 90% or more of the networking stack is written in C. Graphics APIs are written in C. Compilers, databases, firmwares, older video games, everything.
C is not known for vulnerabilities, it simply makes it easy to shoot yourself in the foot. The tooling exists to improve code safety, but generally if you don't do anything crazy, nothing crazy happens.
Lasers, chainsaws, and pressure washers are all easy to hurt oneself with, but I wouldn't say they're known for creating injuries. It's more like people are stupid enough to hurt themselves with tools regularly.
It's not hard to check a value before dereferencing or freeing it. It's hard to remember to do it.
Lasers are so known for creating eye injuries they come with warning stickers.
Chainsaws are so known for accidents they come with warning pamphlets instructing you in how to use them in ways that minimize the risk -- including recommendations for wearing reinforced chaps to protect your legs.
So, following your comparison, it should be perfectly valid to apply a warning sticker to C that says "Warning: Known to state of California to cause data corruption and accidental code execution".
Except all programming languages can corrupt data. Even knives with sheaths can be misused to harm oneself.
Are we gonna include warnings for SQL injection risk? This is a stupid premise.
Are we gonna include warnings for SQL injection risk?
No, we've moved to calling conventions that avoid SQL injections: bind params. That argument speaks for minimizing use of C, not in its defense.
No, all I'm seeing from you is "it's possible to break in C therefore C bad."
It's okay to admit you don't like a programming language.
Constructing SQL by string manipulation is hard to get right, hence we largely stopped doing that. Same story with C, really. It's okay to admit you're irrationally attached to a tool, and calling things stupid because of it.
Rather, C is known for its lack of guard rails, like no bounds checking by default, and for some unfortunate early design decisions, like the handling of null, or zero-terminated strings. These are partly mitigated by modern compilers, but can't be done away with completely.
I'm anticipating the time when Rust and Zig will displace C as the default language for writing hardware-interfacing code. It's not going to happen tomorrow, though.
coreboot has a rather fragile JPEG parser for splash screens but the saving grace is that barely anybody uses it because that's normally the payload's job (the jpeg + display thing was mostly added for a proof of concept "look how fast we can boot" demo in 2005)
With payloads, some are more at risk than others. edk2 might run into issues here, grub2 has tons of features where stuff like this might hide in, other payloads (e.g. ChromeOS' depthcharge) are more limited in their functionality and therefore in their attack surface, too.
We need UEFI! Without it everything is ancient. And it needs to be updateable from the internet and the OS.
Yeah, and it should (somehow) involve javascript/wasm and Big YAML.
Not sure if sarcasm...
As a former firmware developer I would like to point out that I lacked the competence and interest to have done a good enough job of it past getting paid at the end of the month.
Well that's a relief - I was worried I'd never be good enough to get firmware dev job.
Do you breathe air? Are you capable of making noises, intelligible or not?
Welcome aboard!
I kid...or do I?
Thank you for that reference. As someone outside of tech, I still found Bryan Cantrill's presentation thoroughly engaging and entertaining.
You'd enjoy most of his talks, even if you don't understand the source material. They're half anecdotes and bashing Oracle in a very entertaining way.
Indeed -- and not coincidentally, this is why we don't have UEFI at all on the Oxide compute sled.[0]
[0] https://www.osfc.io/2022/talks/i-have-come-to-bury-the-bios-...
Hey, big fan of your work. Keep up the great work.
I fear your sarcasm may be a bit too subtle.
C code? Nah, my friend, firmware is written in SPARK to achieve the highest possible quality. /s
(and yes, it should have been written in it; arduous as it is, it's a "you had one job" situation)
My time in firmware has thoroughly broken my brain. All those things that we rely on to live our lives? I don't think people understand The Horrors that live within them
Does this successfully bypass "secure" boot? Quite a big deal if true.
On the other hand now I know it's possible I'm tempted to replace my boot logo with a custom one..
Yes :(
The following video demonstrates a proof-of-concept exploit created by the researchers. The infected device—a Gen 2 Lenovo ThinkCentre M70s running an 11th-Gen Intel Core with a UEFI released in June—runs standard firmware defenses, including Secure Boot and Intel Boot Guard.
Just checked; my laptop mounts /sys/firmware/efi/efivars and /boot/efi rw by default. Time to fix this. Not a huge defense against an RCE + priv escalation somewhere, but at least would help against fat fingers under sudo.
Poettering is very convinced that this is not a bug.
I mean, I agree with him - read-only mounts are not a security boundary, user accounts and permissions are the security boundary.
Guard rails are not a security boundary, but are helpful nevertheless. You can still do that, but it becomes harder to do by mistake.
I am still amazed this isn't an option in the boot menu that's off by default.
I've read the second thread today.
I'm running a "Poettering-free" distro (Void Linux), but it does not have this mitigation either.
The way UEFI loads image files differs per firmware (and sometimes per firmware version). On some devices all you need to do is drop a LOGO\Logo.bmp file into the ESP; on others you need to flash a modified firmware image. The latter requires a lot more permissions than the former, of course, and with firmware downgrade protection, flash-based attacks should only work right after a new firmware has been released (or if you choose not to update your firmware for whatever reason).
The authors seem to imply that there are semi-standard ways of loading user-supplied images in UEFI firmware, and I'm kind of interested in how they achieve this without reflashing the motherboard's firmware.
Yes :)
Where did you find the video that demonstrates a PoC exploit? It doesn't seem to be the YouTube video embedded in the article.
I still use BIOS and hopefully I'll never be forced to use UEFI or Secure Boot. Again, RMS and Theo both came out against UEFI (IIRC). But $ means more than security.
that have lurked for years, if not decades, in Unified Extensible Firmware Interfaces responsible for booting modern devices that run Windows or Linux
What does this mean, "decades" ?? I kind of doubt it. I guess we cannot have a article without sensationalism.
If BIOS is immune to this, it's only by coincidence; there's no reason to think older firmware that allows the user to set the boot logo isn't equally riddled with vulnerabilities.
Last I heard, BIOS firmware is forced to be 64k (or 128k), but that limit could have changed over the years. Not much of a logo would fit with that limit.
edit: just checked, the BIOS in my newest laptop (~10 years old) is 128k. But the ROM size is 12M, which is interesting. I can enable UEFI, so I can only guess the remaining space is used for that.
In reality you have UEFI and the 128k is 16bit shim code of the Compatibility Support Module that emulates BIOS interfaces.
UEFI Class 1 devices (which had hardcoded CSM and didn't allow access to UEFI APIs from bootloader or OS) started shipping as early as 2005 - because it made it much simpler to build firmware especially when you're supposed to integrate complex vendor specific code (for example from Intel).
Even before that, it was not uncommon for a BIOS to actually be much bigger than 128k. Logos were also common as part of the BIOS. Hell, the 1990s had a bunch of BIOSes with GUIs and mouse support. Some even tried to emulate the look & feel of Windows.
At least UEFI provides a graphics toolkit that supports rendering to both GUI and serial port, so you don't have to deal with BIOSes that can't be connected to over serial because they need VGA raster output...
This was true in 8086 machines, but it was extended in various ways.
https://stackoverflow.com/questions/56216128/how-can-the-bio...
PCBIOS is just a compatibility mode for UEFI chipsets now, so there is no avoiding it -- sorry :(
x86s is going to be UEFI only and amd64 only: https://www.intel.com/content/www/us/en/developer/articles/t...
All computers newer than 10 years are based on UEFI.
Which exposes a BIOS emulation layer: CSM (Compatibility Support Module)
If "running OS can cause the computer to load unsigned code" is something you care about, BIOS is already "vulnerable", by virtue of not even trying to prevent it.
Putting an image parser in the first stage of boot seems like a mistake in the first place tbh. Surely you could skip that and just bake the decoded image into the firmware? Or is this all so vendors can rebrand their BIOS without having to rebuild and sign it?
IIUC there are two vectors:
1. Logos in EFI binaries (OS, bootloader, shell, etc), not the UEFI firmware logo itself. For these, "bake it into the firmware" is not relevant because these are just files that anything, such as malware, can drop into the ESP.
2. The UEFI firmware logo itself. This would only be updated by firmware updates, which ought to be signed, but apparently these vendors put the logo in non-signed sections, so malware could edit a pending update to use a malicious image.
Lenovo ships the standard Phoenix shell app named ShellFlash.efi (which calls itself ShellFlash64). One of its CLI flags is to flash the UEFI logo. The Lenovo BIOS package includes instructions to update the logo by dropping the image file in the ESP where their updater (BootX64.efi) can find it.
So flashing unsigned logo images is supported and intended behavior here.
From what I can tell by the demo video, the Windows process seems to be injecting the bitmap before calling a reboot without calling into the flash utility.
I wish I could put up a nice customised image without having to mess with firmware files. It's kind of stupid to include a logo feature but then remove the image file every time you install a UEFI firmware update.
UEFI update protocols have a variant where you store the update capsule in memory and mark it so that it's not overwritten on reboot; it's then picked up by the update handler during boot.
The same protocols are AFAIK used by the bootable updaters (there are IIRC three ways to pass the update capsule to the flasher that is actually part of the firmware).
I see, that would explain how they managed to flash the image file without writing to the ESP. That would also bypass naive detection mechanisms for antivirus solutions.
You nailed it. This is so vendors can rebrand without having to rebuild and re-sign.
The point of UEFI was to unlock the future that BIOS was preventing. It's crazy how quickly the mob forgets and then turns against what it used to love.
Is this completely mitigated by disabling boot logos?
Edit: What I mean is, do the image parsers still run if boot logos are disabled?
No, because an attacker who got control could re-enable them, then become persistently entrenched in your system such that you remain compromised even if you reinstall your OS.
Right. I was more curious about if the image parsing code was still vulnerable while boot logos are disabled.
Oh, you mean disabled in the motherboard's firmware settings. I hadn't considered that interpretation of your question.
ADDED. I disabled it in my ASUS motherboard's settings.
Image parsers in UEFIs from all three major IBVs are riddled with roughly a dozen critical vulnerabilities
I'd suspect that an image parser is only involved when a logo needs to be displayed. So it looks like a great way to lower the chances of triggering these vulnerabilities. I'll switch off mine.
OTOH if an image parser is also invoked when the logo is updated, to check that it parses correctly, this won't be enough :(
I guess I wouldn't be surprised at all if some BIOSes always parse the logo and the switch just turns off the display.
Thanks. My take is on this is root access -> stealthy bootkit. Further demonstrates "secure" boot and similar DRM is theater only.
In defense of the secure boot illusionists, though, I'd imagine that _anyone_ who was actually concerned about secure boot working as intended would have disabled the running kernel from accessing UEFI variables and the UEFI system partition.
AFAIK, while supporting XBOOTLDR, the systemd project still encourages /boot to be on the ESP if possible.
Personally, I don't understand why they advocate this. Beyond separation of concerns and situations like this, it also seems more realistic to advocate for XBOOTLDR style deployments in the face of dual boot systems and Windows creating an anemic ESP nowadays.
Personally, I don't understand why they advocate this.
It seems easier to explain if you don't mind choosing from a nice range of conspiracy theories about what systemd is and why it has spread like a virus.
disabled the running kernel from accessing UEFI variables and the UEFI system partition.
How exactly do you propose doing that? I suppose you could see if OPAL can block writes to the ESP, but that does nothing about efi vars.
Kernel lockdown
Okay, how do I even change the boot logo in BGRT so it persists across boots? From userspace, no less? And how do I turn them off? On your generic Dell/Lenovo boxes I doubt it's even possible.
Do you even run installers from not entirely trusted third parties, or curl | sudo bash type of thing? That could sneak in.
Yes, but how do I do it if I want to do it for my own amusement?
Lenovo uses a Phoenix-provided tool that packages a compatible graphic image into an update capsule and passes it to the flasher program (embedded in the firmware).
So that would require an authenticated program execution, a reboot, a reflash, and a reboot again. Hardly unnoticeable. I think EFI updates can be disabled in most UEFI setup programs by the administrators, so they only reenable them explicitly when doing the actual, you know, EFI updates.
I thought that was possible by mucking in a RAM region that would persist across warm reboots and UEFI would pick up the logo from there.
You can put an update capsule in that region and it will be picked up on reboot. Another option is putting a file in the ESP and then telling EFI with a special call that there's an update to be applied (a special call is also needed to inform EFI about the memory update method).
Windows Update uses those to deliver firmware updates on behalf of vendors.
Bit of a nothingburger IMHO, but as usual the corporate bootlicker types want to scare the hell out of everyone.
Flashing the BIOS from within the OS never seemed like a good idea to me[1], but if you can do that then you can replace it with whatever you want. I also recall replacing the Energy Star logo with something more amusing was at one point a semi-common BIOS mod.
[1] I remember the story of a certain company that included silent automatic background BIOS updates in the preinstalled bloatware of its laptops, and the subsequent large number of "I didn't do anything but reboot and now it won't even turn on anymore" reports.
Not exactly a nothingburger; this makes your machine once and always pwned.
...until you reflash the BIOS.
With physical access to a device, anything is possible anyway.
this makes your machine once and always pwned.
Well, no, it doesn't, but that's beside the point.
These specific image decoding bugs are indeed a bit of a nothingburger in terms of the implications they give rise to. There's just no reason to overwrite the boot logo graphic and leverage these exploits if another (simpler) method of achieving the same end result exists, and often it does.
For example, many systems to this day are shipped in a configuration such that you can disable write protections for certain ranges (or all) of the SPI EEPROM on which the firmware resides simply by changing some NVRAM variables (typically the variables correspond to (often hidden) 'BIOS settings' in common firmwares such as those from AMI or Phoenix). After that you can write contents of your choosing (e.g. using Intel FPT) to the chip, which will promptly be executed without any checks upon the next restart. This is by design, not even abusing any exploit or flaw in the software (of which there are plenty).

If you want, you can try it out on some of your own systems: for instance, dump the firmware, extract (for example) the AMI setup menu form, and simply run it through something like LongSoft's IFRExtractor[1] to locate the regions and offsets of said NVRAM variables, then try writing to them.

It is true that the NVRAM regions for these settings (and others) are sometimes write protected or locked in such a way that you can't overwrite them after the firmware has started another program (e.g. your bootloader), but often there are even ways around that. It's clear that firmware security is not always much of a concern for a surprising number of vendors currently shipping consumer computer systems / motherboards today.
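To make that concrete, here is a minimal sketch of the patching step, assuming you've already dumped the SPI image and located a setting with IFRExtractor. The offset and value below are invented for illustration; the real ones depend entirely on your specific firmware.

    /* Hypothetical sketch: flip one BIOS-setting byte in a dumped SPI
     * image before writing the image back with a flashing tool. The
     * offset and value below are made up; real ones come from the
     * IFRExtractor output for your specific firmware. */
    #include <stdio.h>

    int main(void) {
        const long hypothetical_offset = 0x42f10;  /* made up */
        const unsigned char new_value  = 0x00;     /* made up: "unprotected" */

        FILE *f = fopen("spi_dump.bin", "r+b");
        if (!f) { perror("spi_dump.bin"); return 1; }
        if (fseek(f, hypothetical_offset, SEEK_SET) != 0 ||
            fwrite(&new_value, 1, 1, f) != 1) {
            fclose(f);
            return 1;
        }
        fclose(f);
        return 0;
    }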
How about we just have a physical switch that you have to activate to update the firmware? I don't care about the rest of the software in the boot chain, an attacker can just trojan any binary that runs as root to get the same effect. As a user I don't care about the distinction between boot and the rest of the time.
Chromebooks did this for a while; they had a write-protect screw on the motherboard that had to be removed to enter developer mode and/or flash custom firmware. Once you were done, you could reinsert the screw to lock it back down again.
I have an Acer R11 that has one of those screws.
It’s amusing, because back in the 80’s we used to have a saying: “Beware of programmers who carry screwdrivers.”
Chromebooks still do this, they just swapped the write-protect screw for disconnecting the battery [1] or using a special closed-case debug (CCD) USB-C cable (which was discontinued and is now nigh impossible to buy). You can DIY your own though [2]
[1] https://wiki.mrchromebox.tech/Firmware_Write_Protect
[2] https://chromium.googlesource.com/chromiumos/third_party/hdc...
How about we just have a physical switch that you have to activate to update the firmware?
A very long time ago motherboards were like that. You couldn't flash the firmware without first physically moving a jumper on the motherboard (btw what you're asking for typically would be done with a jumper, not a switch, AFAIK).
I remember a very heated argument with a friend of mine when I got my first mobo that didn't require a jumper to be physically modified to be able to flash the firmware. I told him it was heresy and that, invariably, exploits would come.
Note that these first mobos that could have their firmware flashed without requiring a jumper to be physically moved literally asked for no confirmation of anything. You'd just flash the firmware by running an exe.
Although, "file to hard-drive" isn't possible if you use BitLocker, LVM or ZFS Encryption.
All the more reason to, then.
Those don't encrypt the EFI system partition, which is where the image in question is stored.
Yes, that's still vulnerable. But dropping a file onto the hard drive isn't as feasible.
The BIOS never reads the logo image from OS partitions on a drive. It either gets it from firmware stored in flash on the motherboard, or from the EFI system partition on the hard drive. Thus dropping the logo file onto the ESP or reflashing the firmware are the two attack vectors.
Writing to other partitions is not an attack vector for this vulnerability, so encrypting those partitions does not protect against it.
Writing to other partitions is not an attack vector for this vulnerability
I'm not saying that this is the attack vector for this vulnerability. I'm merely stating that using the exploit to place a file, as the video shows, wouldn't be possible if your main C:/root partition is encrypted.
If your main drive partition is not encrypted, then that can lead to the ability to drop a file on the desktop.
I suppose, yes, if someone codes a "wait_until_encryption_is_completed" function.
I wasn’t aware that one could just copy paste and replace arbitrary UEFI boot files from user space? I was under the impression that they are in non-mounted partitions controlled by the system.
You can comment out /boot/efi on Linux from the fstab, and only mount it when you need to run a grub update. I'm not sure which Linux systems keep it mounted by default (I suspect most of them do, mostly for grub updates)
I don't think Windows mounts it by default anywhere. At the same time, on Windows or Linux, you would need root/Administrator access to inject this, wouldn't you? You could inject any binary.
I guess the thing here is that if you just replace the bootloader, secure boot should stop the boot process with your unsigned binary. With this exploit, you can replace the logo and it doesn't do the same signature check?
I guess the thing here is that if you just replace the bootloader, secure boot should stop the boot process with your unsigned binary. With this exploit, you can replace the logo and it doesn't do the same signature check?
Exactly; most vendors apparently check signatures on anything officially executable but will blindly load any picture you hand them.
This does make me ask questions; namely, why the hell would anyone risk allowing compression more complex than RLE [0] in a firmware picture format? It's almost asking for trouble.
[0] - Maybe there's something else out there that works better with similar ease of doing safely low level, would love to know other options!
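For what it's worth, even RLE only stays safe if the decoder distrusts its run lengths. Here's a minimal sketch (the (count, value) byte-pair encoding is hypothetical, not any real firmware format) of the check that these parsers keep getting wrong:

    /* Minimal sketch of a (count, value) byte-pair RLE decoder with the
     * bounds checks that are typically missing in vulnerable parsers.
     * The encoding here is hypothetical, not any specific firmware format. */
    #include <stddef.h>
    #include <stdint.h>

    /* Returns number of bytes written to out, or 0 on malformed input. */
    size_t rle_decode(const uint8_t *in, size_t in_len,
                      uint8_t *out, size_t out_len) {
        size_t written = 0;
        for (size_t i = 0; i + 1 < in_len; i += 2) {
            uint8_t count = in[i];
            uint8_t value = in[i + 1];
            /* The classic bug: trusting `count` and writing past `out`.
             * Checking against the *remaining* output space prevents it. */
            if (count > out_len - written)
                return 0; /* attacker-controlled length exceeds buffer */
            for (uint8_t j = 0; j < count; j++)
                out[written++] = value;
        }
        return written;
    }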
John Carmack suggests using something even more basic than RLE: "Media file parsing has been behind so many security issues. In many cases today, using completely raw data is very practical. No DCT. No ADPCM. No RLE. Just raw pixels/samples. There should be a widely agreed on header for modern raw data that is just a non-extendable struct."
https://twitter.com/ID_AA_Carmack/status/1732519369362596262
Not "widely agreed" for sure but Suckless people proposed https://lists.suckless.org/dev/1407/23155.html
If a local vulnerability that requires administrative privileges to exploit is "high impact", I wonder how they would classify remote code execution. Super duper high impact?
This of course needs to be patched, but it's way overblown. This vulnerability alone provides no attack vector, you need other vulnerabilities to be able to replace the logo.
The attack defeats the whole premise of security in "secure boot". It's high impact in relation to that.
As a user I don't care too much about "secure boot" and its threat model, and I assume you don't either.
But an RCE goes away as soon as you re-OS. Once your UEFI is pwned, you might as well pitch the box in the rubbish bin.
Does this have any actual significant impact? If you have enough access to exploit this, couldn't you already do a lot of harm even without it?
A locked-down, encrypted-at-rest data center is much easier to break into when you can reboot a server with a USB stick and be in.
That's what I'm trying to figure out too. I guess it's just harder to remove?
The risk is "image parsers in UEFIs ... are riddled with roughly a dozen critical vulnerabilities" but what is the attack vector? Has a dropper been identified? Or is this an academic finding only?
There’s no indication that LogoFAIL vulnerabilities have been actively exploited in the wild, but there’s also little way one would know, since infections are so hard to spot using traditional tools and methods.
Their idea of an attack vector: local exploit or phishing -> privilege escalation -> accessing the UEFI partition.
Say, an installer of a pirated game might pull this off, and turn your machine into a permanent botnet node, incurable by standard means, and maybe undetectable. But this is a low-value target; a usually dormant, stealthy fileless malware on a laptop that belongs to a CEO or a high-ranking government official may be much more insidious and impactful.
Will this help in de-obfuscating the notorious Intel ME and/or its analogs?
ME was leaked years ago.
I still don't know of even one person/blogger/researcher who claims to understand more than 90% of the ME, even without limiting the choice of Intel series, which ends up meaning talking about hardware as old as the Core 2 Duo.
I’d be shocked if this whole thing wasn’t carefully coordinated and pulled off by the NSA. It’s the perfect attack— ubiquitous, low-level, and with plenty of plausible deniability and “aww shucks, memory usage C strikes again, waddya gonna do?” People should start trying to extract those images and comparing the SHA-256 hashes of the images. If you were going to exploit this you’d make it look the same visually and just put a low—level back door into the system to avoid raising any suspicions.
Why pull it off when you don't have to do anything and the bugs just fall out for you to use anyways?
I’d be shocked if it was. Mostly because these are exactly the kind of bugs you find whenever you go digging into crusty old code. It would be more surprising if the image parsing code wasn’t vulnerable. And while these probably aren’t very hard to exploit, you would probably need to customize the exploit for the specific firmware and firmware version you’re targeting, making the bugs relatively unattractive to exploit en masse across a wide range of systems.
So in secure boot, are the images not signed? Otherwise how does this work?
That seems like an odd flaw
The images are in the unsigned section :( It's more convenient to OEMs and corporate customers this way :(
What's wrong with simply not running malicious software? Is it supply chain attacks?
It means that any bug which lets an attacker get privileged access is now a potentially hard to recover compromise below the operating system kernel, but Secure Boot still passes. That means a compromise might no longer be recoverable short of trashing the hardware, and it could be triggered in many ways from some kind of exploit to having a few minutes alone with the hardware to tricking someone into running an installer.
Who knew that non-free UEFI might be a bad idea? /s
It seems that running anything for which you don't have source code becomes more and more of a liability. Not enough eyeballs, fuzzers, or white-hat cracking attempts means a lot of serious risk.
Nobody enjoys dealing with UEFI. It's a very ugly mess of a standard that is an order of magnitude more complicated than BIOS.
Actually, while you're technically correct, most of the complexity of UEFI is just fluff that no one actually uses. I think it only exists so that EFI compliance is a huge pain to fully implement, so that companies have to pay the standards body for consulting fees.
Puts a bad taste in my mouth that the only individual represented in the article is the CEO, when they even call out that it was an individual who discovered it.
Epic!
I wonder if anyone tried this in Android, which also has a logo partition. Maybe not the flagship phones, but if I were to guess, I'd guess the TV sticks and other Android streaming devices might have similar issues.
https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...
“Reflections on Trusting Trust”, Ken Thompson, August 1984
Does UEFI have any mechanism to run some firmware code with reduced privileges? For example an image parser?
Oxide Computer Company taking the moment to gloat: https://twitter.com/oxidecomputer/status/1732513396401012915
Looks like we picked the right day to not have UEFI -- or a BIOS for that matter
I wonder if this could be used to remove Lenovo supervisor passwords and such. Could be a great way to reduce the amount of e-waste corporations create so that people can repurpose old corporate laptops.
Today, the UEFI system firmware contains BMP, GIF, JPEG, PCX, and TGA parsers.
Let's add more, guys, let's go. Webp -- no, that's too old...wait for it... JPEG XL.
UEFI is just another M$ abomination 8-(
In this specific vulnerability, I wonder how possible it is to modify that boot logo remotely?
Wouldn't exploiting this vuln require local access?
As in all things, open firmware is the way to go.
I think there are 3 paths.
1. Placing a logo into the ESP (EFI System Partition) via admin privileges
2. BIOS update tool
3. When there is no supported way to customize the logo, it is still possible via a physical attack vector: just use an SPI flash programmer, if the logo is not covered by a hardware-based verified-boot technology like Intel Boot Guard or AMD Hardware-Validated Boot
“As we can see in the following table, we detected parsers vulnerable to LogoFAIL in hundreds of devices sold by Lenovo, Supermicro, MSI, HP, Acer, Dell, Fujitsu, Samsung and Intel. “
https://binarly.io/posts/finding_logofail_the_dangers_of_ima...
Good, more people can own their own devices.
It would be fun to collect all the exploits of EFI and secure boot and create some sort of super-bootloader-shim thingy that tries to work on as much hardware as possible where secure boot is enabled.
This should say “just about every UEFI device”. The problem has little to do with any particular OS.
It is also important to remember that attestation based on TPM has a lot of limitations in terms of runtime visibility.
For sure. Secure attested boot at best attests all the code ostensibly loaded, but it can't do anything about vulnerabilities in that code or in code that's responsible for that loading.
It's a mess.
Is this the end of the "secure passkeys"?
Yet another reason why boot logos are stupid and the good old text-output is the way to go.
inside unsigned sections of a firmware update
Why would you leave some sections of a firmware update unsigned??
This is an absolutely terrible take. Just because the mechanism doesn't provide perfect security doesn't mean a) it provides none, or b) it isn't even trying to provide security.
"Secure boot", despite the name, really has very very little to do with security and everything with only running "sanctioned" code, even if that sanctioned code is an ancient browser engine or 300 MiB worth of .text.
Better an ancient browser engine and 300 MiB worth of .text than a trojaned userspace launched by someone with a USB stick.
This is a bad take. Authenticated and validated secure boot is an important feature that keeps us all safe.
If someone has access to the hardware, you have already lost. The other argument people make is that it prevents "persistence", as in: the "authenticated and validated" code is actually a remote code execution engine, but while it does that, at least it can't be modified to be... even worse? It's little consolation, really.
Tell that to my Chromebook that you can't crack. Or a Macbook. Or your phone. Or even a UEFI device absent the occasional vulnerability like this. There are vulnerabilities, but they're comparatively rare and they get patched.
In fact this is simply wrong. Defense of systems against attackers with physical access is a mostly solved problem, and secure boot is the answer. It is not (and never will be) perfect, but it does work.
Who is cracking anything? The problem with physical access is that at the end of the day your security is only as good as the flat flex cable transporting your key presses.
This is not some science fiction scenario; look at the "addin boards" they found in CryptoPhones (you know that thing was using secure boot!):
https://www.cryptomuseum.com/crypto/gsmk/ip19/implant.htm
Nobody cares to exploit or modify the software if at the end of the day what you are trying to protect is running across a PCB trace and they have physical access.
That literally seems like it taps the microphone wire to record raw audio signals.
Brilliant, but not a software or hardware issue. (Although actually having the device brick itself if it is opened up would have prevented the bug from being inserted).
Likewise secure PIN pads are easily "defeated" by a camera.
Plenty of TPM devices are encased in epoxy and designed to self destruct if tampered with. And lots of modern day devices (iPhones, game consoles) have stood up to years of attempts to exfiltrate their secrets.
Workarounds are possible, but the industry has, for better or for worse, figured out how to make secure secret stores.
NSO? I mean, am I the only paranoiac who thinks that Apple fixes its holes only _after_ other people make them public?
There are a lot of companies that are paid big bucks by nation states to find vulnerabilities and not make them public. NSO, for example. So yes, Apple only fixes the bugs that people or companies make public. But not for the reason you are implying.
Very interesting link, though I'm very disappointed it doesn't include speculation on when/where/how the bug was introduced to the phone. Also, I'm pretty surprised I hadn't come across the info that Wikileaks was bugged. Thanks for sharing.
Mad respect for whoever designed that; for a tailor-made small-series design, it is an incredible piece of work. The component density alone is probably some kind of record. Note the Spartan 6.
I'm sorry but you've lost this argument the moment you advanced the claim that there are largely solved problems in the context of computer security.
This might be more true if we didn't have counterexamples. It's much easier to suborn a PC than a Chromebook or Apple product, and while that's certainly not perfect, nobody should be complacent about their enterprise laptops being a softer target than an Apple TV.
None of which alters the fact that there have been multiple reported attack vectors on both Apple and Google products in the last 2 months. They can be and are hacked daily.
Yes, but note that this attack works against a locked or powered off device. I’m not saying it’s perfect or that we can stop replacing C code, only that it’s safe to buy a used Apple or ChromeOS device in a way which is not true of a PC.
+1 freeman. I work in and around this all day.
Can I have it for a week or two, then send it back?
Chromebooks are quite robust against remote attacks, and they're fairly robust against local physical attacks, but "Put an external interface on the NOR SPI flash and put whatever you want there" defeats just about everything they do with secure boot, because you can put your own code there instead. Or, on at least some devices, just remove the write protect screw and run some incantations[0].
If you have physical access, very few systems are designed to be trustworthy. Even if you have a ROM root of trust somewhere, if it's on the board it can be desoldered and replaced with a different one (and I'm not aware of any hardware that does more than "write protect regions of the SPI flash" - it can be done, but it's certainly not common).
Even the TPM can be physically de-encapsulated and be manipulated/have data read out, if it's a discrete physical device.
[0]: https://www.chromium.org/chromium-os/developer-information-f...
This hasn't been true for a decade or more. Boot ROMs are validated by on-chip firmware in the modern world (not just on Chromebooks, everywhere). You can flash the chip with your JTAG gadget, sure, but if it doesn't have a signature that works it won't do anything but brick your board.
No, the obvious holes have long since been plugged. The design is secure. The implementation may have holes, but on the whole you can't break into an arbitrary box. You need to get lucky with a crack like the one in the linked article.
Yes, and one can bypass those write protections with physical access.
I'm going with what's written here as truth - if that's out of date, well... wouldn't surprise me, really: https://chromium.googlesource.com/chromiumos/docs/+/HEAD/wri...
Unless I'm missing something, the "read only" region is simply a normally write-protected region of the flash chip, and with physical access there are a range of ways to rewrite that region.
It also causes a great deal of trouble (at least for me), which is why disabling it is the first thing I do when I get a new machine.
I feel like this is similar to “Disabling UAC is the first thing I do, because it causes problems”.
If UAC prevented me from doing what I want to do with my machines, I would totally disable it, yes.
None of this means that I'd be going totally unprotected. It means that I'm addressing my security risks in a different way that is compatible with my use of my machines.
UAC is similar to Microsoft SecureBoot to the extent they can both be user-unfriendly false-sense-of-security "features".
But UAC is not built into the motherboard, it's just a part of modern Windows, and it's only a factor when running Windows.
UEFI, GPT, TPM, and SecureBoot, by contrast, immediately acted as major stumbling blocks to any OS not already installed on the PC at the factory. Insidiously smoothed over so that, for Windows 8+, they behave on the surface no differently than under a traditional BIOS, they were and still can be major obstacles to the use of Linux or previous versions of Windows.
Plus, as we have seen with 20/20 hindsight, CSM, regardless of the industry's upcoming schedule for removing the CSM firmware modules, is not a full substitute for a real BIOS.
As predicted without having to possess 20/20 foresight at all.
Yes, this is why I don't put a lot of thought into UAC stuff -- the only place I use Windows is at my job, where it's required.
Which means that SecureBoot is an even greater worry for me.
Secure Boot, under its default configuration, doesn’t actually prevent running a trojaned userspace off a USB stick. It tries to block trojaned kernels on a USB stick, and does every bit as bad a job of this as you would expect given the quality of the spec, the quality of the code, and the degree to which the problem is not very well defined.
You're at the very least confusing threat models. The vendor is a source of trust. Not running unsanctioned code was never supposed to magically fix vulnerabilities. Acting offended when it didn't is just a bad look.
A sole vendor who is not trustworthy is not a source of trust. Microsoft has done little to earn trust other than to make money from users.
Again, you're confusing models here. "source of trust" is a technical term, and not a judgement about trustworthiness.
Secure boot also doesn't rely on Microsoft...
It's always amusing how little people understand either secure or trusted boot before they start rambling about it.
In theory it doesn't, in practice it does. New hardware only trusts the MS first party CA out of the box (not even the 3rd party one for booting e.g. Linux distros!) and many systems do not allow removing the MS CAs from the trust store.
OTOH, once the original Microsoft-signed SecureBoot keys for both Windows and Linux were compromised in recent years, those keys had to be blacklisted in everyone's firmware. That created an unprecedented worldwide need for a timely firmware update (available only from the original motherboard manufacturer), along with corresponding OS updates to match, neither of which has been fully accomplished yet. There was no one to rely on other than Microsoft to mitigate the snafu.
More than just amusing, to "quote" Ballmer: "This is by design."
You're the one who is confusing threat models - namely the threat model I care about and the one Microsoft cares about. One has microsoft as the source of trust and one has them as an adversary.
No, it's both. And when there is a mismatch, then you have a problem.
A TPM is a pretty useful thing, but all it can do is to securely store secrets.
To securely store some secrets from the user?
Yes. E.g. there are cases when it's much better to lose your disk encryption key and the entire disk's contents with it than to let a third party see it. So the disk encryption key may live in a TPM. This is supported by LUKS, for example.
Still pointless when the key can be sniffed in cleartext by intercepting the communication between the TPM and the CPU [1]. You should always combine it with something only known to the user (i.e. a passphrase).
The only attack TPM-backed disk encryption prevents is someone imaging the disk.
[1] https://blog.scrt.ch/2021/11/15/tpm-sniffing/
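As a conceptual sketch of why the second factor helps (this is not LUKS's actual key derivation; it assumes OpenSSL's SHA-256 and skips the memory-hard KDF a real design would use): even with the TPM-released secret sniffed off the bus, the disk key is still unrecoverable without the passphrase.

    /* Conceptual sketch only (not LUKS's real scheme): mix the
     * TPM-released secret with a user passphrase so that sniffing the
     * TPM bus alone never yields the disk key. Assumes OpenSSL
     * (build with -lcrypto). */
    #include <openssl/sha.h>
    #include <stdint.h>
    #include <string.h>

    void derive_disk_key(const uint8_t tpm_secret[32],
                         const char *passphrase,
                         uint8_t out_key[32]) {
        SHA256_CTX ctx;
        SHA256_Init(&ctx);
        SHA256_Update(&ctx, tpm_secret, 32);
        SHA256_Update(&ctx, passphrase, strlen(passphrase));
        SHA256_Final(out_key, &ctx);
        /* A real design would run the passphrase through a memory-hard
         * KDF (argon2id, PBKDF2) first; a bare hash is illustration only. */
    }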
The fact this isn't mandatory blows me away. I remember when my work machine first got TPM full disk encryption, a passphrase was needed.
A few years later? Meh. Who needs "real" security.
Of course I get it is a convenience trade off, it almost always is.
It's especially because Microsoft doesn't want to deal with people who manage to forget their unlock code, or those who die without having a break-glass kit deposited in a bank locker, leaving their relatives in quite the mess.
I get it, for everyday people TPM-only is enough, but anyone remotely security-minded (or anyone traveling to the US and thus subject to the whims of the CBP) is better served with a good passphrase.
If CBP wants to know what's inside your laptop, and you're not an American citizen, your options are to show them or to go back where you came from, and possibly never be allowed back again. Missing a passphrase may mean you're not even presented with this option (though in principle you should still be asked, even if the device were completely unlocked and unencrypted). But few would take the second option anyway, so for the majority of travellers the passphrase is not really extra protection.
It's proven sufficient to show them what they expect to see.
Variations on plausibly deniable rubberhose | TrueCrypt bare-metal VM hosts allow for parallel OSes: one that boots by default and is "family friendly", with all the expected apps, photos, IMs, etc., and another OS (or many) with non-obvious triggers that allow passphrase entry into journalists' document vaults.
The evidence for parallel OSes is twofold:
* non-obvious drives, almost always overlooked and essentially never noticed in border patrol scans, and
* "unused" areas of drive storage with contents indistinguishable from white noise (or from multi-pass disk shredding).
That is a whole extra level above what was being suggested.
The last section of the article you link says TPM 2.0 may fix the sniffing attack. It’s also worth noting that “someone imaging the disk” was really easy if you got even fairly brief access to the computer, whereas the other attacks that may still be viable involve invasive surgery and specialised knowledge of the hardware in question.
(This is my understanding as a developer not particularly informed about boot arrangements, upon reading some relevant material. I could be wrong or have missed some nuance.)
English is not my native language but, as far as I know, _may_ expresses uncertainty.
…or booting from alternate media to retrieve data from the disk in situ (depending on which measurements are used to seal the key in the TPM).
“Don’t let perfect be the enemy of good.” Vulnerabilities/limitations should be understood and you have every right to determine that TPM+PIN is the minimum control that addresses threats you’ve modeled and reduces risk to a tolerable level, but TPM-only encryption is not pointless. It reduces risk by increasing required attack complexity without impacting usability. That’s enough for a lot of people.
From everyone, if needed.
Everyone minus vendor and his jurisdiction, I guess.
Not really, unless you assume that every TPM chip has a backdoor.
Yes, exactly. The user is the enemy.
If a TPM is going to be supported, I want to be the one to install it - whether that means inserting a USB key, or soldering it onto a circuit board.
Otherwise, how do I know that it really serves me? A TPM serves the first person who programmed it.
No, I think this shows that adding customisability features to please a small minority of users into low level components can be a huge security risk. Secure boot and TPMs aren't the problem here, the inclusion of user-configurable images and insecure image decoders is.
Secure boot and TPM that only come with PCIe card drivers signed by the Microsoft CA are definitely the problem here.
If it weren't closed and controlled by one large corporation's interests, there wouldn't be a monoculture, which would make this less severe of a vulnerability.
There's nothing preventing you from installing your own CAs on normal devices. There are some ultra locked-down devices (often made by Microsoft, or made extremely cheaply to low standards) and I think that practice should be banned. That's not the case in the laptops and PCs I've come across, though.
Of course, managing your own CA is a pain, but because of Linux's design an externally manageable CA system isn't practical. I believe Fedora is quite close to releasing UKI kernels that will let you import Fedora's keys for upstream kernels, but those will likely break the moment you need DKMS for proprietary or too-bad-for-upstream drivers.
Please consider what it would take for me to reflash and install a new digital signature in the UEFI device driver for at least the following PCIe devices: (1) a GPU (2) a network card. It's a requirement that the device can cold boot after the reflash on an enthusiast motherboard that costs less than $200 retail with no other changes except the reflash, because that's what average consumers could do. If an average consumer could follow your procedure and not void their warranty, you win.
PoC or GTFO.
Here's an extensive guide on loading your own CA keys into UEFI motherboards: https://wiki.archlinux.org/title/Unified_Extensible_Firmware...
Loading custom CA keys into the UEFI store doesn't void your warranty and is available as a standard menu option on every UEFI setup screen I've seen. The key store can usually also be reset to accept Microsoft's keys again for when you want to run Windows.
As long as you know the password to your UEFI firmware setup, you can enroll keys or reset the key store. Of course this process is way more complicated than necessary (especially in Arch's documentation), but there's nothing preventing vendors like Canonical or Fedora from pre-signing kernels and drivers.
If you want to use your keys or your vendor's keys for existing drivers, you or the vendor can also sign those files.
The only difficult step for the end user is enrolling the MOK key, which requires selecting the right option in the UEFI setup and a reboot or two. Anyone can set up a repository of presigned keys and kernels you can rely on, whether that's you or your favourite open source project.
The problem is a lack of availability because the people who would need this, open source enthusiasts running Linux or *BSD, often just disable secure boot entirely. Nobody has made a nice GUI for managing this stuff either, because everyone who messes with this stuff knows the command line anyway.
You lose: this is not a PCIe device.
In that case I have misunderstood you. What do PCIe devices have to do with secure boot?
As far as I'm aware, the Platform Key is used to validate UEFI drivers' signature, so configuring Secure Boot as detailed in the Arch wiki will also provide you with the possibility to sign a UEFI driver.
I'm not sure if there are any open source UEFI drivers out there (I don't think anyone has bothered), but you could write them if you wanted to. Kind of like Windows drivers, there's just not a lot of interest in open source UEFI drivers. Secure boot isn't preventing you from writing your own drivers.
Customization of the POST boot logo image should be as simple as burning an image to a simple flash ROM which remains untrusted and data-only (no-execute) - the fact that performing these ostensibly harmless customizations will introduce a security issue just leads me to believe an intelligence agency is involved somehow...
I mean, it makes sense: one can probably convince the leaders of Hamas or Cuba that it's worthwhile to let the 14yo great nephew twice-removed of an inner party cadre member to replace their laptop's capitalistic imagery from ASUS/Acer/Razer/Dell with some JPEG with far more revolutionary appeal - they'd never suspect a thing...
It is as simple as burning an image into ROM for many devices, that's the entire problem. As it turns out, UEFI vendors and secure programming don't seem to mix well, and this easy customisation option turned into a major infection vector.
If you think compact C file parsing libraries containing vulnerabilities are some kind of conspiracy by intelligence agencies, I've got bad news for you about almost every operating system out there.
Hopefully in the future vendors will pick up languages like Rust with better memory safety (though any programming language can contain vulnerabilities, of course), at least for critical components like UEFI firmware, but as long as the current code bases are used, we'll have parsing bugs. These firmwares have over a decade of legacy at this point, and if they haven't bothered fuzzing up to now, I doubt they will in the future, let alone rewrite their parsers to be safer.
This is definitely the kind of thing intelligence agencies look for but considering the half century of C programmers utterly failing at decoding files safely makes me highly skeptical that anyone needed to compromise all of these vendors.
Also, the Hamas thing is a bit of a tangent but given that Israel had over a year’s advance notice I suspect that elite hacking is not a prerequisite.
Apple has it working just fine.
Don’t blame shoddy implementations by companies who are only checking checkboxes.
While Apple will spend billions on the latest techniques to ensure your iPhone can only run things Apple signed, there is meanwhile a real lack of enthusiasm around the basics like making the central messaging app not a native code shitshow invoking obscure (native of course) open source libraries they never update, you know, the basics. It's the latter that keeps getting their customers exploited.
Why are you emphasising native code? Is there something about native code being bad that I am missing?
everything not running in a browser is bad, b/c it's not within reach of the fullstack shitheads /s
if they can't go down to the metal, they're not "fullstack" /s
Native code has more direct access to the system, e.g. accessing arbitrary regions of a process's memory via pointer arithmetic, invoking arbitrary syscalls, etc. In contrast, "managed code" like a JVM, CPython VM, etc. is subject to a more structured semantic model, which allows more restrictions to be imposed.
I don't understand how this is not downvoted into oblivion. As others have already pointed out, this is absolute BS.
BS why?
Can a TPM not be used to remotely attest that you are running an unmodified OS and a TPM device that has been approved by the DRM implementer, before handing you an encryption key that never touches the disk?
Can you not restrict the list of approved OS to those that do not allow root/kernel access to the user?
Can you not restrict the list of approved TPMs to those that cannot be "easily" compromised? (i.e. only allow TPMs in the same die as the CPU)
Just because it's not used today does not mean it won't be used tomorrow. Microsoft has not completely pushed this through yet because they know that half of their userbase are pirates. But they are making preparatory steps for it, such as blocking systems without TPM or older CPUs out of Windows 11.
Just look at Android to see what it will look like in a few years.
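To make the concern concrete, here's a rough sketch of the verifier side such a scheme could use (real TPM quotes are signed structures over PCR values; the flat digest comparison here is a simplification, and the names are invented):

    /* Rough sketch of remote-attestation gatekeeping as described above:
     * the server only releases the content key when the reported PCR
     * digest matches a vendor-approved "golden" value. Real TPM quotes
     * are signed structures; the flat memcmp here is a simplification. */
    #include <stdint.h>
    #include <string.h>

    #define N_APPROVED 2

    /* Hypothetical vendor-approved "golden" PCR digests (placeholders). */
    static const uint8_t approved[N_APPROVED][32] = {{0}};

    int release_content_key(const uint8_t reported_pcr_digest[32],
                            const uint8_t content_key[32],
                            uint8_t out_key[32]) {
        for (int i = 0; i < N_APPROVED; i++) {
            if (memcmp(reported_pcr_digest, approved[i], 32) == 0) {
                memcpy(out_key, content_key, 32);
                return 1; /* OS/firmware combination is sanctioned */
            }
        }
        return 0; /* unapproved ROM: no key for you */
    }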
I have a Linux box set up with secure boot. I manage my own keys, I sign my UKI kernels. I use TPM2 for disk encryption in addition to requiring password. Where does DRM come into this? Where does Microsoft? I use neither. So no, secure boot and TPMs are not "designed for DRM and not to protect you, the user". The fact that some garbage companies have figured out how to use some of these features to harm consumers is another matter but so can millions of other things. Choose the companies you work with well, garbage gonna garbage.
Like I said, just take a look at Android. If you don't have an approved ROM you cannot use banking apps. Or Netflix. Or play most gacha games.
Riot Games's anti-cheat for Valorant will already not let you play if you don't have a TPM and Secure Boot enabled, and I'm pretty sure you need to have factory (Microsoft) keys for it to work.
Google has recently backtracked on their WEI API proposal which would give websites access to TPM remote attestation, but it will be back once people cool off. Once it's released you can count on every website with ads (like YouTube) to slap it on just to ensure you don't block them.
The list of things you cannot do on your Linux box will just keep increasing over time.
Even bigger security flaws were found in OpenSSL, Firefox or the Linux kernel.
Are they also security theater?
Of course they are. OpenSSL didn't prevent that malware from modifying a UEFI driver, so of course it's worthless.
This isn't related to secure boot or TPM/hardware-secrets-management. It's just a vulnerability[1] in some configurable[2] software running with privilege. This is downstream of the TPM keys for sure, and it's running validated blobs from the perspective of secure boot.
You could have this bug with or without that stuff, and it would be bad either way.
Or maybe you're trying to say that secure boot is an impossible problem and we shouldn't try? No, it clearly works. Boot cracks on UEFI devices are real, but comparatively rare.
[1] An image format parser bug
[2] To display that image at boot
Not really. The UEFI firmware is supposed to extend PCRs in the TPM based on what it does, but it looks like these vulnerabilities allow taking over the firmware before it does this and thus allows spoofing of what goes in those PCRs. Which breaks TPM security.
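For reference, the extend operation itself is tiny, which is why the ordering matters so much. A sketch (assuming OpenSSL's SHA-256): firmware hijacked before it measures anything can simply feed in the expected values instead of the real ones, and every later check still passes.

    /* The TPM PCR "extend" operation, sketched with OpenSSL's SHA-256:
     * new_pcr = SHA256(old_pcr || measurement). The chain is only as
     * trustworthy as the first code that calls it - firmware hijacked
     * before this point can extend the *expected* digests instead of
     * the real ones. Build with -lcrypto. */
    #include <openssl/sha.h>
    #include <stdint.h>

    void pcr_extend(uint8_t pcr[32], const uint8_t measurement[32]) {
        uint8_t buf[64];
        SHA256_CTX ctx;

        /* Concatenate the current PCR value with the new measurement... */
        for (int i = 0; i < 32; i++) {
            buf[i] = pcr[i];
            buf[32 + i] = measurement[i];
        }

        /* ...and hash the pair back into the PCR. */
        SHA256_Init(&ctx);
        SHA256_Update(&ctx, buf, 64);
        SHA256_Final(pcr, &ctx);
    }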
Except I'm not aware of any DRM that uses TPMs or secure boot (the 4K DRM that Blu-ray and Netflix use relies on SGX), but I use secure boot and a TPM to handle my FDE keys, which I use every day.
TPMs can't make sure that there are no vulnerabilities. Are TPMs actually used for DRM? Apparently not: https://news.ycombinator.com/item?id=25193346
Even worse: they’re security non-theater designed to protect “the computing environment” from you, the user.