These reports led us to believe the problem was likely a firmware issue, as most other issues could be resolved through a factory reset.
My dream is to intercept the write-enable lines on the flash chips holding these firmwares so I can lock out updates. And schedule a daily reboot for any memory-resident-only crap.
That’s what we used to do on, ahem, satellite receivers 20 years ago, and maybe we all need to treat every device attached to the internet as having a similar susceptibility to “electronic counter-measures”.
Or at least monitor them for updates and light up an indicator when an update happens. If it were my own equipment, I’d know whether it should go off or not.
It's a no-win situation. Sure, disabling firmware updates would have prevented this attack, but it would also prevent security fixes that keep the routers from being turned into a botnet.
But what I don't get in this case is why it was not possible to reset the device to its original state. It seems like a misdesign if it's possible to destroy all of the firmware, including the backup.
You could put a base-level firmware on ROM, with a hardware trigger, and all that does on boot is listen for and receive a signed firmware to write to the system. It needs a way to be triggered through hardware examining traffic, and the command it sees also needs to be signed. That recovery boot system needs to be as simple and minimal as possible so you can have good assurance that there aren't problems with it, and should be written in the safest language you can get away with. Guard that signing key with your life, and lock it away for a rainy day, only to be used if much of your fleet of devices is hosed entirely. It should not be the same as a firmware signing key, which needs to be pulled out and used sometimes.
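Roughly, that recovery path might look like the following (a minimal sketch in Python for readability; a real implementation would be bare-metal code, and the key, port, and flash device path here are all hypothetical placeholders):

    # Minimal sketch of the ROM recovery loop, assuming Ed25519 signatures.
    import socket
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    # Recovery public key baked in at manufacture; the private half stays
    # locked away, separate from the routine firmware-signing key.
    RECOVERY_PUBKEY = Ed25519PublicKey.from_public_bytes(bytes(32))  # placeholder

    def recovery_boot(port=9999):
        srv = socket.create_server(("", port))
        conn, _ = srv.accept()
        blob = b""
        while chunk := conn.recv(4096):
            blob += chunk
        sig, image = blob[:64], blob[64:]  # Ed25519 signatures are 64 bytes
        try:
            RECOVERY_PUBKEY.verify(sig, image)  # refuse anything unsigned
        except InvalidSignature:
            return  # an unsigned image is simply ignored
        with open("/dev/mtd0", "wb") as flash:  # hypothetical flash device
            flash.write(image)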
I think that could work, to a degree. There's always the risk that your recovery mechanism itself is exploited, so you need to make it as small and hardened a target as possible and reduce its complexity to the bare minimum. That doesn't solve the problem, which might be inherently unsolvable, but it may reduce the likelihood to levels where it's not a problem until long past the lifecycle of the devices.
Almost all devices have something like that already in the form of a bootloader or SOC bootstrapping mode. But the idea breaks down if you want to do it OTA. The full storage/kernel/network/UI stack required to make that happen isn't ever going to run under "ROM" in the sense of truly immutable storage.
The best you get is a read-only backup partition (shipped in some form on pretty much all laptops today), but that's no less exploitable really.
Why not? I'm essentially describing a specialized OOB system, and it would just use a carved-out small chunk of system RAM or ship with a minimal amount of RAM of its own. If you mean actually impossible to change because it's physical ROM ("truly immutable"), that's less important to the design than ensuring there's no mechanism that allows that storage area to be written to from the system itself, whether that's just the very locked-down and minimal recovery kernel it houses not allowing it, or a jumper.
Sure, but now your device needs two eMMC chips or whatever to store all that extra junk, and it's been priced out of the market. FWIW: an alternative to designs like this is just to ship your customers an extra router to keep in a box if the first stops working: it's exactly the same principle, and fails for the same reasons.
Whether that's likely is entirely based on the cost of the device. Some things are simple and cheap, and extra hardware cuts deeply into the profit. Others are not, but for those this sort of thing is also important because they're remote and you don't want to have to send a person out on site. When the device is expensive enough, or sending someone to the site is expensive enough, "just ship a replacement" is not really a viable solution, unless you're installing it in a high-availability capacity where you can fail over to it without physical intervention.
Obviously it's not a solution for every circumstance. Nothing really is. I don't think it's useful for us to assume that a solution has to be, as that doesn't really help us in the many instances when it's good enough.
I am not most people, but I keep a backup modem of a different brand which is properly configured.
Granted, I use it once a year because lightning toasts many of my appliances and I have to wait for the replacement from the ISP.
At least my ISP modems can disable OTA updates. A happy oversight on their part.
There is a simple solution to this: Make the flash removable. The firmware is stored on some SD card or M.2 device, if it becomes corrupt then you take it out and flash it with clean firmware using any PC.
You don't even need the rest of the device to contain any signing mechanism with keys that could be compromised, because using this method requires physical access, and any compromise that occurs from physical access can be detected or undone the same way, by checksumming or re-flashing the storage device from a clean PC.
And you can also do signed firmware updates OTA without worrying that the device can be bricked by a vulnerability or signing key compromise, because it can always be restored via physical access.
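The "verify from a clean PC" step could be as simple as hashing the removed card and comparing against a vendor-published digest. A sketch, with a hypothetical device path and digest:

    import hashlib

    def sha256_of_device(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as dev:
            while chunk := dev.read(chunk_size):
                h.update(chunk)
        return h.hexdigest()

    EXPECTED = "..."  # vendor-published digest of the clean firmware image
    if sha256_of_device("/dev/sdX") != EXPECTED:
        print("Flash contents differ from the clean image: re-flash it.")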
Apple has a robust recovery mechanism on their laptops, via the T2 security coprocessor.
https://www.macrumors.com/2020/06/25/apple-silicon-macs-new-...
https://support.apple.com/guide/security/
Which is still running out of mutable storage. The point isn't whether you can verify the boot, it's whether you can prevent a compromised device (compromised to the point of being able to write to its own storage) from bricking itself.
Now, as it happens, Apple (everyone really, but Apple is a leader for sure) has some great protections in place to prevent that. And that's great. But if you feel you can rely on those protections, there's no need for the ROM recovery demanded upthread.
There's also Apple DFU mode to restore OS with help from immutable ROM and external device, without depending on installed OS or mutable "Recovery OS" partition, https://theapplewiki.com/wiki/DFU_Mode & https://support.apple.com/en-us/108900
> DFU or Device Firmware Upgrade mode allows all devices to be restored from any state. It is essentially a mode where the BootROM can accept iBSS. DFU is part of the SecureROM which is burned into the hardware, so it cannot be removed.
... right, which as mentioned requires physical access and external hardware and doesn't meet the requirements above either. And it's not particularly notable either: every flashable SOC in every market has something like this, at different levels of sophistication. Again it's just not a solvable problem at the level of integration imagined.
The bootloader installs the firmware. If you corrupt the bootloader, it can't install anything anymore; you'd need to physically access the chip to use an external flashing device. Some devices have non-writable bootloaders. They have an internal fuse that blows after the first write, so the chip's bootloader is locked. That means you can always flash a new firmware, but you can't fix any bugs in the bootloader.
That seems like awful design? Can't you have an alternate immutable bootloader that can only be enabled with a physical switch? Or via some alternate port or something? That way they can update the live one while still having a fallback/downgrade path in case it has issues.
That's a good idea; I wish they had such a "safety switch".
However, I assume that any malware doesn't want to be detected, so in a typical scenario I would have a hard time knowing whether I should flip the switch or not.
That was likely the point whoever did it was trying to make: that these were extremely bad devices.
1) The ISP exposed some form of external management interface, used to access the devices, that they shouldn't have.
2) The attacker overcame whatever security was used on said management interface.
3) Once in, the attacker could simply overwrite the first few sectors of the NAND to make the devices unbootable without a local hardware serial console.
4) There was no failsafe recovery mechanism, it would seem.
An actual "modem" would most likely prove volatile/immutable by nature, but anything with a "router" built into it is far more vulnerable, as those typically run poorly secured tiny Linux systems, and are subject to Chinese enshittification.
Or a JTAG interface that the chip has in silicon and recovery is always possible from bare metal. Dunno if that’s technically in the MCU’s bootloader or if the bootloader comes after.
Still requires a truck roll but at least you don’t need a hot air workstation.
If the vendor's actually trying to lock down the platform they'll usually burn the JTAG fuses as well. It's hit or miss though, I've definitely come across heavily locked down devices that still have JTAG/SWD enabled.
Edit: To your question, JTAG is usually physical silicon, not part of the bootloader.
25 years in tech and I’m still waiting for that free lunch
It's an interesting challenge because the device is nominally "under ISP control" but any device located in a customer's home is under the physical control of the customer. The mistrust between the ISP and the customer leads to "trusted" devices where the firmware, including the backup, can be overwritten by the ISP, but then cannot recover if it gets corrupted. And believe me, the corrupt firmware scenario happens a lot due to incompetence.
This is getting attention because it wasn't incompetence this time.
But how does blank, unprovisioned equipment discover a path to its provisioning server? Especially in light of the new "trusted" push, this is an arms race in a market segment such as routers where there isn't any money for high end solutions - only the cheapest option is even considered.
tl;dr: a social and economic problem, likely can't be fixed with a purely technical solution
This was years ago, but I remember getting cable service activated somewhere in Florida with Bright House. I handed the cable guy some ancient Motorola cable modem I had found at a discount store. The guy took one look at it and said "look dude, if you hacked this thing to get around bandwidth caps it is your problem if you get caught". Apparently that particular modem was pretty easy to modify.
Maybe it already was modified!
Technical solution: customer treats ISP's modem/router as untrusted, and daisy chains their own router after it. Neither malware nor ISP's shenanigans can access the inner network.
That’s what I do. Also makes changing providers straightforward (though last time I needed to set up some custom VLAN stuff on my router but didn’t have to fumble with any wifi config).
Humor me; how would that work? If anything, I'd expect it to be easier to overwrite the inactive slot (assuming an A/B setup, ideally with read-only root). If you really wanted, you could have a separate chip that was read-only enforced by hardware, and I've seen that done for really low level firmware (ex. Chromebook boot firmware) but it's usually really limited precisely because the inability to update it means you get stuck with any bugs so it's usually only used to boot to the real (rw) storage.
Generally the way this works is you have two partitions in your flash chip. One contains the current firmware and the second is a place to drop new firmware. Then the bootloader twiddles a bit somewhere and boots to one partition or the other. There's really nothing stopping you from wiping the previous partition once you're done.
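A sketch of that bit-twiddling, with hypothetical slot paths, NVRAM location, and validity check:

    SLOTS = {"A": "/dev/mtd1", "B": "/dev/mtd2"}

    def image_is_valid(dev):
        # Stand-in check: a real bootloader verifies a header checksum
        # or signature here.
        try:
            with open(dev, "rb") as f:
                return f.read(4) == b"FWv1"  # hypothetical magic bytes
        except OSError:
            return False

    def choose_boot_slot():
        with open("/nvram/active_slot") as f:  # the "twiddled bit"
            active = f.read().strip()
        if image_is_valid(SLOTS[active]):
            return SLOTS[active]
        other = "B" if active == "A" else "A"
        return SLOTS[other]  # fall back to the previous image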
I think some routers still have a single flash partition and the update process here is a lot more hairy and will obviously not retain the previous version after an update.
Apart from attacks like this, there's absolutely no reason to have a protected read-only copy of the factory firmware. 99.9999% of the time, all you would ever need to do to recover from a bad flash is to fail back to the previous image.
A proper read only factory image would require an extra ROM chip to store it, as well as extra bootloader complexity required to load from ROM or copy to flash on failure. It's just barely expensive enough at scale to not be worth it for an extremely rare event.
But a switch on the router: flip the switch and the router reboots to a known-safe OS that downloads, verifies, and updates the firmware. Then it waits for you to flip the switch back before it will behave as a router again.
Unless attackers manage to steal the signing keys, and also intercept and redirect traffic to their web server to send a fake firmware, this seems secure to me. The only downside I'm seeing is that it would be impossible to put in a custom firmware. Maybe add a USB-key firmware option?
I'm not too familiar with customer DSL solutions but for cable modems, that firmware and configuration is managed by the CMTS because technology and configuration changes on the head end may require customer-side changes to ensure continued operation. The config is a pretty dynamic thing as frequency plans, signal rate, etc change over time as the cable plant and head end equipment is upgraded and maintained.
I'd expect that any attempt to lock the write-enable line on the EEPROM would eventually result in your modem failing to provision.
When your provider cuts you off, that’s when you know that your provider has a legit upgrade you need to take. Take the update and then lock stuff up again.
Of course, I don’t think you’re supposed to make mods to your vendor provided equipment…
In the satellite world, this would happen too: old firmware would be cut off. That’s when you go legit for a while with your sub’d card, take the update, and watch your sub’d channels until the new update could be reverse engineered. And probably have some heroes learn the hard way of taking the update and having some negative impacts that are harder to reverse.
I'm not sure what such an approach would accomplish. If the goal is to prevent the kind of problem seen in the OP (which, let's be real - is a rare occurrence) in order to avoid an unplanned outage, you've instead created a situation where it'll fail to connect far more regularly as you're kicked off the network for not correctly handling the provisioning process. You're trading a rare unplanned outage for a common unplanned outage.
Depends how often the provider pushes out updates (and the purpose/necessity of them).
And it’s only that “rare unplanned outage” when a malicious update bricks your device. Much worse is a malicious update that doesn’t result in an outage. Probably still rare but that impact though.
Edit: would also add that there’s probably a big firmware chip that changes infrequently, and frequently changing config stored on a separate and smaller chip (like a 24c or 93 series eeprom that holds a few kilobytes). That way you don’t risk bricking your modem by unplugging it at the wrong time.
> My dream is to intercept the write-enable lines on the flash chips holding these firmwares so I can lock out updates. And schedule a daily reboot for any memory-resident-only crap.
There was an open hardware project for SD card emulation, where the emulator could reject writes, https://github.com/racklet/meeting-notes/blob/main/community...
OSS emulation for SPI flash, https://trmm.net/Spispy/
Some USB drives (Kanguru) and SSD enclosures (ElecGear M.2 2230 NVME) have firmware and physical switch to block writes, useful to boot custom "live ISOs" that run from RAM.
For the rest of us, there's Ventoy. https://www.ventoy.net/
Eventually in the satellite world, card emulators took over and only the receiver was a vector of attack, but then the receivers started getting simulated too.
The nice thing about emulators is that you could intercept calls that you wanted and send your own response while still taking any and all updates. Hard to break when you have more control than they do.
ISPs can send any firmware to a DOCSIS cable modem, without the user knowing or accepting.
Imagine the damage that could be done by a malicious actor via the ISPs computers.
Or imagine someone being able to hack the system that does that update even without the ISP.
600K users would be a toy; they could do it to 6 million.
Doesn't even have to be clever, just brick millions of cablemodems.
North Korea or some other government level entity could manage the resources to figure that out.
Do ISPs and modem vendors roll their own OTA infrastructure and signing key management, or contract it out?
I suppose from the point of view of someone with a black-market HU card, DirecTV was an example of an Advanced Persistent Threat. Never thought of it that way before.
Funny thing about DirecTV is that because they allowed many manufacturers to build receivers, DirecTV had little control over the receiver firmware, so these counter-countermeasures weren't necessary at the receiver level.
Other providers that rolled out their own receivers had high control over the receiver firmware and once users figured out how to protect their cards, the receivers became an effective attack vector for the lazy.
But that’s where a lot of the public knowledge about JTAGs really started coming to light. Awfully nice of them to put in a cutout at the bottom of the receiver.
"First party malware"
Secure boot schemes can already "fix" this. If a boot image is programmed that isn't signed, the system boots to a write-protected backup image. The system can also, to some degree, block the programming of images that aren't signed, but presumably the malware has gained root access.
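A sketch of that fallback decision, assuming Ed25519 signatures as in the earlier sketch; the key, image paths, and detached-signature layout are hypothetical:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    BOOT_PUBKEY = Ed25519PublicKey.from_public_bytes(bytes(32))  # placeholder

    def select_image(primary="/boot/fw.img", backup="/boot/fw-backup.img"):
        with open(primary, "rb") as f:
            image = f.read()
        with open(primary + ".sig", "rb") as f:
            sig = f.read()
        try:
            BOOT_PUBKEY.verify(sig, image)
            return primary  # signature good: boot normally
        except InvalidSignature:
            return backup   # tampered/unsigned: boot the write-protected backup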