
The Pumpkin Eclipse

Scoundreller
40 replies
2d2h

> These reports led us to believe the problem was likely a firmware issue, as most other issues could be resolved through a factory reset.

My dream is to intercept the write-enable lines on the flash chips holding these firmwares so I can lock out updates. And schedule a daily reboot for any memory-resident-only crap.

That’s what we used to do on, ahem, satellite receivers, 20 years ago and maybe we all need to treat every device attached to the internet as having a similar susceptibility to “electronic counter-measures”.

Or at least monitor them for updates and light an indicator when an update happens. If it were my own equipment, I'd know whether it should be going off or not.

not2b
26 replies
2d1h

It's a no-win situation. Sure, disabling firmware updates would have prevented this attack, but it would also prevent security fixes that keep the routers from being turned into a botnet.

But what I don't get in this case is why it was not possible to reset the device to its original state. It seems like a misdesign if it's possible to destroy all of the firmware, including the backup.

kbenson
10 replies
2d

You could put a base-level firmware on ROM, with a hardware trigger, and all it does on boot is listen for and receive a signed firmware to write to the system. It needs a way to be triggered by hardware examining traffic, and that trigger command also needs to be signed. That recovery boot system needs to be as simple and minimal as possible so you can have good assurance that there aren't problems with it, and it should be written in the safest language you can get away with. Guard that signing key with your life, and lock it away for a rainy day, only to be used if much of your fleet of devices is hosed entirely. It should not be the same as the firmware signing key, which needs to be pulled out and used sometimes.

I think that could work, to a degree. There's always the risk that your recovery mechanism itself is exploited, so you need to make it as small and hardened a target as possible and reduce its complexity to the bare minimum. That doesn't solve the problem, which might be inherently unsolvable, but it may reduce the likelihood to a level where it isn't a concern until long past the lifecycle of the devices.
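A rough sketch of the recovery path described above (an illustration, not any vendor's actual implementation): the ROM code enforces a single rule, verify before touching flash. uart_read(), ed25519_verify() and flash_write() are hypothetical platform/crypto hooks, and the public key is assumed to be burned in at manufacture.

    /* Hypothetical recovery loader: receive an image over a serial link,
     * refuse to write it to flash unless the signature verifies against
     * a public key baked into the ROM. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define MAX_IMAGE (8u * 1024u * 1024u)
    #define SIG_LEN   64u

    extern size_t uart_read(uint8_t *buf, size_t len);            /* blocking read */
    extern int    ed25519_verify(const uint8_t *msg, size_t len,
                                 const uint8_t sig[SIG_LEN],
                                 const uint8_t pubkey[32]);        /* 1 = valid */
    extern int    flash_write(uint32_t offset, const uint8_t *src, size_t len);

    static const uint8_t RECOVERY_PUBKEY[32] = { 0 /* real key burned in at manufacture */ };
    static uint8_t image[MAX_IMAGE];               /* illustrative static buffer */

    int recovery_main(void)
    {
        uint8_t hdr[4], sig[SIG_LEN];
        uint32_t img_len;

        if (uart_read(hdr, sizeof hdr) != sizeof hdr)
            return -1;
        memcpy(&img_len, hdr, sizeof img_len);     /* image length, host byte order in this sketch */
        if (img_len == 0 || img_len > MAX_IMAGE)
            return -1;                             /* refuse oversized images */

        if (uart_read(image, img_len) != img_len ||
            uart_read(sig, SIG_LEN) != SIG_LEN)
            return -1;

        if (!ed25519_verify(image, img_len, sig, RECOVERY_PUBKEY))
            return -1;                             /* unsigned image: do nothing */

        return flash_write(0, image, img_len);     /* reached only for signed images */
    }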

ajross
9 replies
2d

> You could put a base-level firmware on ROM, with a hardware trigger, and all it does on boot is listen for and receive a signed firmware to write to the system.

Almost all devices have something like that already in the form of a bootloader or SOC bootstrapping mode. But the idea breaks down if you want to do it OTA. The full storage/kernel/network/UI stack required to make that happen isn't ever going to run under "ROM" in the sense of truly immutable storage.

The best you get is a read-only backup partition (shipped in some form on pretty much all laptops today), but that's no less exploitable really.

kbenson
4 replies
1d23h

> The full storage/kernel/network/UI stack required to make that happen isn't ever going to run under "ROM" in the sense of truly immutable storage.

Why not? I'm essentially describing a specialized OOB system; it would just use a small carved-out chunk of system RAM or ship with a minimal amount of RAM of its own. If you mean actually impossible to change because it's physical ROM ("truly immutable"), that's less important to the design than ensuring there's no mechanism that allows that storage area to be written from the system itself, whether that's the very locked-down and minimal recovery kernel it houses not allowing it, or a jumper.

ajross
3 replies
1d22h

Sure, but now your device needs two eMMC chips or whatever to store all that extra junk, and it's been priced out of the market. FWIW: an alternative to designs like this is just to ship your customers an extra router to keep in a box if the first stops working: it's exactly the same principle, and fails for the same reasons.

kbenson
0 replies
1d16h

> and it's been priced out of the market.

Whether that's likely depends entirely on the cost of the device. Some things are simple and cheap, and extra hardware cuts deeply into the profit. Others are not, but this sort of capability matters for them too, because they are remote and you don't want to have to send a person out on site. When the device is expensive enough, or sending someone to the site is expensive enough, "just ship a replacement" is not really a viable solution, unless you're installing it in a high-availability capacity where you can fail over to it without physical intervention.

Obviously it's not a solution for every circumstance. Nothing really is. I don't think it's useful for us to assume that a solution has to be, as that doesn't really help us in the many instances when it's good enough.

catlikesshrimp
0 replies
1d4h

I am not most people, but I keep a backup modem of a different brand which is properly configured.

Granted, I use it once a year because lightning toasts many of my appliances and I have to wait for the replacement from the ISP.

At least my ISP modems can disable OTA updates. A happy oversight on their part.

AnthonyMouse
0 replies
1d19h

There is a simple solution to this: make the flash removable. The firmware is stored on some SD card or M.2 device; if it becomes corrupt, you take it out and flash it with clean firmware using any PC.

You don't even need the rest of the device to contain any signing mechanism with keys that could be compromised, because using this method requires physical access, and any compromise that occurs from physical access can be detected or undone the same way, by checksumming or re-flashing the storage device from a clean PC.

And you can also do signed firmware updates OTA without worrying that the device can be bricked by a vulnerability or signing key compromise, because it can always be restored via physical access.
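A minimal sketch of the "verify it from a clean PC" step: checksum the image pulled off the removable flash and compare it to a published known-good value. The filename and expected value below are placeholders, and in practice you would more likely just run sha256sum against a vendor-published hash; the CRC here only illustrates the principle.

    /* Checksum a firmware image copied off the removable flash and compare
     * it against a known-good value (placeholder). */
    #include <stdio.h>
    #include <stdint.h>

    static uint32_t crc32_update(uint32_t crc, const uint8_t *buf, size_t len)
    {
        crc = ~crc;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
        }
        return ~crc;
    }

    int main(void)
    {
        const uint32_t expected = 0xDEADBEEFu;    /* placeholder: published known-good CRC */
        FILE *f = fopen("firmware.bin", "rb");    /* image copied off the SD/M.2 card      */
        uint8_t buf[4096];
        uint32_t crc = 0;
        size_t n;

        if (!f) { perror("firmware.bin"); return 1; }
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            crc = crc32_update(crc, buf, n);
        fclose(f);

        printf("crc32 = 0x%08X (%s)\n", (unsigned)crc,
               crc == expected ? "matches known-good image" : "MISMATCH: re-flash");
        return crc == expected ? 0 : 2;
    }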

ajross
2 replies
1d23h

Which is still running out of mutable storage. The point isn't whether you can verify the boot, it's whether you can prevent a compromised device (compromised to the point of being able to write to its own storage) from bricking itself.

Now, as it happens, Apple (everyone really, but Apple is a leader for sure) has some great protections in place to prevent that. And that's great. But if you feel you can rely on those protections, there's no need for the ROM recovery demanded upthread.

stacktrust
1 replies
1d23h

There's also Apple's DFU mode to restore the OS with help from immutable ROM and an external device, without depending on the installed OS or a mutable "Recovery OS" partition: https://theapplewiki.com/wiki/DFU_Mode & https://support.apple.com/en-us/108900

> DFU or Device Firmware Upgrade mode allows all devices to be restored from any state. It is essentially a mode where the BootROM can accept iBSS. DFU is part of the SecureROM which is burned into the hardware, so it cannot be removed.

ajross
0 replies
1d22h

... right, which as mentioned requires physical access and external hardware and doesn't meet the requirements above either. And it's not particularly notable either: every flashable SOC in every market has something like this, at different levels of sophistication. Again it's just not a solvable problem at the level of integration imagined.

bippihippi1
6 replies
2d1h

The bootloader installs the firmware. If you corrupt the bootloader, it can't install anything anymore; you'd need to physically access the chip to use an external flashing device. Some devices have non-writable bootloaders. They have an internal fuse that blows after the first write, so the chip's bootloader is locked. That means you can always flash a new firmware, but you can't fix any bugs in the bootloader.

dataflow
2 replies
2d

> The bootloader installs the firmware. If you corrupt the bootloader, it can't install anything anymore.

That seems like awful design? Can't you have an alternate immutable bootloader that can only be enabled with a physical switch? Or via some alternate port or something? That way they can update the live one while still having a fallback/downgrade path in case it has issues.
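A sketch of that idea, under the assumption that the fallback bootloader lives in storage with no write path from the running system (mask ROM, or flash behind a write-protect jumper). gpio_read(), verify_image() and jump_to() are hypothetical board hooks, and the pin number and addresses are made up.

    /* First-stage boot: a physical switch forces the immutable fallback
     * bootloader; otherwise the updatable one is used if it still verifies. */
    #include <stdint.h>

    #define RECOVERY_SWITCH_PIN  17           /* illustrative pin number */
    #define PRIMARY_BOOT_ADDR    0x00100000u  /* updatable bootloader    */
    #define FALLBACK_BOOT_ADDR   0x00000000u  /* immutable fallback      */

    extern int  gpio_read(int pin);                      /* 1 = switch held     */
    extern int  verify_image(uint32_t addr);             /* 1 = signature valid */
    extern void jump_to(uint32_t addr) __attribute__((noreturn));

    void reset_handler(void)
    {
        /* Holding the physical switch at power-on forces the known-good path. */
        if (gpio_read(RECOVERY_SWITCH_PIN))
            jump_to(FALLBACK_BOOT_ADDR);

        /* Otherwise prefer the updatable bootloader, but fall back if it
         * no longer verifies (e.g. a botched or malicious update). */
        if (verify_image(PRIMARY_BOOT_ADDR))
            jump_to(PRIMARY_BOOT_ADDR);

        jump_to(FALLBACK_BOOT_ADDR);
    }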

galaxyLogic
0 replies
1d14h

That's a good idea; I wish they would have such a "safety switch".

However, I assume any malware doesn't want to be detected, so in a typical scenario I would have a hard time knowing whether I should flip the switch or not.

bastard_op
0 replies
1d23h

That was likely the point whoever did it was trying to make: that these were extremely bad devices.

1) The ISP exposed some form of external management they used to access them that they shouldn't have.
2) The attacker overcame whatever security was used on said management interface.
3) Once in, the attacker could simply overwrite the first few sectors of the NAND to make them unbootable without a local hardware serial console.
4) There was no failsafe recovery mechanism, it would seem.

An actual "modem" would most likely prove volatile/immutable by nature, but anything with a "router" built into it is far more vulnerable; these typically run poorly secured tiny Linux systems and are subject to Chinese enshittification.

Scoundreller
1 replies
2d

Or a JTAG interface that the chip has in silicon and recovery is always possible from bare-metal. Dunno if that’s technically in the MCU’s bootloader or if the boot loader comes after.

Still requires a truck roll but at least you don’t need a hot air workstation.

tonyarkles
0 replies
1d23h

> Or a JTAG interface that the chip has in silicon and recovery is always possible from bare-metal. Dunno if that’s technically in the MCU’s bootloader or if the boot loader comes after.

If the vendor's actually trying to lock down the platform they'll usually burn the JTAG fuses as well. It's hit or miss though, I've definitely come across heavily locked down devices that still have JTAG/SWD enabled.

Edit: To your question, JTAG is usually physical silicon, not part of the bootloader.

incangold
0 replies
2d

25 years in tech and I’m still waiting for that free lunch

sounds
4 replies
2d1h

It's an interesting challenge because the device is nominally "under ISP control" but any device located in a customer's home is under the physical control of the customer. The mistrust between the ISP and the customer leads to "trusted" devices where the firmware, including the backup, can be overwritten by the ISP, but then cannot recover if it gets corrupted. And believe me, the corrupt firmware scenario happens a lot due to incompetence.

This is getting attention because it wasn't incompetence this time.

But how does blank, unprovisioned equipment discover a path to its provisioning server? Especially in light of the new "trusted" push, this is an arms race in a market segment such as routers where there isn't any money for high end solutions - only the cheapest option is even considered.

tl;dr: a social and economic problem, likely can't be fixed with a purely technical solution

sidewndr46
1 replies
2d

This was years ago, but I remember getting cable service activated somewhere in Florida with Bright House. I handed the cable guy some ancient motorola cable modem I had found at a discount store. The guy took one look at it and said "look dude, if you hacked this thing to get around bandwidth caps it is your problem if you get caught". I guess apparently that particular modem was pretty easy to modify

Scoundreller
0 replies
1d21h

Maybe it already was modified!

cuu508
1 replies
2d

Technical solution: customer treats ISP's modem/router as untrusted, and daisy chains their own router after it. Neither malware nor ISP's shenanigans can access the inner network.

Scoundreller
0 replies
2d

That’s what I do. Also makes changing providers straightforward (though last time I needed to set up some custom VLAN stuff on my router but didn’t have to fumble with any wifi config).

yjftsjthsd-h
0 replies
2d1h

> It seems like a misdesign if it's possible to destroy all of the firmware, including the backup.

Humor me; how would that work? If anything, I'd expect it to be easier to overwrite the inactive slot (assuming an A/B setup, ideally with read-only root). If you really wanted, you could have a separate chip that was read-only enforced by hardware, and I've seen that done for really low level firmware (ex. Chromebook boot firmware) but it's usually really limited precisely because the inability to update it means you get stuck with any bugs so it's usually only used to boot to the real (rw) storage.

utensil4778
0 replies
22h27m

Generally the way this works is you have two partitions in your flash chip. One contains the current firmware and the second is a place to drop new firmware. Then the bootloader twiddles a bit somewhere and boots to one partition or the other. There's really nothing stopping you from wiping the previous partition once you're done.

I think some routers still have a single flash partition and the update process here is a lot more hairy and will obviously not retain the previous version after an update.

Apart from attacks like this, there's absolutely no reason to have a protected read-only copy of the factory firmware. 99.9999% of the time, all you would ever need to do to recover from a bad flash is to just fail back to the previous image.

A proper read-only factory image would require an extra ROM chip to store it, as well as extra bootloader complexity to load from ROM or copy to flash on failure. It's just barely expensive enough at scale to not be worth it for an extremely rare event.
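A sketch of the two-slot scheme described in this comment, with read_boot_flag(), image_ok() and boot() standing in for platform-specific code; the point is that recovery only works while at least one slot still verifies, which is exactly what this attack took away.

    /* Boot-slot selection for an A/B firmware layout. */
    #include <stdint.h>

    enum slot { SLOT_A = 0, SLOT_B = 1 };

    extern enum slot read_boot_flag(void);   /* the bit the bootloader "twiddles" */
    extern int       image_ok(enum slot s);  /* checksum/signature check          */
    extern void      boot(enum slot s) __attribute__((noreturn));

    void select_and_boot(void)
    {
        enum slot active = read_boot_flag();
        enum slot other  = (active == SLOT_A) ? SLOT_B : SLOT_A;

        if (image_ok(active))
            boot(active);      /* normal case: the last update was fine        */

        if (image_ok(other))
            boot(other);       /* bad flash: fail back to the previous image   */

        /* Both slots bad (e.g. both deliberately wiped): nothing left to
         * boot -- the "bricked" state. */
        for (;;) { }
    }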

ars
0 replies
1d10h

> Sure, disabling firmware updates would have prevented this attack, but it would also prevent security fixes that keep the routers from being turned into a botnet.

But a switch on the router: flip the switch and the router reboots to a known safe OS that downloads, verifies, and updates the firmware. Then it waits for you to flip the switch back before it will behave as a router again.

Unless attackers manage to steal the signing keys, and also intercept and redirect traffic to their web server to send a fake firmware, this seems secure to me. The only downside I'm seeing is that it would be impossible to put in a custom firmware. Maybe add a USB-key firmware option?

luma
3 replies
2d1h

I'm not too familiar with consumer DSL solutions, but for cable modems, that firmware and configuration are managed by the CMTS, because technology and configuration changes on the head end may require customer-side changes to ensure continued operation. The config is a pretty dynamic thing, as frequency plans, signal rates, etc. change over time while the cable plant and head-end equipment are upgraded and maintained.

I'd expect that any attempt to lock write enable to the EEPROM would eventually result in your modem failing to provision.

Scoundreller
2 replies
2d

When your provider cuts you off, that’s when you know that your provider has a legit upgrade you need to take. Take the update and then lock stuff up again.

Of course, I don’t think you’re supposed to make mods to your vendor provided equipment…

In the satellite world, this would happen too: old firmware would be cut off. That's when you go legit for a while with your sub'd card, take the update, and watch your sub'd channels until the new update could be reverse engineered. And probably let some heroes learn the hard way by taking the update and suffering negative impacts that were harder to reverse.

luma
1 replies
1d23h

I'm not sure what such an approach would accomplish. If the goal is to prevent the kind of problem seen in the OP (which, let's be real - is a rare occurrence) in order to avoid an unplanned outage, you've instead created a situation where it'll fail to connect far more regularly as you're kicked off the network for not correctly handling the provisioning process. You're trading a rare unplanned outage for a common unplanned outage.

Scoundreller
0 replies
1d22h

Depends how often the provider pushes out updates (and the purpose/necessity of them).

And it’s only that “rare unplanned outage” when a malicious update bricks your device. Much worse is a malicious update that doesn’t result in an outage. Probably still rare but that impact though.

Edit: would also add that there’s probably a big firmware chip that changes infrequently, and frequently changing config stored on a separate and smaller chip (like a 24c or 93 series eeprom that holds a few kilobytes). That way you don’t risk bricking your modem by unplugging it at the wrong time.

stacktrust
2 replies
2d1h

> My dream is to intercept the write-enable lines on the flash chips holding these firmwares so I can lock out updates. And schedule a daily reboot for any memory-resident-only crap.

There was an open hardware project for SD card emulation, where the emulator could reject writes, https://github.com/racklet/meeting-notes/blob/main/community...

OSS emulation for SPI flash, https://trmm.net/Spispy/

Some USB drives (Kanguru) and SSD enclosures (ElecGear M.2 2230 NVME) have firmware and physical switch to block writes, useful to boot custom "live ISOs" that run from RAM.

ThePowerOfFuet
0 replies
1d14h

> Some USB drives (Kanguru) and SSD enclosures (ElecGear M.2 2230 NVME) have firmware and physical switch to block writes, useful to boot custom "live ISOs" that run from RAM.

For the rest of us, there's Ventoy. https://www.ventoy.net/

Scoundreller
0 replies
2d

Eventually in the satellite world, card emulators took over and only the receiver was a vector of attack, but then the receivers started getting simulated too.

The nice thing about emulators is that you could intercept calls that you wanted and send your own response while still taking any and all updates. Hard to break when you have more control than they do.

ck2
1 replies
2d

ISPs can send any firmware to a docsis cablemodem, without the user knowing or accepting.

Imagine the damage that could be done by a malicious actor via the ISPs computers.

Or imagine someone being able to hack the system that does that update even without the ISP.

600K users would be a toy, they could do it to 6 Million.

Doesn't even have to be clever, just brick millions of cablemodems.

North Korea or some other government level entity could manage the resources to figure that out.

stacktrust
0 replies
2d

Do ISPs and modem vendors roll their own OTA infrastructure and signing key management, or contract it out?

aidenn0
1 replies
2d1h

I suppose from the point of view of someone with a black-market HU card, DirecTV was an example of an Advanced Persistent Threat. Never thought of it that way before.

Scoundreller
0 replies
2d1h

Funny thing about DirecTV is that because they allowed many manufacturers to build receivers, DirecTV had little control over the receiver firmware, so these counter-countermeasures weren’t necessary at the receiver level.

Other providers that rolled out their own receivers had high control over the receiver firmware and once users figured out how to protect their cards, the receivers became an effective attack vector for the lazy.

But that’s where a lot of the public knowledge about JTAGs really started coming to light. Awfully nice of them to put in a cutout at the bottom of the receiver.

schmidtleonard
0 replies
2d1h

> maybe we all need to treat every device attached to the internet as having a similar susceptibility to “electronic counter-measures”

"First party malware"

russdill
0 replies
2d1h

Secure boot schemes can already "fix" this. If a boot image is programmed that isn't signed, the system boots to a write-protected backup image. The system can also, to some degree, block the programming of images that aren't signed, but presumably the malware has gained root access.

londons_explore
25 replies
2d1h

> Lumen identified over 330,000 unique IP addresses that communicated with one of 75 observed C2 nodes

How does Black Lotus Labs global telemetry know which IP communicated with which other IP if they have control of neither endpoint? Who/what is keeping traffic logs?

If these guys can do it, remind me again how Tor is secure because nobody could possibly be able to follow packets from your machine, through the onion hops, to the exit node where the same packet is available unencrypted...

vieinfernale
5 replies
2d1h

I'm quite disenchanted here. So this means that it is practically impossible to avoid IP fingerprints in any way? Even with Tor, VMs, etc.? You'll always be at the mercy of whoever runs the show unless you own the physical servers.

semiquaver
3 replies
1d18h

Of course a backbone provider can directly inspect the source and destination IP addresses of any traffic transiting its network. How could it be otherwise? That's not fingerprinting; it's just pulling fields out of a struct.

Tor does defeat this though. Rather than seeing the true destination of your traffic they see that of a Tor exit node.

londons_explore
2 replies
1d10h

But... That tor exit node then sends the traffic onwards... Again via the internet, and the backbone provider can inspect it again.

Seeing a packet heading to a tor exit node and then a similarly sized packet heading onwards a fraction of a millisecond later is a pretty surefire way to spy on individual tor users.

Thorrez
0 replies
1d9h

I think Tor tries to resize/split/join packets a bit. And each Tor node will in theory be carrying traffic for many different users simultaneously. And Tor uses 3 nodes, each in a different country. So it's not quite as trivial as you make it sound.

If 1, 2, or possibly all 3 nodes are run by a malicious actor, deanonymization becomes easier. At one point 10% of nodes were run by a single malicious actor: https://therecord.media/a-mysterious-threat-actor-is-running...

Pingk
0 replies
1d2h

Yes, being able to see all the traffic on a given network is a legitimate threat to Tor's anonymity.

IIRC There is an alternate method of connecting to an endpoint which uses a 3rd node as a rendezvous point which is meant to be better, but I forget the name of the process...

miohtama
0 replies
1d20h

The physical servers do not matter. Someone owns the physical cable.

kbenson
5 replies
1d16h

> If these guys can do it, remind me again how Tor is secure because nobody could possibly be able to follow packets from your machine, through the onion hops, to the exit node where the same packet is available unencrypted...

You're supposed to be protected by the fact that you're going through multiple nodes before exiting TOR, and traffic should be mixed. Can you find some streams if you have most/all the nodes within your network and can analyze the traffic? Probably some, but the more traffic a node handles the harder it would be.

There is a simpler approach though, which is to just run exit nodes.[1]

1: https://en.wikipedia.org/wiki/Tor_(network)#Exit_node_eavesd...

Thorrez
2 replies
1d8h

What do you mean by just run exit nodes? The linked section says that just running exit nodes allows the exit node to steal data sent over plain HTTP. Is that actually a problem? Who's using plain HTTP? The linked section says just running exit nodes doesn't allow deanonymization.

kbenson
1 replies
20h14m

I didn't mean to imply it was exactly analogous, just a lot simpler, and there is a lot to be gleaned from that data. In fact, I would assume the more data you pass over it (e.g. if you proxy all your traffic across TOR) the easier it is to make assumptions about the source, unless it actively splits it across different exit nodes, which seems like it could be problematic in a lot of cases. If you have a DigitalOcean personal server you run some services on and you access it through TOR... well you might have just made the job of anyone trying to deanonymize you much easier.

Thorrez
0 replies
14h6m

> unless it actively splits it across different exit nodes

I believe it does. I believe each destination IP you connect to uses a different exit node. And that'll even switch over time. And even if you connect to 2 destination IPs from the same exit node, I don't think there's a way for the exit node to know those 2 connections were from the same user.

1oooqooq
1 replies
1d12h

The only problem with Tor is exit nodes; they should not exist.

Both for safety and for ending abusers.

Thorrez
0 replies
1d9h

What do you mean? Can Tor even function if exit nodes don't exist?

hedora
5 replies
2d1h

This is reasonably standard functionality for backbone routers. They have to parse the TCP headers in hardware anyway, and can track common endpoints with O(1) state.

Of course, on the other end of the spectrum, the NSA has tapped into core internet links, is recording everything it possibly can, and is keeping it forever.

sebzim4500
2 replies
2d

Is that actually feasible with their budget?

If we are generous, assume there's a zettabyte of data a year that they want to store.

At consumer prices, you would have to pay $10B per year just buying hard drives, let alone the operational costs/redundancy.

The budget for all of the US intelligence services is ~$65B. I think if they wanted to actually do what you are describing it would be the single biggest intelligence expense they have and I don't see how you hide that.
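(Rough sanity check of that figure, assuming consumer drives at very roughly $10-15 per TB: 1 ZB = 10^9 TB, so 10^9 TB x ~$10/TB comes to about $10B in raw drives alone, before redundancy, servers, power, or facilities.)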

choilive
1 replies
1d17h

It's not. They don't store the raw IP packet data; instead they store the metadata (this was revealed in a leak a long time ago), like the kind in this article: data source and destination, timestamps, size of the data, etc. The metadata is orders of magnitude smaller than the raw packets and likely easily compressible, so I wouldn't be surprised if they keep it all for a decent chunk of time.

sebzim4500
0 replies
1d8h

Presumably this becomes increasingly useless in an eSNI world?

i.e. they can prove I visited a bunch of cloudflare sites, but there are millions of those so who cares?

oasisbob
0 replies
1d22h

> They have to parse the TCP headers in hardware anyway

Backbone routers have no need to implement stateful TCP inspection or deal with the transport layer for TCP, dealing with IP is enough.

Hikikomori
0 replies
2d1h

Yup. Pretty much all ISPs collect sflow/netflow from their devices to be able to debug problems or detect ddos.

luma
2 replies
2d1h

Presumably, Windstream is logging customer traffic as a matter of course. It might just be metadata (NetFlow/sFlow/IPFIX/etc), but one way or the other the only way they have this information is if they are recording and retaining it.

Hopefully this is made clear in Windstream's contract terms.

londons_explore
1 replies
2d1h

These aren't likely 'top flows', since the C&C data will probably only be a few kilobytes.

So to capture this, you at a minimum need to be logging every TCP connection's SRC IP and DST IP.

And they seem pretty confident in their worldwide map and fairly exact counts, so I would guess they must have probes covering most of the world doing this monitoring, and it likely isn't just 1-in-a-million sampling either...

luma
0 replies
1d23h

For whatever it's worth, what you describe above is specifically what IPFIX/NetFlow etc does. Not full-take, just metadata for each flow such as the time, src/dst ip/port, tcp sequence #, octets sent, etc.

This is common in datacenters for traffic and flow analysis for troubleshooting, capacity planning, and the occasional incident response.

More details: https://en.wikipedia.org/wiki/IP_Flow_Information_Export
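A sketch of how flow metadata like that gets used, with an invented record layout and a naive in-memory scan (real collectors parse the IPFIX/NetFlow wire format and use proper data stores): count distinct source addresses that sent traffic to a known C2 address.

    /* Toy flow-metadata aggregation: how many distinct sources talked to a C2 IP? */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    struct flow_record {
        uint32_t src_ip, dst_ip;        /* IPv4 addresses, host byte order */
        uint16_t src_port, dst_port;
        uint64_t bytes;
        uint64_t start_ms, end_ms;
    };

    /* Naive nested scan -- fine for a sketch, not for backbone-scale data. */
    static size_t unique_sources_to(const struct flow_record *flows, size_t n,
                                    uint32_t c2_ip)
    {
        static uint32_t seen[1024];
        size_t nseen = 0;

        for (size_t i = 0; i < n; i++) {
            if (flows[i].dst_ip != c2_ip)
                continue;
            bool dup = false;
            for (size_t j = 0; j < nseen; j++)
                if (seen[j] == flows[i].src_ip) { dup = true; break; }
            if (!dup && nseen < 1024)
                seen[nseen++] = flows[i].src_ip;
        }
        return nseen;
    }

    int main(void)
    {
        /* Illustrative records: two customers talking to the same C2 node. */
        struct flow_record flows[] = {
            { 0x0A000001u, 0xC0A80001u, 51515, 443, 2048, 0, 100 },
            { 0x0A000002u, 0xC0A80001u, 40404, 443, 1024, 5,  90 },
            { 0x0A000001u, 0x08080808u, 33333,  53,  128, 7,   8 },
        };
        printf("distinct sources seen talking to the C2 node: %zu\n",
               unique_sources_to(flows, 3, 0xC0A80001u));
        return 0;
    }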

codexon
1 replies
1d23h

Lumen is a tier 1 network so a lot of traffic passes through them. They can man-in-the-middle the traffic and see the TCP packets going through their network.

perlgeek
0 replies
1d23h

"They can man-in-the-middle the traffic" could be interpreted as them having to actively do something to become the man in the middle, when they already are.

It's likely they just do sampling (think netflow) to get some statistics over the data that's already transiting their network.

shrubble
0 replies
2d1h

Lumen (merger of Level3 and CenturyLink) sells services to a large part of the Internet and may provide a lot of the backhaul for Windstream. In which case they would be in the path for monitoring.

rpcope1
0 replies
1d22h

I have a friend who works at Black Lotus (and who may have written this blog post, who knows). Black Lotus is part of Lumen, which is Level3 and CenturyLink and one of the biggest (if not the biggest) backbone traffic providers in the world, with a huge percentage of the world's traffic transiting their network, so I think they get direct insight into the traffic, including metrics on it.

ronnier
13 replies
2d2h

For a few years now I only buy a small x86 box with dual nics and run OpenWRT. I love it. It's open source, lots of support, good community. It supports wireguard. Latest version allows you to even run docker containers.

hedora
4 replies
2d1h

I’ve got an old PC Engines board with openbsd on it. It’s been remarkably trouble free for something like 8 years.

ziml77
1 replies
2d1h

The PC Engines boards are great for that. I've got mine running OPNsense (FreeBSD) and it hasn't required any fuss.

glitchcrab
0 replies
1d22h

Another very happy PC Engines user here, I've been running pfsense and more lately opnsense on one for over 10 years now. It has never missed a beat.

rpcope1
1 replies
1d22h

It's a huge shame Pascal basically stopped building those boards since AMD and Intel wouldn't play ball. I'd really like to have something like an APU with 10G connectivity and an x86 processor that was not built and designed in China, running open firmware. With PC Engines gone now, I think you're basically out of luck.

MisterTea
0 replies
1d2h

My APU2 died a few years back and I haven't found a decent replacement. Instead I use a Lenovo Thinkstation M720Q off eBay with the PCIe x8 riser and an Intel 2-port 10GbE card. You could also fit a 4-port 1Gb card. The thing idles at 18-19W, which is less than the big white rectangular Verizon "trash can" router, which idled at 22W and had horrible WiFi (I use a Unifi APLR.)

fckgw
3 replies
2d1h

Since it was modems that were affected, OpenWRT would do nothing to protect you.

ronnier
0 replies
2d1h

Ah, that's what I get for not reading the article.

nisa
0 replies
2d1h

OpenWrt works pretty well for some modems. It's not straightforward, as the VDSL firmware often can't be distributed, but people use it on AVM Fritzbox devices. LTE devices are also supported. Not sure about cable modems, probably not. It's probably involved and not straightforward, so for most users, even technical ones, it's not an alternative.

bauruine
0 replies
1d5h

The article says that the "modems" affected are the Sagemcom F5380 and ActionTec T3200, which from a quick search look like full-fledged CPEs, aka routers with a web interface and NAT, WiFi and all the stuff. They also write about Censys and banners, so it looks like they had their web interface exposed to the Internet.

When I hear people say they use OpenWRT I assume they have their modem in bridge mode so that it doesn't even have an IP. OpenWRT would save you in that case.

jeffbee
2 replies
2d1h

These are DSL modems, though. At some point there has to be some interface between the WAN side, be it DSL or coax or fiber, and your network. Even DSL adapters for PCIe slots are just systems on a stick, coming with all the features and bugs of a "router" but without the enclosure.

ronnier
0 replies
2d1h

You can tell I didn't read the article :)

bauruine
0 replies
1d5h

The interface to DSL or coax only has to be a layer 2 bridge. You can put many modems into bridge mode so they don't do any layer 3 (IP) at all. For fiber, if you don't use PON at least, even a standard SFP(+) will do.

chrisjj
0 replies
1d7h

> Latest version allows you to even run docker containers.

What could possibly go wrong...

hcfman
5 replies
2d2h

Which routers are affected ?

prophesi
2 replies
2d2h

And which ISP?

AzN1337c0d3r
1 replies
2d2h

Windstream uses T3200 and T3260.

aschla
0 replies
2d2h

"We began an investigation after seeing repeated complaints mentioning specific ActionTec devices, as a massive number of device owners stated that they were not able to access the internet beginning on October 25, 2023. A growing number of users indicated the outage was common to two different gateway models: the ActionTec T3200s and ActionTec T3260s, both displaying a static red light."

Scoundreller
0 replies
2d2h

“the ActionTec T3200s and ActionTec T3260s, both displaying a static red light.”

“This included a drop of ~480k devices associated with Sagemcom, likely the Sagemcom F5380 as both this model and the ActionTec modems were both modems issued by the ISP.”

skilled
3 replies
2d2h

That doesn’t count as related as it is a rewrite of the original source. Just saying, it adds no details of its own.

thecosas
2 replies
2d1h

It does include information which the original article specifically excluded from mentioning: the ISP involved.

"Windstream" is mentioned in the first paragraph of the Ars article, while the Lumen post makes references to "a rural ISP" throughout the post.

skilled
1 replies
2d

So say that instead of "related", rather than making people waste their time reading the same information.

thecosas
0 replies
1d22h

Fair point! That's part of why I added my comment above :-)

steelframe
3 replies
2d

For my home network I've purchased a networking appliance form-factor computer, which is basically a regular old i3 with VT-x support in a fanless case with four 2.5GbE NICs. I've installed my favorite stable Linux distro that gets regular automated security updates in both the host and a VM, and I've device-mapped 3 of the NICs into that VM. The remaining NIC stays unattached to anything unless I want to SSH in to the host. I'm running UFW and Shorewall in the VM to perform firewall and routing tasks. If I want to tweak anything I just SSH in to that VM. I keep a snapshot of the VM disk in case I mess something up, so I can trivially roll back to something that I know works.

I've purchased a couple of cheaper commercial WiFi access points, and I've placed them in my house with channels set up to minimize interference.

Prior to this I've gone through several iterations of network products from the likes of Apple, Google, and ASUS, and they all had issues with performance and reliability. For example infuriating random periods of 3-5 seconds of dropped packets in the middle of Zoom conferences and what not.

Since I've rolled my own I've had zero issues, and I have a higher degree of confidence that it's configured securely and is getting relevant security updates. In short, my home network doesn't have a problem unless some significant chunk of the world that's running the same well-known stable Linux distro also has a problem.

tmoertel
1 replies
1d23h

Out of curiosity, which networking appliance form-factor computer did you purchase?

steelframe
0 replies
1d23h

It's a HUNSN RJ36. It came preloaded with pfSense, as many of them do, but I immediately made a full disk backup and then wiped and installed with a Linux distro because, well, "This is Linux. I know this." You're going to find a lot of people who strongly prefer one over the other, and you may find you prefer pfSense over a "do-everything-yourself" Linux distro if you give it a shot. There are also Linux distros that are targeted for network appliances, and setting them up (correctly) can be easier if the distro is built for the task.

There are quite a few machines in this category, and what's in stock at any given time tends to rotate relatively quickly. I think the one I bought might still be available, but you will want to check to see if there is something with specs that will work better for your use case.

fckgw
0 replies
1d21h

This attack affected modems so even with all that fancy hardware you would still be dead in the water.

localfirst
3 replies
2d2h

This, along with other recent security incidents, suggests somebody is rehearsing for a massive campaign tied to somebody's geopolitical ambitions.

waihtis
0 replies
2d1h

Well, there is a top cyber-offensive power who is de facto at war with the West; hardly surprising.

Crosseye_Jack
0 replies
2d1h

I would say that the rehearsal has long been over. Just before Russian troops entered Ukraine, Viasat saw an attack on its network, which among other countries serves Ukraine; the attack pushed updates designed to disable those modems.

https://en.wikipedia.org/wiki/Viasat_hack

bostonpete
2 replies
2d

What is the significance of the article/post title...?

thamer
0 replies
1d23h

This attack happened a few days before Halloween 2023 (pumpkins), with a large drop in the number of devices connected to the Internet – like how an eclipse suddenly brings a period of darkness, maybe?

This is just my interpretation, I also found it cryptic.

ajb
0 replies
1d18h

Yeah, didn't this have a more comprehensible title a few hours ago?

thimkerbell
1 replies
1d2h

@dang, if there are karma points at HN, you could add some for submitters who improve upon the oft-execrable original clickbait headlines/titles. (Here, I see present verb tense being used for an incident from October of last year.)

thimkerbell
0 replies
20h49m

You could also subtract points for submissions whose titles appear to advocate causing harm.

nisa
1 replies
2d1h

The article is light on the interesting details. How did they get in? Do these routers have open ports and services by default, and answer to the Internet in a meaningful way?

Couldn't someone grab different firmware versions and compare them?

Looks like they are doing what everyone else is doing and using OpenWrt with a vendor SDK: https://forum.openwrt.org/t/openwrt-support-for-actiontec-t3...

What's interesting here is that it's speculated the vendor sent a malicious/broken update: https://www.reddit.com/r/Windstream/comments/17g9qdu/solid_r...

So why is there no official statement from the ISP? If it was an attack shouldn't there be an investigation?

I'm not familiar with how this is handled in the USA but this looks really strange.

Maybe these machines were bot infested and the vendor pushed an update that broke everything?

Maybe it's like in the article and it was a coordinated attack, possibly involving ransom, and everyone got told it was a faulty firmware update, keep calm?

Which is also kind of bad; as a customer I'd like to know if there are security incidents.

Has anyone links to firmware images for these devices? Or any more details?

chrisjj
0 replies
1d7h

> So why is there no official statement from the ISP? If it was an attack shouldn't there be an investigation?

We should assume a decision to make no statement was based on the outcome of an investigation.

I wonder how much of the replacement cost is insured. I am guessing none. Leaving the ISP at severe risk of, er, business discontinuity. Another good reason for no statement.

jeffbee
1 replies
2d2h

"Router" being used to mean customer premises equipment, it seems.

TheJoeMan
0 replies
2d1h

Here I was hoping someone was “encouraging” an ISP to upgrade their infra.

Kiboneu
1 replies
2d

Well if you backdoor 600k routers and introduce a firmware bug with one of your patches, this is what happens.

Can't they just stage their updates? Surely, malware authors and users must be too cool for adopting standard prod practices.

perlgeek
0 replies
1d23h

> Surely, malware authors and users must be too cool for adopting standard prod practices.

Their economic pressures are just different; it's not their own hardware that they're bricking, nor are they likely to be held liable for it.

xacky
0 replies
2d1h

Reminds me of the CIH virus. It's only a matter of time for ransomware authors to start using firmware blanking as a new technique.

scrps
0 replies
1d

I read the Lotus Labs blog post they linked, and they mentioned no analysis of the actual firmware payload that bricked the devices. Is that analysis, or a sample, out there?

I'd be curious to know if it was actually meant to brick, or if someone f'ed the image and accidentally bricked them trying to be clever.

Also, if it was a nation state, why would you so publicly burn your capability bricking residential routers on an ISP that seems to mostly serve rural areas? If they did it for testing, that'd be real dumb.

pragma_x
0 replies
2d1h

For anyone else that was confused by the headline, this is about the destruction of 600,000 individual (small) routers. Not routers that are worth $600,000 (each or combined).

mistrial9
0 replies
1d3h

Do they say what US ISP was targeted? These are the routers in people's homes, basically?

its-summertime
0 replies
2d

Is the >2x increase in other devices addressed in any form?

bitnasty
0 replies
1d

Why would someone build a botnet this complex then brick it?