
We spent $20 to achieve RCE and accidentally became the admins of .mobi

iscoelho
70 replies
6h36m

Great write-up - the tip of the iceberg on how fragile TLS/SSL is.

Let's add a few:

1. WHOIS isn't encrypted or signed, but is somehow suitable for verification (?)

2. DNS CAA records aren't protected by DNSSEC, as absence of a DNS record isn't sign-able (correction: NSEC is an optional DNSSEC extension)

3. DNS root & TLD servers are poorly protected against BGP hijacks (adding that DNSSEC is optional for CAs to verify)

4. Email, used for verification in this post, is also poorly protected against BGP hijacks.

I'm amazed we've lasted this long. It must be because if anyone abuses these issues, someone might wake up and care enough to fix them (:

candiddevmike
42 replies
5h48m

Our industry needs to finish what it starts. Between IPv6, DNSSEC, SMTP TLS, SCTP/QUIC, etc., all of these bedrock technologies feel like they're permanently stuck in a half-completed implementation/migration. Like someone at your work had all these great ideas, started implementing them, then quit when they realized it would be too difficult to complete.

iscoelho
13 replies
5h44m

obligatory https://xkcd.com/927/

Honestly: we're in this situation because we keep trying to band-aid solutions onto ancient protocols that were never designed to be secure. (I'm talking about you DNS.) Given xkcd's wisdom though, I'm not sure if this is easily solvable.

sulandor
8 replies
5h32m

dns should not have to be secure, it should be regulated as a public utility with 3rd-party quality control and all the whistles.

only then can it be trustworthy, fast and free/accessible

iscoelho
6 replies
5h17m

If my DNS can be MITM'd, and is thus insecure, it is not trustworthy.

8organicbits
5 replies
5h0m

This sort of all-or-nothing thinking isn't helpful. DNS points you to a server, TLS certificates help you trust that you've arrived at the right place. It's not perfect, but we build very trustworthy systems on this foundation.

quesera
4 replies
4h48m

But DNS is all-or-nothing.

If you can't trust DNS, you can't trust TLS or anything downstream of it.

Even banks are not bothering with EV certificates any more, since browsers removed the indicator (for probably-good reasons). DV certificate issuance depends on trustworthy DNS.

Internet security is "good enough" for consumers, most of the time. That's "adequately trustworthy", but it's not "very trustworthy".

8organicbits
3 replies
4h19m

Bank websites like chase.com and hsbc.com and web services like google.com, amazon.com, and amazonaws.com intentionally avoid DNSSEC. I wouldn't consider those sites less than "very trustworthy" but my point is that "adequately trustworthy" is the goal. All-or-nothing thinking isn't how we build and secure systems.

quesera
2 replies
3h40m

I am definitely not arguing in favor of DNSSEC.

However, I don't think it's reasonable to call DNS, as a system, "very trustworthy".

"Well-secured" by active effort, and consequently "adequately trustworthy" for consumer ecommerce, sure.

But DNS is a systemic weak link in the chain of trust, and must be treated with extra caution for "actually secure" systems.

(E.g., for TLS and where possible, the standard way to remove the trust dependency on DNS is certificate pinning. This is common practice, because DNS is systemically not trustworthy!)

8organicbits
1 replies
3h6m

Is certificate pinning common? On the web we used to have HPKP, but that's obsolete and I didn't think it was replaced. I know pinning is common in mobile apps, but I've generally heard that's more to prevent end-user tampering than any actual distrust of the CAs/DNS.

I think your "well-secured" comment is saying the same thing I am, with some disagreement about "adequate" vs "very". I don't spend any time worrying that my API calls to AWS or online banking transactions are insecure due to lack of DNSSEC, so the DNS+CA system feels "very" trustworthy to me, even outside ecommerce. The difference between "very" and "adequate" is sort of a moot point anyway: you're not getting extra points for superfluous security controls. There are lots of other things I worry about, though, because attackers are actually focusing their efforts there.

quesera
0 replies
1h21m

I agree that the semantics of "adequate" and "very" are moot.

As always, it ultimately depends on your threat profile, real or imagined.

Re: certificate pinning, it's common practice in the financial industry at least. It mitigates a few risks, of which I'd rate DNS compromise as more likely than a rogue CA or a persistent BGP hijack.
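
To make the pinning idea concrete, here is a minimal client-side sketch using PHP's curl binding; the host and the hash are hypothetical placeholders, and real deployments typically pin inside the app or at the TLS library layer rather than ad hoc like this:

    // Pin the server's public key: the connection fails unless the key's
    // SHA-256 hash matches, regardless of what DNS or a misissued cert says.
    $ch = curl_init('https://api.example-bank.test/v1/accounts');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_PINNEDPUBLICKEY,
        'sha256//AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA='); // placeholder hash
    $body = curl_exec($ch);
    if ($body === false) {
        // A pin mismatch surfaces here as a curl error (CURLE_SSL_PINNEDPUBKEYNOTMATCH).
        error_log('TLS error: ' . curl_error($ch));
    }
    curl_close($ch);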

IgorPartola
0 replies
4h57m

There is nothing fundamentally preventing us from securing DNS. It is not the most complicated protocol, believe it or not, and is extensible enough for us to secure it. Moreover, a different name lookup protocol would look very similar to DNS. If you don’t quite understand what DNS does and how it works, the idea of making it a government-protected public service may appeal to you, but that isn’t actually how it works. It’s only slightly hyperbolic to say that you want XML to be a public utility.

On the other hand things like SMTP truly are ancient. They were designed to do things that just aren’t a thing today.

goodpoint
1 replies
4h49m

Standards evolve for good reasons. That's just a comic.

iscoelho
0 replies
4h32m

The comic is about re-inventing the wheel. What you propose ("standards evolving") would be the opposite in spirit (and is what has happened with DNSSEC, RPKI, etc.)

Dylan16807
1 replies
4h35m

Can we all agree to not link that comic when nobody is suggesting a new standard, or when the list of existing standards is zero to two long? It's not obligatory to link it just because the word "standard" showed up.

I think that covers everything in that list. For example, trying to go from IPv4 to IPv6 is a totally different kind of problem from the one in the comic.

iscoelho
0 replies
4h27m

The point is that, ironically, new standards may have been a better option.

Bolting extensions onto existing protocols that were never designed to be secure has improved the situation, but it has so far been unable to address all of the security concerns, leaving major gaps. It's just a fact.

dogleash
9 replies
5h9m

> Our industry needs to finish what it starts.

"Our industry" is a pile of snakes that abhor the idea of collaboration on common technologies they don't get to extract rents from. ofc things are the way they are.

iscoelho
8 replies
5h5m

Let's not fool ourselves by saying we're purely profit driven. Our industry argues about code style (:

z3phyr
7 replies
4h14m

Our industry does not argue about code style. There were a few distinct subcultures, since appropriated by the industry, that used to argue about code style: lisp-1 vs lisp-2, vim vs emacs, amiga vs apple, single-pass vs multi-pass compilers, Masters of Deception vs Legion of Doom, and the list goes on, depending on the subculture.

The industry is profit driven.

wwweston
3 replies
2h40m

> Our industry argues about code style (:

Our industry does not argue about code style.

QED

brookst
1 replies
1h58m

Our industry does not argue about arguing about code style.

wwweston
0 replies
1h37m

Our industry doesn't always make Raymond Carver title references, but when it does, what we talk about when we talk about Raymond Carver title references usually is an oblique way of bringing up the thin and ultimately porous line between metadiscourse and discourse.

svieira
0 replies
2h11m

I'm pretty sure this is QEF.

iscoelho
2 replies
4h1m

Do you use tabs or spaces? Just joking, but:

The point is that our industry has a lot of opinionated individuals that tend to disagree on fundamentals, implementations, designs, etc., for good reasons! That's why we have thousands of frameworks, hundreds of databases, hundreds of programming languages, etc. Not everything our industry does is profit driven, or even rational.

Joker_vD
1 replies
1h51m

FWIW, all my toy languages consider U+0009 HORIZONTAL TABULATION in a source file to be an invalid character, like any other control character except for U+000A LINE FEED (and also U+000D CARRIAGE RETURN but only when immediately before a LINE FEED).

hinkley
0 replies
1h25m

I’d be a python programmer now if they had done this. It’s such an egregiously ridiculous foot gun that I can’t stand it.

mschuster91
6 replies
5h37m

> Like someone at your work had all these great ideas, started implementing them, then quit when they realized it would be too difficult to complete.

The problem is, in many of these fields actual real-world politics come into play: you got governments not wanting to lose the capability to do DNS censorship or other forms of sabotage; you got piss-poor countries barely managing to keep the faintest of lights on; you got ISPs with systems that have grown over literal decades, where any kind of major breaking change would require investments in rearchitecture larger than the company is worth; you got government regulations mandating stuff like all staff communications being logged (e.g. banking/finance), which is made drastically more complex if TLS cannot be intercepted, or where interceptor solutions must be certified, making updates to them about as slow as molasses...

iscoelho
5 replies
5h33m

Considering we have 3 major tech companies (Microsoft/Apple/Google) controlling 90+% of user devices and browsers, I believe this is more solvable than we'd like to admit.

mschuster91
3 replies
4h43m

Browsers are just one tiny piece of the fossilization issue. We got countless vendors of networking gear, we got clouds (just how many AWS, Azure and GCP services are capable of running IPv6 only, or how many of these clouds can actually run IPv6 dual-stack in production grade?), we got even more vendors of interception middlebox gear (from reverse proxies and load balancers, SSL breaker proxies over virus scanners for web and mail to captive portal boxes for public wifi networks), we got a shitload of phone telco gear of which probably a lot has long since expired maintenance and is barely chugging along.

nativeit
2 replies
4h10m

Ok. You added OEMs to the list, but then just named the same three dominant players as clouds. Last I checked, every device on the planet supports IPv6, if not those other protocols. Everything from the cheapest home WiFi router to every Layer 3 switch sold in the last 20 years.

I think this is a 20-year old argument, and it’s largely irrelevant in 2024.

mschuster91
0 replies
3h58m

> I think this is a 20-year old argument, and it’s largely irrelevant in 2024.

It's not irrelevant - AWS lacks support, for example, in EKS or in ELB target groups, where it's actually vital [1]. GCE also lacks IPv6 for some services and you gotta pay extra [2]. Azure doesn't support IPv6-only at all, and a fair few services don't support IPv6 [3].

The state of IPv6 is bloody ridiculous.

[1] https://docs.aws.amazon.com/vpc/latest/userguide/aws-ipv6-su...

[2] https://cloud.google.com/vpc/docs/ipv6-support?hl=de

[3] https://learn.microsoft.com/en-us/azure/virtual-network/ip-s...

gavindean90
0 replies
3h11m

Plenty doesn’t support IPv6.

idunnoman1222
0 replies
2h39m

Those companies have nothing to do with my ISP router or modem

colmmacc
3 replies
5h31m

If you look at say 3G -> 4G -> 5G or Wifi, you see industry bodies of manufacturers, network providers, and middle vendors who both standardize and coordinate deployment schedules; at least at the high level of multi-year timelines. This is also backed by national and international RF spectrum regulators who want to ensure that there is the most efficient use of their scarce airwaves. Industry players who lag too much tend to lose business quite quickly.

Then if you look at the internet, there is a very uncoordinated collection of manufacturers and network providers, and standardization is driven in a more open manner that is good for transparency but is also prone to complexifying log-jams and heckler's vetoes. Where we see success, like the promotion of TLS improvements, it's largely because a small number of knowledgeable players - browsers in the case of TLS - agree to enforce improvements on the entire ecosystem. That in turn is driven by simple self-interest: Google, Apple, and Microsoft all have strong incentives to ensure that TLS remains secure; their ads and services revenue depend upon it.

But technologies like DNSSEC, IPv6, and QUIC all face a much harder road. To be effective they need a long chain of players to support the feature, and many of those players have active disincentives. If a home user's internet seems to work just fine, why be the manufacturer that is first to support, say, DNSSEC validation and deal with all of the increased support cases when it breaks, or device returns when consumers perceive that it broke something? (And it will.)

dgoldstein0
1 replies
2h10m

IPv6 deployment is extra hard because we need almost every network in the world to get on board.

DNSSEC shouldn't be as bad, except for DNS resolvers and the software that builds them in. I think it's a bit worse than TLS adoption, in part just because DNS allows recursive resolution and in part because DNS is applicable to a bit more than TLS was. But the big thing seems to be that there isn't a central authority like web browsers who can entirely force the issue. ... Maybe OS vendors could do it?

QUIC is an end-to-end protocol, so it should be deployable without every network operator buying in. That said, we probably do need a reduction in UDP blocking in some places. But otherwise, how can QUIC deployment be harder than TLS deployment? I think there just hasn't been incentive to force it everywhere.

MichaelZuo
0 replies
1h7m

Plus IPv6 has significant downsides (more complex, harder to understand, more obscure failure modes, etc…), so the actual cost of moving is the transition cost + total downside costs + extra fears of unknown unknowns biting you in the future.

GTP
0 replies
3h3m

AFAIK, in the case of IPv6 it's not even that: there's still the open drama of the peering agreement between Cogent and Hurricane Electric.

doubled112
2 replies
4h43m

Doesn't every place have a collection of ideas that are half implemented? I know I often choose between finishing somebody else's project or proving we don't need it and decommissioning it.

I'm convinced it's just human nature to work on something while it is interesting and move on. What is the motivation to actually finish?

Why would the technologies that should hold up the Internet itself be any different?

kevindamm
0 replies
2h53m

While that's true, it dismisses the large body of work that has been completed. The technologies the GP comment mentions are complete in the sense that they work, but the deployment is only partial. Herding cats on a global scale, in most cases. It also ignores the side-effect benefit of completing the interesting part: other efforts benefit from the lessons learned by that disrupted effort, even if the deployment fails because it turns out nobody wanted it. And sometimes it's just a matter of time and getting enough large stakeholders excited, or at least convinced the cost of migration is worth it.

All that said, even the sense of completing or finishing a thing only really happens in small and limited-scope things, and in that sense it's very much human nature, yeah. You can see this in creative works, too. It's rarely "finished" but at some point it's called done.

hinkley
0 replies
1h23m

I was weeks away from turning off someone’s giant pile of spaghetti code and replacing it with about fifty lines of code when I got laid off.

I bet they never finished it, since the perpetrators are half the remaining team.

trhway
1 replies
32m

IPv6, instead of being branded as a new implementation, should probably have been presented as an extension of IPv4: say, some previously reserved IPv4 address would mean that it is really IPv6, with the value carried in previously reserved fields, etc. That would be a kludge, harder to implement, yet much easier for the wider Internet to embrace. Like it is easier to feed oatmeal to a toddler by presenting it as some magic food :)

immibis
0 replies
23m

It would have exactly the same deployment problems, but waste more bytes in every packet header. Proposals like this have been considered and rejected.

How is checking if, say, the source address is 255.255.255.255 to trigger special processing, any easier than checking if the version number is 6? If you're thinking about passing IPv6 packets through an IPv4 section of the network, that can already be achieved easily with tunneling. Note that ISPs already do, and always have done, transparent tunneling to pass IPv6 packets through IPv4-only sections of their network, and vice versa, at no cost to you.

Edit: And if you want to put the addresses of translation gateways into the IPv4 source and destination fields, that is literally just tunneling.

ozfive
0 replies
1h15m

Or got fired/laid off and the project languished?

jimt1234
0 replies
2h11m

In my 25+ years in this industry, there's one thing I've learned: starting something isn't all that difficult; shutting something down, however, is nearly impossible. For example, brilliant people put a lot of time and effort into IPv6. But that time and effort is nothing compared to what it's gonna take to completely shut down IPv4. And I've dealt with this throughout my entire career: "We can't shut down that Apache v1.3 server because a single client used it once 6 years ago!"

detourdog
7 replies
6h23m

For reasons not important here, I purchase my SSL certificates and barely have any legitimating business documents. If Dun & Bradstreet calls, I hang up...

It took me 3 years of getting SSL certs from the same company through a convoluted process before I tried a different company. My domain has been with the same registrar since private citizens could register DNS names. That relationship meant nothing when trying to prove that I'm me and I own the domain name.

I went back to the original company because I could verify myself through their process.

My only point is that human relationships are the best form of verifying integrity. I think this provides everyone the opportunity to gain trust, and the ability to prejudge people based on association alone.

lobsterthief
6 replies
5h55m

Human relationships also open you up to social engineering attacks. Unless they’re face-to-face, in person, with someone who remembers what you actually look like. Which is rare these days.

detourdog
5 replies
5h49m

That is my point. We need to put value on the face to face relationships and extend trust outward from our personal relationships.

This sort of trust is only as strong as its weakest link, but each individual can choose how far to extend their own trust.

dopylitty
3 replies
5h40m

This is such a good point. We rely way too much on technical solutions.

A better approach is to have hyperlocal offices where you can go to do business. Is this less “efficient”? Yes but when the proceeds of efficiency go to shareholders anyway it doesn’t really matter.

mrguyorama
1 replies
4h47m

> Is this less “efficient”? Yes but when the proceeds of efficiency go to shareholders anyway it doesn’t really matter.

I agree with this but that means you need to regulate it. Even banks nowadays are purposely understaffing themselves and closing early because "what the heck are you going to do about it? Go to a different bank? They're closed at 4pm too!"

detourdog
0 replies
3h42m

The regulation needs to be focused on the validity of the identity chain mechanism but not on individuals. Multiple human interactions as well as institutional relationships could be leveraged depending on needs.

The earliest banking was done with letters of introduction. That is why banking families had early international success. They had a familial trust and verification system.

detourdog
0 replies
3h46m

It is only efficient based on particular metrics. Change the metrics and the efficiency changes.

GTP
0 replies
2h35m

This is what the Web of Trust does, but

> This sort of trust is only as strong as its weakest link, but each individual can choose how far to extend their own trust.

is exactly why I prefer PKI to the WoT. If you try to extend the WoT to the whole Internet, you will eventually end up having to trust multiple people you have never met to properly manage their keys and to correctly verify the identity of other people. Identity verification is in particular an issue: how do you verify the identity of someone you don't know? How many of us know how to spot a fake ID card? Additionally, some of them will be people participating in the Web of Trust just because they heard that encryption is cool, but without really knowing what they are doing.

In the end, I prefer CAs. Sure, they're not perfect and there have been serious security incidents in the past. But at least they give me some confidence that they employ people with a Cyber Security background, not some random person that just read the PGP documentation (or similar).

PS: there's still some merit to your comment. I think that the WoT (but I don't know for sure) was based on the 7 degrees of separation theory. So, in theory, you would only have to certify the identity of people you already know, and be able to reach someone you don't know through a relatively short chain of people where each hop knows the next hop very well. But in practice, PGP ended up needing key-signing parties, where people that never met before were signing each other's keys. Maybe a reboot of the WoT with something more user-friendly than PGP could have a chance, but I have some doubts.

8organicbits
5 replies
6h27m

> 2. DNS CAA records aren't protected by DNSSEC, as absence of a DNS record isn't sign-able.

NSEC does this.

An NSEC record can be used to say: “there are no subdomains between subdomain X and subdomain Y.”

iscoelho
4 replies
6h21m

You're correct - noting that Let's Encrypt supports DNSSEC/NSEC fully.

Unfortunately though, the entire PKI ecosystem is tainted if other CAs do not share the same security posture.

8organicbits
3 replies
5h14m

Tainted seems a little strong, but I think you're right: there's nothing in the CAB Baseline Requirements [1] that requires DNSSEC use by CAs. I wouldn't push for DNSSEC to be required, though, as it's been so sparsely adopted. Any security benefit would be marginal. DNSSEC usage among second-level domains has been decreasing (both in percentage and absolute number) since mid-2023 [2]. We need to look past DNSSEC.

[1] https://cabforum.org/working-groups/server/baseline-requirem...

[2] https://www.verisign.com/en_US/company-information/verisign-...

iscoelho
2 replies
5h8m

I agree that DNSSEC is not the answer and has not lived up to expectations whatsoever, but what else is there to verify ownership of a domain? Email: broken. WHOIS: broken.

Let's convince all registrars to implement a new standard? Ouch.

8organicbits
1 replies
4h37m

I'm a fan of the existing standards for DNS (§3.2.2.4.7) and IP address (§3.2.2.4.8) verification. These use multiple network perspectives as a way of reducing the risk of network-level attacks, paired with Certificate Transparency (and monitoring services). It's not perfect, but that isn't the goal.

iscoelho
0 replies
4h19m

BGP hijacks unfortunately completely destroy that. RPKI is still extremely immature (despite what companies say) and it is still trivial to BGP hijack if you know what you're doing. If you are able to announce a more specific prefix (highly likely unless the target has a strong security competency and their own network), you will receive 100% of the traffic.

At that point, it doesn't matter how many vantage points you verify from: all traffic goes to your hijack. It only takes a few seconds for you to verify a certificate, and then you can drop your BGP hijack and pretend nothing happened.

Thankfully there are initiatives to detect and alert on BGP hijacks, but again, if your organization does not have a strong security competency, you have no way to prevent, or even know about, these attacks.

account42
4 replies
4h33m

> 1. WHOIS isn't encrypted or signed, but is somehow suitable for verification (?)

HTTP-based ACME verification also uses unencrypted port-80 HTTP. Similar for DNS-based verification.

iscoelho
1 replies
4h16m

100% - another for the BGP hijack!

michaelt
0 replies
1h3m

The current CAB Forum Baseline Requirements call for "Multi-Perspective Issuance Corroboration" [1], i.e. making sure the DNS or HTTP challenge looks the same from several different data centres in different countries. By the end of 2026, CAs will validate from 5 different data centres.

This should make getting a cert via BGP hijack very difficult.

[1] https://github.com/cabforum/servercert/blob/main/docs/BR.md#...

bootsmann
0 replies
3h17m

> HTTP-based ACME verification also uses unencrypted port-80 HTTP

I mean, they need to bootstrap the verification somehow, no? You cannot upgrade the first time you request a challenge.

Arch-TK
0 replies
3h3m

If it used HTTPS you would have a bootstrapping problem.

jrochkind1
3 replies
5h34m

> It must be because if anyone abuses these issues, someone might wake up and care enough to fix them

If anyone knows they are being abused, anyway. I conclude that someone may be abusing them, but those doing so try to keep it unknown that they have done so, to preserve their access to the vulnerability.

iscoelho
0 replies
5h22m

Certificate Transparency exists to catch abuse like this. [1]

Additionally, Google has pinned their certificates in Chrome and will alert via Certificate Transparency if unexpected certificates are found. [2]

It is unlikely this has been abused without anyone noticing. With that said, it definitely can be, there is a window of time before it is noticed to cause damage, and there would be fallout and a "call to action" afterwards as a result. If only someone said something.

[1] https://certificate.transparency.dev [2] https://github.com/chromium/chromium/blob/master/net/http/tr...

hinkley
0 replies
1h17m

It’s like the crime numbers. If you’re good enough at embezzling nobody knows you embezzled. So what’s the real crime numbers? Nobody knows. And anyone who has an informed guess isn’t saying.

A big company might discover millions are missing years after the fact and back date reports. But nobody is ever going to record those office supplies.

ChrisMarshallNY
0 replies
5h32m

Didn't Jon Postel do something like this, once?

It was long ago, and I don't remember the details, but I do remember a lot of people having shit hemorrhages.

NovemberWhiskey
2 replies
4h50m

None of these relate to TLS/SSL - that's the wrong level of abstraction: they relate to fragility of the roots of trust on which the registration authorities for Internet PKI depend.

iscoelho
1 replies
4h36m

As long as TLS/SSL depends on Internet PKI as it is, it is flawed. I guess there's always Private PKI, but that's if you're not interested in the internet (^:

NovemberWhiskey
0 replies
4h13m

I would say that TLS/SSL doesn't depend on Internet PKI - browsers (etc) depend on Internet PKI in combination with TLS/SSL.

graemep
0 replies
1h33m

It's used for verification because it's cheap, not because it's good. Why would you expect anyone to care enough to fix it?

If we really wanted verification we would still be manually verifying the owners of domains. Highly effective but expensive.

donatj
37 replies
4h49m

I have written PHP for a living for the last 20 years and that eval just pains me to no end

    eval($var . '="' . str_replace('"', '\\\\"', $itm) . '";');
Why? Dear god why. Please stop.

PHP provides a built-in escaper for this purpose:

    eval($var . '=' . var_export($itm, true) . ';');
But even then you don't need eval here!

    ${$var} = $itm;
Is all you really needed... but really, just use an array (map) if you want dynamic keys; don't use dynamically defined variables...
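
For what it's worth, here's a minimal sketch of the array-based approach, assuming a simple "key: value" WHOIS response format (the variable and field names are hypothetical, not the actual phpWhois parser):

    // Collect fields into an array instead of creating dynamically named variables.
    $rawWhoisResponse = "Domain Name: example.mobi\nRegistrar: Example Registrar";
    $fields = [];

    foreach (explode("\n", $rawWhoisResponse) as $line) {
        if (strpos($line, ':') === false) {
            continue; // skip lines without a key/value separator
        }
        [$key, $value] = explode(':', $line, 2);
        $fields[trim($key)] = trim($value); // data stays data; nothing is ever executed
    }

    echo $fields['Registrar'] ?? 'unknown';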

xp84
18 replies
3h3m

I mean no disrespect to you, but this sort of thing is exactly the sort of mess I’ve come to expect in any randomly-selected bit of PHP code found in the wild.

It’s not that PHP somehow makes people write terrible code; I think it’s just the fact that it’s been out for so long and so many people have taken a crack at learning it. Plus, it seems that a lot of ingrained habits began back when PHP didn’t have many of its newer features, and they just carried on, echoing through Stack Overflow posts forever.

dartos
13 replies
3h0m

JavaScript land fares little better.

IMO it’s because php and js are so easy to pick up for new programmers.

They are very forgiving, and that leads to… well… the way that PHP and JS are…

numb7rs
4 replies
1h40m

I've heard it said that one of the reasons Fortran has a reputation for bad code is this combination: lots of people who haven't had any education in best practices; and it's really easy in Fortran to write bad code.

hinkley
1 replies
1h34m

Which is why that “you can write Fortran in any language” is such an epithet.

tracker1
0 replies
1h26m

Most horrific code I've ever seen was a VB6 project written by a mainframe programmer... I didn't even know VB6 could do some of the things he did... and wish I never did. Not to mention variables like a, b, c, d .. aa, ab...

MajimasEyepatch
1 replies
35m

Code written by scientists is a sight to behold.

craigmoliver
0 replies
25m

and they think cause they're scientists they can just do it because they're scientists and stuff. Very pragmatic to be sure...but horrifying.

jimkoen
3 replies
2h52m

I'm sorry, I haven't encountered bare eval in years. Do you have an example? And even then it's actually not that easy to get RCE going with that.

donatj
1 replies
2h48m

Something like half of reported JavaScript vulnerabilities are "prototype pollution", because it's very common practice to write to object keys blindly, using objects as a dictionary, without considering the implications.

It's a very similar exploit.

baq
0 replies
2h31m

arguably worse, since no eval is needed...

johnisgood
0 replies
2h31m

Yeah, same with the use of "filter_input_array", "htmlspecialchars", or how you should use PDO and prepare your statements with parameterized queries to prevent SQL injection, etc.
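
For anyone unfamiliar with that last point, a minimal sketch of the parameterized-query pattern (the DSN, credentials, and table here are hypothetical):

    // Hypothetical connection details; the point is the bound parameter below.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    // User input is never concatenated into the SQL string itself.
    $stmt = $pdo->prepare('SELECT id, email FROM users WHERE name = :name');
    $stmt->execute([':name' => $_GET['name'] ?? '']);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);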

btown
2 replies
2h32m

The saving grace of JS is that the ecosystem had a reset when React came out; there's plenty of horrifying JQuery code littering the StackOverflow (and Experts Exchange!) landscape, but by the time React came around, Backbone and other projects had already started to shift the ecosystem away from "you're writing a script" to "you're writing an application," so someone searching "how do I do X react" was already a huge step up in best practices for new learners. I don't think PHP and its largest frameworks ever had a similar singular branding reset.

snerbles
0 replies
1h55m

Laravel, maybe. But not as much as React, or the other myriad JS frontend frameworks.

(to include the ones that appeared in the time I spent typing this post)

MajimasEyepatch
0 replies
36m

The other thing making JavaScript a little better in practice is that it very rarely was used on the back end until Node.js came along, and by then, we were fully in the AJAX world, where people were making AJAX requests using JavaScript in the browser to APIs on the back end. You were almost never directly querying a database with JavaScript, whereas SQL injection seems to be one of the most common issues with a lot of older PHP code written by inexperienced devs. Obviously SQL injection can and does happen in any language, but in WordPress-land, when your website designer who happens to be the owner's nephew writes garbage, they can cause a lot of damage. You probably would not give that person access to a Java back end.

hinkley
0 replies
1h35m

At least the node community is mostly allergic to using eval().

The main use I know of goes away with workers.

hinkley
1 replies
1h37m

On a new job I stuck my foot in it because I argued something like this with a PHP fan who was adamant I was wrong.

Mind you this was more than ten years ago when PHP was fixing exploits left and right.

This dust up resolved itself within 24 hours though, as I came in the next morning to find he was too busy to work on something else because he was having to patch the PHP forum software he administered because it had been hacked overnight.

I did not gloat but I had trouble keeping my face entirely neutral.

Now I can’t read PHP for shit but I tried to read the patch notes that closed the hole. As near as I could tell, the exact same anti pattern appeared in several other places in the code.

I can’t touch PHP. I never could before and that cemented it.

podunkPDX
0 replies
42m

PHP: an attack surface with a side effect of hosting blogs.

larsnystrom
0 replies
55m

I mean, in this case the developer really went out of their way to write bad code. TBH it kind of looks like they wanted to introduce an RCE vulnerability, since variable variable assignment is well-known even to novice PHP developers (who would also be the only ones using that feature), and "eval is bad" is just as well known.

A developer who has the aptitude to write a whois client, but knows neither of those things? It just seems very unlikely.

amelius
0 replies
2h24m

Replace PHP by C or C++ in your comment, and then read it again.

zoover2020
16 replies
3h18m

This is why PHP is mostly banned at bigCo

dr_kretyn
4 replies
2h29m

Pretty sure there's plenty of PHP at Amazon and Facebook (just with slightly different names)

jonhohle
1 replies
2h12m

There is no PHP at Amazon (at least not 2009-2016). It was evaluated before my time there, and Perl Mason was chosen instead to replace C++. A bunch of that still appears to exist (many paths that start with gp/), but a lot was being rebuilt in various internal Java frameworks. I know AWS had some Rails apps that were being migrated to Java a decade ago, but I don’t think I ever encountered PHP (and I came in as a programmer primarily writing PHP).

dr_kretyn
0 replies
18m

Ok, my "pretty sure" turns out to be "not sure at all". Thank you for the refresher! I was thinking about Mason and somehow conflated Perl with PHP.

I left Amazon 2020. Had various collaborations with ecommerce (mainly around fulfillment) and there was plenty of Mason around.

joeframbach
1 replies
1h48m

I can *assure* you that php is expressly prohibited for use at Amazon.

johnisgood
0 replies
1h31m

Really? How come? What is the history with regard to that? What is their reasoning? Does it apply to PHP >= 8?

NovemberWhiskey
4 replies
3h8m

To paraphrase: you can write PHP in any language. PHP carries a negative bias at bigCos mostly because of the folkloric history of bad security practices by some PHP software developers.

jeremyjh
2 replies
3h0m

By “folkloric history”, don’t you actually mean just “history”?

playingalong
1 replies
2h28m

I guess they mean the stigma that arose based on the reality in the past.

So kind of both.

hinkley
0 replies
1h30m

They fucked themselves and the rest of us moved on.

You can become a good person late in life and still be lonely because all your bridges are burned to the ground.

hinkley
0 replies
1h32m

folkloric

I think the word you’re looking for is “epic” or “legendary”

smsm42
1 replies
2h47m

You're saying all big companies ban a whole language ecosystem because somebody on the internet used one function in that language in a knowingly unsafe manner, contrary to all established practices and warnings in the documentation? This is beyond laughable.

EwanToo
0 replies
1h51m

Laughable, but accurate.

Google for example does exactly this.

phplovesong
1 replies
3h5m

Pretty much. PHP for banking software? For anything money related? Going to have a bad time.

smashed
0 replies
2h39m

Magento, OpenCart or WooCommerce are money related. All terrible but also very popular. But I guess they work, somehow.

What would you use to build and self-host an ecommerce site quickly and that is not a SaaS?

dartos
1 replies
3h1m

Isn’t Facebook one of the biggest?

packetslave
0 replies
2h31m

Hack is not PHP (any longer)

dustywusty
0 replies
1h6m

Couldn't agree more with this. In general, if you're writing eval you've already committed to doing something the wrong way.

sebstefan
29 replies
6h26m

> The first bug that our retrospective found was CVE-2015-5243. This is a monster of a bug, in which the prolific phpWhois library simply executes data obtained from the WHOIS server via the PHP ‘eval’ function, allowing instant RCE from any malicious WHOIS server.

I don't want to live on this planet anymore

larsnystrom
16 replies
6h14m

The fact they're using `eval()` to execute variable assignment... They could've just used the WTF-feature in PHP with double dollar signs. $$var = $itm; would've been equivalent to their eval statement, but with less code and no RCE.
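
A minimal sketch of the difference, with hypothetical variable names (variable variables still let a malicious server clobber your locals, they just can't execute code):

    // With eval(), the WHOIS response is treated as PHP source code, so a
    // crafted value can break out of the string literal and run anything.
    // A variable variable only assigns a value; nothing is parsed as code:
    $var = 'registrar';                    // field name taken from the response
    $itm = '"; system(\'id\'); // pwned';  // hostile value stays a plain string
    $$var = $itm;                          // creates $registrar, executes nothing

    // Still not harmless: a hostile server choosing $var can overwrite
    // variables your own code relies on.
    $var = 'adminNotified';
    $$var = 'yes';                         // silently clobbers $adminNotified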

hypeatei
15 replies
6h3m

The fact PHP is used for any critical web infrastructure is concerning. I used PHP professionally years ago and don't think it's that awful but certainly not something I'd consider for important systems.

XCSme
7 replies
4h46m

Wouldn't "eval" in any language result in RCE? Isn't that the point of eval, to execute the given string command?

account42
6 replies
4h12m

Fully compiled languages don't even have an eval at all.

mdaniel
1 replies
2h14m

And thanks to the magic of "shoving strings from the Internet into a command line", poof, RCE! It bit GitLab twice

xrisk
0 replies
26m

What incident are you referring to?

sebstefan
1 replies
3h50m

Not with that attitude

Start shipping the compiler with your code for infrastructure-agnostic RCEs

adolph
0 replies
2h22m

When you turn pro you call it security software and add it to the kernel.

cryptonector
0 replies
37m

You can build an eval for a compiled language, absolutely. You can embed an interpreter, for example, or build one using closures. There are entire books on this, like LiSP in Small Pieces.

dpcx
2 replies
5h56m

I'm curious about some specifics of why you wouldn't use PHP for _critical_ web infrastructure?

sebstefan
1 replies
5h47m

https://duckduckgo.com/?q=hash+site:reddit.com/r/lolphp

https://duckduckgo.com/?q=crypt+site:reddit.com/r/lolphp

crc32($str) and hash("crc32",$str) use different algorithms ..

Password_verify() always returns true with some hash

md5('240610708') == md5('QNKCDZO')

crypt() on failure: return <13 characters of garbage

strcmp() will return 0 on error, can be used to bypass authentication

crc32 produces a negative signed int on 32bit machines but positive on 64bit machines

5.3.7 Fails unit test, released anyway
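
The md5 line above is the classic "magic hash" trap; a minimal sketch of why, assuming nothing beyond stock PHP:

    // Both digests happen to have the form "0e<digits>", so PHP's loose
    // comparison treats them as numeric strings equal to zero.
    var_dump(md5('240610708') == md5('QNKCDZO'));  // bool(true)
    var_dump(md5('240610708') === md5('QNKCDZO')); // bool(false) -- strict comparison compares the strings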

The takeaway from these titles is not the problems themselves but the pattern of failure, and the issue of trusting the tool itself. Other than that, if you've used PHP enough yourself, you will absolutely find frustration in the standard library.

If you're looking for something more exhaustive, there's the certified hood classic "PHP: A fractal of bad design" article as well, which goes through ~~300+~~ 269 problems the language had and/or still has.

https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/

Though most of it has been fixed since 2012, there's only so much you can do before the good programmers in your community (and job market) just leave the language. What's left is what's left.

rty32
0 replies
4h45m

People keep saying "oh it's php 5.3 and before that are bad, things are much better now", but ...

KMnO4
1 replies
5h48m

Any language can be insecure. There’s nothing inherently bad about PHP, other than it’s the lowest-hanging fruit of CGI languages and has some less-than-ideal design decisions.

sebstefan
0 replies
5h3m

Don't just sweep the "less-than-ideal design decisions" under the rug

krageon
0 replies
5h2m

It's very easy to make PHP safe, certainly now that we've passed the 7 mark and we have internal ASTs. Even when using eval, it's beyond trivial to not make gross mistakes.

bell-cot
0 replies
5h53m

Modern PHP is about as solid as comparable languages. Its two biggest problems are:

- Lingering bad reputation, from the bad old days

- Minimal barrier to entry - which both makes it a go-to for people who should not be writing production code in any language, and encourages many higher-skill folks to look down on it

coldpie
8 replies
4h58m

As has been demonstrated many, many (many, many (many many many many many...)) times: there is no such thing as computer security. If you have data on a computer that is connected to the Internet, you should consider that data semi-public. If you put data on someone else's computer, you should consider that data fully public.

Our computer security analogies are modeled around securing a home from burglars, but the actual threat model is the ocean surging 30 feet onto our beachfront community. The ocean will find the holes, no matter how small. We are not prepared for this.

GTP
2 replies
2h24m

> Our computer security analogies are modeled around securing a home from burglars

Well, no home is burglar-proof either. Just like with computer security, we define, often just implicitly, a threat model and then we decide which kind of security measures we use to protect our homes. But a determined burglar could still find a way in. And here we get to a classic security consideration: if the effort required to break your security is greater than the benefit obtained from doing so, you're adequately protected from most threats.

myself248
0 replies
13m

And I think there's some cognitive problem that prevents people from understanding that "the effort required to break your security" has been rapidly trending towards zero. This makes the equation effectively useless.

(Possibly even negative, when people go out and deliberately install apps that, by backdoor or by design, hoover up their data, etc. And when the mainstream OSes are disincentivized to prevent this because it's their business model too.)

There was a time, not very long ago, when I could just tcpdump my cable-modem interface and know what every single packet was. The occasional scan or probe stuck out like a sore thumb. Today I'd be drinking from such a firehose of scans I don't even have words for it. It's not even beachfront property, we live in a damn submarine.

coldpie
0 replies
1h29m

I agree, my point is we need to be using the correct threat model when thinking about those risks. You might feel comfortable storing your unreplaceable valuables in a house that is reasonably secure against burglars, even if it's not perfectly secure. But you'd feel otherwise about an oceanfront property regularly facing 30 foot storm surges. I'm saying the latter is the correct frame of mind to be in when thinking about whether to put data onto an Internet-connected computer.

It's no huge loss if the sea takes all the cat photos off my phone. But if you're a hospital or civil services admin hooking up your operation to the Internet, you gotta be prepared for it all to go out to sea one day, because it will. Is that worth the gains?

ruthmarx
1 replies
1h55m

> As has been demonstrated many, many (many, many (many many many many many...)) times: there is no such thing as computer security.

Of course there is, and things are only getting more secure. Just because a lot of insecurity exists doesn't mean computer security isn't possible.

coldpie
0 replies
1h27m

It's a matter of opinion, but no, I disagree. People are building new software all the time. It all has bugs. It will always have bugs. The only way to build secure software is to increase its cost by a factor of 100 or more (think medical and aviation software). No one is going to accept that.

Computer security is impossible at the prices we can afford. That doesn't mean we can't use computers, but it does mean we need to assess the threats appropriately. I don't think most people do.

ffsm8
1 replies
4h29m

by this logic, every picture you'll ever take with your phone would be considered semi-public as phones are Internet connected.

While I wouldn't have too much of an issue with that, I'm pretty sure I'm in the minority there.

callalex
0 replies
15m

Do you use a bank account? Or do you still trade using only the shells you can carry in your arms? Perhaps networked computers are secure enough to be useful after all.

detourdog
0 replies
6h19m

Always look on the bright side of Life.

The non-sensicalness of it is just a phase. Remember the Tower of Babel didn't stop humanity.

Here is a link that was posted a few days ago regarding how great things are compared to 200 years ago. Ice cream has only become a common experience in the last 200 years.

https://ourworldindata.org/a-history-of-global-living-condit...

brynb
0 replies
6h17m

that seems like a bigger lift than just deciding to help fix the bug

“be the change” or some such

JW_00000
0 replies
1h47m

Have you ever witnessed a house being built? Everywhere is the same :) At least in our industry these issues are generally not life-threatening.

post-it
14 replies
3h39m

Obviously there are a lot of errors by a lot of people that led to this, but here's one that would've prevented this specific exploit:

> As part of our research, we discovered that a few years ago the WHOIS server for the .MOBI TLD migrated from whois.dotmobiregistry.net to whois.nic.mobi – and the dotmobiregistry.net domain had been left to expire seemingly in December 2023.

Never ever ever ever let a domain expire. If you're a business and you're looking to pick up a new domain because it's only $10/year, consider that you're going to be paying $10/year forever, because once you associate that domain with your business, you can never get rid of that association.

declan_roberts
9 replies
3h35m

Always use subdomains. Businesses only ever need a single $10 domain for their entire existence.

playingalong
3 replies
2h26m

I think it's a sane practice to keep the marketing landing page on a separate domain than the product in case of SaaS.

stagalooo
1 replies
2h1m

Could you elaborate on why? The companies I have worked for have pretty much all used domain.com for marketing and app.domain.com for the actual application. What's wrong with this approach?

darkr
0 replies
10m

If there’s any scope for a user to inject JavaScript, then potentially this gives a vector of attack against other internal things (e.g. admin.domain.com, operations.domain.com, etc.)

declan_roberts
0 replies
2h1m

Why? I always get frustrated when I end up in some parallel universe of a website (like support or marketing) and I can't easily click back to the main site.

craftkiller
2 replies
1h38m

Not true. If you are hosting user content, you want their content on a completely separate domain, not a subdomain. This is why github uses githubusercontent.com.

https://github.blog/engineering/githubs-csp-journey/

joelanman
1 replies
34m

interesting, why is this?

varun_ch
0 replies
23m

I can think of two reasons: 1. It's immediately clear to users that they're seeing content that doesn't belong to your business but instead belongs to your business's users. Maybe less relevant for GitHub, but imagine if someone uploaded something phishing-y and it was visible on a page with a URL like google.com/uploads/asdf.

2. If a user uploaded something like an HTML file, you wouldn't want it to be able to run JavaScript on google.com (because then you can steal cookies and do bad stuff). CSP rules exist, but it's a lot easier to sandbox user content entirely like this.

shafoshaf
0 replies
3h13m

And a second for when your main domain gets banned for spam for innocuous reasons.

ganoushoreilly
0 replies
3h26m

I actually think they need two; you usually need a second domain/setup for failover, especially if the primary domain is a novelty TLD like .IO, which showed that things can happen at random to the TLD. If the website is down it's fine, but if you have systems calling back to subdomains on that domain, you're out of luck. A good failover will help mitigate/minimize these issues. I'd also keep it on a separate registrar.

Domains are really cheap; I try to just pay for 5-10 year blocks (as many years as I can) just to reduce the issues.

yumraj
2 replies
9m

> If you're a business and you're looking to pick up a new domain because it's only $10/year, consider that you're going to be paying $10/year forever, because once you associate that domain with your business, you can never get rid of that association.

Please elaborate...

Also, what about personal domains? Does it apply there as well?

judge2020
0 replies
1m

People bookmark stuff. Random systems (including ones you don’t own) have hardcoded urls. Best to pay for it forever since it’s so low of a cost and someone taking over your past domain could lead to users getting duped.

Personal domains are up to you.

MontagFTB
0 replies
0m

As per the article, the old domain expired and was picked up by a third party for $20. Said domain was hard-coded into a vast number of networking tools, effectively letting the new domain owner unfettered access into WHOIS internals.

rixthefox
7 replies
5h32m

> We recently performed research that started off "well-intentioned" (or as well-intentioned as we ever are) - to make vulnerabilities in WHOIS clients and how they parse responses from WHOIS servers exploitable in the real world (i.e. without needing to MITM etc).

~~Right off the bat, STOP. I don't care who you are or how "well-intentioned" someone is. Intentionally sprinkling in vulnerable code, KNOWINGLY and WILLINGLY to "at some point achieve RCE" is behavior that I can neither condone nor support. I thought this kind of rogue contributions to projects had a great example with the University of Minnesota of what not to do when they got all their contributions revoked and force reviewed on the Linux kernel.~~

EDIT: Upon further scrutiny of the article, this is not what the group has done. It's just that their very first sentence makes it sound like they were intentionally introducing vulnerabilities in existing codebases to achieve a result.

I can definitely see that it should have been worded a bit better to make the reader aware that they had not contributed bad code but were finding existing vulnerabilities in software, which is much better than where I went initially.

projektfu
1 replies
5h25m

I think you misinterpreted the sentence. They don't need to change the WHOIS client, it's already broken, exploitable, and surviving because the servers are nice to it. They needed to become the authoritative server (according to the client). They can do that with off-the-shelf code (or netcat) and don't need to mess with any supply chains.

This is the problem with allowing a critical domain to expire and fall into evil hands when software you don't control would need to be updated to not use it.

rixthefox
0 replies
5h23m

Yes, getting through the article I was happy to see that wasn't the case and was just vulnerabilities that had existed in those programs.

Definitely they could have worded that better to make it not sound like they had been intentionally contributing bad code to projects. I'll update my original post to reflect that.

drekipus
1 replies
5h24m

You're right. They should have just done it and told no one.

We need to focus on the important things: not telling anyone, and not trying to break anything. It's important to just not have any knowledge on this stuff at all

rixthefox
0 replies
5h4m

That was not my intention at all. My concern is that groups who do that kind of red-team testing on open source projects without first seeking approval from the maintainers risk unintentionally poisoning a lot more machines than they might initially expect. While I don't expect this kind of research to go away, I would rather it be done in a way that does not allow malicious contributions to somehow find their way into mission-critical systems.

It's one thing if you're trying to make sure that maintainers are actually reviewing code that is submitted to them and fully understanding "bad code" from good but a lot of open source projects are volunteer effort and maybe we should be shifting focus to how maintainers should be discouraged from accepting pull requests where they are not 100% confident in the code that has been submitted. Not every maintainer is going to be perfect but it's definitely not an easy problem to solve overnight by a simple change of policy.

rmnoon
0 replies
5h29m

Make sure you read the article since it doesn't look like they're doing that at all. The sentence you cited is pretty tricky to parse so your reaction is understandable.

josephg
0 replies
5h28m

I hear you. And I mostly agree. I’ve refused a couple genuine sounding offers lately to take over maintaining a couple packages I haven’t had time to update.

But also, we really need our software supply chains to be resilient. That means building a better cultural immune system toward malicious contributors than “please don’t”. Because the bad guys won’t respect our stern, disapproving looks.

SSLy
0 replies
3h30m

You'd rather have blackhats do it and sell it to Asian APTs?

devvvvvvv
4 replies
6h24m

Entertaining and informative read. Main takeaways for me from an end user POV:

- Be inherently less trustworthy of more unique TLDs where this kind of takeover seems more likely due to less care being taken during any switchover.

- Don't use any TLS/SSL Certificate Authorities/resellers that support WHOIS-based ownership verification.

DexesTTP
3 replies
6h11m

None of these are true for the MitM threat model that caused this whole investigation:

- If someone manages to MitM the communication between e.g. Digicert and the .com WHOIS server, then they can get a signed certificate from Digicert for the domain they want

- Whether you yourself used LE, Digicert or another provider doesn't have an impact, the attacker can still create such a certificate.

This is pretty worrying since as an end user you control none of these things.

devvvvvvv
2 replies
6h8m

Thank you for clarifying. That is indeed much more worrying.

If we were able to guarantee NO certificate authorities used WHOIS, this vector would be cut off right?

And is there not a way to, as a website visitor, tell who the certificate is from and reject/distrust ones from certain providers, e.g. Digicert? Edit: not sure if there's an extension for this, but seems to have been done before at browser level by Chrome: https://developers.google.com/search/blog/2018/04/distrust-o...

tetha
1 replies
4h35m

CAA records may help, depending on how the attacker uses the certificate. A CAA record allows you to instruct the browser that all certs for "*.tetha.example" should be signed by Let's Encrypt. Then - in theory - your browser could throw an alert if it encounters a DigiCert cert for "fun.tetha.example".

However, this depends strongly on how the attacker uses the cert. If they hijack your DNS to ensure "fun.tetha.example" goes to a record they control, they can also drop or modify the CAA record.

And sure, you could try to prevent that with long TTLs for the CAA record, but then the admin part of my head wonders: But what if you have to change cert providers really quickly? That could end up a mess.

tialaramex
0 replies
2h23m

CAA records are not addressed to end users, or to browsers or whatever - they are addressed to the Certificate Authority, hence their name.

The CAA record essentially says "I, the owner of this DNS name, hereby instruct you, the Certificate Authorities to only issue certificates for this name if they obey these rules"
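
As a rough illustration of what a CA (or anyone curious) sees when it checks that policy, here is a minimal sketch using PHP's resolver functions; the domain is tetha's hypothetical example from upthread, and it assumes a PHP build (7.1+) where dns_get_record() understands DNS_CAA:

    // Look up the CAA policy for a name. For a real domain with a record like
    // "example.com. CAA 0 issue \"letsencrypt.org\"", each entry carries
    // 'flags', 'tag' ("issue", "issuewild", "iodef") and 'value' fields.
    $records = dns_get_record('tetha.example', DNS_CAA) ?: [];

    foreach ($records as $r) {
        printf("%s %d %s \"%s\"\n", $r['host'], $r['flags'], $r['tag'], $r['value']);
    }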

It is valid, and perhaps even a good idea in some circumstances, to set the CAA record for a name you control to deny all issuance, and only update it to allow your preferred CA for a few minutes once a month while actively seeking new certificates for any which are close to expiring, then put it back to deny-all once the certificates were issued.

Using CAA allows Meta, for example, to insist only Digicert may issue for their famous domain name. Meta has a side deal with Digicert, which says when they get an order for whatever.facebook.com they call Meta's IT security regardless of whether the automation says that's all good and it can proceed, because (under the terms of that deal) Meta is specifically paying for this extra step so that there aren't any security "mistakes".

In fact Meta used to have the side deal but not the CAA record, and one day a contractor - not realising they're supposed to seek permission from above - just asked Let's Encrypt for a cert for this test site they were building and of course Let's Encrypt isn't subject to Digicert's agreement with Meta so they issued based on the contractor's control over this test site. Cue red faces for the appropriate people at Meta. When they were done being angry and confused they added the CAA record.

[Edited: Fix a place where I wrote Facebook but meant Meta]

hansjorg
3 replies
6h12m

Why are tools using hardcoded lists of WHOIS servers?

Seems there is a standard (?) way of registering this in DNS, but just from a quick test, a lot of TLDs are missing a record. Working example:

    dig _nicname._tcp.fr SRV +noall +answer

    _nicname._tcp.fr. 3588 IN SRV 0 0 43 whois.nic.fr.
Edit:

There's an expired Internet Draft for this: https://datatracker.ietf.org/doc/html/draft-sanz-whois-srv-0...

xyst
0 replies
39m

Because people build these tools for a one-off need and publish them for others (or so they can reference them themselves later). Other "engineers" copy and paste without hesitating. Then it ends up in production and becomes a CVE like the one discussed.

Developer incompetence is one thing, but AI hallucination will make this even worse.

rty32
0 replies
4h43m

The reality of life is that there are way more hardcoded strings than you imagine or there should be.

crote
0 replies
3h15m

A plain

  mobi.whois.arpa. CNAME whois.nic.mobi
could've already solved the issue. But getting everyone to agree and adopt something like that is hard.

Although as fanf2 points out below, it seems you could also just start with the IANA whois server. Querying https://www.iana.org/whois for `mobi` will return `whois: whois.nic.mobi` as part of the answer.
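
The same data is available over plain WHOIS as well; a minimal sketch with a stock whois client (exact output formatting may vary, trimmed here to the relevant field):

  $ whois -h whois.iana.org mobi | grep -i '^whois:'
  whois:        whois.nic.mobi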

worthless-trash
1 replies
6h16m

These days people use "RCE" for local code execution.

acdha
0 replies
4h33m

I would clarify that as running code somewhere you don’t already control. The classic approach would be a malformed request letting them run code on someone else’s server, but this other pull-based approach also qualifies since it’s running code on a stranger’s computer.

develatio
2 replies
4h55m

I have the feeling that any day now I’m gonna wake up in the morning and I’ll find out that there just isn’t internet anymore because somebody did something from a hotel room in the middle of nowhere with a raspberry pi connected to a wifi hotspot of a nearby coffee shop.

deisteve
0 replies
2m

even worse, the Raspberry Pi tripped, fell, and burst into flames for no good reason.

Suppafly
0 replies
3h21m

Reminds me of the dorms in college where the internet would get messed up because someone would plug in a random router from home that handed out junk DHCP IP addresses. It's like that, but for the whole world.

wbl
1 replies
3h49m

Is this in the bugzilla/MDSP yet?

tomaskafka
1 replies
6h38m

O.M.G. - the attack surface gained by buying a single expired domain of an old whois server is absolutely staggering.

deisteve
0 replies
1m

wait till you realize a decent-sized quantized LLM can do quite a bit of vulnerability discovery.

I originally thought the author spent $20 on some LLM and discovered RCEs

xnorswap
0 replies
6h26m

This blog is a fantastic journey, it was well worth reading the whole thing.

whafro
0 replies
3h7m

I’ve seen so many teams that fail to realize that once you use a domain in any significant way, you’re basically bound to renew it until the heat death of the universe – or at least the heat death of your team.

Whether it’s this sort of thing, a stale-but-important URL hanging out somewhere, or someone on your team signing up for a service with an old domain email, it’s just so hard to know when it’s truly okay to let an old domain go.

vool
0 replies
6h18m

TLDR

While this has been interesting to document and research, we are a little exasperated. Something-something-hopefully-an-LLM-will-solve-all-of-these-problems-something-something.

sundarurfriend
0 replies
3h25m

The article puts the blame on

Never Update, Auto-Updates And Change Are Bad

as the source of the problem a couple of times.

This is a pretty common take from security professionals, and I wish they'd also call out the other side of the equation: organizations bundling their "feature" (i.e. enshittification) updates and security updates together. "Always keep your programs updated" is just not feasible advice anymore, given that upgrades are just as likely to be downgrades these days. For that to be realistic advice, we need more pressure on companies to separate out security-related updates and allow people to get updates on that channel alone.

shipp02
0 replies
6h27m

I love the overall sense that they didn't want to do this, but things just kept escalating, and they kept getting more than they bargained for at each step.

If only the naysayers had listened and fixed their parsing, the post authors might've been spared.

peterpost2
0 replies
4h27m

That is so neat. Good job guys!

nusl
0 replies
6h21m

Pretty horrible negligence on the part of .mobi to let a domain like this expire.

mnau
0 replies
1h50m

I think this whole approach to computer security is doomed to failure. It relies on perfect security that is supposed to be achieved by SBOM checking and frequent updates.

That is never going to work. Even with log4j, 40% of all downloads are still of vulnerable versions. It gets even worse when a vendor in the chain goes out of business or stops maintaining a component.

Everything is always going to be buggy and full of holes, just like our body is always full of battlefields with microbes.

mannyv
0 replies
5h6m

"He who seeks finds." - old proverb.

lovasoa
0 replies
5m

$ sqlite3 whois-log-copy.db "select source from queries"|sort|uniq|wc -l

Oh cool, they saved the logs in a database! Wait... |sort|uniq|wc -l ?? But why?
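
(Presumably the pipeline is just counting distinct sources; assuming those table and column names, the database could have done that on its own:)

  $ sqlite3 whois-log-copy.db "SELECT COUNT(DISTINCT source) FROM queries"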

giancarlostoro
0 replies
5h45m

I still remember when websites would redirect you on your phone to their .mobi website, completely screwing up the original intent. They didn't show you the mobile version of whatever Google led you towards, they just lazily redirected you to the .mobi homepage. I bet they asked a non-dev to do those redirects, that one IT neckbeard who shoved a redirect into an Apache2 config file and moved on with life. :)

But seriously, it was the most frustrating thing about the mobile web.

Is this TLD even worth a damn in 2024?

forgotpwd16
0 replies
1h24m

Very cool work.

The dotmobiregistry.net domain, and whois.dotmobiregistry.net hostname, has been pointed to sinkhole systems provided by ShadowServer that now proxy the legitimate WHOIS response for .mobi domains.

If those domains were meant to be deprecated, it would be better for them to return an error. Keeping them active and working as normal reduces the incentive to switch to the legitimate domain.

fanf2
0 replies
4h45m

This is a fantastic exploit and I am appalled that CAs are still trying to use whois for this kind of thing. I expected the rise of the whois privacy services and privacy legislation would have made whois mostly useless for CAs years ago.

<< maintainers of WHOIS tooling are reluctant to scrape such a textual list at runtime, and so it has become the norm to simply hardcode server addresses, populating them at development time by referring to IANA’s list manually. Since the WHOIS server addresses change so infrequently, this is usually an acceptable solution >>

This is the approach taken by whois on Debian.

Years ago I did some hacking on FreeBSD’s whois client, and its approach is to have as little built-in hardcoded knowledge as possible, and instead follow whois referrals. These are only de-facto semi-standard, i.e. they aren’t part of the protocol spec, but most whois servers provide referrals that are fairly easy to parse, and the number of exceptions and workarounds is easier to manage than a huge hardcoded list.

FreeBSD’s whois starts from IANA’s whois server, which is one of the more helpful ones, and it basically solves the problem of finding TLD whois servers. Most of the pain comes from dealing with whois for IP addresses, because some of the RIRs are bad at referrals. There are some issues with weird behaviour from some TLD whois servers, but that’s relatively minor in comparison.
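
A rough sketch of that referral chase done by hand with a stock whois client (`example.mobi` is just a placeholder; `-h` selects which server to query):

  $ whois -h whois.iana.org mobi          # IANA's reply includes a "whois: whois.nic.mobi" referral
  $ whois -h whois.nic.mobi example.mobi  # follow the referral to the TLD's own server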

dt3ft
0 replies
3h11m

I wish I had the time they have…

asimpleusecase
0 replies
6h4m

Wonderful article! Well done chaps.

aftbit
0 replies
4h9m

You would, at this point, be forgiven for thinking that this class of attack - controlling WHOIS server responses to exploit parsing implementations within WHOIS clients - isn’t a tangible threat in the real world.

Let's flip that on its head - are we expected to trust every single WHOIS server in the world to always be authentic and safe? Especially from the point of view of a CA trying to validate TLS, I would not want to find out that `whois somethingarbitrary.ru` leaves me open to an RCE by a Russian server!

adolph
0 replies
2h28m

Conjecture: control over TLDs should be determined by capture-the-flag. Whenever an organization running a registry achieves a level of incompetence whereby its TLD is captured, the TLD becomes owned by the attacker.

Sure, there are problems with this conjecture - what if the attacker is just as incompetent (it just gets captured again), or is a "bad actor", etc. But a capture-the-flag-style mechanism might evolve better approaches to security than the traditional legal and financial methods of organizational capture-the-flag.

Tepix
0 replies
6h15m

Wow! Highly entertaining and scary at the same time. Sometimes I just wish I was clueless about all those open barn doors.