Great write-up - the tip of the iceberg on how fragile TLS/SSL is.
Let's add a few:
1. WHOIS isn't encrypted or signed, but is somehow suitable for verification (?)
2. DNS CAA records usually aren't protected by DNSSEC, since the absence of a DNS record can't be signed (correction: NSEC can prove non-existence, though DNSSEC itself is optional) - see the sketch after this list
3. DNS root & TLD servers are poorly protected against BGP hijacks (adding that DNSSEC is optional for CAs to verify)
4. Email, used for verification in this post, is also poorly protected against BGP hijacks.
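To make point 2 concrete, here's a rough Python sketch (using dnspython and assuming a DNSSEC-validating resolver; the domain and issuer strings are placeholders, and this is not any CA's actual code) of the kind of CAA lookup a CA does before issuance. The interesting part is the exception branch: "no CAA record" means "any CA may issue", and without signed denial of existence that absence is exactly what an on-path attacker can forge:

    # Rough sketch: look up CAA for a domain and report whether the answer was
    # DNSSEC-authenticated (AD flag from a validating resolver). Absence of CAA
    # permits issuance by any CA.
    import dns.flags
    import dns.resolver

    def caa_allows(domain, issuer="letsencrypt.org"):
        resolver = dns.resolver.Resolver()
        resolver.use_edns(0, dns.flags.DO, 1232)   # request DNSSEC records / AD bit
        try:
            answer = resolver.resolve(domain, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            # No CAA record: issuance allowed, and nothing here was authenticated.
            return True, False
        authenticated = bool(answer.response.flags & dns.flags.AD)
        allowed = any(rr.tag == b"issue" and issuer in rr.value.decode() for rr in answer)
        return allowed, authenticated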
I'm amazed we've lasted this long. It must be because if anyone abuses these issues, someone might wake up and care enough to fix them (:
Our industry needs to finish what it starts. Between IPv6, DNSSEC, SMTP TLS, SCTP/QUIC, etc., all of these bedrock technologies feel like they're permanently stuck in a half-completed implementation/migration. Like someone at your work had all these great ideas, started implementing them, then quit when they realized it would be too difficult to complete.
obligatory https://xkcd.com/927/
Honestly: we're in this situation because we keep trying to band-aid solutions onto ancient protocols that were never designed to be secure. (I'm talking about you DNS.) Given xkcd's wisdom though, I'm not sure if this is easily solvable.
dns should not have to be secure, it should be regulated as a public utility with 3rd-party quality control and all the whistles.
only then can it be trustworthy, fast and free/accessible
If my DNS can be MITM'd, and is thus insecure, it is not trustworthy.
This sort of all-or-nothing thinking isn't helpful. DNS points you to a server, TLS certificates help you trust that you've arrived at the right place. It's not perfect, but we build very trustworthy systems on this foundation.
But DNS is all-or-nothing.
If you can't trust DNS, you can't trust TLS or anything downstream of it.
Even banks are not bothering with EV certificates any more, since browsers removed the indicator (for probably-good reasons). DV certificate issuance depends on trustworthy DNS.
Internet security is "good enough" for consumers, most of the time. That's "adequately trustworthy", but it's not "very trustworthy".
Bank websites like chase.com and hsbc.com and web services like google.com, amazon.com, and amazonaws.com intentionally avoid DNSSEC. I wouldn't consider those sites less than "very trustworthy" but my point is that "adequately trustworthy" is the goal. All-or-nothing thinking isn't how we build and secure systems.
I am definitely not arguing in favor of DNSSEC.
However, I don't think it's reasonable to call DNS, as a system, "very trustworthy".
"Well-secured" by active effort, and consequently "adequately trustworthy" for consumer ecommerce, sure.
But DNS is a systemic weak link in the chain of trust, and must be treated with extra caution for "actually secure" systems.
(E.g., for TLS and where possible, the standard way to remove the trust dependency on DNS is certificate pinning. This is common practice, because DNS is systemically not trustworthy!)
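(To be concrete about what I mean by pinning, here's a minimal Python sketch of leaf-certificate fingerprint pinning - not any particular vendor's scheme; the hostname and the pin value are placeholders:)

    # Minimal sketch of certificate pinning: after the normal TLS handshake and
    # chain validation, also require that the SHA-256 fingerprint of the server's
    # leaf certificate matches a value baked into the client.
    import hashlib
    import socket
    import ssl

    HOST = "api.example.com"                                              # placeholder
    EXPECTED_PIN = "<hex sha-256 of the expected DER-encoded leaf cert>"  # placeholder

    def connect_pinned(host=HOST, port=443):
        ctx = ssl.create_default_context()   # usual CA + hostname checks still apply
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
                pin = hashlib.sha256(der).hexdigest()
                if pin != EXPECTED_PIN:
                    raise ssl.SSLError("certificate pin mismatch for %s" % host)
                # Pin matched; a real client would now send its request over `tls`.
                return pin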
Is certificate pinning common? On the web we used to have HPKP, but that's obsolete and I didn't think it was replaced. I know pinning is common in mobile apps, but I've generally heard that's more to prevent end-user tampering than any actual distrust of the CAs/DNS.
I think your "well-secured" comment is saying the same thing I am, with some disagreement about "adequate" vs "very". I don't spend any time worrying that my API calls to AWS or online banking transactions are insecure due to lack of DNSSEC, so the DNS+CA system feels "very" trustworthy to me, even outside ecommerce. The difference between "very" and "adequate" is sort of a moot point anyway: you're not getting extra points for superfluous security controls. There's lots of other things I worry about, though, because attackers are actually focusing their efforts there.
I agree that the semantics of "adequate" and "very" are moot.
As always, it ultimately depends on your threat profile, real or imagined.
Re: certificate pinning, it's common practice in the financial industry at least. It mitigates a few risks, of which I'd rate DNS compromise as more likely than a rogue CA or a persistent BGP hijack.
There is nothing fundamentally preventing us from securing DNS. It is not the most complicated protocol, believe it or not, and it is extensible enough for us to secure it. Moreover, a different name lookup protocol would look very similar to DNS. If you don't quite understand what DNS does and how it works, the idea of making it a government-protected public service may appeal to you, but that isn't actually how it works. It's only slightly hyperbolic to say that you want XML to be a public utility.
On the other hand things like SMTP truly are ancient. They were designed to do things that just aren’t a thing today.
Standards evolve for good reasons. That's just a comic.
The comic is about re-inventing the wheel. What you propose ("standards evolving") would be the opposite in spirit (and is what has happened with DNSSEC, RPKI, etc.)
Can we all agree to not link that comic when nobody is suggesting a new standard, or when the list of existing standards is zero to two long? It's not obligatory to link it just because the word "standard" showed up.
I think that covers everything in that list. For example, trying to go from IPv4 to IPv6 is a totally different kind of problem from the one in the comic.
The point is that, ironically, new standards may have been a better option.
Bolting on extensions to existing protocols not designed to be secure, while improving the situation, has so far been unable to address all of the security concerns, leaving major gaps. It's just a fact.
"Our industry" is a pile of snakes that abhor the idea of collaboration on common technologies they don't get to extract rents from. ofc things are they way they are.
Let's not fool ourselves by saying we're purely profit driven. Our industry argues about code style (:
Our industry does not argue about code style. There were a few distinct subcultures, since appropriated by the industry, that used to argue about code style: lisp-1 vs lisp-2, vim vs emacs, Amiga vs Apple, single-pass vs multi-pass compilers, Masters of Deception vs Legion of Doom, and the list goes on, depending on the subculture.
The industry is profit driven.
QED
Our industry does not argue about arguing about code style.
Our industry doesn't always make Raymond Carver title references, but when it does, what we talk about when we talk about Raymond Carver title references usually is an oblique way of bringing up the thin and ultimately porous line between metadiscourse and discourse.
I'm pretty sure this is QEF.
Do you use tabs or spaces? Just joking, but:
The point is that our industry has a lot of opinionated individuals that tend to disagree on fundamentals, implementations, designs, etc., for good reasons! That's why we have thousands of frameworks, hundreds of databases, hundreds of programming languages, etc. Not everything our industry does is profit driven, or even rational.
FWIW, all my toy languages consider U+0009 HORIZONTAL TABULATION in a source file to be an invalid character, like any other control character except for U+000A LINE FEED (and also U+000D CARRIAGE RETURN but only when immediately before a LINE FEED).
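Roughly this rule, sketched in Python (the function name is made up, and my actual lexers differ in the details):

    # Sketch of the rule: reject every control character except LINE FEED,
    # and CARRIAGE RETURN only when it immediately precedes a LINE FEED.
    # U+0009 HORIZONTAL TABULATION falls into the "reject" bucket like any other control.
    def find_invalid_controls(source: str) -> list[str]:
        errors = []
        for i, ch in enumerate(source):
            if ch == "\n":
                continue
            if ch == "\r" and i + 1 < len(source) and source[i + 1] == "\n":
                continue
            if ord(ch) < 0x20:
                errors.append("invalid control character U+%04X at offset %d" % (ord(ch), i))
        return errors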
I’d be a python programmer now if they had done this. It’s such an egregiously ridiculous foot gun that I can’t stand it.
The problem is, in many of these fields actual real-world politics come into play. You've got governments not wanting to lose the capability to do DNS censorship or other forms of sabotage. You've got piss-poor countries barely managing to keep the faintest of lights on. You've got ISPs with systems that have grown over literal decades, where any kind of major breaking change would require investments in rearchitecture larger than the company is worth. And you've got government regulations mandating stuff like logging all staff communications (e.g. banking/finance), which becomes drastically more complex if TLS cannot be intercepted, or where interceptor solutions must be certified, making updates to them about as slow as molasses...
Considering we have 3 major tech companies (Microsoft/Apple/Google) controlling 90+% of user devices and browsers, I believe this is more solvable than we'd like to admit.
Browsers are just one tiny piece of the fossilization issue. We've got countless vendors of networking gear. We've got clouds (just how many AWS, Azure and GCP services are capable of running IPv6-only, or how many of these clouds can actually run IPv6 dual-stack at production grade?). We've got even more vendors of interception middlebox gear (from reverse proxies and load balancers, through SSL-breaking proxies and virus scanners for web and mail, to captive-portal boxes for public wifi networks). And we've got a shitload of phone telco gear, a lot of which has probably long since expired maintenance and is barely chugging along.
Ok. You added OEMs to the list, but then just named the same three dominant players as clouds. Last I checked, every device on the planet supports IPv6, if not those other protocols - everything from the cheapest home WiFi router to every Layer 3 switch sold in the last 20 years.
I think this is a 20-year-old argument, and it's largely irrelevant in 2024.
It's not irrelevant - AWS lacks support for example in EKS or in ELB target groups, where it's actually vital [1]. GCE also lacks IPv6 for some services and you gotta pay extra [2]. Azure doesn't support IPv6-only at all, a fair few services don't support IPv6 [3].
The state of IPv6 is bloody ridiculous.
[1] https://docs.aws.amazon.com/vpc/latest/userguide/aws-ipv6-su...
[2] https://cloud.google.com/vpc/docs/ipv6-support?hl=de
[3] https://learn.microsoft.com/en-us/azure/virtual-network/ip-s...
Plenty doesn’t support IPv6.
Those companies have nothing to do with my ISP router or modem
If you look at say 3G -> 4G -> 5G or Wifi, you see industry bodies of manufacturers, network providers, and middle vendors who both standardize and coordinate deployment schedules; at least at the high level of multi-year timelines. This is also backed by national and international RF spectrum regulators who want to ensure that there is the most efficient use of their scarce airwaves. Industry players who lag too much tend to lose business quite quickly.
Then if you look at the internet, there is a very uncoordinated collection of manufacturers and network providers, and standardization is driven in a more open manner that is good for transparency but is also prone to complexifying log-jams and heckler's vetoes. Where we see success, like the promotion of TLS improvements, it's largely because a small number of knowledgeable players - browsers in the case of TLS - agree to enforce improvements on the entire ecosystem. That in turn is driven by simple self-interest. Google, Apple, and Microsoft all have strong incentives to ensure that TLS remains secure; their ads and services revenue depend upon it.
But technologies like DNSSEC, IPv6, and QUIC all face a much harder road. To be effective they need a long chain of players to support the feature, and many of those players have active disincentives. If a home user's internet seems to work just fine, why be the manufacturer that is first to support, say, DNSSEC validation, and deal with all of the increased support cases when it breaks, or device returns when consumers perceive that it broke something? (and it will).
IPv6 deployment is extra hard because we need almost every network in the world to get on board.
DNSSEC shouldn't be as bad, but it needs support from DNS resolvers and the software that builds them in. I think it's a bit worse than TLS adoption, partly because DNS allows recursive resolution and partly because DNS is applicable to a bit more than TLS was. But the big thing seems to be that there isn't a central authority, like the web browsers, who can entirely force the issue. ... Maybe OS vendors could do it?
QUIC is an end-to-end protocol, so it should be deployable without every network operator buying in. That said, we probably do need a reduction in UDP blocking in some places. But otherwise, how can QUIC deployment be harder than TLS deployment? I think there just hasn't been incentive to force it everywhere.
Plus IPv6 has significant downsides (more complex, harder to understand, more obscure failure modes, etc…), so the actual cost of moving is the transition cost + total downside costs + extra fears of unknown unknowns biting you in the future.
AFAIK, in the case of IPv6 it's not even that: there's still the open drama of the peering agreement between Cogent and Hurricane Electric.
Doesn't every place have a collection of ideas that are half implemented? I know I often choose between finishing somebody else's project or proving we don't need it and decommissioning it.
I'm convinced it's just human nature to work on something while it is interesting and move on. What is the motivation to actually finish?
Why would the technologies that should hold up the Internet itself be any different?
While that's true, it dismisses the large body of work that has been completed. The technologies the GP comment mentions are complete in the sense that they work, but the deployment is only partial. Herding cats on a global scale, in most cases. It also ignores the side benefit of completing the interesting part: other efforts gain from the lessons learned by that disrupted effort, even if the deployment fails because it turns out nobody wanted it. And sometimes it's just a matter of time and getting enough large stakeholders excited, or at least convinced the cost of migration is worth it.
All that said, even the sense of completing or finishing a thing only really happens in small and limited-scope things, and in that sense it's very much human nature, yeah. You can see this in creative works, too. It's rarely "finished" but at some point it's called done.
I was weeks away from turning off someone’s giant pile of spaghetti code and replacing it with about fifty lines of code when I got laid off.
I bet they never finished it, since the perpetrators are half the remaining team.
IPv6 instead of being branded as a new implementation should probably have been presented as an extension of IPv4, like some previously reserved IPv4 address would mean that it is really IPv6 with the value in the previously reserved fields, etc. That would be a kludge, harder to implement, yet much easier for the wide Internet to embrace. Like it is easier to feed oatmeal to a toddler by presenting it as some magic food :)
It would have exactly the same deployment problems, but waste more bytes in every packet header. Proposals like this have been considered and rejected.
How is checking if, say, the source address is 255.255.255.255 to trigger special processing, any easier than checking if the version number is 6? If you're thinking about passing IPv6 packets through an IPv4 section of the network, that can already be achieved easily with tunneling. Note that ISPs already do, and always have done, transparent tunneling to pass IPv6 packets through IPv4-only sections of their network, and vice versa, at no cost to you.
Edit: And if you want to put the addresses of translation gateways into the IPv4 source and destination fields, that is literally just tunneling.
Or got fired/laid off and the project languished?
In my 25+ years in this industry, there's one thing I've learned: starting something isn't all that difficult; however, shutting something down is nearly impossible. For example, brilliant people put a lot of time and effort into IPv6. But that time and effort is nothing compared to what it's gonna take to completely shut down IPv4. And I've dealt with this throughout my entire career: "We can't shut down that Apache v1.3 server because a single client used it once 6 years ago!"
For reasons not important here, I purchase my SSL certificates and barely have any legitimating business documents. If Dun & Bradstreet calls, I hang up...
It took me 3 years of getting SSL certs from the same company through a convoluted process before I tried a different company. My domain has been with the same registrar since private citizens could register DNS names. That relationship meant nothing when trying to prove that I'm me and I own the domain name.
I went back to the original company because I could verify myself through their process.
My only point is that human relationships are the best form of verifying integrity. I think this provides everyone the opportunity to gain trust and the ability to prejudge people based on association alone.
Human relationships also open you up to social engineering attacks. Unless they’re face-to-face, in person, with someone who remembers what you actually look like. Which is rare these days.
That is my point. We need to put value on the face to face relationships and extend trust outward from our personal relationships.
This sort of trust is only as strong as its weakest link, but each individual can choose how far to extend their own trust.
This is such a good point. We rely way too much on technical solutions.
A better approach is to have hyperlocal offices where you can go to do business. Is this less “efficient”? Yes but when the proceeds of efficiency go to shareholders anyway it doesn’t really matter.
I agree with this but that means you need to regulate it. Even banks nowadays are purposely understaffing themselves and closing early because "what the heck are you going to do about it? Go to a different bank? They're closed at 4pm too!"
The regulation needs to be focused on the validity of the identity chain mechanism but not on individuals. Multiple human interactions as well as institutional relationships could be leveraged depending on needs.
The earliest banking was done with letters of introduction. That is why banking families had early international success. They had a familial trust and verification system.
It is only efficient based on particular metrics. Change the metrics and the efficiency changes.
This is what the Web of Trust does, but...
...and that is exactly why I prefer PKI to the WoT. If you try to extend the WoT to the whole Internet, you will eventually end up having to trust multiple people you've never met to properly manage their keys and correctly verify the identity of other people. Identity verification is in particular an issue: how do you verify the identity of someone you don't know? How many of us know how to spot a fake ID card? Additionally, some of them will be people participating in the Web of Trust just because they heard that encryption is cool, but without really knowing what they are doing.
In the end, I prefer CAs. Sure, they're not perfect and there have been serious security incidents in the past. But at least they give me some confidence that they employ people with a Cyber Security background, not some random person that just read the PGP documentation (or similar).
PS: there's still some merit to your comment. I think the WoT (but I don't know for sure) was based on the six degrees of separation theory. So, in theory, you would only have to certify the identity of people you already know, and you would be able to reach someone you don't know through a relatively short chain of people where each hop knows the next hop very well. But in practice, PGP ended up needing key-signing parties, where people who had never met before were signing each other's keys. Maybe a reboot of the WoT with something more user-friendly than PGP could have a chance, but I have some doubts.
NSEC does this.
You're correct - noting that Let's Encrypt supports DNSSEC/NSEC fully.
Unfortunately though, the entire PKI ecosystem is tainted if other CAs do not share the same security posture.
Tainted seems a little strong, but I think you're right: there's nothing in the CAB Baseline Requirements [1] that requires DNSSEC use by CAs. I wouldn't push for DNSSEC to be required, though, as it's been so sparsely adopted that any security benefit would be marginal. Second-level domain usage has been decreasing (both in percentage and absolute number) since mid-2023 [2]. We need to look past DNSSEC.
[1] https://cabforum.org/working-groups/server/baseline-requirem...
[2] https://www.verisign.com/en_US/company-information/verisign-...
I agree that DNSSEC is not the answer and has not lived up to expectations whatsoever, but what else is there to verify ownership of a domain? Email- broken. WHOIS- broken.
Let's convince all registrars to implement a new standard? ouch.
I'm a fan of the existing standards for DNS (§3.2.2.4.7) and IP address (§3.2.2.4.8) verification. These use multiple network perspectives to reduce the risk of network-level attacks, paired with certificate transparency (and monitoring services). It's not perfect, but that isn't the goal.
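Roughly what I mean by multiple perspectives, as a toy Python sketch (dnspython; the public resolvers here merely stand in for a CA's own geographically separate vantage points, and the record name and expected value are placeholders):

    # Toy sketch of multi-perspective corroboration for a dns-01 style check: fetch
    # the same TXT record from several vantage points and only proceed if enough agree.
    import dns.resolver

    PERSPECTIVES = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]   # stand-ins for real vantage points

    def corroborate(name, expected_txt, quorum=2):
        agree = 0
        for ip in PERSPECTIVES:
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [ip]
            try:
                answer = r.resolve(name, "TXT")
            except Exception:
                continue   # an unreachable perspective simply doesn't corroborate
            values = {b"".join(rr.strings).decode() for rr in answer}
            if expected_txt in values:
                agree += 1
        return agree >= quorum

    # e.g. corroborate("_acme-challenge.example.com", "<key-authorization-digest>")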
BGP hijacks unfortunately completely destroy that. RPKI is still extremely immature (despite what companies say) and it is still trivial to BGP hijack if you know what you're doing. If you are able to announce a more specific prefix (highly likely unless the target has a strong security competency and their own network), you will receive 100% of the traffic.
At that point, it doesn't matter how many vantage points you verify from: all traffic goes to your hijack. It only takes a few seconds for you to verify a certificate, and then you can drop your BGP hijack and pretend nothing happened.
Thankfully there are initiatives to detect and alert on BGP hijacks, but again, if your organization does not have a strong security competency, you have no way to prevent, or even know about, these attacks.
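For anyone who hasn't run into this: the reason a more-specific announcement wins all the traffic is plain longest-prefix-match forwarding. A toy illustration with Python's ipaddress module (prefixes and labels are made up):

    # Toy illustration of longest-prefix-match routing, which is why announcing a
    # more-specific prefix attracts all the traffic.
    import ipaddress

    routes = {
        ipaddress.ip_network("203.0.113.0/24"): "victim's legitimate announcement",
        ipaddress.ip_network("203.0.113.0/25"): "hijacker's more-specific announcement",
    }

    def next_hop(dst):
        dst = ipaddress.ip_address(dst)
        matches = [net for net in routes if dst in net]
        best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
        return routes[best]

    print(next_hop("203.0.113.10"))   # -> hijacker's more-specific announcement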
HTTP-based ACME verification also uses unencrypted port-80 HTTP. Similar for DNS-based verification.
100% - another for the BGP hijack!
The current CAB Forum Baseline Requirements call for "Multi-Perspective Issuance Corroboration" [1] i.e. make sure the DNS or HTTP challenge looks the same from several different data centres in different countries. By the end of 2026, CAs will validate from 5 different data centres.
This should make getting a cert via BGP hijack very difficult.
[1] https://github.com/cabforum/servercert/blob/main/docs/BR.md#...
I mean, they need to bootstrap the verification somehow, no? You cannot upgrade the first time you request a challenge.
If it used HTTPS you would have a bootstrapping problem.
If anyone knows they are being abused, anyway. I conclude that someone may be abusing them, but those doing so try to keep it unknown that they have done so, to preserve their access to the vulnerability.
Certificate Transparency exists to catch abuse like this. [1]
Additionally, Google has pinned their certificates in Chrome and will alert via Certificate Transparency if unexpected certificates are found. [2]
It is unlikely this has been abused without anyone noticing. With that said, it definitely can be: there is a window of time to cause damage before it is noticed, and there would be fallout and a "call to action" afterwards as a result. If only someone said something.
[1] https://certificate.transparency.dev [2] https://github.com/chromium/chromium/blob/master/net/http/tr...
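As a concrete example of watching CT: you (or a monitoring service) can poll a log search frontend like crt.sh for certificates naming your domains and alert on unexpected issuers. A rough sketch (assumes the Python `requests` library and crt.sh's JSON output and its usual fields; the domain is a placeholder, and a real monitor would page through results, de-duplicate, and rate-limit):

    # Rough sketch of CT monitoring via crt.sh (a public CT log search frontend).
    import requests

    def recent_certs(domain):
        resp = requests.get(
            "https://crt.sh/",
            params={"q": domain, "output": "json"},
            timeout=30,
        )
        resp.raise_for_status()
        return [(entry["issuer_name"], entry["not_before"], entry["common_name"])
                for entry in resp.json()]

    for issuer, not_before, cn in recent_certs("example.com"):
        print(not_before, cn, "issued by", issuer)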
It’s like the crime numbers. If you’re good enough at embezzling nobody knows you embezzled. So what’s the real crime numbers? Nobody knows. And anyone who has an informed guess isn’t saying.
A big company might discover millions are missing years after the fact and backdate reports. But nobody is ever going to record those office supplies.
Didn't Jon Postel do something like this, once?
It was long ago, and I don't remember the details, but I do remember a lot of people having shit hemorrhages.
None of these relate to TLS/SSL - that's the wrong level of abstraction: they relate to fragility of the roots of trust on which the registration authorities for Internet PKI depend.
As long as TLS/SSL depends on Internet PKI as it is, it is flawed. I guess there's always Private PKI, but that's if you're not interested in the internet (^:
I would say that TLS/SSL doesn't depend on Internet PKI - browsers (etc) depend on Internet PKI in combination with TLS/SSL.
It's used for verification because it's cheap, not because it's good. Why would you expect anyone to care enough to fix it?
If we really wanted verification we would still be manually verifying the owners of domains. Highly effective but expensive.