The eternal problem with companies like Tailscale (and Cloudflare, Google, etc.) is that, by solving a problem the internet should have been designed to solve by itself, like simple end-to-end secure connectivity, Tailscale becomes incentivized to keep the problem around. What the internet would need is something like IPv6 with automatic encryption via IPsec, with PKI provided by DNSSEC. But Tailscale has every incentive to prevent such things from being widely and compatibly implemented, because that would destroy their business. Their whole business depends on the problem persisting.
(Repost of <https://news.ycombinator.com/item?id=38570370>)
This sounds like a reasonable point, but the more I think about it, the more it sounds like digital flagellation.
IPv6 was released in 1998. Twenty-one (!) years later, when Tailscale was released in 2019, what you're describing still had not been implemented. Who was stopping anyone from doing it then, and who is stopping anyone from doing it now?
It's easy to paint companies as bad actors, especially since they often are, but Google, Cloudflare and Tailscale all became what they are for a reason: they solved a real problem, so people gave them money, or whatever is money-equivalent, like personal data.
If your argument is inverted, it's a kind of inverse accelerationism (decelerationism?) whereby only by making the Internet worse for everyone can the really good solutions see the light. I don't buy it.
Tailscale is not the reason we're not seeing what you're describing; the immense work involved in creating it is. It's only when that immense amount of work becomes slightly less immense that any solution at all emerges. Tailscale, for example, would probably not exist if they had had to invent WireGuard themselves, and the fact that Tailscale now exists has led to Headscale existing, creating yet another springboard in a line of springboards toward "something" like what you describe -- for those willing to put in the time.
The folks who either (a) got in early on the IPv4 address land rush (especially the Western developed countries), or (b) have buckets of money to buy addresses.
If you're India, there probably weren't enough IPv4 addresses in the first place to handle your population, so you're doing IPv6:
* https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...
Or even if you're in the West, if you're poor (a community Native American ISP):
* https://community.roku.com/t5/Features-settings-updates/It-s...
* Discussion: https://news.ycombinator.com/item?id=35047624
IPv4 'wasn't a problem' because the megacorps who generally run things where I'm guessing you're from (the West) were able to solve it by other means… until they couldn't. T-Mobile US has 120M subscribers, and a few years ago it turned out that money couldn't solve IPv4-only anymore, so they went to IPv6:
* https://www.youtube.com/watch?v=QGbxCKAqNUE
IPv6 is not taking off slowly because IPv4 (and NAT/STUN/TURN) is 'better', but rather because of (a) inertia, and (b) the fact that IPv4 'works' (with enough kludges thrown at it).
There is another reason: the addresses are long and impossible to remember and hard to type.
I always bring this up and it’s always dismissed because tech people continue to dismiss usability concerns.
Even “small” usability differences can have a huge effect on adoption.
If only there was some mechanism in which we could use a human-friendly label and have that translated to a computer-usable address…
I don't bother remembering IPv4 addresses, so I'm not sure why I would bother to remember IPv6 addresses. Heck, phone numbers are generally short as well, and who remembers them nowadays? ("0118 999 881 999 119 725… 3")
Maybe it's dismissed because people see it as a non-issue. I regularly work at OSI Layer 2 (and even 1, pulling fibres in a DC), and Layer 3, and am not sure what the concerns are about.
The problem is that DNS is not zero-configuration. ARP and NDP are, which is why nobody complains about Ethernet addresses being hard to type. DNS has to be “stood up”, which is a whole extra deployment.
In modern devops in particular it is common to create and tear down IP networks in seconds and sling stuff everywhere. The extra moving part is an extra thing to break.
DNS also runs over IP, which means that if IP is down, DNS doesn't work. What do you have to do then? You have to debug IP without DNS.
There is mDNS, but it's not reliable and doesn't scale to large networks. It also runs on the IP layer, so if there is a problem there, it can break too.
Certainly it is not-zeroconf, but it is the same not-zeroconf for both IPv4 and IPv6.
But extra work with DHCP is needed for IPv4, and extra-extra work if you need to do things like configure 'IP helper', whereas IPv6 can be configured using only a router (which you need regardless) and some on-link packets (RAs).
And? At least you have fe80::/64 as a basic starting point. Run a tcpdump to see if you're on-link in any way (or in the correct VLAN), and if you are, you can then ping(6) ff02::2 to find out if there are any on-link routers. You've now debugged Layer 2 and Layer 3 connectivity. Tada.
You're making IPv6 (sound) way more complicated than it is. It is no more or less complicated than IPv4 or IPX/SPX or …. It's protocol data units at OSI Layer 2 or 3 in different formats with different fields.
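To make that debugging flow concrete, here is a minimal sketch (Linux and the iputils `ping` assumed; "eth0" is a hypothetical interface name, substitute your own):

```python
# A sketch of the on-link IPv6 debugging flow described above.
import subprocess

IFACE = "eth0"  # hypothetical; substitute your actual interface

# Step 1: does the interface have an fe80::/64 link-local address?
# SLAAC gives you one with zero configuration.
out = subprocess.run(["ip", "-6", "addr", "show", "dev", IFACE],
                     capture_output=True, text=True).stdout
print("link-local present:", "fe80:" in out)

# Step 2: ping the all-routers multicast group; any reply proves both
# on-link (Layer 2) connectivity and the presence of an IPv6 router.
subprocess.run(["ping", "-6", "-c", "2", f"ff02::2%{IFACE}"])
```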
Me, that’s the new number for the emergency services.
Actually, that’s the only phone number I can remember :D
Yes, like Ethernet addresses. Those are impossible to remember, too, so obviously Ethernet is no good. /s
The solution for IPv6 addresses is the same for Ethernet addresses; don’t use them directly. Leave it to the name resolution system, and use host names.
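To make that concrete, a minimal sketch in Python (`example.com` is only a placeholder host that happens to publish AAAA records):

```python
# You never type the IPv6 address; you resolve a name and use the result.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo(
        "example.com", 443, family=socket.AF_INET6):
    print(sockaddr[0])  # the AAAA result(s): long, but never hand-typed
```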
It is a problem when, for instance, Google chooses not to implement SRV (and later HTTPS) DNS record support in their web browser. The problems which SRV (and now HTTPS) DNS records solve are not problems for Google, since they solved them by sheer scale and brute force, and Google only benefits from everybody else still having them; it’s a great moat for them.
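For anyone unfamiliar with what is being left on the table, a sketch using the third-party dnspython package (the record name here is hypothetical, so the lookup raises NXDOMAIN unless the zone actually publishes it):

```python
# What a client could do if browsers honoured SRV records: discover the
# host and port for a service from DNS instead of hard-coding port 443.
import dns.resolver  # pip install dnspython

# Hypothetical zone data:
#   _https._tcp.example.com. 3600 IN SRV 10 5 8443 www.example.com.
for rr in dns.resolver.resolve("_https._tcp.example.com", "SRV"):
    print(f"connect to {rr.target} port {rr.port} (priority {rr.priority})")
```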
ZeroTier does kind of do that. It's a tunnel, but the traffic is also direct (unless double NAT is involved), and if you can route the traffic directly to the endpoint IPs, you can skip ZeroTier entirely. The location service can be self-hosted if you want; you don't have to use them as a service. Apart from DNSSEC, it's pretty much what you're asking for.
Double NAT is now almost everywhere in the world, except maybe the USA.
What kind of NAT, though? You can use UPnP, predictable mapping, etc., and still allow the traffic through. And that's only with IPv4, because you can run ZeroTier over IPv6.
Your computer can talk to your home router (CPE) and punch a hole for a connection, but if the CPE's WAN port does not have a public IP address, and instead itself has a private address (probably in 100.64/10), the CPE cannot talk to the ISP's router to punch a hole:
* https://en.wikipedia.org/wiki/Carrier-grade_NAT
The two layers of NAT (home network (192.168) -> CPE NAT (100.64/10) -> ISP NAT ('real' public IPv4)) prevent hole punching.
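To make the check concrete, a minimal sketch of testing whether a WAN address falls in the RFC 6598 shared address range (the sample addresses below are made up):

```python
# If the address on the CPE's WAN port falls in 100.64.0.0/10 (RFC 6598),
# there is a second, carrier-grade NAT in the path that you cannot punch
# through.
import ipaddress

CGNAT_RANGE = ipaddress.ip_network("100.64.0.0/10")

def behind_cgnat(wan_addr: str) -> bool:
    return ipaddress.ip_address(wan_addr) in CGNAT_RANGE

print(behind_cgnat("100.72.13.5"))   # True:  ISP NAT in the path
print(behind_cgnat("203.0.113.7"))   # False: 'real' public IPv4
```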
Double NAT on one side is not that universal. Across Europe and Australia I've seen it maybe once on a residential connection. I'm sure it's used, but the comment about the US in the post above just doesn't match my experience.
Great for you for not having to experience it, but that doesn't mean it sucks any less for those less fortunate:
* https://community.roku.com/t5/Features-settings-updates/It-s...
* Discussion: https://news.ycombinator.com/item?id=35047624
You can't punch a hole over double NAT, because the second layer of NAT is not going to support UPnP.
Foreseeable, yet still somewhat surprising, that having a clean v4 address on the CPE has become a very privileged position.
Just the other day I was discouraging a youngster from manually populating his hosts file in order to circumvent a DMCA-related DNS block… what has the world come to?
For a car analogy:
The problem with car manufacturers selling spare parts is that by fixing a car, they're incentivised to keep producing bad cars.
This is a poor analogy. Historically there is a significant cost to making bad cars with frequent repair needs.
This is a poor statement. The cost is not in dispute; the bearer of it is.
Historically, car owners have had to pay for repairs.
Look at how the early-90s Ford Tempo's resale value compared to Toyotas of the era. Trash cars don't keep their value. Toyota could then charge a premium because they were known for quality.
Is resale value what the manufacturer wants? I mean, they want to sell new cars, after all…
They have a higher resale value because they have a reputation of lasting a long time, and people are thus perhaps more willing to pay a higher initial purchase price because they know their "investment" will last longer.
And while they may not be planning to sell their car after only a few years, knowing that they'll get back more of their "investment" is also probably sitting in the back of their mind ('just in case').
Resale values do have an impact on new car prices. The better a vehicle holds its value the easier it is for the company to charge more for a new car.
It's also worth considering that, for better or worse, very few people actually own their cars today. When you have a loan on it, the resale value becomes really important. If the manufacturer wants the kind of customer that buys a new car every few years, they'll need resale value that at least keeps up with the principal on the loan over that time.
I think that is an excessively negative take. Tailscale's value proposition is also "you can connect to your network wherever you are, safely, and others cannot". That does not go away because of IPsec.
Network- and location-based security is ultimately unworkable. It’s like if you, in order to work, had to go to a ”virtual office” to even send mail to your colleagues. Mail, and related internet-enabled services, should be accessible from anywhere, and be secured at the end points, not at the network layer. (Most attacks are internal, anyway.)
Why should you have access to the SSH host for my Pi?
Or, more to the point, the server that I use to run my RSS feed reader?
Or my NAS?
Tailscale makes these more secure and more accessible for me. They are never meant to have the world access them.
Now for email and a few other things, sure, their nature is that they need to access the world.
Because that is how the internet is meant to work. It is an end-to-end network. If SSH were not secure enough to handle this, it would need a secure replacement.
What is a NAS, if not a Network-Attached Storage, i.e. meant to be accessed from the network? The concept of a ”local”, ”secure” network is a dangerous illusion. Embrace ”zero trust” networking.
Most people do need to be on a VPN or in an office to work. That's entirely normal, and makes sense even if you also require authentication for applications.
It's not a problem specific to any kind of corporation, or to corporations per se, but to organizations, or even more broadly, to solutions.
Though, do you really think that having a solution to a problem is worse than just having the problem?
It is a problem if a company makes a lot of money ”solving” the problem, but:
• This does not really solve the problem, since a real solution would be to change the internet to make the problem go away
• A company making a lot of money gets to have an enormous influence on what is considered reasonable to standardize on. See for instance Google’s and Microsoft’s influence on things like the W3C. (Or if Tailscale is allowed to define what ”The New Internet” will be.)
It does indeed seem that Microsoft is making lots of money by defending the status quo.
All large incumbents defend the status quo, except when advocating for larger barriers to entry for new and smaller competitors.
So far as I’m aware, Tailscale has been at all times a good actor.
I have no problem criticizing tech companies, but I try to wait until they behave badly.
Wouldn't the point be that they're an indecent, possibly bad, actor by default, since they're a business at all rather than just creating or contributing to the protocols/standards that would resolve the issues their product relies on to exist? The only way they could be a good actor is if they're using the money from their sales to fund that initiative, with a plan to obsolete themselves.
I suppose if you follow that thread, though, a lot of businesses just shouldn't exist, except to fulfill the need they fill for the sake of those in need.
Companies are allowed to solve problems for a profit. People can choose to sell their time and energy or give it away. The choice is the default.
In fact, I prefer that capitalist model at this point, having seen countless OSS/nonprofit efforts turn into glorified abandonware.
At least the business has an interest in remaining a going concern and maintaining the stack.
No, we definitely don't want "automatic IPsec" (especially IPsec!), or really any enforced encryption at the network level, even if it's something sane at this moment like WireGuard. Look at old VPN protocols, or authentication schemes like RADIUS, which have glaring security holes and are impossible to fix because of compatibility issues, and they run at much smaller scales than the whole internet. Hell, the way the industry is solving TCP ossification problems is by throwing TCP away and reimplementing it on top of UDP; that should tell us something.
Your argument seems to be that we should never implement anything, because eventually it will become old and hard to move away from. That is an argument against anything new, and it is therefore hard to take seriously.
It’s an argument against complexity. IP had amazing longevity because of its simplicity and openness.
Even if something is open, complexity makes it almost as good as closed, as we can see with crazy complicated web standards for which there are few implementations.
I never thought of this. It forces me to rethink every negative post people made against DNSSEC that shaped my opinion. I still think that IPv6 and DNSSEC do more harm in practice than what they solve. Maybe the SCW podcast can do a deep dive on this together with somebody who is militantly pro-DNSSEC. <3 ...
edit: maybe even invite 2 or 3 DNSSEC advocates @tptacek :)
I don't think the analysis upthread should make you rethink DNSSEC, since it, too, is a centralized system; rather than being controlled by Avery Pennarun (you could do worse), it's controlled by an unholy alliance of world governments and companies like Verisign.
If we could find a credible DNSSEC advocate (for our audience; that is: a cryptography engineer, vulnerability researcher, or an engineering leader at a major firm), we would absolutely invite them on.
'teddyh below gave you links to two pro-DNSSEC resources; fun note: the latter source (Geoff Huston, one of the world's more respected networking researchers) has since then written this:
https://blog.apnic.net/2024/05/28/calling-time-on-dnssec/.
Regarding DNSSEC:
• Blog post: <https://blog.technitium.com/2023/05/for-dnssec-and-why-dane-...>
• As a podcast episode: <https://blog.apnic.net/2023/03/16/podcast-dnssec-the-case-fo...>
And, worse, incentivized to require users to use a "coordination server" which helps with the NAT and firewall traversal problem by being something you can reach from outbound-only clients. There's a lot of verbiage there, but the general idea seems to be that Tailscale sits at the middle of this as the means by which machines find each other.
There are other ways to do that.
There are dynamic DNS schemes, so you can give your machine, which only has a temporary IP address, a permanent name. That's been around for decades, and seems to have a bad reputation.
There are schemes with multiple coordination nodes that know about each other, and published lists of such nodes. The list may be out of date, but as long as the published list has one live node, you can connect and get updated. That's how Kademlia, which underlies Ethereum's network and some file-sharing systems, works. That's about 20 years old, and it sort of has a sketchy reputation.
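As a toy sketch of that bootstrap pattern (the node hostnames below are made up), one live entry in a possibly stale published list is enough to join the network and fetch a fresh list:

```python
# Try published (possibly stale) bootstrap nodes until one answers; a single
# live node is enough to join the network and learn current peers.
import socket

PUBLISHED_NODES = [
    ("bootstrap1.example.net", 4001),
    ("bootstrap2.example.net", 4001),
]

def find_live_node(nodes, timeout=2.0):
    for host, port in nodes:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host, port  # first reachable node is enough
        except OSError:  # covers DNS failure, refusal, and timeouts
            continue
    return None

print(find_live_node(PUBLISHED_NODES))
```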
It's possible to go only halfway, and separate discovery from transmission. PeerTube does that. You find a file to stream via ordinary HTTP, on a server you find by ordinary web-search means. Anybody can set up such a server. The actual streaming, for files wanted by many clients, is distributed, with people currently watching also sending out blocks to other people watching. This scales well, in case your video goes viral. It's not used much, though.
So it's definitely possible to do this without someone in the middle able to cut off your air supply.
How is trusting a dynamic DNS provider different from trusting Tailscale's coordination nodes?
Not everybody has to use the same dynamic DNS provider.
I.e., a form of perverse incentive, or the cobra effect. Endemic to capitalism, especially in infrastructure.
Companies like the ones you list solve people problems. Their business is about using abstraction to create customer experiences that match market demand. I want to say that "institutions" solve computer science problems, but it's much more complicated than that.
Cloudflare sells bulletproof vests