
.INTERNAL is now reserved for private-use applications

8organicbits
99 replies
20h26m

My biggest frustration with .internal is that it requires a private certificate authority. Lots of organizations struggle to fully set up trust for the private CA on all internal systems. When you add BYOD or contractor systems, it's a mess.

Using a publicly valid domain offers a number of benefits, like being able to use a free public CA like Let's Encrypt. Every machine will trust your internal certificates out of the box, so there is minimal toil.

Last year I built getlocalcert [1] as a free way to automate this approach. It allows you to register a subdomain, publish TXT records for ACME DNS certificate validation, and use your own internal DNS server for all private use.

[1] https://www.getlocalcert.net/
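For anyone wondering what the DNS-01 validation actually looks like under the hood: the CA hands your ACME client a token, the client combines it with the account key thumbprint, and the base64url-encoded SHA-256 of that string is what gets published as the _acme-challenge TXT record (RFC 8555 §8.4). A rough sketch in Python, with placeholder values:

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
    # RFC 8555 section 8.4: key authorization = token "." thumbprint of the account key;
    # the TXT record published at _acme-challenge.<name> is base64url(SHA-256(key authorization)).
    key_authorization = f"{token}.{account_key_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Placeholder inputs; a real ACME client (certbot, lego, etc.) supplies these.
print(dns01_txt_value("example-token", "example-account-key-thumbprint"))
```

Since the validation happens purely over DNS, the host that ends up using the cert never has to be reachable from the internet.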

yjftsjthsd-h
42 replies
17h56m

Do you mean to say that your biggest frustration with HTTPS on .internal is that it requires a private certificate authority? Because I'm running plain HTTP to .internal sites and it works fine.

lysace
21 replies
17h37m

There's some "every packet shall be encrypted, even in minimal private VPCs" lore going on. I'm blaming PCI-DSS.

bruce511
9 replies
17h16m

The big problem with running unencrypted HTTP on a LAN is that it's terribly easy for (most) LANs to be compromised.

Let's start with the obvious; wifi. If you're visiting a company and ask the receptionist for the wifi password you'll likely get it.

Next are ethernet ports. Sitting waiting in a meeting room, plug your laptop into the ethernet port and you're in.

And of course it's not just hardware, any software running on any machine makes the LAN just as vulnerable.

Sure, you can design a LAN to be secure. You can make sure there's no way to get onto it. But the -developer- and -network maintainer- are 2 different guys, or more likely different departments. As a developer are you convinced the LAN will be as secure in 10 years as it is today? 5 years? 1 year after that new intern arrives and takes over maintenance 6 weeks in?

What starts out as "minimal private VPC" grows, changes, is fluid. Treating it as secure today is one thing. Trusting it to remain secure 10 years from now is another.

In 99.9% of cases your LAN traffic should be secure. This is the message -developers- need to hear. Don't rely on some other department to secure your system. Do it yourself.

slimsag
5 replies
13h58m

Also, make sure your TLS certificates are hard-coded/pinned in your application binary. Just like the network, you really cannot trust what is happening on the user's system.

This way you can ensure you as the developer have full control over your applications' network communication; by requiring client certificates issued by a CA you control, you can assert there is no MITM even if a sysadmin, user, or malware tries to install a proxy root CA on the system.

Finally, you can add binary obfuscation / anticheat mechanisms used commonly in video games to ensure that even if someone is familiar with the application in question they cannot alter the certificates your application will accept.

Lots of mobile banking apps, for example, do this for maximal security guarantees.
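For illustration only, the simplest form of pinning is comparing the server's leaf certificate against a fingerprint you shipped with the app; real implementations hook this into the TLS handshake of the actual connection rather than fetching the cert separately. A rough Python sketch, with a placeholder host and fingerprint:

```python
import hashlib
import ssl

# Placeholder: SHA-256 of the DER-encoded certificate you expect the server to present.
PINNED_SHA256 = "0" * 64

def server_cert_fingerprint(host: str, port: int = 443) -> str:
    # Fetch the presented leaf certificate (no chain validation here) and hash its DER form.
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def pin_matches(host: str) -> bool:
    return server_cert_fingerprint(host) == PINNED_SHA256

print(pin_matches("api.example.internal"))  # placeholder hostname
```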

PokestarFan
1 replies
12h55m

At some point you have to wonder if your app even matters that much.

bruce511
0 replies
4h38m

The App probably not. The server maybe, the data probably.

swiftcoder
0 replies
9h30m

In practice pinning tends to be very "best effort", if not outright disadvantageous.

All our apps had to auto-disable pinning less than a year after the build date, because if the user hadn't updated the app by the time we had to renew all our certs... they'd be locked out.

Also dealt with the fallout from a lovely little internet-of-things device that baked cert pinning into the firmware, but after a year on store shelves the clock battery ran out, so they booted up in 1970 and decided the pinned certs wouldn't become valid for ~50 years :D

frogsRnice
0 replies
12h1m

Pinning is very complex; there is always the chance that you forget to update the pins and perform a denial of service against your own users. At the point where the device itself is compromised, you can't really assert anything. Furthermore, there is always the risk that your developers implement pinning incorrectly and introduce a chain validation failure.

The anticheat/obfuscation mechanisms used by lots of mobile apps are also trivial to bypass using instrumentation - i.e. Frida codeshare. I know you aren't implying that people should use client-side controls to protect an app running on a device and an environment that they control, but in my experience even some technical folk will try to do this.

Too
0 replies
9h25m

This is way overkill, unless you are making a nuclear rocket launch application. If you cannot trust the system root CA, the whole internet breaks down.

You will also increase the risk that your already understaffed ops-team messes up and creates even worse exposure or outages, while they are trying to figure out what ssl-keygen does.

Spooky23
1 replies
12h59m

The big issue with encrypted HTTP on the local LAN is that you’re stuck running a certificate authority, ignoring TLS validation, or exposing parts of your network in the name of transparency.

Running a certificate authority is one of those "a minute to learn, a lifetime to master" scenarios.

You are often trading a "people can sniff my network" scenario for a "compromise the CA someone set up 10 years ago that we don't touch" scenario.

bruce511
0 replies
4h40m

I agree that setting up a self-signed CA is hard, and harder to keep going.

However, the DNS challenge allows you to map an internal name to an IP address. The only real information that leaks is the subnet address of my LAN. And given the choice of that or unencrypted traffic, I'll take that all day long.

gorgoiler
0 replies
14h23m

Well said. I used to be of the mindset that if I ran VLANs I could at least segregate the good guys from the evil AliExpress wifi connected toasters. Now everything feels like it could become hostile at any moment and so, on that basis, we all share the same network with shields up as if it were the plain, scary Internet. It feels a lot safer.

I guess my toaster is going to hack my printer someday, but at least it won’t get into my properly-secured laptop that makes no assumptions the local network is “safe”.

kortilla
6 replies
17h4m

Hoping datacenter to datacenter links are secure is how the NSA popped Google.

Turn on crypto, don’t be lazy

otabdeveloper4
5 replies
16h15m

Pretty sure state-level actors sniffing datacenter traffic is literally the very last of your security issues.

This kind of theater actively harms your organization's security, not helps it. Do people not do risk analysis anymore?

shawnz
1 replies
15h55m

Taking defense in depth measures like using https on the local network is "theatre" that "actively harms your organization's security"? That seems like an extreme opinion to me.

Picking some reasonable best practices like using https everywhere for the sake of maintaining a good security posture doesn't mean that you're "not doing risk analysis".

the8472
0 replies
7h9m

I have seen people disabling all cert validation in an application because SSL was simultaneously required and no proper CA was provided for internal things. The net effect was thus that even the traffic going to the internet was no longer validated.

soraminazuki
0 replies
8h31m

NSA sniffs all traffic through various internet choke points in what's known as upstream surveillance. It's not just data center traffic.

https://www.eff.org/pages/upstream-prism

These kinds of risks are obvious, real, and extensively documented. I can't imagine why anyone serious about improving security for everyone would want to downplay and ridicule them.

kortilla
0 replies
3h5m

It’s not theatre, it’s real security. And state level actors are absolutely not the only one capable of man in the middle attacks.

You have:

- employees at ISPs

- employees at the hosting company

- accidental network misconfigurations

- one of your own compromised machines now part of a ransomware group

- the port you thought was “just for internal” that a dev now opens for some quick testing from a dev box

Putting anything in open comms is one of the dumbest things you can do as an engineer. Do your job and clean that shit up.

It’s funny you mention risk analysis; plaintext traffic is one of the easiest things to compromise.

TimTheTinker
0 replies
6h30m

Found the NSA goon.

Seriously, your statement is demonstrably wrong. That's exactly the sort of traffic the NSA actively seeks to exploit.

yarg
1 replies
16h27m

Blame leaked documents from the intelligence services.

No one really bothered until it was revealed that organisations like the NSA were exfiltrating unencrypted internal traffic from companies like Google with programs like PRISM.

baq
0 replies
6h46m

Echelon was known about before Google was even a thing. I remember people adding Usenet headers with certain keywords. Wasn’t much, but it was honest work.

unethical_ban
0 replies
10h46m

That's some "it's okay to keep my finger on the trigger when the gun is unloaded" energy.

j1elo
14 replies
17h14m

Try running anything more complicated than a plain and basic web server! See what happens if you attempt to serve something that browsers deem to require a "Secure Context": they will refuse to run it over plain HTTP.

For example, you won't be able to run internal videocalls (no access to webcams!), or a web page able to scan QR codes.

Here's the full list:

* https://developer.mozilla.org/en-US/docs/Web/Security/Secure...

A true hassle for internal testing between hosts, to be honest. I just cannot run an in-development video app on my PC and connect from a phone or laptop to do some testing, without first worrying about certs at a point in development where they are superfluous and a loss of time.

akira2501
13 replies
16h49m

localhost is a secure context. so.. presumably we're just waiting for .internal to be added to the white list.

Too
5 replies
10h37m

No. The concept of a DMZ died decades ago. You could still be MITM within your company intranet. Any system designed these days should follow zero-trust principles.

tsimionescu
4 replies
8h58m

Sure, but people still need to test things, and HTTPS greatly complicates things. Browsers' refusal to make it possible to run anything unencrypted when you know what you're doing is extremely annoying, and has caused significant losses of productivity throughout the industry.

If they're so worried about users getting duped to activate the insecure mode, they could at least make it a compiler option and provide an entirely separate download in a separate place.

Also, don't get me started on HSTS and HSTS preloading making it impossible to inspect your own traffic with entities like Google. It's shameful that Firefox is even more strict about this idiocy than Chrome.

the8472
2 replies
7h13m

To inspect your own traffic you can use SSLKEYLOGFILE and then load it into Wireshark.
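If the client is under your control, Python makes this easy to wire up without even touching the environment: SSLContext has a keylog_filename attribute (3.8+, OpenSSL 1.1.1+) that writes the same format SSLKEYLOGFILE would. A small sketch against a placeholder host; point Wireshark's TLS "(Pre)-Master-Secret log filename" preference at the file:

```python
import socket
import ssl

ctx = ssl.create_default_context()
ctx.keylog_filename = "/tmp/tls-keys.log"  # same NSS key log format as SSLKEYLOGFILE

host = "example.com"  # placeholder
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n")
        print(tls.recv(200))
```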

tsimionescu
1 replies
6h22m

Most apps don't support SSLKEYLOGFILE. OpenSSL, the most popular TLS library, doesn't support it.

haradion
0 replies
15m

OpenSSL does provide a callback mechanism to allow for key logging, but the application does have to opt in. IIRC, at least Curl does support it by default.

freedomben
0 replies
3h24m

Indeed. Nothing enrages me more as a user when my browser refuses to load a page and doesn't give me any way to override it.

Whose computer is this? I guess the machine I purchased doesn't belong to me, but instead belongs to the developer of the browser, who has absolutely no idea what I'm trying to do, what my background is and qualifications and what my needs are? It seems absurd to give that person the ultimate say over me on my system, especially if they're going to give me some BS about protecting me from myself for my own good or something like that. Yet, that is clearly the direction things are headed.

JonathonW
4 replies
16h31m

Unlikely. Localhost can be a secure context because localhost traffic doesn't leave your local machine; .internal names have no guarantees about where they go (not inconceivable that some particularly "creative" admin might have .internal names that resolve to something on the public internet).

Wicher
3 replies
9h37m

One can resolve "localhost" (even via an upstream resolver) to an arbitrary IP address. At least on my Linux system "localhost" only seems to be specially treated by systemd-resolved (with a cursory attempt I didn't succeed in getting it to use an upstream resolver for it).

So it's not a rock-hard guarantee that traffic to localhost never leaves your system. It would be unconventional and uncommon for it to, though, except for the likes of us who like to ssh-tunnel all kinds of things on our loopback interfaces :-)

The sweet spot of security vs convenience, in the case of browsers and awarding "secure origin status" for .internal, could perhaps be on a dynamic case by case basis at connect time:

- check if it's using a self-signed cert
- offer TOFU procedure if so
- if not, verify as usual

Maaaaybe check whether the connection is to an RFC1918 private range address as well. Maybe. It would break proxying and tunneling. But perhaps that'd be a good thing.

This would just be for browsers, for the single purpose of enabling things like serviceworkers and other "secure origin"-only features, on this new .internal domain.
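For what it's worth, the RFC 1918 check mentioned above is trivial; Python's ipaddress module flags that space (along with a few other reserved ranges), e.g.:

```python
import ipaddress

# is_private covers the RFC 1918 blocks plus loopback, link-local and similar reserved ranges.
for addr in ("10.1.2.3", "192.168.0.10", "8.8.8.8"):
    print(addr, ipaddress.ip_address(addr).is_private)  # True, True, False
```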

thayne
0 replies
2h16m

No, you can't. Besides the /etc/hosts point mentioned in the sibling, localhost is often hard-coded to use 127.0.0.1 without doing an actual DNS lookup.

im3w1l
0 replies
8h56m

localhost is pretty special in that it's like the only domain typically defined in a default /etc/hosts.

JonathonW
0 replies
2h9m

One can resolve "localhost" (even via an upstream resolver) to an arbitrary IP address. At least on my Linux system "localhost" only seems to be specially treated by systemd-resolved (with a cursory attempt I didn't succeed in getting it to use an upstream resolver for it).

The secure context spec [1] addresses this-- localhost should only be considered potentially trustworthy if the agent complies with specific name resolution rules to guarantee that it never resolves to anything except the host's loopback interface.

[1] https://w3c.github.io/webappsec-secure-contexts/#localhost

miah_
0 replies
6h27m

Years back I ran into an issue at work because somebody named their computer "localhost" on a network with automatic DNS registration. Because of DNS search path configuration it would resolve. So, "localhost" ended up resolving to something other than an address on 127.0.0.0/8! It was a fun discovery and fixed soon after I reported it.

TeMPOraL
0 replies
6h26m

Doesn't matter for mixed content, like e.g. when you run a client-side only app that happens to be loaded from a public domain over HTTPS, and want it to call out to an API endpoint running locally. HTTP won't fly. And good luck reverse-proxying it without a public CA cert either.

jve
1 replies
8h55m

I consider HTTPS to be easier to run - you get less trouble in the end.

As mentioned, some browser features are HTTPS only. You get security warnings on HTTP. Many tools now default to HTTPS - like newer SQL Server drivers. Dev env must resemble prod very closely, so having HTTP in DEV and HTTPS in prod is asking for pain and trouble. It forces you to have some kind of expiration registry/monitoring and renewal procedures. And you get to go through the dev env first, gain confidence, and then prod.

Then there are systems where client certificate is mandatory and you want to familiarize yourself already in dev/test env.

Some systems even need additional configuration to allow OAuth via HTTP, and that makes me feel dirty, so I'd rather not do it. Why do it if PROD won't have HTTP? And if one didn't know such configuration must be done, you'd be left troubleshooting that system, trying to figure out why it doesn't work with your simple setup.

Yeah, we have an internal CA set up, so issuing certs is pretty easy and mostly automated, and once you go HTTPS all in, you get the experience of why/how things work and why they may not, and gain more experience troubleshooting HTTPS stuff. You have no choice actually - the world has moved to TLS-secured protocols and there is no way around getting yourself familiar with security certificates.

8organicbits
0 replies
7h44m

At my first job out of college we built an API and a couple official clients for it. The testing endpoint used self-signed certs so we had to selectively configure clients to support it. Right before product launch we caught that one of our apps was ignoring certificate verification in production too due to a bug. Ever since then I've tried to run publicly valid certificates on all endpoints to eliminate those classes of bugs. I still run into accidentally disabled cert validation doing security audits, it's a common mistake.
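To make the bug class concrete, here's a hypothetical sketch (using the requests library; the service name and environment variables are invented for illustration) of how a test-only escape hatch quietly turns off validation everywhere:

```python
import os
import requests

API_BASE = os.environ.get("API_BASE", "https://api.example.com")  # hypothetical service

def api_get(path: str) -> requests.Response:
    # Intended only for the self-signed test endpoint, but the default value
    # means certificate verification is silently skipped in production too.
    allow_self_signed = os.environ.get("ALLOW_SELF_SIGNED", "1") == "1"
    return requests.get(f"{API_BASE}{path}", verify=not allow_self_signed, timeout=10)
```

With publicly valid certs on every endpoint there's no reason for the flag to exist, so there's nothing to forget to turn back off.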

this_user
0 replies
17h8m

A lot of services default to HTTPS. For instance, try setting up an internal Gitlab instance with runners, pipelines, and package/container registries that actually works. It's an absolute nightmare, and some things outright won't work. And if you want to pull images from HTTP registries with Docker, you have to enable that on every instance for each registry separately. You'd be better off registering a real domain, using Let's Encrypt with the DNS challenge, and setting up an internal DNS for your services. That is literally an order of magnitude less work than setting up HTTP.

IshKebab
0 replies
9h38m

A lot of modern web features now require HTTPS.

8organicbits
0 replies
2h13m

If you're on a laptop or phone that switches between WiFi networks then you are potentially spilling session cookies and other data unencrypted onto other networks that also happen to resolve .internal. HTTPS encrypts connections, but it also authenticates servers. The latter is important too.

prussian
17 replies
16h16m

Just be mindful that any certs you issue in this way will be public information[1] so make sure the domain names don't give away any interesting facts about your infrastructure or future product ideas. I did this at my last job as well and I can still see them renewing them, including an unfortunate wildcard cert which wasn't me.

[1] https://crt.sh/

Helmut10001
14 replies
15h36m

Just use wildcard certs and internal subdomains remain internal information.

qmarchi
8 replies
13h25m

There's a larger risk that if someone breaches a system with a wildcard cert, then you can end up with them being able to impersonate _every_ part of your domain, not just the one application.

eru
3 replies
7h33m

Can't you have a limited wildcard?

Something like *.for-testing-only.company.com?

kevincox
2 replies
7h12m

Yes, but then you are putting more information into the publicly logged certificate. So it is a tradeoff between scope of certificate and data leak.

I guess you can use a pattern like {human name}.{random}.internal but then you lose memorability.

lacerrr
0 replies
6h45m

Made up problem, that approach is fine.

8organicbits
0 replies
6h49m

I've considered building tools to manage decoy certificates, like it would register mail.example.com if you didn't have a mail server, but I couldn't justify polluting the cert transparency logs.

qwertox
1 replies
3h54m

I issue a wildcard cert for *.something.example.com.

All subdomains which are meant for public consumption are at the first level, like www.example.com or blog.example.com, and the ones I use internally (or even privately accessible on the internet, like xmpp.something.example.com) are not up for discovery, as no public records exist.

Everything at *.something.example.com, if it is supposed to be privately accessible on the internet, is resolved by a custom DNS server which does not respond to `ANY`-requests and logs every request. You'd need to know which subdomains exist.

something.example.com has an `NS`-record entry with the domain name which points to the IP of that custom DNS server (ns.example.com).

The intranet also has a custom DNS server which then serves the IPs of the subdomains which are only meant for internal consumption.

brewmarche
0 replies
2h5m

This is the DNS setup I’d have in mind as well.

Regarding the certificates, if you don’t want to set up stuff on clients manually, the only drawback is the use of a wildcard certificate (which when compromised can be used to hijack everything under something.example.com).

An intermediate CA with name constraints (can only sign certificates with names under something.example.com) sounds like a better solution if you deem the wildcard certificate too risky. Not sure which CA can issue it (Let's Encrypt is probably out) and how well supported it is.
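As a sketch of what that looks like in practice, here's the name constraints extension being added to a CA certificate with Python's cryptography package (self-signed here just to keep it short; the domain and validity are placeholders). Clients that honour the extension will reject any leaf this CA signs for names outside the permitted subtree:

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Constrained internal CA")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed for brevity; normally signed by your offline root
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    .add_extension(
        # This CA may only sign names under something.example.com.
        x509.NameConstraints(
            permitted_subtrees=[x509.DNSName("something.example.com")],
            excluded_subtrees=None,
        ),
        critical=True,
    )
    .sign(key, hashes.SHA256())
)
print(cert.extensions.get_extension_for_class(x509.NameConstraints).value)
```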

politelemon
1 replies
11h21m

It's the opposite - there is a risk, but not a larger risk. Environment traversal is easier through a certificate transparency log; there is almost zero work to do. Through a wildcard compromise, the environment is not immediately visible. It's much safer to use wildcard certs for internal use.

ixfo
0 replies
9h17m

Environment visibility is easy to get. If you pwn a box which has foo.internal, you can now impersonate foo.internal. If you pwn a box which has *.internal, you can now impersonate super-secret.internal and everything else, and now you're a DNS change away from MITM across an entire estate.

Security by obscurity while making the actual security of endpoints weaker is not an argument in favour of wildcards...

ivankuz
4 replies
3h16m

A fun tale about wildcard certificates for internal subdomains:

The browser will gladly reuse an http2 connection with a resolved IP address. If you happen to have many subdomains pointing to a single ingress / reverse proxy that returns the same certificate for different Host headers, you can very well end up in a situation where the traffic will get messed up between services. To add to that - debugging that stuff becomes kind of wild, as it will keep reusing connections between browser windows (and maybe even different Chromium browsers)

I might be messing up technical details, as it's been a long time since I've debugged some grpc Kubernetes mess. All I wanted to say is that having an exact certificate instead of a wildcard is also a good way to ensure your traffic goes to the correct place internally.

nightpool
2 replies
2h58m

Sounds like you need to get better reverse proxies...? Making your site traffic RELY on the fact that you're using different certificates for different hosts sounds fragile as hell and it's just setting yourself up for even more pain in the future

ivankuz
1 replies
2h14m

It was the latest nginx at the time. I actually found a rather obscure issue on Github that touches on this problem, for those who are curious:

https://github.com/kubernetes/ingress-nginx/issues/1681#issu...

We discovered a related issue where we have multiple ssl-passthrough upstreams that only use different hostnames. [...] nginx-ingress does not inspect the connection after the initial handshake - no matter if the HOST changes.

That was 5-ish years ago though. I hope there are better ways than the cert hack now.

ploxiln
0 replies
1h14m

That's a misunderstanding in your use of this ingress-controller "ssl-passthrough" feature.

This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.

SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation

So if you want multiple subdomains handled by the same ip address and using the same wildcard TLS cert, and chrome re-uses the connection for a different subdomain, nginx needs to handle/parse the http, and http-proxy to the backends. In this ssl-passthrough mode it can only look at the SNI host in the initial TLS handshake, and that's it, it can't look at the contents of the traffic. This is a limitation of http/tls/tcp, not of nginx.

therein
0 replies
36m

There is definitely that. There is also some sort of strange bug with Chromium based browsers where you can get a tab to entirely fail making a certain connection. It will not even realize it is not connecting properly. That tab will be broken for that website until you close that tab and open a new one to navigate to that page.

If you close that tab and bring it back with command+shift+t, it still will fail to make that connection.

I noticed sometimes it responds to Close Idle Sockets and Flush Socket Pools in chrome://net-internals/#sockets.

I believe this regression came with Chrome 40 which brought H2 support. I know Chrome 38 never had this issue.

moontear
0 replies
6h18m

I wish there was a way to remove public information such as this. Just like historical website ownership records. Maybe interesting for research purposes, but there is so much stuff in public records I don't want everyone to have access to. Should have thought about that before creating public records - but one may not be aware of all the ramifications of e.g. just creating an SSL cert with letsencrypt or registering a random domain name without privacy extensions.

wkat4242
12 replies
16h52m

The problem with internal CAs is also that it's really hard to add them on some OSes now. Especially on android since version 7 IIRC, you can no longer get certs into the system store, and every app is free to ignore the user store (I think it was even the default to ignore it). So a lot of apps will not work with it.

thaumasiotes
9 replies
9h10m

The problem with internal CAs is also that it's really hard to add them on some OSes now. Especially on android since version 7 IIRC

That's because the purpose of certificate pinning is to protect software from the user. Letting you supply your own certificates would defeat the purpose of having them.

okanat
5 replies
7h8m

Protect the software from the user? Why are you giving them the software then?

TeMPOraL
2 replies
6h32m

Most software is tools of control and exploitation, and remains in an adversarial relationship with its users. You give software to users to make them make money for you; you protect the software from users so they don't cut you out, or use software to do something you'd rather they don't do.

Software that isn't like that is in a minority, and most of it is only used to build software that is like that.

cobbal
1 replies
5h0m

It's interesting that cert pinning cuts both ways though. It can also be a tool to give users power against the IT department (typically indistinguishable from malware)

TeMPOraL
0 replies
2h24m

Cert pinning often annoyingly works against both - software devs are a third party to both the organizational users and their IT dept overlords.

Trusted computing is similar, too. It's a huge win for the user in terms of security, as long as the user owns the master key and can upload their own signatures. If not, then it suddenly becomes a very powerful form of control.

The more fundamental issue is the distinction between "user" and "owner" of a computer - or its component, or a piece of software - as they're often not the same people. Security technologies assert and enforce control of the owner; whether that ends up empowering or abusive depends on who the owners are, and why.

noirscape
0 replies
6h15m

A lot of mobile software is just a UI around an external web API. The main reason why Android makes it difficult to get the OS to accept an external certificate (you need root for it) is that without it, you can just do a hosts hack through a VPN/DNS to redirect the app to your own version of that API. Which app manufacturers want to prevent, since it's a really easy way to snoop on what endpoints an app is calling and to, say, build your own API clone of that app (which is desirable if you're, say, self-hosting an open source server clone of said software... but all the official applications are owned by the corporate branch and don't let you self-configure the domain/reduce the experience when you point it to a self-hosted domain).

It's extremely user-hostile since Android has a separate user store for self-signed CAs, but apps are free to ignore the user store and only accept the system store. I think by default only like, Chrome accepts the user store?

evandrofisico
0 replies
6h16m

For example, to make it harder to reverse engineer the protocol between the app and the server.

Arch-TK
1 replies
7h6m

Certificate pinning and restricting adding custom certificates to your OS except if you're using MDM are two completely unrelated things. Overriding system trust doesn't affect certificate pinning and certificate pinning is no longer recommended anyway.

freedomben
0 replies
3h34m

They are certainly different things, but they're not unrelated. The inability of the user to change the system trust store is part of why certificate pinning is no longer (broadly) recommended.

Terr_
1 replies
15h4m

Speculating a bit out of my depth here, but I'm under the impression that most of those sometimes-configurable OS-level CA lists are treated as "trust anything consistent with this data", as opposed to "only trust this CA record for these specific domain-patterns because that's the narrow purpose I chose to install it for."

So there are a bunch of cases where we only want the second (simpler, lower-risk) case, but we have to incur all the annoyance and risk and locked-down-ness of the first use-case.

8organicbits
0 replies
8h1m

Yes! Context-specific CA trust would be great, but AFAIK isn't possible yet. Even name constraints, which are domain name limitations a CA or intermediate cert places on itself, are only slowly being supported by relevant software [1].

As a contractor, I'll create a per-client VM for each contract and install any client network CAs only within that VM.

[1] https://alexsci.com/blog/name-non-constraint/

layer8
10 replies
6h10m

I don’t understand the frustration. The use of .internal is explicitly for when you don’t want a publicly valid domain. Nobody is forcing anyone to use .internal otherwise.

pas
6 replies
5h19m

the frustration comes when non-corporate-provisioned clients get on the .internal network and have trouble using the services because of TLS errors (or the problem is lack of TLS)

and the recommendation is to simply do "*.internal.example.com" with LetsEncrypt (using DNS-01 validation), so every client gets the correct CA cert "for free"

...

obviously if you want mTLS, then this doesn't help much. (but still, it's true that using a public domain has many advantages, as having an airgapped network too)

layer8
4 replies
5h2m

You’re basically saying that .internal can cause frustration when it is used without good reason. Fair enough, but also not surprising. When it is used for the intended reasons though, then there’s just no other solution. It’s a trade-off between conflicting goals. “Simply do X instead” doesn’t remove the trade-off.

nightpool
3 replies
2h54m

What do you see as the intended reasons with no other solutions?

NegativeK
0 replies
41m

As a side point, there _needs_ to be something equivalent. People were doing all sorts of bad ideas before, and they had all the problems of .internal as well as the additional problems the hacks were causing -- like using .dev and then dealing with the fallout when the TLD was registered.

8organicbits
0 replies
2h4m

The biggest benefit of .internal IMO is that it is free to use. Free domains used to be a thing, but after the fall of Freenom you're stuck with free subdomains.

8organicbits
0 replies
4h38m

I'll add that anyone using VMs or containers will also run into trust issues without extra configuration. I've seen lots of contractors resort to just ignoring certificate warnings instead of installing the corporate certs for each client they work with.

thayne
2 replies
2h27m

My frustration is because using a private CA is more difficult than it should be.

You can't just add the CA to system trust stores on each device, because some applications, notably browsers and Java, use their own trust stores that you have to add it to.

You also can't scope the CA to just .internal, which means in a BYOD environment, you have to require your employees to trust you not to sign certs for other domains.

And then there is running the CA itself, which is more difficult than using Let's Encrypt.

fleminra
1 replies
1h45m

The Name Constraints extension can limit the applicability of a CA cert to certain subdomains or IP addresses.

thayne
0 replies
1h0m

How well supported is that?

mschuster91
4 replies
19h3m

Lots of organizations struggle to fully set up trust for the private CA on all internal systems.

Made worse by the fact phone OSes have made it very difficult to install CAs.

booi
3 replies
18h10m

And on some platforms and configurations, impossible.

Same with the .dev domain

jhardy54
1 replies
17h26m

.dev isn’t a TLD for internal use though, do you have the same problem when you use .test?

dijit
0 replies
4h14m

gonna go ahead and cast shade at Google because of how they handled that.

Their original application for .dev was written to "ensure its reserved use for internal projects - since it is a common internal TLD for development" - then once granted a few years later they started selling domains with it.

** WITH HSTS PRELOADING ** ensuring that all those internal dev sites they were aware of would break.

kortilla
0 replies
17h3m

.dev is a real domain

francislavoie
0 replies
14h39m

No, that's a public CA. No public domain registrar will be allowed to sell .internal domains, so no public DNS servers will resolve .internal, and that's a requirement for Let's Encrypt to validate that you control the domain. So you must use a private CA (one that you create yourself, with something like Smallstep, Caddy, or OpenSSL commands) and you'll need to install that CA's root certificate on any devices you want to be able to connect to your server(s) that use .internal.

TheRealPomax
1 replies
19h22m

I'm pretty sure that if letsencrypt localhost certs work, they'll work fine with .internal too?

merb
0 replies
19h18m

Let's Encrypt does not support certs for localhost.

xer0x
0 replies
19h36m

Oh neat, thanks for sharing this idea

jacooper
0 replies
16h32m

This is why I'm using a FQDN for my home lab. I'm not going to set up a private CA for this; I can just use ACME-dns and get a cert that will work everywhere, for free!

derefr
0 replies
1h30m

It would be impossible for .internal domains to be publicly CAed, because they're non-unique; the whole point of .internal domains is that, just like private-use IP space, anyone can reuse the same .internal DNS names within their own organization.

X.509 trust just doesn't work if multiple entities can get a cert for the same CN under the same root-of-trust, as then one of the issuees can impersonate the other.

If public issuers would sign .internal certs, then presuming you have access to a random org's intranet, you could MITM any machine in that org by first setting up your own intranet with its own DNS, creating .internal records in it, getting a public issuer to issue certs for those domains, and then using those certs to impersonate the .internal servers in the org-intranet you're trying to attack.

AndyMcConachie
0 replies
9h27m

If you read the document that originally led the ICANN Board to reserve .INTERNAL (SAC113) you will find this exact sentiment.

The SSAC's recommendation is to only use .INTERNAL if using a publicly registered domain name is not an option. See Section 4.2.

https://itp.cdn.icann.org/en/files/security-and-stability-ad...

7bit
0 replies
8h56m

My biggest frustration with .internal is that it requires a private certificate authority

So don't use it?

jcrites
45 replies
21h16m

Are there any good reasons to use a TLD like .internal for private-use applications, rather than just a regular gTLD like .com?

It's nice that this is available, but if I was building a new system today that was internal, I'd use a regular domain name as the root. There are a number of reasons, and one of them is that it's incredibly nice to have the flexibility to make a name visible on the Internet, even if it is completely private and internal.

You might want private names to be reachable that way if you're following a zero-trust security model, for example; and even if you aren't, it's helpful to have that flexibility in the future. It's undesirable for changes like these to require re-naming a system.

Using names that can't be resolved from the Internet feels like all downside. I think I'd be skeptical even if I was pretty sure that a given system would not ever need to be resolved from the Internet. [Edit:] Instead, you can use a domain name that you own publicly, like `example.com`, but only ever publish records for the domain on your private network, while retaining the option to publish them publicly later.

When I was leading Amazon's strategy for cloud-native AWS usage internally, we decided on an approach for DNS that used a .com domain as the root of everything for this reason, even for services that are only reachable from private networks. These services also employed regular public TLS certificates too (by default), for simplicity's sake. If a service needs to be reachable from a new network, or from the Internet, then it doesn't require any changes to naming or certificates, nor any messing about with CA certs on the client side. The security team was forward-thinking and was comfortable with this, though it does have tradeoffs, namely that the presence of names in CT logs can reveal information.

ghshephard
16 replies
20h5m

Number one reason that comes to mind is you prevent the possibility of information leakage. You can't screw up your split-dns configuration and end up leaking your internal IP space if everything is .internal.

It's much the same reason why some very large IPv6 services deploy some protected IPv6 space in RFC 4193 FC00::/7 space. Of course you have firewalls. And of course you have all sorts of layers of IDS and air-gaps as appropriate. But, if by design you don't want to make this space reachable outside the enterprise - the extra steps are a belt and suspenders approach.

So, even if I mess up my firewall rules and do leak a critical control point: FD41:3165:4215:0001:0013:50ff:fe12:3456 - you wouldn't be able to route to it anyways.
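(For reference, RFC 4193 asks you to generate that fd00::/8 prefix with a random 40-bit Global ID, which is what makes collisions between organizations unlikely. A quick sketch of generating your own /48:)

```python
import ipaddress
import secrets

def random_ula_prefix() -> ipaddress.IPv6Network:
    # RFC 4193: fc00::/7 with the L bit set (i.e. fd00::/8) plus a random 40-bit
    # Global ID, yielding a /48 that will never be routable on the public internet.
    packed = bytes([0xFD]) + secrets.token_bytes(5) + bytes(10)
    return ipaddress.IPv6Network((packed, 48))

print(random_ula_prefix())  # e.g. fd41:3165:4215::/48
```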

Same thing with .internal - that will never be advertised externally.

kelnos
5 replies
16h34m

Presumably you don't trust the CA that signed the certificate on the server at the company you're visiting. As long as you heed the certificate error and don't visit the site, you're fine.

hsbauauvhabzb
3 replies
12h32m

So we’re back to trusting the user?

0l
2 replies
7h52m

Use HSTS, browsers are specifically designed not to let users bypass these.

hsbauauvhabzb
1 replies
6h43m

HSTS forces encryption; it has no impact on certificate invalidity, at least to my knowledge.

0l
0 replies
5h45m

Visit your .internal site -> website uses TLS cert signed by root CA that is preloaded on your device. Succeeds and HSTS flag is set.

Visit other .internal site -> uses TLS cert NOT signed by root CA that is preloaded on your device -> certificate error, and cannot be bypassed due to HSTS.

thayne
0 replies
2h4m

Now suppose you are a contractor who did some work for company A, then went to do some work for company B, and still have some cookies set from A's internal site.

fulafel
3 replies
13h11m

Yep, ambiguous addressing doesn't save you, same as 10.x IPv4 networks. And one day you'll need to connect or merge or otherwise coexist with disparate uses if it's a common one (like in .internal and 10.x)...

kevincox
2 replies
7h7m

IPv6 solves this, as you are strongly recommended to use a random component at the top of the internal reserved space. So the chance of a collision is quite low.

pas
0 replies
5h6m

there's some list of ULA ranges allocated to organizations, no?

edit: ah, unfortunately it's not really standard, just a grassroots effort https://ungleich.ch/u/projects/ipv6ula/

fulafel
0 replies
1h22m

There's usually little reason to use reserved space vs internet addresses, unless you just want to relive the pain of NAT+IPv4. The exception is if you lack PI space and can't cope with potential renumbering.

viraptor
0 replies
10h31m

Ideally, you use "testing.company-name.internal" for that kind of things. (Especially if you think you'll ever end up interacting at that level)

thebeardisred
0 replies
15h46m

May god have mercy on the person using this in their mobile applications.

mrkstu
0 replies
15h17m

I'm assuming you wouldn't import their CA as authoritative just to use their wifi...

dudus
0 replies
17h55m

Great question. I think they leak but this happens regardless.

bawolff
5 replies
20h32m

I think there is a benefit that it reduces possibility of misconfiguration. You can't accidentally publish .internal. If you see a .internal name, there is never any possibility of confusion on that point.

thebeardisred
1 replies
15h42m

Additionally how do you define publish?

When someone embeds https://test.internal with cert validation turned off (rather than fingerprint pinning or setting up an internal CA) in their mobile application, that client will greedily accept whatever response is provided by their local resolver... Correct or malicious.

bawolff
0 replies
11h5m

That seems kind of beside the point. If you turn off cert validation, it doesn't matter if the domain name is internal or external.

zrm
0 replies
16h4m

You can't accidentally publish .internal.

Well sure you can. You expose your internal DNS servers to the internet, or use the same DNS servers for both and they're on the internet. The root servers are not going to delegate a request for .internal to your nameservers, but anybody can make the request directly to your servers if they're publicly accessible.

samstave
0 replies
20h11m

This. And it allows for much easier/trustworthy automated validation of [pipeline] - such as ensuring that something doesn't leak, exfil, or egress inadvertently. (even perhaps with exclusive/unique routing?)

mnahkies
0 replies
19h53m

Somewhat off topic, but I'm a big fan of fail safe setups.

One of the (relatively few) things that frustrate me about GKE is the integration with GCP IAP and k8s gateways - it's a separate resource to the HTTP route, and if you fail to apply it, or apply one with invalid configuration, then it fails open.

I'd much prefer an interface where I could specify my intention next to the route and have it fail atomically and/or fail closed

leeter
4 replies
21h10m

I can't speak for others but HSTS is a major reason. Not everybody wants to deal with setting up certs for every single application on a network but they want HSTS preload externally. I get why for AWS the solution of having everything from a .com works. But for a lot of small businesses it's just more than they want to deal with.

Another reason is information leakage. Having DNS records leak could actually provide potential information on things you'd rather not have public. Devs can be remarkably insensitive to the fact they are leaking information through things like domains.

jcrites
3 replies
21h5m

Having DNS records leak could actually provide potential information on things you'd rather not have public.

This is true, but using a regular domain name as your root does not require you to actually publish those DNS records on the Internet.

For example, say that you own the domain `example.com`. You can build a private service `foo.example.com` and only publish its DNS records within the networks where it needs to be resolved – in exactly the same way that you would with `foo.internal`.

If you ever decide that you want an Internet-facing endpoint, just publish `foo.example.com` in public DNS.

nine_k
0 replies
19h0m

The wisdom goes: "Make invalid states unrepresentable".

In this case, foo.internal cannot represent a publicly accessible domain, much like 10.x.x.x cannot represent a publicly routable IP address.

No matter how badly you misconfigure things, you are still protected from exposure. Sometimes it's really valuable.

luma
0 replies
11h9m

It's not DNS that's leaking those names, it's certificate transparency. If you are using certs on foo.example.com, that's publicly discoverable due to CTLs. As others have mentioned here it leaves you with a dilemma: either you have good working certs internally but are also exposing all of your internal hostnames, or you keep your hostnames private but have cert problems (either dealing with trusting a private CA or dealing with not having certs).

leeter
0 replies
21h2m

I'm not disagreeing at all. But Hanlon's Razor applies:

Never attribute to malice what can better be explained by incompetence

You can't leak information if you never give access to that zone in any way. More than once I've run into well meaning developers in my time. Having a .internal inherently documents that something shouldn't be public. Whereas foo.example.com does not.

quectophoton
3 replies
20h36m

Are there any good reasons to use a TLD like .internal for private-use applications, rather than just a regular gTLD like .com?

That assumes you are able to pay to rent a domain name, and keep paying for it, and that you are reasonably sure that the company you're renting it from is not going to take it away from you because of a selectively-enforced TOS, and that you are reasonably sure that both yourself and your registrar are doing everything possible to avoid getting your account compromised (resulting in your domain being transferred to someone else and probably lost forever unless you can take legal action).

So it might depend on your threat model.

Also, a good example, and maybe the main reason for this specific name instead of other proposals, is that big corps are already using it (e.g. DNS search domains in AWS EC2 instances) and don't want someone else to register it.

justin_oaks
2 replies
18h1m

If you control the DNS resolution in your company and use an internal certificate authority, technically you don't have to rent a domain name. You can control how it resolves and "hijack" whatever domain name you want. It won't be valid outside your organization/network, but if you're using it only for internal purposes then that doesn't matter.

Of course, this is a bad idea, but it does allow you to avoid the "rent".

zrm
0 replies
16h0m

One of the reasons that it's a bad idea is that whoever does have the domain can get a certificate for any name under it from any public CA, which your devices would generally still trust in addition to your private CA.

OJFord
0 replies
6h38m

But then you still need a private CA (public one is going to resolve the domain correctly and find you don't control it) so you may as well have used .internal?

macromaniac
2 replies
3h36m

Are there any good reasons to use a TLD like .internal for private-use applications, rather than just a regular gTLD like .com?

These local TLDs should IMO be used on all home routers, it fixes a lot of problems.

If you've ever plugged in e.g. a raspberry pi and been unable to "ping pi", it's because there is no DNS mapping to it. There are cludges that Windows, Linux, and Macs use to get around this fact, but they only work in their own ecosystem, so you often can't see macs from e.g. windows. It's a total mess that leads to confusing resolution behaviour; you end up having to look in the router page or hardcode the IP to reach a device, which is just awful.

Home routers can simply assign pi into e.g. pi.home when doing dhcp. Then you can "ping pi" on all systems. It fixes everything- for that reason alone these reserved TLDs are, imo, useful. Unfortunately I've never seen a router do this, but here's hoping.

Also, p. sure I grew up playing wc3 w you?

e28eta
1 replies
2h35m

Home routers can simply assign pi into e.g. pi.home when doing dhcp. Then you can "ping pi" on all systems. It fixes everything- for that reason alone these reserved TLDs are, imo, useful. Unfortunately I've never seen a router do this, but here's hoping.

dnsmasq has this feature. I think it’s commonly available in alternative router firmware.

On my home network, I set up https://pi-hole.net/ for ad blocking, and it uses dnsmasq too. So as my network’s DHCP + DNS server, it automatically adds dns entries for dhcp leases that it hands out.

There are undoubtably other options, but these are the two I’ve worked with.

macromaniac
0 replies
2h25m

Wasn't aware of dnsmasq/pihole, I have a BIND9 configured to do it on my network and yeah it's much nicer. I've seen people get bit by this all the time in college and still even now join projects with like weird hosts file usage. Instead of having 3 different systems for apple/ms/linux name resolution that don't interop, the problem is better fixed higher up.

briHass
2 replies
5h33m

I just got burned on my home network by running my own CA (.home) and DNS for connected devices. The Android warning when installing a self-signed CA ('someone may be monitoring this network') is fine for my case, if annoying, but my current blocker is using webhooks from a security camera to Home Assistant.

HA allows you to use a self-signed cert, but if you turn on HTTPS, your webhook endpoints must also use HTTPS with that cert. The security camera doesn't allow me to mess with its certificate store, so it's not going to call a webhook endpoint with a self-signed/untrusted root cert.

Sure, I could probably run a HTTP->HTTPS proxy that would ignore my cert, but it all starts to feel like a massive kludge to be your own CA. Once again, we're stuck in this annoying scenario where certificates serve 2 goals: encryption and verification, but internal use really only cares about the former.

Trying to save a few bucks by not buying a vanity domain for internal/test stuff just isn't worth the effort. Most systems (HA included) support ACME clients to get free certs, and I guess for IoT stuff, you could still do one-off self-signed certs with long expiration periods, since there's no way to automate rotation of wildcards for LE.

yjftsjthsd-h
1 replies
5h10m

Once again, we're stuck in this annoying scenario where certificates serve 2 goals: encryption and verification, but internal use really only cares about the former.

Depending on your threat model, I'm not sure that's true. Encryption without verification prevents a passive observer from seeing the content of a connection, but does nothing to prevent an active MITM from decrypting it.

briHass
0 replies
1h27m

I meant more: centralized verification. I'm fine with deploying a self-CA cert to verify in my personal world, but browsers and devices have become increasingly hostile to certs that aren't signed by the standard players.

zzo38computer
0 replies
21h5m

Sometimes it may be reasonable to use subdomains of other domain names that you have registered, but I would think that sometimes it would not be appropriate, such as if you are not using it with internet at all and therefore should not need to register a domain name, or for other reasons; if it is not necessary to use internet domain names then you would likely want to avoid it (or, at least, I would).

slashdave
0 replies
18h18m

it's helpful to have that flexibility in the future

On the contrary, it is helpful to make this impossible. Otherwise you invite leaking private info by configuration mistake.

pid-1
0 replies
20h14m

leading Amazon's strategy for cloud-native AWS usage internally

I've been on the other end of the business scale for the past decade, mostly working for SMBs like hedge funds.

That made me a huge private DNS hater. So much trouble for so little security gain.

Still, it seems the common wisdom is to use private DNS for internal apps, AD and such, LAN hostnames and the like.

I've been using public DNS exclusively everywhere I've worked and I always feel like it's one of the best arch decisions I'm bringing to the table.

johannes1234321
0 replies
18h16m

A big area is consumer devices like WiFi routers. They can advertise the .internal name and probably even get TLS certificates for those names, and things may work.

See for instance the trouble with AVM's fritz.box domain, which was used by their routers by default; then .box was made a TLD and AVM was too late to register it.

colejohnson66
0 replies
21h12m

Why? Remember the .dev debacle?

TheRealPomax
0 replies
19h21m

Pretty much "anything that has to use a real network address, resolved via DNS" rather than using the hosts file based loopback device, or the broadcast IP.

csdreamer7
22 replies
22h16m

Can we get .local or .l added for private-use applications too?

duskwuff
12 replies
21h59m

.local is already reserved for mDNS.

jeroenhd
10 replies
21h18m

.local is in this weird state where it's _technically_ not reserved, but most PCs in the world already resolve it with special non-DNS software because of the Bonjour/mDNS protocol.

So you end up with the IETF standardising .local, because Apple was already using it, but ICANN never did much with that standardisation.

I doubt ICANN will actually touch .local, but they could. One could imagine a scheme where .local is globally registered to prevent Windows clients (who don't always support mDNS) from resolving .local domains wrong.

arjvik
3 replies
21h13m

Modern windows supports mDNS these days!

jeroenhd
2 replies
18h43m

It does! I generally assume mDNS to just be available on every device these days. But I've also seen managed environments where mDNS has been turned off or blocked at the firewall.

dboreham
1 replies
17h40m

mDNS is a broadcast protocol so always "blocked at the firewall".

oasisbob
0 replies
15h32m

Multicast too. If you've never needed to manipulate ACLs for multicast traffic, you're not really living.

candiddevmike
2 replies
20h58m

It's reserved per RFC 6762:

This document specifies that the DNS top-level domain ".local." is a special domain with special semantics, namely that any fully qualified name ending in ".local.

https://datatracker.ietf.org/doc/html/rfc6762

Applications can/will break if you attempt to use .local outside of mDNS (such as systemd-resolved). Don't get upset when this happens.

Interesting fact: RFC 6762 predates Kubernetes (one of the biggest .local violators), they should really change the default domain...

wlonkly
1 replies
19h47m

But that's an IETF standard, not an ICANN policy. AFAIK there's nothing in place today that would _prevent_ ICANN from granting .local to a registry other than it just being a bad idea.

throw0101d
1 replies
20h13m

.local is in this weird state where it's _technically_ not reserved […] I doubt ICANN will actually touch .local, but they could.

It is. See §2.2.1.2.1, "Reserved Names", of ICANN's gTLD Applicant Guidebook:

* https://newgtlds.icann.org/sites/default/files/guidebook-ful...

jeroenhd
0 replies
18h39m

This document describes the process for requesting gTLDs. Some internal ICANN project could ignore the contents of the guidebook without breaking "the rules". Or they could invent some kind of new TLD system; branded gTLDs didn't exist twenty years ago and I doubt most people would've assumed them to become real, yet blog.google is a real thing that exists.

abtinf
0 replies
14h46m

but they could.

Presumably, ICANN, like any other committee, is not interested in self-castration. Which is what would happen if they challenged Apple.

ICANN could do anything with enough rule changes. And then everyone will ignore them.

mjevans
0 replies
2h32m

Give Apple / mDNS .mdns and let it use THAT instead of .local which should NEVER have been taken from local use in the first place.

duskwuff
2 replies
21h30m

The ICANN root zone only contains gTLDs and ccTLDs which are delegated. Other TLDs which are explicitly reserved for non-public use, like .localhost, .test, or .invalid, don't appear on that list either.

csdreamer7
0 replies
20h48m

Ty for the information.

mjevans
1 replies
2h31m

Please also reserve .lan which is what I now prefer to use since .local got stolen from private networks.

LeoPanthera
0 replies
6h32m

Using .local causes big problems with mDNS/Bonjour/Rendezvous, which also uses that TLD.

tetris11
12 replies
22h15m

I need a dumbed down version of this.

quectophoton
3 replies
21h1m

When you need to assign an IP address for a host, the safest thing to do is to either use an IP address you own^Ware renting, or to use an IP address nobody will be able to "own" in the foreseeable future.

This is that but for domain names. When you need to use a domain name to refer to a host, the safest thing to do is to either use a domain name you own^Ware renting, or to use a domain name nobody will be able to "own" in the foreseeable future.

For an IP address, you might usually choose from 192.168.0.0/16 or similar reserved ranges. Your "192.168.1.1" is not the same as my "192.168.1.1", we both can use it and neither of us can "officially" own it.

For a domain name, you can use ".internal" or other similar (if uglier) reserved TLDs. Your "nas.internal" is not the same as my "nas.internal", we both can use it and neither of us can "officially" own it.

Since you're asking this question you might also be wondering how people can even use custom domains like that, and the answer is by self-hosting a DNS server, and using that as a DNS server instead of a public one (so you'd use your self-hosted server instead of, say, "8.8.8.8"). Then you configure your DNS server so that whenever someone requests "google.com" it does "the normal thing", but when someone requests "nas.internal" it returns whatever IP address you want.
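
A minimal sketch of that last step with dnsmasq (the hostname and addresses are made-up examples, not anything you must use):

    # /etc/dnsmasq.conf (sketch)
    # Answer nas.internal locally with a fixed address...
    address=/nas.internal/192.168.1.10
    # ...and forward everything else to an upstream resolver
    server=8.8.8.8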

ninkendo
2 replies
15h14m

There are similar discussions about this in other threads, but I’ve taken to just using a real domain name (lan.<my-vanity-domain>.me) even for my house stuff, while otherwise doing something like you describe above.

The advantage is that I can run real letsencrypt certs for services in my house, which is nicer than having to agree to self signed cert warnings or otherwise having my browser nag me about plaintext passwords/etc.

If anyone cares about the details, I run an nginx instance on port 80 through an IPv6 address which I allow through my network firewall (no NAT, so I don’t have to burn my only incoming IPv4 port 80 for this, although I block that anyway) and let certbot manage its configs. Wildcard external DNS points AAAA records to said v6 address. The certbot vhost just renders an empty 404 for all requests except for the ACME challenges, so there’s nothing being “leaked” except generic 404 headers. I get certs dumped to my nginx config dir, then from there I use them for an internal-only reverse proxy listening on my local subnet, for all my internal stuff. The only risk is if I mess up the config and expose the reverse proxy to the internet, but so far I haven’t managed to screw it up.
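
A rough sketch of the two nginx server blocks that setup implies (domain names, paths, and addresses below are placeholders, not the actual config):

    # Port 80 over IPv6 only: answer ACME challenges, 404 everything else
    server {
        listen [::]:80;
        server_name *.lan.example.me;

        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }
        location / {
            return 404;
        }
    }

    # Internal-only reverse proxy on the LAN, reusing the issued certificate
    server {
        listen 192.168.1.1:443 ssl;
        server_name nas.lan.example.me;

        ssl_certificate     /etc/letsencrypt/live/nas.lan.example.me/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/nas.lan.example.me/privkey.pem;

        location / {
            proxy_pass http://192.168.1.20:8080;
        }
    }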

PokestarFan
1 replies
11h54m

Why not just use ACME DNS?

ninkendo
0 replies
7h29m

Because this setup works fine, and I haven’t bothered getting to that level of automation with my external DNS provider.

jeroenhd
2 replies
21h24m

Remember how tons of developers got surprised when Google got the .dev TLD, because they were using domains they didn't own to develop software? Well, now .internal has been reserved so developers and companies can safely use .internal domains without that happening to them.

soneil
1 replies
4h7m

.local being used for mDNS while Microsoft were using it in AD examples/documentation is another good example.

.internal is just admitting there's only so many times we can repeat the same mistake before we start to look silly.

alt227
0 replies
5m

Our internal domain is still .local and has been since Microsoft recommended we do it that way 15 years ago.

pixl97
0 replies
21h43m

When setting up local networks people commonly use a top level domain like 'my.lan', 'my.network', 'my.local'. Instead of using one of these non-reserved domains that may one day end up as a TLD, it is recommended to use 'my.internal'.

If the 'private' TLD you're using suddenly becomes real, then you can end up shipping off data (possibly unencrypted) and connection requests to computers you do not control.

kijeda
0 replies
21h44m

https://www.ietf.org/archive/id/draft-davies-internal-tld-00...

There are certain circumstances where private network operators may wish to use their own domain naming scheme that is not intended to be used or accessible by the global domain name system (DNS), such as within closed corporate or home networks.

The "internal" top-level domain is reserved to provide this purpose in the DNS. Such domains will not resolve in the global DNS, but can be configured within closed networks as the network operator sees fit.

This reservation is intended for a similar purpose that private-use IP address ranges that are set aside (e.g. [RFC1918]).

GuB-42
0 replies
21h39m

The dumbed down version is that no one will be allowed to register a .internal domain on the internet, ever. So you are free to use it for your internal network in any way you like and it will not come into conflict with registered domains and internet standards.

AndyMcConachie
0 replies
7h49m

.INTERNAL will never appear in the DNS root zone.

huijzer
11 replies
21h47m

1. Buy .intern TLD

2. Sell to scammers.

3. Profit.

(I want to appreciate how hard it probably is for ICANN to figure out proper TLDs.)

gjsman-1000
7 replies
21h33m

Um... no? .intern is not a valid TLD; you can't get any domains with it, nobody has proposed that TLD, and if someone did, that issue would be discovered then.

jeroenhd
6 replies
21h26m

If you've got a couple hundred grand lying around, you could probably set up a shell company and acquire .intern through a several-year ccTLD acquisition process.

I'd like to think people learned from .dev and such. I doubt any scammer will be able to use it.

n_plus_1_acc
0 replies
21h18m

People were using .dev for internal things and acted surprised when Google decided to use it on the internet.

jeroenhd
0 replies
21h16m

To expand on my comment: Google bought .dev and started selling domains. In truth, developers probably only noticed because Google pre-loaded their .dev TLD into HSTS, which meant that any domain ending in .dev, even if it's a local one or one you own, must communicate over HTTPS if you want a browser to interact with it.

As a result, even if you bought steves-laptop.dev for yourself, you still wouldn't be able to run an HTTP dev environment on it; you'd need to set up HTTPS. I think that was probably a good move by Google, because otherwise it could've taken weeks for most devs to notice.

deathanatos
1 replies
20h37m

I think you're referring to the new gTLD process, which yes, costs a small boatload. Those aren't, and .intern isn't, a ccTLD, nor do I believe there is a means of acquiring a ccTLD (…outside of somehow becoming a country, I guess).

jeroenhd
0 replies
18h32m

You're right, I meant gTLD. Unfortunately I can't edit my comment anymore.

I think ccTLDs are restricted to two-letter codes even if the country of Internia were to be founded. The only exceptions I can think of are the localized names (.台湾 and .中国 for countries like Taiwan and China) which are technically encoded as .xn--kprw13d and .xn--fiqs8s. Pakistan's پاکستان. is the first ccTLD I've seen that's more than two visual characters when rendered (with the added bonus of being right-to-left to make URL rendering a tad more complex), so for Internia to claim .intern as a ccTLD, they'd probably need a special script.

toast0
0 replies
17h32m

At present, you need money and a time machine. New TLDs were allocated in batches, and there's no current application process.

NewJazz
0 replies
8m

Aren't those real hard to come by because you have to be a UN agency or maybe a prominent NGO to get one?

colejohnson66
0 replies
21h1m

What about .intern.al?

dawnerd
11 replies
19h24m

I'm still peeved they let Google take over .dev when they knew tons of us used that in the older days for dev environments.

TheRealPomax
9 replies
19h22m

to be fair, ".dev" is not a full word, unlike INTERNAL or EXAMPLE. You're free to petition them to reserve .DEVELOPMENT, though, of course.

cowsup
4 replies
7h3m

.com is not a full word either (company), or .org (organization), .net (internet), .gov (government), ...

TheRealPomax
1 replies
2h15m

.com is literally the opposite of a "reserved to never be used" word though?

saghm
0 replies
5m

I'm not sure how that leads to the conclusion that other short, convenient TLDs like `.dev` should just be given to companies like Google to use very sparingly, if at all.

EDIT: Looks like I misunderstood what Google having .dev meant in the above discussion; domains using it are available to purchase through their registrar (or more precisely resellers since I guess they don't sell directly anymore)

PartiallyTyped
1 replies
6h23m

I thought .com was for "commercial".

dsr_
0 replies
5h41m

.com is for .com. You can interpret it any way you'd like and it doesn't make a difference to anyone who isn't currently interested in the history of DNS.

My preferred reading is .com for commonlymisinterpretedbypeoplewhodonotreadrfcsbutitdoesnotmatterintheslightest, which is a Welsh word meaning "oddly shaped sheep".

nine_k
3 replies
18h50m

A convenient TLD is short, not excruciatingly loquacious. In ease of typing .dev certainly wins over .development.

slaymaker1907
0 replies
9h15m

Luckily, we have *.test. I’ve used that one quite a bit.

TheRealPomax
0 replies
2h10m

Yes, but a convenient reserved TLD, formally declared never to be used by anyone and guaranteed to never resolve to anything by global DNS, is not accepted based on convenience alone. The ".dev" TLD is plenty useful as a real domain. Plus, and this one's hard to believe, calling programming-related work "dev" work is a surprisingly recent thing.

Jerrrrrrry
0 replies
15h53m

It's not convenient if 99% of users (internet users) can't (effectively) use it.

.dev is great; even if Google's motives were evil-truistic; and, *.development should be among the Reserved, Internet Use only.

The abbreviated-vs-verbose split in TLD names is at least consistent.

And there aren't any folks who appreciate consistency more than the RFC goons.

undersuit
0 replies
1h36m

I used .coffee on my home network until it became a for-profit TLD. https://icannwiki.org/.coffee

zzo38computer
5 replies
21h8m

I think it is good to have a .internal TLD for internal use.

(I also think that a .pseudo TLD should be made up which also cannot be assigned on the internet, but is not for assigning on local networks either. Usually, in the cases where it is necessary to be used, either the operating system or an application program will handle them, although the system administrator can assign them manually on a local system if necessary.)

Denvercoder9
4 replies
21h6m

I also think that a .pseudo TLD should be made up which also cannot be assigned on the internet, but is also not for assigning on local networks either.

There's already .example, .invalid, .test and .localhost, which are reserved. What use case do you have that's not covered by one of them?

zzo38computer
3 replies
20h15m

.example is used for examples in documentation and stuff like that.

.invalid means that a domain name is required but a valid name should not be used; for example, a false email address in a "From:" header in Usenet, to indicate that you cannot send email to the author in this way.

.test is for internal testing use, of DNS and other stuff.

.localhost is for identifying the local computer.

.internal is (presumably) for internal use in your own computer and local network, when you want to assign domain names that are for internal use only.

.pseudo is for other cases that do not fit any of the above, when a pseudo-TLD which is not used as a usual domain name, is required for a specialized use by a application, operating system, etc. You can then assign subdomains of .pseudo for specific kind of specialized uses (these assignments will be specific to the application or otherwise). Some programs might treat .pseudo (or some of its subdomains) as a special case, or might be able to be configured to do so.

(One example of .pseudo might be if you want to require a program to use only version 4 internet or only version 6 internet, and where this must be specified in the domain name for some reason; the system or a proxy server can then handle it as a special case. Other examples might be in some cases, error simulations, non-TCP/IP networks, specialized types of logging or access restrictions, etc. Some of these things do not always need to be specified as a domain name; but, in some cases they do, and in such cases then it is helpful to do so.)

zzo38computer
0 replies
25m

I did not know about that; thank you for mentioning that to me

yjftsjthsd-h
0 replies
17h49m

I'm not following; the examples you're giving for .pseudo sound like they would fit under .internal. Could you give a more concrete example of a use case?

amelius
5 replies
8h30m

Of course, scammers will register variations of .internal

Like .lnternal

Or .ιnternal

endorphine
4 replies
8h16m

How? Do these gTLDs even exist?

amelius
2 replies
5h7m

Then why does .americanexpress exist?

Sounds like someone simply pulled their wallet.

Or maybe you forgot "/s"

soneil
1 replies
4h5m

It's a bit of both - you do have to pull out your wallet, but there's also an approval process. Just because you can buy a gTLD doesn't mean you can buy .con

NewJazz
0 replies
5m

[delayed]

xvilo
4 replies
20h55m

Any ideas on how you would run SSL/TLS on these set-ups?

the8472
0 replies
20h46m

Either pin the appropriate server cert in each application or run your internal CA (scoped to that domain via name constraints) and deploy the root cert to all client machines.
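
A sketch of the name-constrained root with openssl (file names and the CN are placeholders; double-check against your openssl version before relying on it):

    # openssl.cnf (excerpt)
    [ req ]
    distinguished_name = req_dn
    x509_extensions    = v3_internal_ca
    [ req_dn ]
    [ v3_internal_ca ]
    basicConstraints = critical, CA:TRUE
    keyUsage         = critical, keyCertSign, cRLSign
    # this root may only issue certificates for names under .internal
    nameConstraints  = critical, permitted;DNS:.internal

Then something like `openssl req -x509 -newkey rsa:4096 -days 3650 -nodes -subj "/CN=Example Internal Root" -config openssl.cnf -keyout ca.key -out ca.crt` produces the root. Clients only need ca.crt in their trust store, and the name constraint means a leaked or abused CA key can't be used to mint certs for public domains that constraint-enforcing clients will accept.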

rileymat2
0 replies
20h46m

I think you can still run self-signed, with a private CA/root cert?

jeroenhd
0 replies
18h25m

An internal certificate authority would probably be the easiest option. Combined with MDM/group policy, you could tell most devices in your network to set up a trust chain of your own. From then on you can automate access by running your own ACME server internally to automatically hand out certificates to local devices.

The automated setup probably isn't very secure, though. Anyone can register any .local name on the network, so spoofing hostnames becomes very easy once you get access to any device on the network. Send a fax with a bad JPEG and suddenly your office printer becomes xvilo.local, and the ACME server has no way to determine that it's not.

That means you probably need to deal with manual certificate generation, manually renewing your certificates every two years (and, if you're like me, forgetting to before they expire).

Hamuko
0 replies
10h7m

I just got myself a proper domain name. You can get a domain for pretty cheap if you're not picky about what you get. You could for example register cottagecheese.download on Cloudflare for about $5/year right now.

I have my domain's DNS on Cloudflare, so I can use DNS verification with Let's Encrypt to get myself a proper certificate that works on all of my devices. Then I just have Cloudflare DNS set up with a bunch of CNAME records to .internal addresses.

For example, if I needed to set up a local mail server, I'd set mail.cottagecheese.download to have a CNAME record pointing to localserver.internal and then have my router resolve localserver.internal to my actual home server's IP address. So if I punch in https://mail.cottagecheese.download in my browser, the browser resolves that to localserver.internal and then my router resolves that to 10.x.x.x/32, sending me to my internal home server that greets me with a proper Let's Encrypt certificate without any need to expose my internal IP addresses.
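
A sketch of the records that setup implies (same example names; 10.0.0.5 is a placeholder, and the A record lives only on the internal resolver, never in public DNS):

    ; public DNS (Cloudflare) - no internal IPs exposed
    mail.cottagecheese.download.  300  IN  CNAME  localserver.internal.

    ; internal resolver (the router) only
    localserver.internal.         300  IN  A      10.0.0.5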

Windows doesn't seem to like my CNAME-based setup though. Every time I try to use them, it's a diceroll if it actually works.

wolpoli
3 replies
19h47m

Anyone know when I should use .internal and when I should use .local?

SoftTalker
1 replies
17h57m

And what about .localdomain?

radicality
0 replies
16h12m

I’ve been using .home.arpa for a while at home now.

VoodooJuJu
2 replies
19h3m

Why did something as useful and simple as this take so long to make official?

poikroequ
0 replies
18h46m

Things like this are rarely simple or obvious. I don't know what potential gotchas there could have been, but I'm sure there were strange and unusual things they had to carefully consider before making this an official standard.

Hamuko
0 replies
10h24m

ICANN didn't understand why you weren't simply just using the recommended .home.arpa TLD.

Filligree
2 replies
18h29m

I’m going to go right on using .lan.

throwaway290
1 replies
17h5m

.la and .land are already live TLDs, so don't make a typo. And I guess .lan can be sold eventually if it turns out it's a word somewhere.

flemhans
0 replies
15h58m

They already got .cat, so why not the ending as well.

NietTim
1 replies
7h49m

Ever since this kind of stuff was introduced I've been annoyed that there is no way to disable it for yourself. And it has allowed straight-up evil stuff like Google buying the .dev TLD.

NewJazz
0 replies
12m

Your mention of .dev seems like a complete non sequitur to me. What happened to .internal here is the exact opposite of what happened to .dev. And how would you even propose to "disable" reservation of a TLD? Sorry, your comment just makes no sense from my POV.

2snakes
1 replies
20h59m

There used to be issues with the public part of a .com getting sent weird private Windows traffic, IIRC. This was discovered with honeypot analysis, and there was potential for information exposure if you could register a .com that another company was using as their AD domain.

quectophoton
0 replies
20h17m

On this topic, whoever owns "test.com" must be getting a lot of sensitive information.

zigzag312
0 replies
1h20m

Too many letters.

ryukoposting
0 replies
17h32m

I'll probably just keep using .lan, but it's nice to know that ICANN is thinking about this use case.

myshkin5
0 replies
3h49m

Does this mean .svc.cluster.local for Kubernetes should migrate to .svc.cluster.internal?
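
It's at least configurable; a hedged sketch assuming a kubeadm-managed cluster (the default stays cluster.local, and plenty of charts and operators hard-code it):

    # kubeadm ClusterConfiguration (excerpt)
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    networking:
      dnsDomain: cluster.internal   # default: cluster.local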

joncfoo
0 replies
1d1h

[...] the Board reserves .INTERNAL from delegation in the DNS root zone permanently to provide for its use in private-use applications. The Board recommends that efforts be undertaken to raise awareness of its reservation for this purpose through the organization's technical outreach.

gxt
0 replies
19m

Is there an appliance or offline service to set up a private CA, do secure remote attestation, and issue certificates only to authenticated peers? Also preferably with FIDO2 support for administrative purposes.

ahoka
0 replies
10h15m

Now we just wait until browsers stop doing a search if you type anything ending with .internal, which is the biggest issue with using non-standard private domains.

Arch-TK
0 replies
5h45m

I've just used i.slow.network. for my internal domain.