An interesting thing that I've noticed is that some of the attackers watch the Certificate Transparency logs for newly issued certificates to get their targets.
I've had several instances of a new server being up on a new IP address for over a week, with only a few random probing hits in the access logs, but then, maybe an hour after I got a certificate from Let's Encrypt, it suddenly started getting hundreds of hits just like those listed in the article. After a few hours it would always die down somewhat.
The take-away is, secure your new stuff as early as possible, ideally even before the service is exposed to the Internet.
Honestly it feels like you'll need at least something like basicauth in front of your stuff from the first minutes it's publicly exposed. Well, either that, or run your own CA and use self-signed certs (with mTLS) before switching over.
For example, when some software still has initial install/setup screens where you create the admin user, connect to the DB and so on, as opposed to specifying everything up front in environment variables, config files, or a more specialized secret management solution.
Generally I'd recommend not exposing anything unless you deployed the security for it.
Just SSH scanning can be a big issue.
A big issue how? If you block password auth, ssh scanning is a nonissue.
DDoS attack on your sshd?
Are we just speculating? SSH scanners are not sources of DDoS. Large companies have SSH bastions on the internet and do not worry about SSH DDoS. It's not really a thing that happens.
You don't need to freak out if you see a bunch of failed ssh auth attempts in your logs. Just turn off password based authentication and rest easy.
Agreed. Another thing you can do to drastically reduce the amount of bots hitting your sshd is to listen on a port that is not 22. In my experience, this reduces ~90% of the clutter in my logs. (Disclaimer: this may not be the case for you or anyone else)
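For reference, the relevant sshd_config bits are roughly the following (the file path and exact behaviour can vary between distros and OpenSSH versions, so treat it as a sketch):

    # /etc/ssh/sshd_config
    Port 2222                   # any non-default port cuts most of the log noise
    PasswordAuthentication no   # keys only
    PermitRootLogin no          # log in as a normal user, escalate via sudo

    # then reload, e.g.: systemctl reload sshd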
Just to reduce the crap in the log, and also because I can, I have my SSH servers (not saying what their IPs are) using a very effective measure: traffic is dropped from the entire world except for the CIDR blocks (kept in ipsets) of the five ISPs, across three countries, that I could reasonably be on when I need to access the SSH servers.
And if I'm really, say, in China or Russia and really need to access one of my servers through SSH, I can use a jump host in one of the three countries that I allow.
So effectively: DROPping traffic from 98% of the planet.
Boom.
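The rules themselves are nothing fancy; roughly this (set names and ranges made up for illustration):

    # allowlist of ISP CIDR blocks; everything else on port 22 gets dropped
    ipset create ssh_allow hash:net
    ipset add ssh_allow 198.51.100.0/22      # ISP 1
    ipset add ssh_allow 203.0.113.0/24       # ISP 2, and so on
    iptables -A INPUT -p tcp --dport 22 -m set --match-set ssh_allow src -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP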
Deny by default, allow only those sources that are considered trustworthy. And frequently re-evaluate who and what should be considered trustworthy.
Or close the ports, and install an agent that phones out, such as Tailscale or Twingate.
This is the way, outbound only connections so you can stop all external unauthenticated attacks. I wrote a blog 2 years back comparing zero trust networking using Harry Potter analogies... what we are describing is making our resources 'invisible' to silly muggles - https://netfoundry.io/demystifying-the-magic-of-zero-trust-w...
Or just Wireguard itself.
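A minimal client-side wg0.conf is something like this (keys, addresses and endpoint are placeholders):

    [Interface]
    PrivateKey = <client private key>
    Address = 10.8.0.2/24

    [Peer]
    PublicKey = <server public key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 10.8.0.0/24
    PersistentKeepalive = 25   # keeps the tunnel up from behind NAT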
I don't want to be able to auth whilst physically in authoritarian regimes. If I had to be physically there it'd be via burner devices.
I used to have an iptables config that just dropped everything by default on the SSH port, and ran a DNS server that, when queried with a magic string, would allow my IP to connect to SSH. It did help that the DNS server was actually used to manage a domain and was seeing real traffic, so you couldn't isolate my magic queries so easily.
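Basically a DNS-based port knock. A sketch of the idea, assuming dnsmasq with log-queries enabled (the magic name, log path and format are placeholders, and a real setup would also expire the rule again later):

    #!/usr/bin/env python3
    # Tail the resolver's query log and open the SSH port for whoever
    # looks up the magic name.
    import re
    import subprocess
    import time

    MAGIC = "open-sesame.example.com"    # hypothetical magic string
    LOG = "/var/log/dnsmasq.log"
    PATTERN = re.compile(r"query\[A\] " + re.escape(MAGIC) + r" from (\S+)")

    with open(LOG) as f:
        f.seek(0, 2)                     # start at the end of the log
        while True:
            line = f.readline()
            if not line:
                time.sleep(1)
                continue
            m = PATTERN.search(line)
            if m:
                ip = m.group(1)
                subprocess.run(["iptables", "-I", "INPUT", "-p", "tcp",
                                "--dport", "22", "-s", ip, "-j", "ACCEPT"])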
Until there is a new zero day on sshd.
You want to keep these things behind multiple locked doors, not just one.
For the servers themselves, you shouldn't be able to get to sshd unless you're coming from one of the approved bastion servers.
You shouldn't be able to get to one of the approved bastion servers unless you're coming from one of the approved trusted sources, you're on the approved user access list, and you're using your short-lived SSH certificate that was signed with a hardware key.
And all those approved sources should be managed by your corporate IT department, and appropriately locked down by the corporate MDM process.
And you might want to think about whether you should also be required to be on the corporate VPN, or to be using comparable technologies, in order to access those approved sources.
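The short-lived certificate part is mostly just an SSH CA; the signing step looks roughly like this (key names and validity are placeholders, and in practice the CA key sits on a hardware token or behind an automated signer):

    # sign a user's public key, valid for 8 hours
    ssh-keygen -s ssh_ca_key -I alice@corp -n alice -V +8h id_ed25519.pub

    # the bastion's sshd trusts the CA via:
    #   TrustedUserCAKeys /etc/ssh/ssh_ca.pub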
What if there's a zero-day in your bastion service, whatever that is?
One option could be to lock the port down to only your jump/bastion server source IP.
That fails when you lose that IP address, or when you lose that server.
I suppose if you don’t have console access, sure. But inconvenient at worst imv.
If you have a way around that bastion server, then at least you've got a backup. But then you also have to worry about the security of that backup.
Yes, better to make your bastion 'dark' without being tied to an IP address. This is how we do it at my company with the open source tech we have developed - https://netfoundry.io/bastion-dark-mode/
Until a junior from another project enables password-based root logins, because the Juniper team that was on site to help them install a beta version of some software they collaborated on asked them to.
A few days later they asked to redirect an entire subnet to their rack.
And yes, you still need to remember to close password logins, or at least pick serious passwords if you need them. It also helps to have no root login over SSH, and normal user names that aren't the defaults for some distro...
It sounds like your organization has a bunch of problems. I'm sorry to hear that. But I don't think you can really blame those on ssh.
There are plenty of organizations and individuals that can competently run ssh directly on the internet (on port 22) with zero risk to "ssh scanners."
Your security is only as good as the people running your system. Unfortunately not everyone has teams of the best of the best. Sometimes you get the junior dev assigned to things. They do not know any better and just do as they are told. It is the deputized sheriff problem.
In that case it wasn't even the junior's fault - they were following experts from Juniper who were supposed to be past masters at installing that specific piece of crap (as someone who later accidentally became a developer of that piece of crap for a time, I feel I have the basis for the claim).
And those people told him the install system didn't support SSH keys (hindsight: it did) and got him to make root logins possible with passwords. Passwords that weren't particularly hard to guess, because their only expected and planned use was for the other team to log in for the first time and set their own, via the BMC, before the machines were to be exposed to the internet.
I am not blaming this on SSH (also, no longer in that org for many years).
I am just pointing out (as I have in a few other, off-site discussions) that one should not even think of exposing a port before finishing locking it down.
Because sometimes people forget, even experienced people (including myself), and sometimes that's enough (I think someone a few weeks ago submitted a story that involved getting pwned through an accidentally exposed Postgres instance?).
And there are enough people who get it wrong, for various reasons, that the lowest of low script kiddies can profit by buying ready-made extortion kits on Chinese forums, getting a single VM with Windows to run them, and extorting money from gambling/gameserver sites. Not to mention all the fun stuff if you search for open VNC/RDP.
Yes, if you follow the advice of "not exposing anything unless you deployed the security for it" of course you block password auth before exposing SSH to the internet.
Not everyone is following that advice. Just last week I taught a friend about using tmux for long-running sessions on their lab's GPU server, and during the conversation it transpired that everyone was always sshing in using the root password. Of course plugging that hole will require everyone from the CTO downward to learn about SSH keys and start using them, so I doubt anything will change without a serious incident.
I had a similar issue 15 years ago, and tied the Linux boxes into Active Directory and authenticated via Kerberos. Worked nice, no SSH keys needed!
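If anyone wants to go that route today: once the box is joined to the domain (e.g. via realmd/sssd), the sshd side is roughly just this sketch:

    # /etc/ssh/sshd_config
    GSSAPIAuthentication yes
    GSSAPICleanupCredentials yes
    PasswordAuthentication no   # Kerberos ticket or nothing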
Unless an attack on sshd itself is employed, which is also possible.
Pre-auth sshd vulnerabilities are extremely rare and are not what ssh scanners are looking for.
I wish. I use basicauth to protect all my personal servers; the problem is that Safari doesn't appear to store the password! I always have to re-authenticate when I open the page, sometimes even three seconds later.
Have you considered using a different browser?
Or get your certificates using DNS auth a week or so prior to exposing the service.
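With certbot that's the DNS-01 challenge, something like this (manual mode shown; a DNS-provider plugin can automate the TXT record):

    # prove control of the domain via a TXT record; no web server needs to be exposed
    certbot certonly --manual --preferred-challenges dns -d example.com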
It is also useful to rely more on wildcard certs, as that makes it harder to determine from CT logs which specific subdomains to attack.
A compromised wildcard certificate has a much higher potential for abuse. The strong preference in IT security is a single-host or UCC (SAN) certificate.
Renewing a wildcard is also unfun when you have services which require a manual import.
Renewing any certificate that requires a manual import is not fun. Why are wildcard certs less fun to manually import than individual certificates?
Presumably one purchases a wildcard for multiple distinct systems.
Using them like that never occurred to me. I was thinking multiple sites on one host or vanity hostnames: dfc.example.com / nullindividual.example.com. etc.
There's really no reason to avoid wildcard certs for your domains, unless you have many subdomains managed by various business interests.
I use LE wildcard certs and they're great, you can use them internally.
It seems like the principle of least power would apply here. There's value in restricting capability to no more than strictly necessary. Consider the risk of a compromised some-small-obscure-system.corporate.com in the presence of a mission-critical-system.corporate.com when both are served with the same wildcard cert.
Wildcard certs are indeed a valuable tool, but there is no free lunch.
You'd usually put a reverse proxy exposing the services and terminating TLS with the wildcard cert.
The individual services can still have individual non-wildcard internal-only certs signed by an internal CA. These don't need to touch an external CA or appear in CT logs - only the reverse proxy/proxies should ever hit these, and can be configured to trust the internal CA (only) explicitly.
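A rough nginx sketch of that layout (names and paths are made up):

    server {
        listen 443 ssl;
        server_name app.example.com;

        # public-facing wildcard cert from the external CA
        ssl_certificate     /etc/ssl/wildcard.example.com/fullchain.pem;
        ssl_certificate_key /etc/ssl/wildcard.example.com/privkey.pem;

        location / {
            # backend presents a cert signed only by the internal CA
            proxy_pass https://app.internal:8443;
            proxy_ssl_verify              on;
            proxy_ssl_trusted_certificate /etc/ssl/internal-ca.pem;
            proxy_ssl_name                app.internal;
        }
    }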
Yeah, I switched to wildcard certs at some point for this reason.
Was looking into Certificate Transparency logs recently. Are there any convenient tools/methods for querying CT logs? E.g. searching for domains within a timeframe.
Cloudflare’s Merkle Town[0] is useful for getting overviews, but I haven’t found an easy way to query CT logs. ct-woodpecker[1] seems promising, too
[0] https://ct.cloudflare.com/
[1] https://github.com/letsencrypt/ct-woodpecker
Steampipe has a fun SQLite extension that lets you query them via SQL: https://til.simonwillison.net/sqlite/steampipe#user-content-...
It uses an API provided by https://crt.sh/
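You can also hit crt.sh's JSON endpoint directly, something along the lines of:

    # all logged certs for a domain and its subdomains (%25 is a URL-encoded %)
    curl 'https://crt.sh/?q=%25.example.com&output=json'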
Querying crt.sh helped me identify a dev service I was supposed to take down but had forgotten about. Nice alternative use case :D
https://certstream.calidog.io/
https://crt.sh/
These aren't attackers - they're usually services like urlscan.io and others who crawl the web for malware by monitoring CT logs.
The thread is specifically talking about logs of attacks
The message they responded to was not, it was:
"I've had several instances of a new server being up on a new IP address for over a week, with only a few random probing hits in access logs, but then, maybe an hour after I got a certificate from Let's Encrypt, it suddenly started getting hundreds of hits"
What I wrote was ".. hundreds of hits just like those listed in the article ...", and the article listed attacks.
I host so many services, but I gave up totally on exposing them to the internet. Modern VPNs are just too good. It lets me sleep at night. Some of my stuff is, for example, photo hosting and backup. Just nope all the way.
If you're the only one accessing those services, then why use a VPN instead of binding those services to localhost on the server and forwarding that localhost port to your client machine's localhost via SSH?
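i.e. something along the lines of:

    # make the server's local-only service (assumed here to be on port 8080)
    # reachable on your own machine's localhost:8080
    ssh -N -L 8080:127.0.0.1:8080 user@yourserver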
I didn’t understand any of that, sorry. Haha
A VPN lets me access my stuff from my phone while out of the house, for example.
This might clear up a few things then: https://web.archive.org/web/20220522192804/https://www.dbsys...
Original article which doesn't contain the first graphic: https://www.xmodulo.com/access-linux-server-behind-nat-rever...
What? Ideally... before? Seriously? It is 2024... and this was true even decades ago; it's absolutely mandatory.
(Still remembering that dev who discovered file sharing on his exposed Mongo instance (yes, that!! :D), with no password, only hours after putting it up... "but how could they know the host, it is secret!!" :D)
Fun anecdote - I wrote a new load balancer for our services to direct traffic to an ECS cluster. The services are exposed by domain name (e.g. api-tools.mycompany.com), and the load balancer was designed to produce certificates via letsencrypt for any host that came in.
I had planned to make the move over the next day, but I moved a single service over to make sure everything was working. The next day, as I'm testing moving traffic over, I find that I've been rate limited by Let's Encrypt for a week. I check the database and I had provisioned dozens of certificates for vpn.api-tools.mycompany.com, phpmyadmin.api-tools.mycompany.com, down the list of anything you can think of.
There was no security issue, but it was very annoying that I had to delay the rollout by a week and add a whitelist feature.
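The whitelist check itself ended up being conceptually something like this (simplified sketch, not the actual code):

    # decide whether to provision a cert for an incoming Host/SNI value
    ALLOWED_HOSTS = {
        "api-tools.mycompany.com",   # hypothetical examples
        "reports.mycompany.com",
    }

    def should_issue_cert(hostname: str) -> bool:
        # only ask Let's Encrypt for hosts we actually serve; everything
        # else is rejected before it ever reaches the ACME client
        return hostname.lower().rstrip(".") in ALLOWED_HOSTS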
Same! As soon as a new cert is registered for a new subdomain, I get a small burst of traffic. It threw me off at first; I assumed I had some tool running that was scanning it.
I work as a security engineer and, yes, the CT logs are extremely useful not only for identifying new targets the moment you get a certificate but also for identifying patterns in naming your infra (e.g., dev-* etc.).
The CIS Hardening Guides and the relevant scripts are a good starting point for hardening your servers.
I'm still getting crawlers looking for an old printer I got a Let's Encrypt certificate for.