Having written an SSH server that is used in a few larger places, I find the prospect of these features being enabled on a per-address basis by default in the future troubling. First, with IPv4 this has the potential to increasingly penalize innocent bystanders as CGNs are deployed. Worst case, it gives bad actors the option to lock the original owner out of their own server if they have a botnet host in the same network. With IPv6, on the other hand, it is trivially easy to get a new IP, so the protection method described here will be completely ineffective.
From my experiments with several honeypots over a longer period of time, most of these attacks are dumb dictionary attacks. Unless you are using default everything (user, port, password), these attacks don't represent a significant threat and more targeted attacks won't be caught by this. (Please use SSH keys.)
I have seen experienced sysadmins create the test user with the password of "test" on a live server on port 22 because they were having an "autopilot moment". It got hacked within 20 minutes of going online, and these mechanisms wouldn't have saved it: the attacker got in on the second or third try.
If you want to read about unsolved problems around SSH that should be addressed, Tatu Ylonen (the inventor of SSH) wrote a paper about it in 2019: https://helda.helsinki.fi/server/api/core/bitstreams/471f0ff...
So instead of looking for ways to make life harder for the bad guys, like the author of these new options is doing, we do nothing?
Your concerns are addressed in TFA:
It's easy for the original owner to add the IP blocks of the three or four ISPs he'd legitimately be connecting from to that exemption list.
I don't buy your argument, nor all the variations on the same theme: "There's a minuscule risk of X, so we do absolutely nothing but say there's nothing to be done, and we let the bad guys roam free!"
There's nothing more depressing than that approach.
Kudos to the author of that new functionality: there may be issues, it may not be the panacea, but at least he's trying.
It would be frustrating to be denied access to your own servers because you are traveling and are on a bad IP for some reason.
Picture the number of captchas you already get from a legitimate Chrome instance, but instead of annoying, bypassable captchas, you are just locked out.
I have fail2ban configured on one of my servers for port 22 (a hidden port does not have any such protections on it) and I regularly lock out my remote address because I fat-finger the password. I would not suggest doing this for a management interface unless you have secondary access.
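If you do keep fail2ban on the management port, one thing that helps with the fat-finger problem is exempting the addresses you normally connect from; a minimal jail.local sketch (203.0.113.5 is a placeholder for your own static IP):

```
[sshd]
enabled  = true
maxretry = 5
bantime  = 10m
# never ban these sources: loopback plus your own known address/range
ignoreip = 127.0.0.1/8 ::1 203.0.113.5
```

That obviously doesn't help when you're connecting from somewhere unexpected, which is exactly the scenario above.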
Why would you use password based auth instead of priv/pub key auth? You'd avoid this and many other security risks.
what do you do if you get mugged and your laptop and phone and keys are taken or stolen from you? or lost?
After this party, this guy needed help: he had lost his wallet and his phone. His sister had also gone to the party and given him a ride there, but she had left. He didn't know her number to call her, and she'd locked down her socials, so we couldn't use my phone to contact her. We were lucky that his socials weren't super locked down and managed to find someone that way, but priv keys are only good so long as you have them.
I use a yubikey. You need a password to use the key. It has its own brute-force management that is far less punishing than a remote SSH server deciding not to talk to me anymore.
but what do you do if you don't have the key? unless it's implanted (which, https://dangerousthings.com/), I don't know that I won't lose it somehow.
My keyboard has a built-in USB hub and ports. The key lives there. The keyboard travels with me. It's hard to lose.
I have a backup key in storage. I have escrow mechanisms. These would be inconvenient, but, it's been 40 years since I've lost any keys or my wallet, so I feel pretty good about my odds.
Which is what the game here is. The odds. Famously humans do poorly when it comes to this.
My ssh keys are encrypted. They need a password, or they are worthless.
Sure, I can mistype that password as well, but doing so has no effect on the remote system, as the ssh client already fails locally.
You can and you should back up your keys. There isn't a 100% safe, secure and easy method that shields you from everything that can possibly happen, but there are enough safe, secure and easy ones to cover the vast majority of cases other than a sheer catastrophe, which is good enough not to use outdated and insecure mechanisms like passwords on a network-exposed service.
What's the alternative? If you get onto a bad IP today, you're essentially blocked from the entire Internet. Combined with geolocks and national firewalls, we're already well past the point where you need a home VPN if you want reliable connectivity while traveling abroad.
What happens when your home VPN is inaccessible from your crappy network connection? There are plenty of badly administered networks that block arbitrary VPN/UDP traffic but not ssh. Common case is the admin starts with default deny and creates exceptions for HTTP and whatever they use themselves, which includes ssh but not necessarily whatever VPN you use.
Same as when a crappy network blocks SSH, you get better internet. Or if SSH is allowed, use a VPN over TCP port 22.
Better internet isn't always available. A VPN on the ssh port isn't going to do you much good if someone sharing your IP address is doing brute force attempts against the ssh port on every IP address and your system uses that as a signal to block the IP address.
Unless you're only blocking connection attempts to ssh and not the VPN, but what good is that? There is no reason to expect the VPN to be any more secure than OpenSSH.
If you're using an IP address that's being used to brute force the entire Internet, it's likely that lots of websites are blocking it. If that doesn't matter to you and all you need is to get into a particular SSH server, and also the network blocks VPNs, you're still fine if the SSH is on port 9022 and VPN is port 22. If it's not your own SSH server and it's port 22, then you're still fine if your own VPN is port 22 (on a different host).
Hacking into the VPN doesn't get the attacker into the SSH server too, so there's defense in depth, if your concern is that sshd might have a vulnerability that can be exploited with repeated attempts. If your concern is that your keys might be stolen, this feature doesn't make sense to begin with.
Websites usually don't care about ssh brute force attempts because they don't listen on ssh. But the issue isn't websites anyway. The problem is that your server is blocking you, regardless of what websites are doing.
Then you have a VPN exposed to the internet in addition to SSH, and if you're not rate limiting connections to that then you should be just as concerned that the VPN "might have a vulnerability that can be exploited with repeated attempts." Whereas if the SSH server is only accessible via the VPN then having the SSH server rate limiting anything is only going to give you the opportunity to lock yourself out through fat fingering or a misconfigured script, since nobody else can access it.
Also notably, the most sensible way to run a VPN over TCP port 22 is generally to use the VPN which is built into OpenSSH. But now this change would have you getting locked out of the VPN too.
It would also be very rare. The penalties described here start at 30s, I don't know the max, but presumably whatever is issuing the bad behavior from that IP range will give up at some point when the sshd stops responding rather than continuing to brute force at 1 attempt per some amount of hours.
And that's still assuming you end up in a range that is actively attacking your sshd. It's definitely possible but really doesn't seem like a bad tradeoff
lol. depending on where you travel, the whole continent is already blanket banned anyway. but that only happens because nobody travels there, so it is never a problem.
The thing is, we have tools to implement this without changing sshd's behavior. `fail2ban` et al. exist for a reason.
Sure, but if I only used fail2ban for sshd, why should I install a second piece of software to handle a problem that the software I actually want to run has built in?
Turning every piece of software into a kitchen sink increases its security exposure in other ways.
Normally I would agree with you, but fail2ban is a Python routine which forks processes based on outcomes from log parsing via regex. There are so many ways that can go wrong… and it has gone wrong, in one or two experiences I've had in the past.
This is exactly the sort of thing that should be part of the server. In exactly the same way that some protocol clients have waits between retries to avoid artificial rate limiting from the server.
still better to try improving fail2ban than to add (yet another) kitchen sink to sshd
fail2ban has been around for so long, people get impatient at some point
Impatient about what exactly? fail2ban is battle tested for well over a decade. It is also an active project with regular updates: https://github.com/fail2ban/fail2ban/commits/master/
There are a lot of ways a builtin facility of one service can go wrong, especially if it ends up being active by default on a distro.
`fail2ban` is common, well known, battle-tested. And it's also [not without alternatives][1].
[1]: https://alternativeto.net/software/fail2ban/
fail2ban has a lot of moving parts, I don't think that's necessarily more secure.
I would trust the OpenSSH developers to do a better job with the much simpler requirements associated with handling it within their own software.
a system where sshd outputs to a log file, then something else picks it up and pokes at iptables, seems much more hacky than having sshd support that natively, imo. sshd is already tracking connection status; having it set the status to deny seems like less of a kitchen sink and more just about security. The s in ssh is for secure, and this is just improving that.
https://alanj.medium.com/do-one-thing-and-do-it-well-a-unix-...
Yeah, they exist because nothing better was available at that time.
It doesn’t hurt to have this functionality in openssh too. If you still need to use fail2ban, denyhosts, or whatever, then don't enable the new openssh behaviour. It’s really that simple.
How is baking this into sshd "better"?
UNIX Philosophy: "Do one thing, and do it well". An encrypted remote shell protocol server should not be responsible for fending off attackers. That's the job of IDS and IPS daemons.
Password-based ssh is an anachronism anyway. For an internet-facing server, people should REALLY use ssh keys instead (and preferably use a non-standard port, and maybe even port knocking).
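For reference, the server side of that is only a couple of sshd_config lines; a sketch (the port number is an arbitrary example):

```
# /etc/ssh/sshd_config
Port 2222                         # non-standard port: cuts log noise, not a security boundary
PasswordAuthentication no         # keys only
KbdInteractiveAuthentication no   # also covers PAM-driven password prompts
PermitRootLogin prohibit-password
```

Port knocking needs an external helper that pokes at the firewall; sshd itself doesn't do it.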
The issue is that the log parsing things like fail2ban work asynchronously. It is probably of only theoretical importance, but on the other hand the meaningful threat actors are usually surprisingly fast.
Yes, because as soon as the security clowns find out about these features, we have to start turning it on to check their clown boxes.
There is nothing wrong with this approach if enabled as an informed decision. It's the part where they want to enable this by default I have a problem with.
Things that could be done include making password auth harder to configure, to encourage key use instead, or investing time into making SSH CAs less of a pain to use. (See the linked paper, it's not a long read.)
Random brute force attempts against SSH are already a 100% solved problem, so doing nothing beyond maintaining the status quo seems pretty reasonable IMO.
Setting this up by default (as is being proposed) would definitely break a lot of existing use cases. The only risk that is minuscule here is the risk from not making this change.
I don't see any particular reason to applaud making software worse just because someone is "trying".
OpenSSH already seems to take that into account by allowing you to penalize not just a single IP, but also an entire subnet. Enable that to penalize an entire /64 for IPv6, and you're in pretty much the same scenario as "single IPv4 address".
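If I read the change right, the grouping reuses the existing netblock directive, so a sketch would be something like this (I'm going from memory; double-check sshd_config(5) on 9.8+ for the penalty sub-options themselves, which live under PerSourcePenalties):

```
# aggregate per-source limits and penalties by /32 for IPv4 and /64 for IPv6
PerSourceNetBlockSize 32:64
```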
I think there's some limited value in it. It could be a neat alternative to allowlisting your own IP which doesn't completely block you from accessing it from other locations. Block larger subnets at once if you don't care about access from residential connections, and it would act as a very basic filter to make annoying attacks stop. Not providing any real security, but at least you're not spending any CPU cycles on them.
On the other hand, I can definitely see CGNAT resulting in accidental or intentional lockouts for the real owner. Enabling it by default on all installations probably isn't the best choice.
FYI it's pretty common to get a /48 or a /56 from a data center, or /60 from Comcast.
Maybe the only equivalent is to penalize a /32, since there are roughly as many of those as there are ipv4 addresses.
That may be true mathematically, but there are no guarantees that a small provider won't end up having only a single /64, which would likely be the default unit of range-based blocking. Yes, it "shouldn't" happen.
Right. It's analogous to how blocking an ipv4 is unfair to smaller providers using cgnat. But if someone wants to connect to your server, you might want them to have skin in the game.
The provider doesn't care, the owner of the server who needs to log in from their home internet at 2AM in an emergency cares. Bad actors have access to botnets, the server admin doesn't.
Unfortunately the only answer is "pay to play." If you're a server admin needing emergency access, you or your employer should pay for an ISP that isn't using cgnat (and has reliable connectivity). Same as how you probably have a real phone sim instead of a cheap voip number that's banned in tons of places.
Or better yet, a corp VPN with good security practices so you don't need this fail2ban-type setup. It's also weird to connect from home using password-based SSH in the first place.
The better answer is to just ignore dull password guessing attempts which will never get in because you're using strong passwords or public key authentication (right?).
Sometimes it's not a matter of price. If you're traveling your only option for a network connection could be whatever dreck the hotel deigns to provide.
Even with strong passwords, maybe you don't want someone attempting to authenticate so quickly. Could be DoS or trying to exploit sshd. If you're traveling, cellular and VPN are both options. VPN could have a similar auth dilemma, but there's defense in depth.
Also it's unlikely that your hotel's IP address is spamming the particular SSH server you need to connect to.
DoS in this context is generally pretty boring. Your CPU would end up at 100% and the service would be slower to respond, but it still would. Also, responding to a DoS attempt by blocking access is a DoS vector for anyone who can share or spoof your IP address, so that seems like a bad idea.
If someone is trying to exploit sshd, they'll typically do it on the first attempt and this does nothing.
It is when the hotel is using the cheapest available ISP with CGNAT.
Good point on the DoS. Exploit on first attempt, maybe, I wouldn't count on that. Can't say how likely a timing exploit is.
If the hotel is using such a dirty shared IP that it's also being used to spam random SSH servers, that connection is probably impractical for several other reasons, e.g. flagged on Cloudflare. At that point I'd go straight to a VPN or hotspot.
Novel timing attacks like that are pretty unlikely, basically someone with a 0-day, because otherwise they quickly get patched. If the adversary is someone with access to 0-day vulnerabilities, you're pretty screwed in general and it isn't worth a lot of inconvenience to try to prevent something inevitable.
And there is no guarantee you can use another network connection. Hotspots only work if there's coverage.
Plus, "just use a hotspot or a VPN" assumes you were expecting the problem. This change is going to catch a lot of people out because the first time they realize it exists is during the emergency when they try to remote in.
That may not be an option at all, especially with working from home or while traveling.
For example, at my home all the ISPs I have available use cgnat.
You cannot reasonably build an ISP network with a single /64. RIPE assigns /32s to LIRs and LIRs are supposed to assign /48s downstream (which is somewhat wasteful for most kinds of mass-market customers, so you get things like /56s and /60s).
What if it uses NAT v6 :D
i cannot tell if facetious or business genius.
That’s what Azure does. They also only allow a maximum of 16(!) IPv6 addresses per Host because of that.
Well seriously, I remember AT&T cellular giving me an ipv6 behind a cgnat (and also an ipv4). Don't quote me on that though.
As I said, "should". In some places there will be enough people in the chain that won't be bothered to go to the LIR directly. Think small rural ISPs in small countries.
Well, allocating anything smaller than a /64 to a customer breaks SLAAC, so even a really small provider wouldn't do that, as it would completely bork their customers' networks. Yes, DHCPv6 technically exists as an alternative to SLAAC, but some operating systems (most notably Android) don't support it at all.
There are plenty of ISPs that assign /64s and even smaller subnets to their customers. There are even ISPs that assign a single /128, IPv4 style.
We should not bend over backwards for people not following the standard.
Build tools that follow the standard/best practices by default, maybe build in an exception list/mechanism.
IPv6 space is plentiful and easy to obtain, people who are allocating it incorrectly should feel the pain of that decision.
I can't imagine why any ISP would do such absurd things when in my experience you're given sufficient resources on your first allocation. My small ISP received a /36 of IPv6 space, I couldn't imagine giving less than a /64 to a customer.
I can never remember whether /x means "the last x bits are 0" or "the first x bits are 1"
People should write 80/48 or 48/80 to be clear
It's not about how many bits are 1 - it's about how many bits are important. And the first bits are always most important. So it's the first x bits.
If you have a /48 then 48 bits are used to determine the address is yours. Any address which matches in the first 48 bits is yours. If you have a /64, any address which matches in the first 64 bits is yours.
It's about how many bits are 1, in the subnet mask.
/x is almost always the number of network bits (so the first half). There are some Cisco IOS commands that are the opposite but those are by far the minority.
99/100 it means the first bits.
The clarity is already implied in your preferred example:
- "80/" would mean "80 bits before"
- "/48" would mean "48 bits after"
related: https://adam-p.ca/blog/2022/02/ipv6-rate-limiting/
IPv6 has the potential to be even worse. You could be knocking an entire provider offline. At any rate, this behavior should not become default.
And even with IPv4, botnets are a common attack source, so hitting from many endpoints isn't that hard.
I'd say "well, it might catch the lowest effort attacks", but when SSH keys exist and solve many more problems in a much better way, it really does feel pointless.
Maybe in an era where USB keys weren't so trivial, I'd buy the argument of "what if I need to access from another machine", but if you really worry about that, put your (password protected) keys on a USB stick and shove it in your wallet or on your keyring or whatever. (Are there security concerns there? Of course, but no more than typing your password in on some random machine.)
You can use SSH certificate authorities (not x509) with OpenSSH to authorize a new key without needing to deploy a new key on the server. Also, Yubikeys are useful for this.
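For anyone who hasn't used it, the flow is roughly this (a sketch; names, paths and validity period are made up):

```
# one-time: create a CA key and trust it on every server
ssh-keygen -t ed25519 -f ssh_user_ca
#   then in sshd_config on each host:
#   TrustedUserCAKeys /etc/ssh/ssh_user_ca.pub

# per user: sign their existing public key instead of deploying it to servers
ssh-keygen -s ssh_user_ca -I alice@example.com -n alice -V +8h id_ed25519.pub
#   emits id_ed25519-cert.pub, valid for principal "alice" for 8 hours
```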
Just a warning for people who are planning on doing this: it works amazingly well but if you're using it in a shared environment where you may end up wanting to revoke a key (e.g. terminating an employee) the key revocation problem can be a hassle. In one environment I worked in we solved it by issuing short-term pseudo-ephemeral keys (e.g. someone could get a prod key for an hour) and side-stepped the problem.
The problem is that you can issue keys without having to deploy them to a fleet of servers (you sign the user's pubkey using your SSH CA key), but you have no way of revoking them without pushing an updated revocation list to the whole fleet. We did have a few long-term keys that were issued, generally for build machines and dev environments, and had a procedure in place to push CRLs if necessary, but luckily we didn't ever end up in a situation where we had to use it.
Setting up regular publishing of CRLs is just part of setting up a CA. Is there some extra complexity with ssh here, or are you (rightfully) just complaining about what a mess CRLs are?
Fun fact: it was just a few months ago that Heimdal Kerberos started respecting CRLs at all, that was a crazy bug to discover
The hard part is making sure every one of your servers got the CRL update. Since last I checked OpenSSH doesn't have a mechanism to remotely check CRLs (like OCSP), nor does SSH have anything akin to OCSP stapling, it's a little bit of a footgun waiting to happen.
Oh wow... That's pretty nuts. I guess the reason is to make it harder for people to lock themselves out of all their servers if OCSP or whatever is being used to distribute the CRL is down.
Not necessarily. There is a fork of OpenSSH that supports x509, but I remember reading somewhere that it's too complex and that's why it doesn't make it into mainline.
There's extra complexity with ssh, it has its own file of revoked keys in RevokedKeys and you'll have to update that everywhere.
see https://man.openbsd.org/ssh-keygen.1#KEY_REVOCATION_LISTS for more info
And unlike some other sshd directives that have a 'Command' alternative to specify a command to run instead of reading a file, this one doesn't, so you can't just DIY distribution by having it curl a shared revocation list.
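For completeness, the KRL workflow looks roughly like this (a sketch; getting the file onto every host is the part that's on you):

```
# create a key revocation list from one or more public keys or certificates
ssh-keygen -k -f /etc/ssh/revoked_keys compromised_key.pub
# later, add more entries to the existing KRL with -u
ssh-keygen -k -u -f /etc/ssh/revoked_keys another_key.pub

# sshd_config:
#   RevokedKeys /etc/ssh/revoked_keys
```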
You might want to check out my project OpenPubkey[0] which uses OIDC ID Tokens inside SSH certs. For instance, this lets you SSH with your gmail account. The ID token in the SSH certificate expires after a few hours, which makes the SSH certificate expire. You can also do something similar with SSH3 [1].
[0] OpenPubkey - https://github.com/openpubkey/openpubkey/
[1] SSH3 - https://github.com/francoismichel/ssh3
Why not just make the certificate short-lived instead of having a certificate with shorter-lived claims inside?
You can definitely do that, but it has the downside that the certificate automatically expires when you hit the set time and then you have to reauth again. With OpenPubkey you can be much more flexible. The certificate expires at a set time, but you can use your OIDC refresh token to extend the certificate expiration.
With a fixed expiration, if you choose a 2 hour expiry, the user has to reauth every 2 hours each time they start a new SSH session.
With a refreshable expiration, if you choose a 2 hour expiry, the user can refresh the certificate if they are still logged in.
This lets you set shorter expiry times because the refresh token can be used in the background.
With normal keys you have a similar issue of removing the key from all servers. If you can do this, you can also deploy a revocation list.
Easier to test if Jenkins can SSH in than to test a former employee cannot. Especially if you don't have the unencrypted private key.
Monkeysphere lets you do this with tsigs on gpg keys. I find the web of trust marginally less painful than X509.
I like being able to log into my server from anywhere without having to scrounge for my key file, so I end up enabling both methods. Never quite saw how a password you save on your disk and call a key is so much more secure than another password.
I’ve seen lots of passwords accidentally typed into an IRC window. Never seen that happen with an SSH key.
I heard that if you type your password in HN it will automatically get replaced by all stars.
My password is **********
See: it works! Try it!
So if I type hunter2 you see ****?
Putting aside everything else. How long is your password vs how long is your key?
It's this, plus the potential that you've reused your password, or that it's been keylogged.
My home IP doesn’t change much so I just open ssh port only to my own IP. If I travel I’ll add another IP if I need to ssh in. I don’t get locked out because I use VPS or cloud provider firewall that can be changed through console after auth/MFA. This way SSH is never exposed to the wider internet.
Another option is putting SSH on an IP on the wireguard only subnet.
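Something like this, assuming the server's WireGuard interface (wg0) has the address 10.8.0.1:

```
# /etc/ssh/sshd_config: only bind to the tunnel address
ListenAddress 10.8.0.1
```

One caveat: sshd then has to start after wg0 is up, or it will fail to bind; ordering the services accordingly avoids that surprise.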
It's more secure because it's resistant to MITM attacks or a compromised host: the password gets sent to the other end, the private key doesn't.
Use TOTP (keyboard-interactive) and password away!
A few more things:
An SSH key can be freely reused to log in to multiple SSH servers without compromise. Passwords should never be reused between multiple servers, because the other end could log it.
An SSH key can be stored in an agent, which provides some minor security benefits, and more importantly, adds a whole lot of convenience.
An SSH key can be tied to a Yubikey out of the box, providing strong 2FA.
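Concretely, the security-key case is one command on OpenSSH 8.2+, and the TOTP suggestion upthread is a server-side setting layered on top of keys (a sketch; the PAM/TOTP module configuration itself isn't shown):

```
# client: generate a key whose private half lives on the hardware token
ssh-keygen -t ed25519-sk

# server (sshd_config): require a key *and* an interactive second factor
#   AuthenticationMethods publickey,keyboard-interactive
#   KbdInteractiveAuthentication yes
```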
This is definitely a common fallacy. While passwords and keys function similarly via the SSH protocol, there are two key differences: 1) your password is likely to have much lower entropy as a cryptographic secret (i.e. you're shooting for 128 bits of entropy, which takes a pretty gnarly-sized password to replicate), and 2) SSH keys introduce a second layer of trust, by virtue of you needing to add your key ID to the system before you even begin the authentication challenge.
Password authentication, which only uses your password to establish you are authentically you, does not establish the same level of cryptographic trust, and also does not allow the SSH server to bail out as quickly, instead needing to perform more crypto operations to discover that an unauthorized authentication attempt is being made.
To your point, you are storing the secret on your filesystem, and you should treat it accordingly. This is why folks generally advocate for the use of SSH Agents with password or other systems protecting your SSH key from being simply lifted. Even with requiring a password to unlock your key though, there's a pretty significant difference between key based and password based auth.
Wait, how often do you connect to a ssh remote that isn't controlled by you or say, your workplace? Genuinely asking, I have not seen a use case for something like that in recent years so I'm curious!
GitHub is an example of a service that would want to disable this option. They get lots of legit ssh connections from all over the world including people who may be behind large NATs.
I somehow didn't think about that, even if I used that feature just a few hours ago! Now I'm curious about how GitHub handles the ssh infra at that scale...
GitHub, as I've read[1], uses a different implementation of SSH which is tailored for their use case.
The benefit is that it is probably much lighter-weight than OpenSSH (which supports a lot of different things just because it is so general[3]) and can integrate more easily with their services, while also not having to spin up a shell and deal with the potential security risks that entails.
And even if somehow a major flaw is found in OpenSSH, GitHub (at least their public servers) wouldn't be affected in this case since there's no shell to escape to.
[1]: I read it on HN somewhere that I don't remember now, however you can kinda confirm this yourself if you open up a raw TCP connection to github.com, where the connection string says
SSH-2.0-babeld-9102804c
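One way to check (a sketch; head just grabs the first line the server sends):

```
$ nc github.com 22 | head -n1
SSH-2.0-babeld-...
```

The suffix after "babeld-" changes over time, presumably a build identifier.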
According to an HN user[2], they were using libssh in 2015.
[2]: https://news.ycombinator.com/item?id=39978089
[3]: This isn't a value judgement on OpenSSH, I think it is downright amazing. However, GitHub has a much more narrow and specific use case, especially for an intentionally public SSH server.
Even the amount of SSH authorized_keys they would need to process is a little mind boggling, they probably have some super custom stuff.
I sometimes use this: https://pico.sh/
Perhaps at a university where all students in the same class need to SSH to the same place, possibly from the same set of lab machines. A poorly configured sshd could allow some students to DoS other students.
This might be similar to the workplace scenario that you have in mind, but some students are more bold in trying dodgy things with their class accounts, because they know they probably won't get in big trouble at an university.
One of my clients has a setup for their clients - some of which connect from arbitrary locations, and others of which need to be able to do scripted, automated uploads - to connect via sftp to upload files.
Nobody is ever getting in, because they require ed25519 keys, but it is pounded nonstop all day long with brute force attempts. It wastes log space and IDS resources.
This is a case that could benefit from something like the new OpenSSH feature (which seems less hinky than fail2ban).
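For anyone building something similar, the server side of "keys only, sftp only" is roughly this (a sketch; the group name and chroot path are made up):

```
# sshd_config
PasswordAuthentication no
# insist on ed25519 keys (older releases call this PubkeyAcceptedKeyTypes)
PubkeyAcceptedAlgorithms ssh-ed25519

Match Group sftp-clients
    ForceCommand internal-sftp
    ChrootDirectory /srv/sftp/%u   # must be root-owned and not user-writable
    AllowTcpForwarding no
    X11Forwarding no
```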
Another common case would be university students, so long as it's not applied to campus and local ISP IPs.
Git over SSH
I had a similar experience with a Postgres database once. It only mirrored some publicly available statistical data, and it was still in early development, so I didn't give security of the database any attention. My intention was anyway to only expose it to localhost.
Then I started noticing that the database was randomly "getting stuck" on the test system. This went on a few times until I noticed that I had exposed the database to the internet with postgres/postgres as credentials.
It might have been even some "friendly" attackers that changed the password when they were able to log in, to protect the server, maybe even the hosting provider. I should totally try that again once and observe what commands the attackers actually run. A bad actor probably wouldn't change the password, to stay unnoticed.
How did you accidentally expose it to the Internet, was your host DMZ?
I saw a Postgres story like this one. Badly managed AWS org with way too wide permissions, a data scientist sort of person set it up and promptly reconfigured the security group to be open to the entire internet because they needed to access it from home. And this was a rather large IT company.
Yeah on some cloud provider, the virtual networks can be all too confusing. But this story sounded like a home machine.
DMZ setting on a router makes this pretty easy.
I've pointed the DMZ at an IP that was assigned by DHCP. Later, when the host changed, I noticed traffic from the internet getting blocked on the new host and realized my mistake.
docker compose; I accidentally committed the port mappings I set up during local development.
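For anyone with the same foot-gun lying around: binding the published port to loopback in the compose file avoids it (a sketch; service name and image are examples):

```
services:
  db:
    image: postgres:16
    ports:
      - "127.0.0.1:5432:5432"   # reachable only from the host itself
      # - "5432:5432"           # this form publishes on all interfaces
```

Worth remembering that when Docker publishes a port on all interfaces it manages its own iptables rules, so a host firewall like ufw won't necessarily save you.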
Is it possible to create some kind of reverse proxy for SSH which blocks password-based authentication, and furthermore only allows authentication by a known list of public keys?
The idea would be SSH to the reverse proxy, if you authenticate with an authorised public key (or certificate or whatever) it forwards your connection to the backend SSH server; all attempts to authenticate with a password are automatically rejected and never reach the backend.
In some ways what I'm describing here is a "bastion" or "jumphost", but in implementations of that idea I've seen, you SSH to the bastion/jumphost, get a shell, and then SSH again to the backend SSH – whereas I am talking about a proxy which automatically connects to the backend SSH using the same credentials once you have authenticated to it.
Furthermore, using a generic Linux box as a bastion/jumphost, you run the same risk that someone might create a weak password account–you can disable password authentication in the sshd config but what if someone turns it on? With this "intercepting proxy" idea, the proxy wouldn't even have any code to support password authentication, so you couldn't ever turn it on.
sshd_config requires root to modify, so you've got bigger problems than weak passwords at this point.
It is a lot more likely for some random admin to inappropriately change a single boolean config setting as root, than for them to replace an entire software package which (by design) doesn't have code for a certain feature with one that does.
Check out the ProxyJump and ProxyCommand option in ssh config. They let you skip the intermediate shell.
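A minimal ~/.ssh/config sketch (host names and user are made up):

```
Host backend
    HostName 10.0.0.5
    User admin
    ProxyJump bastion.example.com
```

Then `ssh backend` tunnels through the bastion without ever giving you a shell on it; the one-off equivalent is `ssh -J bastion.example.com admin@10.0.0.5`.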
I'd love to penalize any attempt at password auth. Not the IP addresses, just if you're dumb enough to try sending a password to my ssh server, you're going to wait a good long time for the failure response.
Actually I might even want to let them into a "shell" that really screws with them, but that's far outside of ssh's scope.
I certainly don't want to expose any more surface area than necessary to potential exploits by an attacker who hasn't authenticated successfully.
Yeah you're right, the screw-with-them-shell would have to be strictly a honeypot thing, with a custom-compiled ssh and all the usual guard rails around a honeypot. The password tarpit could stay, though script kiddie tools probably scale well enough now that it's not costing them much of anything.
According to the article, you can exempt IPs from being blocked. So it won’t impact those coming from known IPs (statics, jump hosts, etc).
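If I'm reading the new sshd_config options right, that's the exemption directive, something like this (addresses are placeholders; check sshd_config(5) for the exact list syntax):

```
# OpenSSH 9.8+: never penalise these sources
PerSourcePenaltyExemptList 203.0.113.0/24,2001:db8:beef::/48
```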
most places barely even have the essential monthly email with essential services' ip in case of dns outage.
nobody cares about ips.
Serious question: why doesn't OpenSSH declare, with about a year's notice ahead of time, the intent to cut a new major release that drops support for password-based authentication?
There are very legit reasons to use passwords, for example in conjunction with a second factor. Authentication methods can also be chained.
I’m sure this will be fixed by just telling everyone to disable IPv6, par for the course.
The alternative to ipv6 is ipv4 over cgnat, which arguably has the same problem.
pam_pwnd[1], testing passwords against the Pwned Passwords database, is a(n unfortunately abandoned but credibly feature complete) thing. (It uses the HTTP service, though, not a local dump.)
[1] https://github.com/skx/pam_pwnd
meh. enabling any of the (fully local) complexity rules has pretty much the same practical effect as checking against a leak.
if the password has decent entropy, it won't be in the top 1000 of the leaks, so it won't be used in blind brute force like this.
Yes, I agree. This seems a naive fix.
Just silencing all the failed attempts may be better. So much noise in these logs anyway.
Fail2ban can help with that
Interesting paper from Tatu Ylonen. He seems quick to throw out the idea of certificates only because there is no hardened CA available today. Wouldn’t it be better to solve that problem, rather than going in circles and making up new novel ways of using keys? Call it what you want; reduced to their bare essentials, in the end you either have delegated trust through a CA or a key administration problem. Whichever path you choose, it must be backed by a robust and widely adopted implementation to be successful.
As far as OpenSSH is concerned, I believe the main problem is that there is no centralized revocation functionality. You have to distribute your revocation lists via an external mechanism and ensure that all your servers are up to date. There is no built-in mechanism like OCSP, or better yet OCSP stapling, in SSH. You could use Kerberos, but it's a royal pain to set up, and OpenSSH is pretty much the de facto standard when it comes to SSH servers.
Agreed. In addition to the problems you mentioned, this could also cause people to drop usage of SSH keys and go with a password instead, since it's now a "protected" authentication vector.
SSH is not HTTPS, a resource meant for the everyday consumer. If you know that you're behind a CGN, as a developer, an admin or a tool, you can solve this by using IPv6 or a VPN.
Which is kind of good? Should you access your own server if you are compromised and don't know it? Plus you get the benefit of noticing that you have a problem in your intranet.
I understand the POV that accessing it via CGN can lead to undesirable effects, but the benefit is worth it.
Then again, what benefit does it offer over fail2ban?
It's not quite fair, but if you want the best service, you have to pay for your own ipv4 or, in theory, a larger ipv6 block. The only alternative is for the ISP deploying the CGN to penalize users for suspicious behavior. As a classic example of an IP-based abuse fighter, Wikipedia banned T-Mobile USA's entire ipv6 range: https://news.ycombinator.com/item?id=32038215 where someone said they will typically block a /64, and Wikipedia says they'll block up to a /19.
Unfortunately there's no other way. Security always goes back to economics; you must make the abuse cost more than it's worth. Phone-based 2FA is also an anti-spam measure, cause clean phone numbers cost $. When trying to purchase sketchy proxies or VPNs, it basically costs more to have a cleaner ip.
Just throw away that document and switch to kerberos.
All the problems in this document are solved immediately.