
OpenSSH introduces options to penalize undesirable behavior

janosdebugs
129 replies
1d

Having written an SSH server that is used in a few larger places, I find the perspective of enabling these features on a per-address basis by default in the future troubling. First, with IPv4 this will have the potential to increasingly penalize innocent bystanders as CGNs are deployed. Worst case, this will give bad actors the option to lock the original owner out of their own server if they have a botnet host in the same network. With IPv6 on the other hand, it is trivially easy to get a new IP, so the protection method described here will be completely ineffective.

From my experiments with several honeypots over a long period of time, most of these attacks are dumb dictionary attacks. Unless you are using default everything (user, port, password), these attacks don't represent a significant threat, and more targeted attacks won't be caught by this. (Please use SSH keys.)

I have seen experienced sysadmins create the test user with the password of "test" on a live server on port 22 because they were having an "autopilot moment". It got hacked within 20 minutes of going online and these mechanisms wouldn't have saved it; the attacker got in on the second or third try.

If you want to read about unsolved problems around SSH that should be addressed, Tatu Ylonen (the inventor of SSH) wrote a paper about it in 2019: https://helda.helsinki.fi/server/api/core/bitstreams/471f0ff...

TacticalCoder
34 replies
23h28m

First, with IPv4 this will have the potential to increasingly penalize innocent bystanders... Worst case, this will give bad actors the option to lock the original owner out of their own server if they have a botnet host in the same network.

So instead of looking, like the author of these new options, for ways to make life for the bad guys harder, we do nothing?

Your concerns are addressed in TFA:

... and to shield specific clients from penalty

A PerSourcePenaltyExemptList option allows certain address ranges to be exempt from all penalties.

It's easy for the original owner to add the IP blocks of the three or four ISPs he'd legitimately be connecting from to that exemption list.
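
Something like this in sshd_config should do it (a sketch going by the option names in the release notes; the ranges are placeholders for your ISPs' blocks):

  PerSourcePenalties authfail:5 max:600
  PerSourcePenaltyExemptList 203.0.113.0/24,2001:db8:1000::/48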

I don't buy your argument, nor all the variations on the same theme: "There's a minuscule risk of X, so we do absolutely nothing but say there's nothing to do, and we let the bad guys roam free!"

There's nothing more depressing than that approach.

Kudos to the author of that new functionality: there may be issues, it may not be the panacea, but at least he's trying.

hartator
16 replies
22h51m

It would be frustrating to be denied access to your own servers because you are traveling and are on a bad IP for some reason.

Picture the number of captchas you're already getting from a legitimate Chrome instance, but instead of bypassable, annoying captchas, you are just locked out.

grepfru_it
7 replies
22h49m

I have fail2ban configured on one of my servers for port 22 (a hidden port does not have any such protections on it) and I regularly lock out my remote address because I fat finger the password. I would not suggest doing this for a management interface unless you have secondary access

bartekrutkowski
6 replies
21h41m

Why would you use password based auth instead of priv/pub key auth? You'd avoid this and many other security risks.

fragmede
5 replies
21h17m

what do you do if you get mugged and your laptop and phone and keys are taken or stolen from you? or lost?

After this party, this guy needed help: he'd lost his wallet and his phone. His sister had also gone to the party and given him a ride there, but she'd left. He didn't know her number to call her, and she'd locked down her socials so we couldn't use my phone to contact her. We were lucky that his socials weren't super locked down and managed to find someone that way, but priv keys are only good so long as you have them.

akira2501
2 replies
20h55m

I use a yubikey. You need a password to use the key. It has its own brute force management that is far less punishing than a remote SSH server deciding to not talk to me anymore.

fragmede
1 replies
20h51m

but what do you do if you don't have the key? unless it's implanted (which, https://dangerousthings.com/), I don't know that I won't lose it somehow.

akira2501
0 replies
20h43m

My keyboard has a built-in USB hub and ports. The key lives there. The keyboard travels with me. It's hard to lose.

I have a backup key in storage. I have escrow mechanisms. These would be inconvenient, but, it's been 40 years since I've lost any keys or my wallet, so I feel pretty good about my odds.

Which is what the game here is. The odds. Famously humans do poorly when it comes to this.

usrbinbash
0 replies
10h30m

what do you do if you get mugged and your laptop and phone and keys are taken or stolen from you? or lost?

My ssh keys are encrypted. They need a password, or they are worthless.

Sure, I can mistype that password as well, but doing so has no effect on the remote system, as the ssh client already fails locally.

bartekrutkowski
0 replies
9h52m

You can and you should back up your keys. There isn't a 100% safe, secure and easy method that shields you from everything that can possibly happen, but there are enough safe, secure and easy ones to cover the vast majority of cases short of a sheer catastrophe. That's good enough reason not to use outdated, insecure mechanisms like passwords on a network-exposed service.

hot_gril
5 replies
22h30m

What's the alternative? If you get onto a bad IP today, you're essentially blocked from the entire Internet. Combined with geolocks and national firewalls, we're already well past the point where you need a home VPN if you want reliable connectivity while traveling abroad.

AnthonyMouse
4 replies
18h7m

What happens when your home VPN is inaccessible from your crappy network connection? There are plenty of badly administered networks that block arbitrary VPN/UDP traffic but not ssh. Common case is the admin starts with default deny and creates exceptions for HTTP and whatever they use themselves, which includes ssh but not necessarily whatever VPN you use.

hot_gril
3 replies
17h52m

Same as when a crappy network blocks SSH, you get better internet. Or if SSH is allowed, use a VPN over TCP port 22.

AnthonyMouse
2 replies
17h46m

Better internet isn't always available. A VPN on the ssh port isn't going to do you much good if someone sharing your IP address is doing brute force attempts against the ssh port on every IP address and your system uses that as a signal to block the IP address.

Unless you're only blocking connection attempts to ssh and not the VPN, but what good is that? There is no reason to expect the VPN to be any more secure than OpenSSH.

hot_gril
1 replies
17h40m

If you're using an IP address that's being used to brute force the entire Internet, it's likely that lots of websites are blocking it. If that doesn't matter to you and all you need is to get into a particular SSH server, and also the network blocks VPNs, you're still fine if the SSH is on port 9022 and VPN is port 22. If it's not your own SSH server and it's port 22, then you're still fine if your own VPN is port 22 (on a different host).

Hacking into the VPN doesn't get the attacker into the SSH server too, so there's defense in depth, if your concern is that sshd might have a vulnerability that can be exploited with repeated attempts. If your concern is that your keys might be stolen, this feature doesn't make sense to begin with.

AnthonyMouse
0 replies
15h44m

If you're using an IP address that's being used to brute force the entire Internet, it's likely that lots of websites are blocking it.

Websites usually don't care about ssh brute force attempts because they don't listen on ssh. But the issue isn't websites anyway. The problem is that your server is blocking you, regardless of what websites are doing.

If that doesn't matter to you and all you need is to get into a particular SSH server, and also the network blocks VPNs, you're still fine if the SSH is on port 9022 and VPN is port 22. If it's not your own SSH server and it's port 22, then you're still fine if your own VPN is port 22 (on a different host).

Then you have a VPN exposed to the internet in addition to SSH, and if you're not rate limiting connections to that then you should be just as concerned that the VPN "might have a vulnerability that can be exploited with repeated attempts." Whereas if the SSH server is only accessible via the VPN then having the SSH server rate limiting anything is only going to give you the opportunity to lock yourself out through fat fingering or a misconfigured script, since nobody else can access it.

Also notably, the most sensible way to run a VPN over TCP port 22 is generally to use the VPN which is built into OpenSSH. But now this change would have you getting locked out of the VPN too.

semi
0 replies
22h35m

It would also be very rare. The penalties described here start at 30s; I don't know the max, but presumably whatever is issuing the bad behavior from that IP range will give up at some point once sshd stops responding, rather than continuing to brute force at 1 attempt per some number of hours.

And that's still assuming you end up in a range that is actively attacking your sshd. It's definitely possible, but it really doesn't seem like a bad tradeoff.

1oooqooq
0 replies
17h44m

lol. depending on where you travel, the whole continent is already blanket banned anyway. but that only happens because nobody travels there. so it is never a problem.

usrbinbash
13 replies
23h14m

So instead of looking, like the author of these new options, for ways to make life for the bad guys harder, we do nothing?

The thing is, we have tools to implement this without changing sshd's behavior. `fail2ban` et al. exist for a reason.
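
A minimal sshd jail is only a few lines (sketch; the thresholds are illustrative, not recommendations):

  # /etc/fail2ban/jail.local
  [sshd]
  enabled  = true
  maxretry = 5
  findtime = 600
  bantime  = 3600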

sleepybrett
9 replies
22h28m

Sure, but if I'd only use fail2ban for sshd, why should I install two separate pieces of software to handle a problem that the software I actually want to run has handling for built in?

pixl97
7 replies
21h51m

Turning every piece of software into a kitchen sink increases its security exposure in other ways.

hnlmorg
4 replies
20h46m

Normally I would agree with you, but fail2ban is a Python routine which forks processes based on outcomes from log parsing via regex. There are so many ways that can go wrong… and it has gone wrong, in one or two experiences I've had in the past.

This is exactly the sort of thing that should be part of the server. In exactly the same way that some protocol clients have waits between retries to avoid artificial rate limiting from the server.

1oooqooq
2 replies
17h46m

still better to try to improve fail2ban than to add (yet another) kitchen sink to sshd

hot_gril
1 replies
17h21m

fail2ban has been around for so long, people get impatient at some point

usrbinbash
0 replies
9h39m

There are so many ways that can go wrong

There are a lot of ways a builtin facility of one service can go wrong, especially if it ends up being active by default on a distro.

`fail2ban` is common, well known, battle-tested. And it's also [not without alternatives][1].

[1]: https://alternativeto.net/software/fail2ban/

nerdbert
0 replies
17h38m

fail2ban has a lot of moving parts; I don't think that's necessarily more secure.

I would trust the OpenSSH developers to do a better job with the much simpler requirements associated with handling it within their own software.

fragmede
0 replies
21h18m

a system where sshd outputs to a log file, then someone else picks it up and pokes at iptables, seems much hackier than having sshd support this natively, imo. sshd is already tracking connection status; having it set the status to deny seems like less of a kitchen sink and more just about security. the S in ssh is for secure, and this is just improving that.

hnlmorg
1 replies
20h54m

Yeah, they exist because nothing better was available at that time.

It doesn’t hurt to have this functionality in openssh too. If you still need to use fail2ban, denyhosts, or whatever, then don’t enable the openssh feature. It’s really that simple.

usrbinbash
0 replies
10h41m

How is baking this into sshd "better"?

UNIX Philosophy: "Do one thing, and do it well". An encrypted remote shell protocol server should not be responsible for fending off attackers. That's the job of IDS and IPS daemons.

Password-based ssh is an anachronism anyway. For an internet-facing server, people should REALLY use ssh keys instead (and preferably use a non-standard port, and maybe even port knocking).
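
For reference, that hardening is a handful of sshd_config directives (sketch; the port number is an arbitrary example):

  Port 2222
  PasswordAuthentication no
  KbdInteractiveAuthentication no
  PubkeyAuthentication yes
  PermitRootLogin prohibit-password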

dfox
0 replies
21h54m

The issue is that the log parsing things like fail2ban work asynchronously. It is probably of only theoretical importance, but on the other hand the meaningful threat actors are usually surprisingly fast.

linuxftw
0 replies
16h37m

So instead of looking, like the author of these new options, for ways to make life for the bad guys harder, we do nothing?

Yes, because as soon as the security clowns find out about these features, we have to start turning it on to check their clown boxes.

janosdebugs
0 replies
22h33m

There is nothing wrong with this approach if enabled as an informed decision. It's the part where they want to enable this by default I have a problem with.

Things that could be done include making password auth harder to configure to encourage key use instead, or investing time into making SSH CAs less of a pain to use. (See the linked paper, it's not a long read.)

benchaney
0 replies
22h19m

So instead of looking, like the author of these new options, for ways to make life for the bad guys harder, we do nothing?

Random brute force attempts against SSH are already a 100% solved problem, so doing nothing beyond maintaining the status quo seems pretty reasonable IMO.

I don't buy your argument, nor all the variations on the same theme: "There's a minuscule risk of X, so we do absolutely nothing but say there's nothing to do, and we let the bad guys roam free!"

Setting this up by default (as is being proposed) would definitely break a lot of existing use cases. The only risk that is minuscule here is the risk from not making this change.

I don't see any particular reason to applaud making software worse just because someone is "trying".

crote
29 replies
23h9m

With IPv6 on the other hand, it is trivially easy to get a new IP

OpenSSH already seems to take that into account by allowing you to penalize not just a single IP, but also an entire subnet. Enable that to penalize an entire /64 for IPv6, and you're in pretty much the same scenario as "single IPv4 address".
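
If I'm reading the docs right, that grouping is controlled by the existing PerSourceNetBlockSize directive, e.g. to treat a whole /64 as one IPv6 "source":

  # sshd_config: group by /32 for IPv4 and /64 for IPv6
  PerSourceNetBlockSize 32:64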

I think there's some limited value in it. It could be a neat alternative to allowlisting your own IP which doesn't completely block you from accessing it from other locations. Block larger subnets at once if you don't care about access from residential connections, and it would act as a very basic filter to make annoying attacks stop. Not providing any real security, but at least you're not spending any CPU cycles on them.

On the other hand, I can definitely see CGNAT resulting in accidental or intentional lockouts for the real owner. Enabling it by default on all installations probably isn't the best choice.

aftbit
27 replies
22h56m

FYI it's pretty common to get a /48 or a /56 from a data center, or a /60 from Comcast.

hot_gril
20 replies
22h44m

Maybe the only equivalent is to penalize a /32, since there are roughly as many of those as there are ipv4 addresses.

janosdebugs
19 replies
22h22m

That may be true mathematically, but there are no guarantees that a small provider won't end up having only a single /64, which would likely be the default unit of range-based blocking. Yes, it "shouldn't" happen.

hot_gril
8 replies
22h20m

Right. It's analogous to how blocking an ipv4 is unfair to smaller providers using cgnat. But if someone wants to connect to your server, you might want them to have skin in the game.

janosdebugs
7 replies
22h8m

The provider doesn't care, the owner of the server who needs to log in from their home internet at 2AM in an emergency cares. Bad actors have access to botnets, the server admin doesn't.

hot_gril
6 replies
22h6m

Unfortunately the only answer is "pay to play." If you're a server admin needing emergency access, you or your employer should pay for an ISP that isn't using cgnat (and has reliable connectivity). Same as how you probably have a real phone sim instead of a cheap voip number that's banned in tons of places.

Or better yet, a corp VPN with good security practices so you don't need this fail2ban-type setup. It's also weird to connect from home using password-based SSH in the first place.

AnthonyMouse
4 replies
18h15m

The better answer is to just ignore dull password guessing attempts which will never get in because you're using strong passwords or public key authentication (right?).

Sometimes it's not a matter of price. If you're traveling your only option for a network connection could be whatever dreck the hotel deigns to provide.

hot_gril
3 replies
17h55m

Even with strong passwords, maybe you don't want someone attempting to authenticate so quickly. Could be DoS or trying to exploit sshd. If you're traveling, cellular and VPN are both options. VPN could have a similar auth dilemma, but there's defense in depth.

Also it's unlikely that your hotel's IP address is spamming the particular SSH server you need to connect to.

AnthonyMouse
2 replies
17h49m

Even with strong passwords, maybe you don't want someone attempting to authenticate so quickly. Could be DoS or trying to exploit sshd.

DoS in this context is generally pretty boring. Your CPU would end up at 100% and the service would be slower to respond but still would. Also, responding to a DoS attempt by blocking access is a DoS vector for anyone who can share or spoof your IP address, so that seems like a bad idea.

If someone is trying to exploit sshd, they'll typically do it on the first attempt and this does nothing.

Also it's unlikely that your hotel's IP address is spamming the particular SSH server you need to connect to.

It is when the hotel is using the cheapest available ISP with CGNAT.

hot_gril
1 replies
17h33m

Good point on the DoS. Exploit on first attempt, maybe, I wouldn't count on that. Can't say how likely a timing exploit is.

If the hotel is using such a dirty shared IP that it's also being used to spam random SSH servers, that connection is probably impractical for several other reasons, e.g. flagged on Cloudflare. At that point I'd go straight to a VPN or hotspot.

AnthonyMouse
0 replies
15h29m

Novel timing attacks like that are pretty unlikely, basically someone with a 0-day, because otherwise they quickly get patched. If the adversary is someone with access to 0-day vulnerabilities, you're pretty screwed in general and it isn't worth a lot of inconvenience to try to prevent something inevitable.

And there is no guarantee you can use another network connection. Hotspots only work if there's coverage.

Plus, "just use a hotspot or a VPN" assumes you were expecting the problem. This change is going to catch a lot of people out because the first time they realize it exists is during the emergency when they try to remote in.

thayne
0 replies
13h52m

you or your employer should pay for an ISP that isn't using cgnat

That may not be an option at all, especially with working from home or while traveling.

For example, at my home all the ISPs I have available use cgnat.

dfox
5 replies
21h58m

You cannot reasonably build an ISP network with a single /64. RIPE assigns /32s to LIRs and LIRs are supposed to assign /48s downstream (which is somewhat wasteful for most kinds of mass-market customers, so you get things like /56s and /60s).

hot_gril
3 replies
21h53m

What if it uses NAT v6 :D

1oooqooq
2 replies
17h56m

i cannot tell if facetious or business genius.

pantalaimon
0 replies
5h33m

That’s what Azure does. They also only allow a maximum of 16(!) IPv6 addresses per host because of that.

hot_gril
0 replies
17h25m

Well seriously, I remember AT&T cellular giving me an ipv6 behind a cgnat (and also an ipv4). Don't quote me on that though.

janosdebugs
0 replies
21h46m

As I said, "should". In some places there will be enough people in the chain that won't be bothered to go to the LIR directly. Think small rural ISPs in small countries.

Sanzig
3 replies
18h19m

Well, allocating anything smaller than a /64 to a customer breaks SLAAC, so even a really small provider wouldn't do that, as it would completely bork their customers' networks. Yes, DHCPv6 technically exists as an alternative to SLAAC, but some operating systems (most notably Android) don't support it at all.

tsimionescu
2 replies
12h52m

There are plenty of ISPs that assign /64s and even smaller subnets to their customers. There are even ISPs that assign a single /128, IPv4 style.

patmorgan23
0 replies
1h36m

We should not bend over backwards for people not following the standard.

Build tools that follow the standard/best practices by default, maybe build in an exception list/mechanism.

IPv6 space is plentiful and easy to obtain, people who are allocating it incorrectly should feel the pain of that decision.

cereal_cable
0 replies
2h57m

I can't imagine why any ISP would do such absurd things when in my experience you're given sufficient resources on your first allocation. My small ISP received a /36 of IPv6 space, I couldn't imagine giving less than a /64 to a customer.

dheera
4 replies
9h27m

I can never remember whether /x means "the last x bits are 0" or "the first x bits are 1"

People should write 80/48 or 48/80 to be clear

immibis
1 replies
5h9m

It's not about how many bits are 1 - it's about how many bits are important. And the first bits are always most important. So it's the first x bits.

If you have a /48 then 48 bits are used to determine the address is yours. Any address which matches in the first 48 bits is yours. If you have a /64, any address which matches in the first 64 bits is yours.
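
For example:

  2001:db8:aaaa::/48 contains 2001:db8:aaaa:ffff::1  (first 48 bits match)
                     but not  2001:db8:bbbb::1       (bits 33-48 differ)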

patmorgan23
0 replies
1h41m

It's about how many bits are 1, in the subnet mask.

patmorgan23
0 replies
1h41m

/x is almost always the number of network bits (so the leading bits). There are some Cisco IOS commands that are the opposite, but those are by far the minority.

99/100 it means the first bits.

merlincorey
0 replies
8h10m

I can never remember whether /x means "the last x bits are 0" or "the first x bits are 1"

People should write 80/48 or 48/80 to be clear

The clarity is already implied in your preferred example:

- "80/" would mean "80 bits before"

- "/48" would mean "48 bits after"

janosdebugs
0 replies
22h39m

IPv6 has the potential to be even worse. You could be knocking an entire provider offline. At any rate, this behavior should not become default.

Latty
13 replies
23h52m

And even with IPv4, botnets are a common attack source, so hitting from many endpoints isn't that hard.

I'd say "well, it might catch the lowest effort attacks", but when SSH keys exist and solve many more problems in a much better way, it really does feel pointless.

Maybe in an era where USB sticks weren't so trivially available, I'd buy the argument of "what if I need to access it from another machine", but if you really worry about that, put your (password-protected) keys on a USB stick and shove it in your wallet or on your keyring or whatever. (Are there security concerns there? Of course, but no more than typing your password in on some random machine.)

janosdebugs
12 replies
23h46m

You can use SSH certificate authorities (not x509) with OpenSSH to authorize a new key without needing to deploy a new key on the server. Also, Yubikeys are useful for this.

tonyarkles
10 replies
23h30m

Just a warning for people who are planning on doing this: it works amazingly well but if you're using it in a shared environment where you may end up wanting to revoke a key (e.g. terminating an employee) the key revocation problem can be a hassle. In one environment I worked in we solved it by issuing short-term pseudo-ephemeral keys (e.g. someone could get a prod key for an hour) and side-stepped the problem.

The problem is that you can issue keys without having to deploy them to a fleet of servers (you sign the user's pubkey using your SSH CA key), but you have no way of revoking them without pushing an updated revocation list to the whole fleet. We did have a few long-term keys that were issued, generally for build machines and dev environments, and had a procedure in place to push CRLs if necessary, but luckily we didn't ever end up in a situation where we had to use it.

tiberious726
4 replies
23h2m

Setting up regular publishing of CRLs is just part of setting up a CA. Is there some extra complexity with ssh here, or are you (rightfully) just complaining about what a mess CRLs are?

Fun fact: it was just a few months ago that Heimdal Kerberos started respecting CRLs at all; that was a crazy bug to discover

janosdebugs
2 replies
22h24m

The hard part is making sure every one of your servers got the CRL update. Since, last I checked, OpenSSH doesn't have a mechanism to remotely check CRLs (like OCSP), nor does SSH have anything akin to OCSP stapling, it's a little bit of a footgun waiting to happen.

tiberious726
1 replies
20h9m

Oh wow... That's pretty nuts. I guess the reason is to make it harder for people to lock themselves out of all their servers if OCSP or whatever is being used to distribute the CRL is down.

janosdebugs
0 replies
12h57m

Not necessarily. There is a fork of OpenSSH that supports x509, but I remember reading somewhere that it's too complex and that's why it doesn't make it into mainline.

semi
0 replies
22h27m

There's extra complexity with ssh, it has its own file of revoked keys in RevokedKeys and you'll have to update that everywhere.

see https://man.openbsd.org/ssh-keygen.1#KEY_REVOCATION_LISTS for more info

And unlike some other sshd directives that have a 'Command' alternative to specify a command to run instead of reading a file, this one doesn't, so you can't just DIY distribution by having it curl a shared revocation list.
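
For reference, the mechanics look roughly like this (sketch; paths and key names are examples):

  # build a KRL that revokes one signed user key
  ssh-keygen -k -f /etc/ssh/revoked_keys bad_user_key.pub

  # sshd_config on every server
  RevokedKeys /etc/ssh/revoked_keys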

EthanHeilman
2 replies
21h2m

You might want to check out my project OpenPubkey[0], which uses OIDC ID Tokens inside SSH certs. For instance, this lets you SSH with your gmail account. The ID token in the SSH certificate expires after a few hours, which makes the SSH certificate expire. You can also do something similar with SSH3 [1].

[0] OpenPubkey - https://github.com/openpubkey/openpubkey/

[1] SSH3 - https://github.com/francoismichel/ssh3

lmz
1 replies
13h48m

Why not just make the certificate short-lived instead of having a certificate with shorter-lived claims inside?

EthanHeilman
0 replies
3h22m

You can definitely do that, but it has the downside that the certificate automatically expires when you hit the set time, and then you have to reauth again. With OpenPubkey you can be much more flexible. The certificate expires at a set time, but you can use your OIDC refresh token to extend certificate expiration.

With a fixed expiration, if you choose a 2 hour expiry, the user has to reauth every 2 hours each time they start a new SSH session.

With a refreshable expiration, if you choose a 2 hour expiry, the user can refresh the certificate if they are still logged in.

This lets you set shorter expiry times because the refresh token can be used in the background.

Too
1 replies
11h3m

With normal keys you have a similar issue of removing the key from all servers. If you can do this, you can also deploy a revocation list.

therein
0 replies
9h18m

Easier to test that Jenkins can SSH in than to test that a former employee cannot. Especially if you don't have the unencrypted private key.

tiberious726
0 replies
23h1m

Monkeysphere lets you do this with trust sigs on gpg keys. I find the web of trust marginally less painful than X.509

jimmaswell
11 replies
22h33m

I like being able to log into my server from anywhere without having to scrounge for my key file, so I end up enabling both methods. Never quite saw how a password you save on your disk and call a key is so much more secure than another password.

cubesnooper
2 replies
22h26m

I’ve seen lots of passwords accidentally typed into an IRC window. Never seen that happen with an SSH key.

arp242
1 replies
21h3m

I heard that if you type your password in HN it will automatically get replaced by all stars.

My password is **********

See: it works! Try it!

julesallen
0 replies
17h35m

So if I type hunter2 you see ****?

sleepybrett
1 replies
22h29m

Putting aside everything else. How long is your password vs how long is your key?

hot_gril
0 replies
22h24m

It's this, plus the potential that you've reused your password, or that it's been keylogged.

marcrosoft
1 replies
22h27m

My home IP doesn’t change much so I just open the ssh port only to my own IP. If I travel I’ll add another IP if I need to ssh in. I don’t get locked out because I use a VPS or cloud provider firewall that can be changed through the console after auth/MFA. This way SSH is never exposed to the wider internet.

dizhn
0 replies
5h48m

Another option is putting SSH on an IP on the wireguard only subnet.

swinglock
0 replies
22h12m

It's more secure because it's resistant to MITM attacks or a compromised host: the password gets sent, the private key doesn't.

janosdebugs
0 replies
22h15m

Use TOTP (keyboard-interactive) and password away!
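
The usual recipe is PAM plus keyboard-interactive; roughly (a sketch assuming the common pam_google_authenticator module, adjust for your distro):

  # sshd_config
  UsePAM yes
  KbdInteractiveAuthentication yes
  AuthenticationMethods keyboard-interactive

  # /etc/pam.d/sshd
  auth required pam_google_authenticator.so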

cubesnooper
0 replies
22h22m

A few more things:

An SSH key can be freely reused to log in to multiple SSH servers without compromise. Passwords should never be reused between multiple servers, because the other end could log it.

An SSH key can be stored in an agent, which provides some minor security benefits, and more importantly, adds a whole lot of convenience.

An SSH key can be tied to a Yubikey out of the box, providing strong 2FA.

belthesar
0 replies
21h25m

This is definitely a common fallacy. While passwords and keys function similarly via the SSH protocol, there are two key differences: 1. your password is likely to have much lower entropy as a cryptographic secret (i.e. you're shooting for 128 bits of entropy, which takes a pretty gnarly-sized password to replicate), and 2. SSH keys introduce a second layer of trust by virtue of you needing to add your key ID to the system before you even begin the authentication challenge.

Password authentication, which only uses your password to establish you are authentically you, does not establish the same level of cryptographic trust, and also does not allow the SSH server to bail out as quickly, instead needing to perform more crypto operations to discover that an unauthorized authentication attempt is being made.

To your point, you are storing the secret on your filesystem, and you should treat it accordingly. This is why folks generally advocate for the use of SSH Agents with password or other systems protecting your SSH key from being simply lifted. Even with requiring a password to unlock your key though, there's a pretty significant difference between key based and password based auth.

mardifoufs
8 replies
22h7m

Wait, how often do you connect to a ssh remote that isn't controlled by you or say, your workplace? Genuinely asking, I have not seen a use case for something like that in recent years so I'm curious!

asveikau
3 replies
19h12m

GitHub is an example of a service that would want to disable this option. They get lots of legit ssh connections from all over the world including people who may be behind large NATs.

mardifoufs
2 replies
18h45m

I somehow didn't think about that, even if I used that feature just a few hours ago! Now I'm curious about how GitHub handles the ssh infra at that scale...

jkrejcha
0 replies
12h49m

GitHub, as I've read[1], uses a different implementation of SSH which is tailored for their use case.

The benefit is that it is probably much lighter weight than OpenSSH (which supports a lot of different things just because it is so general[3]) and can more easily integrate with their services, while also avoiding having to spin up a shell and deal with the potential security risks that entails.

And even if somehow a major flaw is found in OpenSSH, GitHub (at least their public servers) wouldn't be affected in this case since there's no shell to escape to.

[1]: I read it on HN somewhere that I don't remember now, however you can kinda confirm this yourself if you open up a raw TCP connection to github.com, where the connection string says

SSH-2.0-babeld-9102804c

According to an HN user[2], they were using libssh in 2015.

[2]: https://news.ycombinator.com/item?id=39978089

[3]: This isn't a value judgement on OpenSSH, I think it is downright amazing. However, GitHub has a much more narrow and specific use case, especially for an intentionally public SSH server.

asveikau
0 replies
15h58m

Even the number of SSH authorized_keys they would need to process is a little mind-boggling; they probably have some super custom stuff.

palata
0 replies
21h38m

I sometimes use this: https://pico.sh/

omoikane
0 replies
21h35m

Perhaps at a university where all students in the same class need to SSH to the same place, possibly from the same set of lab machines. A poorly configured sshd could allow some students to DoS other students.

This might be similar to the workplace scenario that you have in mind, but some students are more bold in trying dodgy things with their class accounts, because they know they probably won't get in big trouble at a university.

nerdbert
0 replies
17h42m

One of my clients has a setup for their clients - some of which connect from arbitrary locations, and others of which need scripted automated uploads - to connect via sftp to upload files.

Nobody is ever getting in, because they require ed25519 keys, but it is pounded nonstop all day long with brute force attempts. It wastes log space and IDS resources.

This is a case that could benefit from something like the new OpenSSH feature (which seems less hinky than fail2ban).

Another common case would be university students, so long as it's not applied to campus and local ISP IPs.

heavyset_go
0 replies
20h11m

Git over SSH

andix
5 replies
22h55m

I had a similar experience with a Postgres database once. It only mirrored some publicly available statistical data, and it was still in early development, so I didn't give the database's security any attention. My intention was to only expose it to localhost anyway.

Then I started noticing that the database was randomly "getting stuck" on the test system. This went on a few times until I noticed that I had exposed the database to the internet with postgres/postgres as credentials.

It might even have been some "friendly" attackers who changed the password when they were able to log in, to protect the server; maybe even the hosting provider. I should totally try that again sometime and observe what commands the attackers actually run. A bad actor probably wouldn't change the password, to stay unnoticed.

hot_gril
4 replies
22h40m

How did you accidentally expose it to the Internet? Was your host in the DMZ?

janosdebugs
2 replies
22h18m

I saw a Postgres story like this one. Badly managed AWS org with way too wide permissions, a data scientist sort of person set it up and promptly reconfigured the security group to be open to the entire internet because they needed to access it from home. And this was a rather large IT company.

hot_gril
1 replies
22h17m

Yeah on some cloud provider, the virtual networks can be all too confusing. But this story sounded like a home machine.

pixl97
0 replies
21h50m

The DMZ setting on a router makes this pretty easy.

I once pointed the DMZ at an IP assigned by DHCP. Later, when the host changed, I noticed traffic from the internet getting blocked on the new host and realized my mistake.

andix
0 replies
15h18m

docker compose, I accidentally committed the port mappings I had set up during local development.
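
For anyone wondering how that happens: compose's short port syntax publishes on all interfaces unless you pin it to loopback. Sketch (the service is just an example):

  # docker-compose.yml
  services:
    db:
      image: postgres
      ports:
        - "127.0.0.1:5432:5432"  # reachable only from this machine
        # - "5432:5432"          # publishes on every interface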

skissane
3 replies
16h22m

I have seen experienced sysadmins create the test user with the password of "test" on a live server on port 22 because they were having an "autopilot moment". It got hacked within 20 minutes of going online and these mechanisms wouldn't have saved it; the attacker got in on the second or third try.

Is it possible to create some kind of reverse proxy for SSH which blocks password-based authentication, and furthermore only allows authentication by a known list of public keys?

The idea would be SSH to the reverse proxy, if you authenticate with an authorised public key (or certificate or whatever) it forwards your connection to the backend SSH server; all attempts to authenticate with a password are automatically rejected and never reach the backend.

In some ways what I'm describing here is a "bastion" or "jumphost", but in implementations of that idea I've seen, you SSH to the bastion/jumphost, get a shell, and then SSH again to the backend SSH – whereas I am talking about a proxy which automatically connects to the backend SSH using the same credentials once you have authenticated to it.

Furthermore, using a generic Linux box as a bastion/jumphost, you run the same risk that someone might create a weak password account–you can disable password authentication in the sshd config but what if someone turns it on? With this "intercepting proxy" idea, the proxy wouldn't even have any code to support password authentication, so you couldn't ever turn it on.

lcampbell
1 replies
14h34m

what if someone turns [password authentication back] on

sshd_config requires root to modify, so you've got bigger problems than weak passwords at this point.

skissane
0 replies
14h8m

It is a lot more likely for some random admin to inappropriately change a single boolean config setting as root, than for them to replace an entire software package which (by design) doesn't have code for a certain feature with one that does.

Too
0 replies
11h8m

Check out the ProxyJump and ProxyCommand option in ssh config. They let you skip the intermediate shell.
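
E.g. (host names are placeholders):

  # ~/.ssh/config
  Host internal
    HostName 10.0.0.5
    ProxyJump bastion.example.com

  # or as a one-off: ssh -J bastion.example.com internal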

chuckadams
2 replies
21h36m

I'd love to penalize any attempt at password auth. Not the IP addresses, just if you're dumb enough to try sending a password to my ssh server, you're going to wait a good long time for the failure response.

Actually I might even want to let them into a "shell" that really screws with them, but that's far outside of ssh's scope.

mike_hock
1 replies
18h50m

I certainly don't want to expose any more surface area than necessary to potential exploits by an attacker who hasn't authenticated successfully.

chuckadams
0 replies
18h14m

Yeah you're right, the screw-with-them-shell would have to be strictly a honeypot thing, with a custom-compiled ssh and all the usual guard rails around a honeypot. The password tarpit could stay, though script kiddie tools probably scale well enough now that it's not costing them much of anything.

yardstick
1 replies
19h45m

Worst case, this will give bad actors the option to lock the original owner out of their own server if they have a botnet host in the same network.

According to the article, you can exempt IPs from being blocked. So it won’t impact those coming from known IPs (statics, jump hosts, etc).

1oooqooq
0 replies
17h52m

most places barely even have the monthly email with essential services' ips in case of a dns outage.

nobody cares about ips.

solatic
1 replies
10h37m

Serious question: why doesn't OpenSSH declare, with about a year's notice ahead of time, the intent to cut a new major release that drops support for password-based authentication?

janosdebugs
0 replies
10h27m

There are very legit reasons to use passwords, for example in conjunction with a second factor. Authentication methods can also be chained.

overstay8930
1 replies
23h49m

With IPv6 on the other hand, it is trivially easy to get a new IP, so the protection method described here will be completely ineffective.

I’m sure this will be fixed by just telling everyone to disable IPv6, par for the course.

dmm
0 replies
23h0m

The alternative to ipv6 is ipv4 over cgnat, which arguably has the same problem.

mananaysiempre
1 replies
23h52m

I have seen experienced sysadmins create the test user with the password of "test" on a live server on port 22 because they were having an "autopilot moment".

pam_pwnd[1], testing passwords against the Pwned Passwords database, is a(n unfortunately abandoned but credibly feature complete) thing. (It uses the HTTP service, though, not a local dump.)

[1] https://github.com/skx/pam_pwnd

1oooqooq
0 replies
17h50m

meh. enabling any of the (fully local) complexity rules has pretty much the same practical effect as checking against a leak.

if the password has decent entropy, it won't be in the top 1000 of the leaks, so it won't be used in blind brute force like this.

hartator
1 replies
22h54m

Yes, I agree. This seems a naive fix.

Just silencing all the failed attempts may be better. So much noise in these logs anyway.

grepfru_it
0 replies
22h50m

Fail2ban can help with that

Too
1 replies
10h51m

Interesting paper from Tatu Ylonen. He seems quick to throw out the idea of certificates just because there is no hardened CA available today. Wouldn’t it be better to solve that problem, rather than going in circles and making up new novel ways of using keys? Call it what you want; reduced to their bare essentials, in the end you either have delegated trust through a CA or a key administration problem. Whichever path you choose, it must be backed by a robust and widely adopted implementation to be successful.

janosdebugs
0 replies
10h34m

As far as OpenSSH is concerned, I believe the main problem is that there is no centralized revocation functionality. You have to distribute your revocation lists via an external mechanism and ensure that all your servers are up to date. There is no built-in mechanism like OCSP, or better yet, OCSP stapling in SSH. You could use Kerberos, but it's a royal pain to set up and OpenSSH is pretty much the defacto standard when it comes to SSH servers.

waihtis
0 replies
23h35m

Agreed. In addition to the problems you mentioned, this could also cause people to drop usage of SSH keys and go with a password instead, since it's now a "protected" authentication vector.

qwertox
0 replies
7h19m

innocent bystanders as CGNs are deployed

SSH is not HTTPS, a resource meant for the everyday consumer. If you know that you're behind a CGN, as a developer, an admin or a tool, you can solve this by using IPv6 or a VPN.

Worst case, this will give bad actors the option to lock the original owner out of their own server

Which is kind of good? Should you access your own server if you are compromised and don't know it? Plus you get the benefit of noticing that you have a problem in your intranet.

I understand the POV that accessing it via CGN can lead to undesirable effects, but the benefit is worth it.

Then again, what benefit does it offer over fail2ban?

hot_gril
0 replies
22h36m

It's not quite fair, but if you want the best service, you have to pay for your own ipv4 or, in theory, a larger ipv6 block. Only alternative is for the ISP deploying the CGN to penalize users for suspicious behavior. Classic ip-based abuse fighter, Wikipedia banned T-Mobile USA's entire ipv6 range: https://news.ycombinator.com/item?id=32038215 where someone said they will typically block a /64, and Wikipedia says they'll block up to a /19.

Unfortunately there's no other way. Security always goes back to economics; you must make the abuse cost more than it's worth. Phone-based 2FA is also an anti-spam measure, cause clean phone numbers cost $. When trying to purchase sketchy proxies or VPNs, it basically costs more to have a cleaner ip.

Grimeton
0 replies
21h42m

Just throw away that document and switch to kerberos.

All the problems in this document are solved immediately.

Someone1234
53 replies
1d

This is great, and helps solve several problems at once.

I would like to remind everyone that an internet facing SSH with a password is very unwise. I would argue you need to be able to articulate the justification for it; using keys is actually more convenient and significantly more secure.

Aside from initial boot, I cannot think of the last time I used a password for SSH instead of a key even on a LAN. Support for keys is universal and has been for most of my lifespan.

mianosm
16 replies
1d

That's a high bar to set for most organizations. Leveraging certificates is excellent if the supporting and engineering actors all agree on how to manage them and how to train the users and workforce to use them (think root authorities, and revoking issued certificates from an authority).

I've seen a few attempts to leverage certificates, or GPG; and keys nearly always are an 'easier' process with less burden to teach (which smart(er) people at times hate to do).

wkat4242
6 replies
1d

You can store your regular keys in gpg, it's a nice middle ground especially if you store them on a yubikey with openpgp.

Of course OpenSSH also supports fido2 now but it's pretty new and many embedded servers don't support it. So I'm ignoring it for now. I need an openpgp setup for my password manager anyway.
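
If anyone wants to try the gpg-agent route, the setup is roughly (sketch; assumes a standard gpg install):

  # ~/.gnupg/gpg-agent.conf
  enable-ssh-support

  # in your shell profile, point ssh at gpg-agent's socket
  export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)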

KAMSPioneer
5 replies
1d

I use both PKCS#11 and OpenPGP SSH keys and in my opinion, PKCS#11 is a better user experience if you don't also require PGP functionality. Especially if you're supporting macOS clients, as you can just use Secretive[0]. As you say, FIDO is even better but comes with limitations on both client and server, which makes life tough.

[0] https://github.com/maxgoedjen/secretive

wkat4242
4 replies
22h32m

Oh yeah I don't really use macOS anymore. And I do really need PGP functionality for my password manager.

I used pkcs11 before with openct and opensc (on OpenHSM PIV cards) and the problem I had with it was that I always needed to runtime-link a library to the SSH binary to make it work which was often causing problems on different platforms.

The nice thing about using PGP/GPG is that it can simulate an SSH agent so none of this is necessary, it will just communicate with the agent over a local socket.

palata
3 replies
21h33m

And I do really need PGP functionality for my password manager.

Just curious: is it https://www.passwordstore.org/?

wkat4242
2 replies
15h12m

Yes it is! It's great!

wkat4242
1 replies
7h19m

By the way, to elaborate, I love it because it's really secure when used with yubikeys, it's fully self-hosted, it works on all the platforms I use including Android, and it's very flexible. There's no master password to guess, which is always a bit of an Achilles heel with traditional PW managers: because you have to type it so much, you don't really want it to be too long or complex. This solves that while keeping it very secure.

The one thing I miss a bit is that it doesn't do passkeys. But well.

palata
0 replies
7h10m

I use it as well (with a Yubikey) and I love it! On Android I use Android-Password-Store [1], which is nice too. There is just this issue with OpenKeychain that concerns me a bit, I am not sure if Android-Password-Store will still support hardware keys when moving to v2... but other than that it's great!

[1]: https://github.com/android-password-store/Android-Password-S...

upon_drumhead
5 replies
1d

SSH Certificates are vastly different from the certificates you are referencing.

SSH certificates are actually just an SSH key attested by another SSH key. There's no revocation system in place, nor anything more advanced than "I trust key X, and so any keys signed by X I will trust"

karmarepellent
2 replies
1d

I am not familiar with SSH certificates either. But if there is no revocation system in place, how can I be sure a person's access can be revoked?

At our org we simply distribute SSH public keys via Puppet. So if some leaves, switches teams (without access to our servers) or their key must be renewed, we simply update a line in a config file and call it a day.

That way we also have full control over what types of keys are supported and older, broken kex and signature algorithms are disabled.

hotdogs
1 replies
23h46m

The certificates have a validity window that sshd also checks. So the CA can sign a certificate for a short window (hours), until the user has to request a new one.
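
E.g. something like (identity, principal and validity are examples):

  # CA signs alice's pubkey, valid for 8 hours
  ssh-keygen -s ca_key -I alice@example -n alice -V +8h alice_key.pub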

chgs
0 replies
22h9m

One department in my company does this - you authenticate once with your standard company-wide oidc integration (which has instant JML), and you get a key for 20 hours (enough for even the longest shift but not enough that you don’t need to reauth the next day).

magmastonealex
0 replies
1d

There is a revocation system in place: the RevokedKeys directive in the sshd configuration file, which seems to be system-wide rather than configured at the user level. At least, that’s the only way I’ve used it.

I agree with the sentiment though, it is far less extensive than traditional X.509 certificate infrastructure.

gnufx
0 replies
22h8m

SSH Certificates are vastly different from the certificates you are referencing.

And the SSH maintainers will refuse offers of X.509 support, with a justification.

jeroenhd
2 replies
20h58m

I like SSH certificates, and I use them on my own servers, but for organizations there's a nasty downside: SSH certificates lack good revocation logic. OCSP/CRL checks and certificate transparency protect browsers from this, but SSH doesn't have a comparably good solution.

Unless you regenerate them every day or have some kind of elaborate synchronisation process set up on the server side, a malicious ex-employee could abuse the old credentials post-termination.

This could be worked around by leveraging TPMs, which would allow storing the keys themselves on hardware that can be confiscated, but standard user-based auth has a lot more (user-friendly) tooling and integration options.

CGamesPlay
1 replies
16h21m

It seems to me like short-lived certificates are the way to go, which would require tooling. I am actually a little surprised to hear that you're using long-lived certificates on your own servers (I'm imagining a homelab setup). What benefit does that provide you over distributing keys? Who's the CA?

jeroenhd
0 replies
14m

I'm my own CA; SSH certificates don't usually use X509 certificate chains. I dump a public key and a config file in /etc/ssh/sshd_config.d/ to trust the CA, which I find easier to automate than installing a list of keys in /home/user/.ssh/authorized_keys.
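
To be concrete, the config file is essentially one directive (sketch; the path is just mine):

  # /etc/ssh/sshd_config.d/50-user-ca.conf
  TrustedUserCAKeys /etc/ssh/user_ca.pub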

I started using this when I got a new laptop and kept running into VMs and containers that I couldn't log into (I have password auth disabled). Same for some quick SSH sessions from my phone. Now, every time I need to log in from a new key/profile/device, I enroll one certificate (which is really just an id_ecdsa-cert.pub file next to id_ecdsa.pub) and instantly get access to all of my servers.

I also have a small VM with a long-lasting certificate that's configured to require username+password+TOTP, in case I ever lose access to all of my key files for some reason.

jppittma
0 replies
1d

Holy shit. I wondered if this was possible a few weeks ago and couldn't find anything on it. Thanks for the link!

hot_gril
0 replies
22h23m

The more complicated something is, the higher the chance I screw it up.

gnufx
0 replies
22h11m

Some would argue that in an organization where you'd consider SSH certificates, it's best to use Kerberos and have general SSO. (Some of the GSSAPI functionality is patched in by most distributions, and isn't in vanilla OpenSSH.)

GordonS
0 replies
1d

I set up a test smallstep instance recently, and it works really well. Setup is... complicated though, and the CLI has a few quirks.

timw4mail
11 replies
1d

Any time you access an SSH connection from a different computer, you basically need the password.

krisoft
9 replies
1d

This is not true. SSH keys are a viable alternative.

sseagull
7 replies
1d

If I can be charitable, I think they mean a different computer than the one you usually use (one that doesn't already have your SSH key set up). A spouse's computer, etc.

traceroute66
5 replies
1d

If I can be charitable, I think they mean a different computer than the one you usually use

If I can be charitable ....

What the hell are you doing storing your SSH keys on-disk anyway? :)

Put your keys on a Yubikey, take your keys with you.

unethical_ban
2 replies
23h58m

Right, much easier than a password! And so easy to backup!

I'm not arguing it isn't more secure. The point of this subthread is that SSH keys are not as easy to do ad-hoc as passwords, especially when moving workstations.

nottorp
1 replies
23h1m

Right, much easier than a password! And so easy to backup!

Extremely easy to recover from when the device you rely on to authenticate for everything gets lost or stolen too!

unethical_ban
0 replies
18h46m

Exactly.

If I can't use TOTP with backup codes, I'm not using MFA.

doublepg23
1 replies
23h1m

Does that work with macOS? I’m currently using 1Password as my ssh key agent.

koito17
0 replies
21h55m

It indeed works on Mac OS. I have been using SoloKeys with ed25519-sk keys for about three years now. It should be sufficient to run

  ssh-keygen -t ed25519-sk
while a FIDO2 key is connected. You may need to touch the key to confirm user presence. (At least SoloKeys do).

If I recall correctly, the SSH binaries provided by Apple don't have built-in support for signing keys, but if you install OpenSSH from Nix, MacPorts, etc., then you don't have to worry about this.

Another thing to be mindful of is that some programs have a very low timeout for waiting on SSH authentication, particularly git. SSH itself will wait quite a long time for user presence when using a signing key, whereas Git requires me to confirm presence within about 5 seconds or else operations fail with a timeout.

nerdbert
0 replies
17h33m

Why would you ever do that? How do you know it is not compromised?

Carry your phone (many people already do this on a daily or near-daily basis in 2024) and use that in an emergency.

Rucadi
0 replies
1d

It's just a (usually bigger) password.

LtWorf
0 replies
1d

If it's in the cloud, you pass the public key when creating the vm. If it's a real machine, ask the data center person to do it.

joelthelion
8 replies
22h2m

internet facing SSH with a password is very unwise

If your password is strong, it's not.

sneak
5 replies
13h48m

Nope, still unwise. Easy to steal, easy to clone, hard to script. Keys stored in hardware is simple and easy on most platforms these days. Yubikeys or Mac SEP is ideal.

joelthelion
2 replies
12h52m

It depends on your use case. I have a personal server only I use. In this use case, being able to access it from anywhere without any device trumps other considerations. The password is ideal.

In a corporate setting, things are of course different.

sneak
1 replies
11h54m

My use case is the same as yours. Malware can steal your credentials, it cannot steal mine. I also don't need fail2ban or to configure any of these new OpenSSH features. Users added to the server can't get compromised due to use of weak passwords.

Passwords are obsolete in 2024, and using them is very nearly universally bad.

joelthelion
0 replies
3h43m

I also don't need fail2ban or to configure any of these new OpenSSH features

Me neither. If your password has sufficient entropy, you don't need any of this.

Malware can steal your credentials, it cannot steal mine

The only solution around this is a hardware key or MFA. I find the convenience of not needing anything with me to be superior to the low risk of malware. I understand your opinion may differ here.

daneel_w
1 replies
6h40m

Technically it's easier to steal a private key off of disk than it is to steal a password from inside a person's head or to plant a keylogger. If a keylogger is in place, someone can likely already also access your disk and the password used to protect the private key (or your password manager).

sneak
0 replies
5h44m

I was recommending the use of secure processor hardware (Mac SEP or Yubikey) that does not allow such malware shenanigans.

oofabz
0 replies
15h55m

A strong username also helps! Most SSH brute force attempts are for root, admin, or ubnt.

GordonS
4 replies
1d

Another good option is making SSH only accessible over Tailscale or a VPN.

vaylian
1 replies
22h27m

How do you protect the access to the VPN/Tailscale? I suppose you are not using a password?

GordonS
0 replies
21h29m

SSO and MFA, with a Microsoft account.

nativeit
0 replies
1d

This, with key pairs, is the best blend of security and convenience. I use ZeroTier and UFW on the server and it’s really very simple and extremely reliable. On the very rare occasion that ZeroTier encounters a problem, or my login fails, I still have IPMI access through Proxmox/VMWare and/or my server provider.

Someone1234
0 replies
1d

The two aren't exclusive of one another. We've also witnessed situations, with major companies, wherein an SSH service "leaks" outside the VPN due to network misconfiguration or misconfigured interfaces on the server.

As I said above, keys are actually more convenient than passwords. Only reason people still use passwords is because they believe keys are difficult to use or manage.

KennyBlanken
1 replies
22h35m

I would like to remind everyone that an internet facing SSH with a password is very unwise.

Bullshit. You can have a terrible password and your system will still be nearly impossible to get into. Also, these attackers are usually looking for already exploited systems that have backdoor account/password combos, unless they are specifically attacking your organization.

Repeat after me: dictionary attack concerns have nothing to do with remote access authentication concerns.

Let's say my password is two common-use English words (from a 100k-200k word vocabulary). That's ten billion possibilities. Assume you hit on my password half-way through. That would be fifteen years of continuous, 24x7x365 testing at 10 password attempts per second... and then there's the small matter of you not knowing what my username is, or even whether you've got the right username, unless the ssh server is vulnerable to a timing-based username-enumeration attack.
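
The arithmetic, spelled out (assuming a 100k-word list and 10 guesses per second):

    $ python3 -c 'print(100_000**2 / 2 / 10 / (365*24*3600))'
    # ~15.85 years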

The only argument for putting this functionality in the daemon itself is that by locating it in the daemon, it can offer advanced application-layer capabilities, such as failing auth attempts no matter what after the limit is tripped so that brute-forcing becomes more pointless - unless you get it right within the first few attempts, you could hit the right password and never know it. If they intend to implement features like that in the future, great - but if it's just going to do what fail2ban does, then...just run fail2ban.

Fail2ban has a higher-level overview of auth on the system, is completely decoupled from the ssh daemon for both monitoring and blocking, and its blocking happens at the kernel level in the networking stack, rather than in userspace with more overhead, in a one-off mechanism specific to SSH.

As a sysadmin, this is 'yet another place you have to look' to see why something isn't working.

palata
0 replies
21h30m

You can have a terrible password and your system will still be nearly impossible to get into.

Ok, let's try an example of a terrible password for the user "root": "password". Is that nearly impossible to get into? Or does that not qualify as a "terrible password" per your definition?

im3w1l
0 replies
22h36m

I resent that every application needs its own special snowflake auth method. One uses a certain 2fa app. Another uses another 2fa app. Another uses emailed code. Another uses text code. Another uses special ssh keys. Another opens a prompt in the browser where I have to confirm. Another uses special scoped tokens.

Yes there are good reasons. But it is quite a hassle to manage too.

_JamesA_
0 replies
1d

The number of expect scripts I find in production that are used to automate ssh password authentication is ridiculous.

TacticalCoder
0 replies
23h24m

... using keys is actually more convenient and significantly more secure.

And for those for whom it's an option, using U2F keys (like Yubikeys) is now easily doable with SSH.

So unless the attacker can hack the HSM inside your Yubikey, he's simply not getting your private SSH keys.
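
Generating one is a single command on OpenSSH 8.2 or newer (use -t ecdsa-sk instead if the token doesn't support ed25519):

    ssh-keygen -t ed25519-sk -C "yubikey"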

cedws
19 replies
1d

The OpenBSD approach to security always seems to be adding things rather than removing them. For example, this feature is intended to make it harder for attackers to break in by guessing the password. So why not remove password authentication, or enforce a minimum password complexity? Password auth for SSH is almost always a bad idea anyway - good, secure software should nudge people towards using it securely, not give them the option to configure it with a gaping security hole.

It's the same with the OpenBSD operating system. There's so much extremely obscure, complex code that attempts to address the same problems we have been dealing with for 30+ years. What if we started removing code and reducing the attack surface instead of trying to patch over them, or we came up with an entirely new approach?

A good example of how code should be stripped down like this is WireGuard vs the old VPNs. WireGuard came along with fresh cryptography, took all of the bells, whistles, and knobs away and now provides an in-kernel VPN in a fraction of the LOC of IPsec or OpenVPN. As a result, it can be proven to be significantly more secure, and it's more performant too.

SoftTalker
5 replies
1d

You can (and need to) do both. And OpenBSD does. LibreSSL as one example removed a huge amount of dead/spaghetti/obsolete code from OpenSSL. And they are removing old/obsolete features all the time. Do you use OpenBSD? Do you read the release notes?

cedws
4 replies
1d

That's not really good enough though, the distros just enable the build flags that let them do naughty things. The software needs to be opinionated on how to use it securely, not leave it up to the users, because the developers that wrote it probably know best! The code simply needs to not exist. If users want to fork and maintain their own insecure branch, let them.

adamrt
2 replies
1d

OpenBSD is also known for this. They constantly push back against adding configuration knobs or running non standard configurations.

Have you used OpenBSD? You're telling them they should be doing something, that is already basically their mission statement.

cedws
1 replies
1d

Looking at OpenSSH tells a different story. It is a massive, overly configurable behemoth. The 'WireGuard of SSH' would be 1% of the LOC. It would not provide password auth, or let you log in as root with password auth, or let you use old insecure ciphers.

Maybe OpenBSD itself is better at sticking to these principles than OpenSSH. I haven't used (experimented with) it for ~5 years but read about various updates every so often.

djao
0 replies
3h39m

You seem to be confusing "OpenSSH" with "OpenSSH Portable Release". As explained here: https://www.openssh.com/portable.html

Normal OpenSSH development produces a very small, secure, and easy to maintain version for the OpenBSD project. The OpenSSH Portability Team takes that pure version and adds portability code so that OpenSSH can run on many other operating systems.

Unless you actually run OpenBSD, what you think is "OpenSSH" is in fact "OpenSSH Portable Release". These are very different things.

akerl_
0 replies
1d

As the parent comments note, LibreSSL ripped out tons of code. Not "hidden behind build flags". Deleted.

There's plenty of flaws with any project, but OpenBSD is pretty well known for doing exactly the thing you're claiming they don't do.

idunnoman1222
3 replies
1d

If you want password auth, you already have to change a default setting in SSHD and restart it. How exactly is removing that as an option 'less complex' for the downstream distros?

cedws
1 replies
1d

I don't really understand your question. Removing password auth reduces code complexity and therefore attack surface whilst also preventing users from using the software with a dangerous configuration. Maybe the users don't want that, but tough shit, maybe it's the nudge they need to use SSH keys.

joshuaissac
0 replies
1d

In practice, this will just result in people and organisations using the last version of OpenSSH that supports password authentication.

PhilipRoman
0 replies
9h40m

Last time I checked "apt install openssh-server" on debian still launched sshd with password login enabled

yjftsjthsd-h
1 replies
1d

It's the same with the OpenBSD operating system. There's so much extremely obscure, complex code that attempts to address the same problems we have been dealing with for 30+ years. What if we started removing code and reducing the attack surface instead of trying to patch over them, or we came up with an entirely new approach?

OpenBSD absolutely removes things: Bluetooth, Linux binary compatibility, and sudo, off the top of my head, with the sudo->doas replacement being exactly what you're asking for.

arp242
0 replies
20h57m

Also Apache → httpd, sendmail → (open)smtpd, ntpd → (open)ntpd. Probably some other things I'm forgetting.

I've seen a number of reasonable criticisms of OpenBSD over the years. Not being minimalist enough is certainly a novel one.

freedomben
1 replies
1d

By removing password auth from openssh, you're not reducing the complexity, you're just moving it somewhere else. I would argue that you're actually adding significantly more complexity because now users/admins can't just bootstrap by using a default or generated root password on a new machine, creating a user, copying over the public key, and then disabling password auth. Now you have to figure out how to get that key into the image, an image you may or may not control. God help you if you don't have physical access to the machine.

Edit: I realized after posting that I was equivocating on "complexity" a bit because you're talking about code complexity for openssh itself. I don't disagree with you that openssh itself would be less complex and more secure without password auth features, but I think it would have spillover effect that isn't a net positive when considering the whole picture.

PhilipRoman
0 replies
9h46m

Now you have to figure out how to get that key into the image, an image you may or may not control

I'd say this is a good thing, initial secret distribution is an unavoidable complexity and avoiding it leads to "admin/admin" logins which get hacked within seconds of internet access. There is plenty of tooling developed for this, even when setting up a VPS or flashing a Raspberry PI you can put a public key on the device to be active on first boot.

dd_xplore
1 replies
1d

OpenSSH is used on a wide variety of platforms; enforcing secret keys will prohibit its usage in a lot of places due to the added complexity.

wkat4242
0 replies
1d

Indeed. And then someone will just fork it and the situation will be messier.

tedunangst
0 replies
1d

That's a pretty weird summary of openbsd development.

karmarepellent
0 replies
1d

Just for info: there are alternative SSH server implementations out there that disable features that are discouraged (e.g. password authentication)[0]

Tinyssh is just one I already knew of; I suppose you would find more via a proper search.

[0] https://tinyssh.org/

adamrt
0 replies
1d

I can't think of any long-term, open source project that has removed and ripped out more code than OpenBSD.

They are known for doing exactly what you are suggesting.

Go ask @tedunangst. Ripping out old crusty code was literally called getting "tedu'd".

yjftsjthsd-h
12 replies
1d1h

I've seen MaxAuthTries used for similar reasons, and of course fail2ban, but this seems like a nice improvement and it's built in which is probably a win in this case.
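
For reference, the knob in sshd_config (the default is 6):

    MaxAuthTries 3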

tomxor
5 replies
1d

I've used fail2ban in production for many years but eventually removed it because it produced very large iptables rulesets, leading to high memory use and ultimately instability for other services (i.e. it turns into a DDoS vulnerability for the whole server). I know the usual advice then is to reduce ban time and just not have permabanning, but that seems to kind of defeat the purpose.

I wonder how this SSH feature differs since it's implemented at the SSH level.

So long as the SSH and/or PAM config requires more than a password (I use hardware keys), the main concerns to me are log noise (making it hard to identify targeted security threats) and SSH DDoS. I know tarpits and alternative ports are another way of dealing with that, but when SSH is used for many things, having to change the port is kind of annoying.

I think I'm probably just going to end up layering it like everyone else and stick everything behind a wireguard gateway, although that concept makes me slightly anxious about having a single point of access failure.

yjftsjthsd-h
1 replies
23h57m

I've used fail2ban in production for many years but eventually removed it due to causing very large iptables leading high memory use and ultimately a source of instability for other services (i.e it turns into a DDoS vulnerability for the whole server). I know the usual advice then is to reduce ban time and just not have permabanning, but that seems to kind of defeat the purpose.

Didn't the advice switch to using ipset a while back, precisely in the name of efficiency?

tomxor
0 replies
23h53m

Interesting thanks, I hadn't heard of that option.

KennyBlanken
1 replies
22h10m

causing very large iptables leading high memory use

I know the usual advice then is to reduce ban time and just not have permabanning, but that seems to kind of defeat the purpose.

The purpose is to make random password attempts even more impractical. With even fairly lax fail2ban rules, it'll take multiple lifetimes to find a password made up of just two common use english words.
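
Even a fairly lax jail caps an attacker at roughly a hundred guesses per day per IP. A jail.local sketch:

    [sshd]
    enabled  = true
    maxretry = 5
    findtime = 10m
    bantime  = 1h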

However, that's not really their goal. I think these SSH probes are mostly intended to find systems that have already been compromised and have common backdoor passwords, and they use networks of zombie machines to do it.

That's where stuff like Crowdsec and IP ban lists come in, with the side benefit of your IP addresses becoming less 'visible'

tomxor
0 replies
15h36m

> I know the usual advice then is to reduce ban time and just not have permabanning, but that seems to kind of defeat the purpose.

The purpose is to make random password attempts even more impractical. With even fairly lax fail2ban rules, it'll take multiple lifetimes to find a password made up of just two common use english words.

True, but the other reason to use such a measure is layered security. For instance I systematically disable all password only access, which kind of makes fail2ban seem a little pointless, but if there were to be some obscure bug in PAM, or SSH, or more likely a misconfiguration, then there is another layer that makes it more difficult.

yardstick
0 replies
19h38m

Sounds like you were running SSH on the default port tcp/22? I would expect attacks to exponentially drop off as soon as you move to a custom port.

lxgr
4 replies
1d1h

It does seem to be very similar in spirit and implementation:

PerSourceNetBlockSize > Specifies the number of bits of source address that are grouped together for the purposes of applying PerSourceMaxStartups limits. Values for IPv4 and optionally IPv6 may be specified, separated by a colon. The default is 32:128, which means each address is considered individually.

Just like fail2ban, this seems like it can be equal parts helpful, a false sense of security, and a giant footgun.

For example, allowing n invalid login attempts per time interval and (v4) /24 is not a big problem for botnet-based brute force attacks, while it's very easy for somebody to get unintentionally locked out when connecting from behind a CG-NAT.
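
To be fair, that grouping is opt-in; per the same man page you would have to set something like:

    # treat each IPv4 /24 (and IPv6 /56) as a single source
    PerSourceNetBlockSize 24:56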

SoftTalker
3 replies
1d

ufw/iptables and other firewalls can also throttle repeated connection attempts, which is almost always fine but could be something you don't want if you have a legitimate need to support many rapid ssh connections from the same source (CM tools, maybe?)
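
For example, ufw's built-in rate limit (it denies an IP that opens 6+ new connections within 30 seconds), or the classic iptables "recent" recipe it roughly corresponds to (a sketch):

    ufw limit 22/tcp

    # raw iptables equivalent
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
        -m recent --name SSH --set
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
        -m recent --name SSH --update --seconds 30 --hitcount 6 -j DROP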

yjftsjthsd-h
1 replies
1d

if you have a legitmate need to support many rapid ssh connections from the same source (CM tools, maybe?)

If you're doing that, I strongly suggest using ControlMaster to reuse the connections; it makes security tools like this less grumpy, but it's also a nice performance win.
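
A minimal ~/.ssh/config sketch:

    Host myserver
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        ControlPersist 10m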

aflukasz
0 replies
10h36m

Just remember that only the first connection, the one creating the ControlMaster socket, is authenticated; subsequent ones are not.

megous
0 replies
5h57m

It's easy to do per source IP address, and reasonably easy to automatically add a source IP address to a whitelist after a successful auth.

tgv
0 replies
1d

I managed one machine with such a mechanism, and I had to remove it, because it basically took all resources. I can't remember which daemon it was, but now the machine is only accessible from a limited set of ip addresses.

kelnos
2 replies
23h24m

This seems like something I wouldn't want. I already use fail2ban, which does exactly the same thing, in a more generic manner. sshd is a security-sensitive piece of software, so ideally I want less code running in that process, not more.

akvadrako
0 replies
21h12m

The security sensitive parts of SSH run in a separate process. I would assume that most of the new code would be in the unprivileged part.

3abiton
0 replies
20h41m

There is also endlessh, a very fun project to deploy

idunnoman1222
2 replies
1d

It’s just fail2ban, should have been in core years ago

RockRobotRock
1 replies
1d

Did you forget to submit a patch for it?

idunnoman1222
0 replies
4h12m

Sounds like almost as much fun as commenting on hacker news

idoubtit
2 replies
1d

I've read the commit message in the post, and read it again, but I did not understand how it would be configured. The penalty system seems complex but only 2 parameters are mentioned.

From the documentation, one of these parameters is in fact a group of 8 parameters. I guess the separator is space, so one could write:

    PerSourcePenalties authfail:1m noauth:5m grace-exceeded:5m min:2m
See https://man.openbsd.org/sshd_config.5#PerSourcePenalties

Unfortunately, the default values are undocumented. So `PerSourcePenalties yes` (which will be the default value, according to the blog post) will apply some penalties. I did attempt to read the source code, but I'm reluctant to install a CVS client, two decades after dropping that versioning system.

mananaysiempre
0 replies
1d

The OpenBSD project provides a CVSWeb interface[1] and a GitHub mirror[2]. The portable OpenSSH project[3] that most of the world gets their OpenSSH from uses a Git repo[4] that also has a Web interface (at the same address) and a GitHub mirror[5]. Per the code there[6], the default configuration seems to be

  PerSourcePenalties crash:90 authfail:5 noauth:1 grace-exceeded:20 max:600 min:15 max-sources:65536 overflow:permissive
[1] https://cvsweb.openbsd.org/

[2] https://github.com/openbsd/src

[3] https://www.openssh.com/portable.html

[4] https://anongit.mindrot.org/openssh.git

[5] https://github.com/openssh/openssh-portable

[6] https://anongit.mindrot.org/openssh.git/tree/servconf.c?id=0...

enasterosophes
2 replies
21h32m

People keep mentioning fail2ban. I claim that both this new behavior in sshd, and fail2ban, are unprincipled approaches to security. Now, I know fail2ban is a crowd favorite, so let me explain what I mean by unprincipled.

This is the problem fail2ban (and now sshd) try to solve: I want a few people to log into my computer, so I open my computer to billions of other computers around the world and allow anyone to make a login attempt, and then I want to stop all the illegitimate attempts, after they were already able to access port 22.

It's simple Bayesian probability that any attempt to head off all those illegitimate accesses will routinely result in situations where legitimate users are blocked just due to random mistakes rather than malicious intent. Meanwhile, illegitimate attempts continue to come en masse thanks to botnets, allowing anyone with an ssh exploit the chance to try their luck against your server.

A more principled approach to security is to not roll out the welcome mat in the first place. Rather than opening up sshd to the world, allowing anyone to try, and then blocking them, don't open up sshd to the world at all.

1. If possible, only permit logins from known and (relatively) trusted networks, or at least networks where you have some recourse if someone on the same network tries to attack you.

2. If access is needed from an untrusted network, use wireguard or similar, so sshd only needs to trust the wireguard connection. Any attempt at illegitimate access needs to crack both wireguard and ssh.
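
For point 2, the server side of that is small (a sketch; keys, addresses, and the port are placeholders), and sshd can then bind to the tunnel address only:

    # /etc/wireguard/wg0.conf
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    PublicKey = <client-public-key>
    AllowedIPs = 10.8.0.2/32

    # /etc/ssh/sshd_config
    ListenAddress 10.8.0.1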

With those one or two simple measures in place, have another look at your sshd auth logs and marvel at the silence of no one trying to attack you a million times per day, while also having confidence that you will never accidentally lock yourself out.

marshray
1 replies
11h57m

1. Sure, there may be cases where you already know the source IP or network block. But there are many scenarios where your legitimate users may be traveling, or using a mobile provider that won't guarantee much about the source IP. If you open your firewall too wide, a sophisticated attacker can find some box they can proxy through.

2. Doesn't wireguard then have the same challenge as SSH? Isn't that just pushing the problem around?

Another way to cut down on the log spam is by configuring sshd to listen on a nonstandard port.

aflukasz
0 replies
10h21m

Doesn't wireguard then have the same challenge as SSH? Isn't that just pushing the problem around?

Yeah, it's actually weird how frequently in those discussions people say some version of "just use vpn". I guess they really mean "just make someone else responsible".

kazinator
1 replies
23h12m

If these people don't know what to do with themselves next that much, they should do something useful, like learn git, instead of implementing fail2ban-style features that nobody needs or wants in the software itself.

People who want this sort of thing and already have a single solution that handles multiple services have to complicate their setup in order to integrate this. They keep their existing solution for monitoring their web and mail server logs or whatever and then have this separate config to deal with for OpenSSH.

What if you don't want to refuse connections that exhibit "undesirable behavior" but do something else, like become a black hole to that IP address, and perhaps others in the IP range?

You want the flexibility to script arbitrary actions when arbitrary events are observed.

In my log monitoring system (home grown), the rules are sensitive to whether the account being targeted is the superuser or not.

SoftTalker
0 replies
22h50m

What if you don't want to refuse connections that exhibit "undesirable behavior"

Then you disable the behavior by turning it off in /etc/ssh/sshd_config.
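
That is, a one-liner in the config:

    # /etc/ssh/sshd_config
    PerSourcePenalties no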

gnufx
1 replies
22h5m

Do the people who are going on about fail2ban know whether that's even ported to, and included in, OpenBSD? I suspect not.

tedunangst
0 replies
19h46m

Nobody seems to have noticed that fail2ban is GPL, either.

ComodoHacker
1 replies
1d

Will it really help today, when anyone with any serious intent doesn't launch their attacks from one or two standalone hosts, but buys botnet capacity?

verandaguy
0 replies
23h20m

I don't think this attempts to address botnet attacks, but to be fair, there are very few tools that you can just run on a single physical or VPS host that can effectively defend against a botnet. Frankly, most things that aren't Cloudflare (or in the same ballpark) will be ineffective against well-crafted botnet attacks.

This is useful in a defence-in-depth scenario, same as fail2ban. You might be able to defeat the odd hacker or researcher doing recon on your host, and sometimes that's good enough for you.

If you need botnet protection, you shop around for botnet protection providers, and you get a botnet protection solution. Easy as.

tonymet
0 replies
21h53m

Can you trigger a command when they are in the "penalty box"? It would be nice to firewall those sources so they stop consuming sshd CPU.

textninja
0 replies
22h1m

A “SSHal credit score” tied to a pooled resource, yes, that will work out well! Kind of like how a used car purchase should come with all its tickets!

EDIT: To this feature’s credit, it’s not federated centrally, so a DDOS to nuke IP reputation would have its blast radius limited to the server(s) under attack.

sleepydog
0 replies
22h32m

I'm not a fan of this feature. First, I don't think it's going to help all that much for the reasons other people have stated (it's easy to obtain a new IP, and using ssh key-only remote login nullifies most attacks anyway).

More importantly, though, is that it is difficult to debug why you can't login to a remote system, unless you've been diligent enough to setup remote logging and some kind of backdoor you can use in a pinch. I imagine many companies have some unimportant script running in the background that logs into a remote system over ssh, and the person who set it up left the company years ago. One password change/key rotation later, and suddenly 25% of employees cannot login to that remote system because the script got one of the office's 4 public IPv4 addresses blocked on the remote server.

It's very easy to say "you should manage your systems better, you should separate your networks better", and so on. But in my line of work (customer support), I only hear about the issue after people are already locked out. And I've been on many phone calls where users locked themselves out of a server that had fail2ban set up (Ubuntu set up fail2ban by default in one of its releases).

semi
0 replies
22h22m

This is interesting but something I feel like I'd disable on most of my ssh servers, as they are only exposed through a shared jump host, and I don't want users that have too many keys in their agent to cause the jump host IP to be penalized.

On the jump host itself it makes sense though

pluc
0 replies
23h38m

So fail2ban?

password4321
0 replies
16h17m

I would like to see support for blocking by client identifier, though if it were a default all the bots would recompile libssh.

Until then this has been a great differentiator for Bitvise SSH.

opentokix
0 replies
22h43m

Chocker :D

olooney
0 replies
23h9m

This reminds me of Zed Shaw's Utu protocol from back in the day:

https://weblog.masukomi.org/2018/03/25/zed-shaws-utu-saving-...

I am not a crypto guy, but my understanding is that users can downvote each other, and the more downvotes a user gets, the harder the proof-of-work problem they have to solve before they can post. If you received enough hate, your CPU would spike for a couple of minutes each time you tried to post, thus disincentivizing bad behavior.

I see on github the project is officially dead now:

https://github.com/zedshaw/utu

nazgu1
0 replies
22h10m

Is it something that can replace fail2ban or sshguard?

kyrofa
0 replies
22h18m

Why are we building this into SSH itself? Isn't this what things like fail2ban are for?

juancn
0 replies
1d

This looks like an easy DDoS exploit waiting to happen.

gweinberg
0 replies
23h40m

If a client "causes" sshd to crash, isn't that a server error?

est
0 replies
15h19m

Don't penalize; just forward the tty to a honeypot and waste the attacker's time. Any login would succeed, but the attacker has to figure out whether the shell is real.

cess11
0 replies
22h9m

So, like a crude fail2ban service?

a-dub
0 replies
22h28m

ip addresses are kinda meaningless these days, and address-based accounting and penalization can penalize legitimate users. (bitcoind has a banscore system, it's kinda cute but these kinds of things tend to be bandaidy)

it's a hard problem. wireguard has a pretty good attempt at it built into its handshaking protocol, but like all of these things, it's not perfect.

could maybe do something interesting with hashcash stamps for client identity assertion (with some kind of temporal validity window). so a client creates a hashcash stamped cookie that identifies itself for 30 minutes, and servers can do ban accounting based on said cookie.

WhatIsDukkha
0 replies
23h6m

This seems like a bad fix to the problem of insisting that ssh continue to only use TCP.

Wireguard only responds to incoming UDP that carries a valid key (as I understand it), so it's probe-resistant.

I get the legacy argument here but it seems like almost two decades of "this tcp thing has some downsides for this job"?

WhackyIdeas
0 replies
16h4m

Love it. Now I don’t even need to change the default port numbers to stop all of those damn log entries.

Wonder if this is related to why Fail2Ban wasn’t in the pkg repos when I last tried to install it on OpenBSD?

There is only one thing on my wish list from the OpenBSD devs out there - that you’ll figure out Nvidia drivers.

Grimeton
0 replies
1d

pam-script with xt_recent works just fine.

Every time an authentication fails, you add the IP address to the xt_recent list via /proc, and in iptables you just check via --hitcount and --seconds and then reject the connection attempt the next time.
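
Roughly (a sketch; the list name and address are examples):

    # from the PAM script, on auth failure
    echo +192.0.2.10 > /proc/net/xt_recent/sshbl

    # reject anyone recorded 3+ times in the last hour
    iptables -A INPUT -p tcp --dport 22 -m recent --name sshbl \
        --rcheck --seconds 3600 --hitcount 3 -j REJECT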