
SSH3: SSHv2 using HTTP/3 and QUIC

Ayesh
28 replies
1d1h

SSH over QUIC would be nice.

I don't see any advantage of layering HTTP/3 here. It adds more friction, and the only advantage it brings is being able to "hide" the SSH server over a URL path. I guess x.509 certificates would be fine, but SSH hostkeys, SSHFP or TOFU is enough and far more secure (because it implicitly pins the server public key).

It's a relatively new project from the looks of it, so I'd definitely not use it anywhere half important, but credit to them for creating something interesting with QUIC and HTTP/3.

acdha
8 replies
1d

They list other advantages in the README such as tying into the web authentication model, which is pretty big for enterprise use as everything moves towards OIDC. If they could eventually use passkeys that’d be really nice.

kdklol
4 replies
1d
acdha
3 replies
20h2m

With hardware tokens, yes, I use that. I was thinking that building it on a web server would be really handy with an integrated client you could use with iCloud, Windows Hello, etc.

kdklol
1 replies
3h50m

With /passkeys/, actually! It's more generic than just hardware keys. I don't know of any good implementation yet, but a few projects on GitHub were mentioned in some passkey-related discussions here. I do not use anything like iCloud or Windows Hello and I don't know what these services actually use, but if they implement these open standards, it's only a matter of adding some glue code. I'd say it's likely that PuTTY will implement this on Windows eventually. That is my speculation; as I said, I don't actually use any of this.

acdha
0 replies
3h4m

I mentioned those because key management is the hard part and most people are going to be using platform authenticators for that reason. In some cases there are APIs (this was one of the features in the last macOS / iOS release) but I was also thinking that moving it closer to a browser is interesting because between platform passkeys and SSO, there are a lot of people who have all of their credentials & MFA ready in a browser and would like to reuse that.

Some searching suggests there’s at least one implementation of the SSH agent protocol using Windows Hello, which is great.

ikiris
0 replies
19h56m

This already exists too

ycombinatrix
0 replies
20h18m

I already use passkeys for ssh, via libfido.
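(For anyone curious: with OpenSSH 8.2+ and libfido2 this is a one-liner; the -O flags below make the key resident on the authenticator and PIN/touch-gated, which is what makes it behave like a passkey.)

    # Generate an Ed25519 key whose private half lives on the authenticator:
    ssh-keygen -t ed25519-sk -O resident -O verify-required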

ikiris
0 replies
19h56m

You can already do all of this with ssh

e12e
0 replies
17h15m
mkesper
7 replies
1d1h

SSH hostkeys offer no solution for the first connect to ephemeral hosts.

Ayesh
2 replies
1d

Strict SSHFP can theoretically solve it [1], assuming it's used in the first place and the zone has DNSSEC. I personally use it for all servers I manage, purely because I like the additional security, but it's not at all common and DNSSEC isn't all that perfect either.

[1] https://aye.sh/blog/sshfp-verification
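For anyone who hasn't seen it, the moving parts are small (hostname illustrative):

    # Emit SSHFP records for the host's keys, in zone-file format:
    ssh-keygen -r host.example.com
    #   e.g. host.example.com IN SSHFP 4 2 <sha256-of-ed25519-host-key>
    #   (4 = Ed25519, 2 = SHA-256)
    # Then on clients, in ~/.ssh/config:
    #   VerifyHostKeyDNS yes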

overstay8930
1 replies
18h26m

It's crazy that SSHFP hasn't taken off, I don't think a single person on earth has ever verified a host key before attempting to connect, and deploying DNSSEC is trivial now that you can use ECC and ED25519.

tptacek
0 replies
13h35m

* Deploying DNSSEC is obviously not trivial, as doing so has taken some of the largest companies on the Internet fully off the Internet for multiple hours, within the last year, so much so that it has become a running joke when companies have prolonged outages to suggest that DNSSEC is the culprit.

* There are still resolvers that can't handle Ed25519

* Being able to use Ed25519 was never the ops problem with getting DNSSEC rolled out!

* It's weird to assume that people would want to enroll their server integrity --- something that doesn't in any way depend on an Internet PKI designed to allow strangers to verify your identity, and that enlists de facto government support to make that use case work --- in a global PKI, especially when SSH already has a perfectly good certificate system that solves the same problem without any of the above liabilities.

What boggles my mind, and I mean this sincerely, not as snark, is that anybody in the entire world takes SSHFP seriously. Even if you stipulate that DNSSEC (and/or DANE) works, just arguendo, it's still a totally different use case than resolving SSH key continuity problems.

jeroenhd
1 replies
1d

If hosts are configured with SSH certificates as part of their setup, you can definitely skip TOFU and determine trust on the first connection. That won't work for the "I need to connect to a random IP address" scenario, but any cloud server exposing SSH can be configured with a certificate signed by a company/personal SSH certificate authority.

You could configure something delightfully atrocious like https://github.com/mjg59/ssh_pki but I think for most use cases where you connect to loads of SSH servers, host keys and certificate authorities will work just fine. We could do with an ACME-like protocol for distributing these certificates, though.
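A minimal sketch of that setup with stock OpenSSH (names illustrative):

    # On the CA machine: create the CA and sign each host key at provision time.
    ssh-keygen -f host_ca
    ssh-keygen -s host_ca -h -I web01 -n web01.example.com /etc/ssh/ssh_host_ed25519_key.pub
    # On each host, in sshd_config:
    #   HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub
    # On clients, a single known_hosts line replaces TOFU:
    #   @cert-authority *.example.com <contents of host_ca.pub>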

mcfedr
0 replies
23h47m

Given how rare this is, using https seems like a great idea

ikiris
0 replies
19h55m

SSH hosts have supported certs for at least a decade.

cortesoft
0 replies
23h42m

You can set up your ephemeral hosts to come up with properly signed host keys.

InvaderFizz
6 replies
23h44m

It sure would be nice to be able to easily throw up a CDN like Cloudflare in front of my ssh server with no client-side special sauce required.

I didn't see it stated in the documentation, but this feels like something that might work for that setup.

ryandvm
5 replies
23h16m

Why are people obsessed with implanting CloudFlare right in the middle of everything they do? There is absolutely nobody that needs DDoS protection for their SSH server.

I get that CloudFlare has been a well behaved netizen so far, but let's be real, it won't last forever. It never does. Eventually the shareholders start turning the screws and CloudFlare is going to succumb to the same pressures every company does and they're going to start extracting advertising value from their "customers".

How about we save the CDNs for the serious stuff and just run our SSH servers and low traffic HTTP sites ourselves?

JCharante
1 replies
23h3m

Every personal blog is low traffic until it lands at the top of HN

chrisandchris
0 replies
22h57m

Which still means that the HTTP-server can be behind CloudFlare, but nobody accesses your blog through SSH (hence not necessary to put it behind CloudFlare).

ranting-moth
0 replies
18h3m

I'm pretty sure even if CF went rogue you'd be able to keep your SSH connections ad-free for a low $1.99/month subscription.

adastra22
0 replies
22h55m

Why would you intentionally MITM your SSH connection?

InvaderFizz
0 replies
21h33m

Absolutely nothing to do with DDoS in my case. I want censorship regimes to have to break large portions of the internet for their citizens to stop even the most simple leak vector. Let them block Cloudflare, Akamai, and Cloudfront.

bitwize
3 replies
22h56m

In the 90s it was commonplace to design a new protocol on top of TCP/IP. These days, all the tooling and infrastructure is for HTTP. Designing a new protocol, you'd be starting from scratch; HTTP is much, much easier to build an application on top of.

unilynx
2 replies
21h44m

I doubt QUIC is easier than TCP to build on. But it's much easier to get your new protocol through firewalls and other middleware when using port 443 than trying to introduce a new port (or worse: a new protocol number)

tengwar2
1 replies
18h8m

Which sounds like more of a disadvantage if you are running a firewall.

anticensor
0 replies
9h6m

If your firewall drops unknown protocols, then it is broken. The Internet is designed to be default-open, not default-closed.

adontz
17 replies
1d1h

I believe security models for HTTP and SSH are pretty different. HTTP is usually public and anonymous, SSH is usually private and authenticated. While QUIC is definitely a great technology for the HTTP use case, I'm not so sure about SSH. Not saying it isn't, just that it's something to reason about.

For example, x509 seems like a disadvantage to me. I do not want anyone with a cheap DigiCert certificate to be able to log in to my server, even as a result of some fat-fingered misconfiguration. OAuth assumes that both client and service provider can reach the identity provider. Is that so for most servers? I'm used to seeing pretty restricted setups, where many servers have no internet access and only update from a private package repository.

On one hand, I really like the idea of reusing HTTP. Who does new protocols these days? Everything is JSON or XML over HTTP, and it's good enough for most cases. But is it good enough for SSH? WinRM works over HTTP, but it uses Kerberos for authentication.

Are there any significant real practical advantages? I don't see any. Are there any vulnerabilities, possibilities for misconfiguration, architectural flaws? Quite possible.

sargun
5 replies
22h14m

You can already use certificates to log in via SSH. Usually you set up your own certificate authority and sign your own certs because they need special attributes.

adontz
4 replies
21h53m

SSH certificates (I encourage using them!) are not x509; they're absolutely incompatible.

dilyevsky
1 replies
21h2m

It’s same general principle and same security model just different way of going about it. It supports CAs and extensions just like x509

adontz
0 replies
6h23m

No. For instance, SSH certificates are one level only; they do not support trust chains, cross-signing, and a lot of other x509 complexity.

ReK_
1 replies
11h58m

You're thinking of SSH keys, which are not certificates. SSH certificates are indeed x.509: https://datatracker.ietf.org/doc/html/rfc6187

adontz
0 replies
6h26m

No, I'm thinking of SSH certificates.

Here is the description of the file format; it's nothing like x509:

https://github.com/openssh/openssh-portable/blob/master/PROT...

nixgeek
5 replies
1d1h

“cheap DigiCert certificate” is already possible with misconfiguration of SSH’s TrustedUserCAKeys and without any out of tree patches. https://smallstep.com/blog/use-ssh-certificates/

frutiger
4 replies
22h17m

SSH certs are not related to x509 PKI certs. SSH certs are created with ssh-keygen and are the result of one key signing another. The public portion of the signing key (i.e. the "cert") needs to be distributed separately.

vetinari
3 replies
20h47m

Did you follow the link? The point was exactly setting up X.509 PKI for SSH authentication. Yes, it can be used with SSH, that was the GP's point.

whatisyour
0 replies
20h41m

you can disable X.509 for SSH

frutiger
0 replies
20h34m

I’m replying to parent not the overall post.

e12e
0 replies
17h19m

The link talks about setting up an ssh CA, not x509?

For our part, the most recent release of step & step-ca (v0.12.0) adds basic SSH certificate support. In other words:

step-ca is now an SSH CA (in addition to being an X.509 CA)

step makes it easy for users and hosts to get certificates from step-ca

It's a tool that does an x509 CA for x509 things and an SSH CA for ssh.

HeckFeck
1 replies
1d

Who does new protocols these days?

My business brain understands why, but my engineer’s heart laments.

iknowstuff
0 replies
23h8m

QUIC is damn good though! Its minimal header has a very tiny overhead, and the protocol gives us so much for free. What’s to lament? The userspace impl?

vore
0 replies
1d

I don't see anything about using an X.509 certificate for logging in, just for the client authenticating the remote server. And even then, TLS has support for mutual authentication, so someone with a cheap DigiCert certificate logging into your server is not really a problem: you can configure mTLS on the server side to accept only certificates from a certain chain.

magicalhippo
0 replies
1d

OAuth assumes that both client and service provider can reach identity provider.

You could use client credentials flow with a certificate. Then all you need is to register the public key with the server, much like good old SSH.

lakomen
0 replies
16h57m

Yeah those filthy cheap DigiCert certs plebs pshh gasp. Only expensive Verisign golden batch certs get to log into my computers snobbynose

SCNR, I know it's HN which is short for "Humor? No we don't understand humor".

egberts1
13 replies
1d2h

Not going there with anything HTTP/3.

Disclaimer: I write network packet parsers for XNS/IPS/IDS for a living, to look for "bad things".

mmaunder
6 replies
1d2h

How intentionally vague.

lajamerr
5 replies
1d2h

I assume he means that the encrypted metadata in HTTP/3 / QUIC makes it harder for a security admin to "peek" at what is going on in the network.

In my opinion it's short-sighted, because if we care about security, then we should care about user security and privacy as well. Because if the security admin has the ability to packet-inspect stuff, so does a potentially malicious app.

insanitybit
2 replies
1d

Odd, surely SSHv2 already suffers from the same inability to be inspected on the wire.

egberts1
1 replies
22h35m

From the Github:

SSH3 is a complete revisit of the SSH protocol, mapping its semantics on top of the HTTP mechanisms. In a nutshell, SSH3 uses QUIC+TLS1.3 for secure channel establishment and the HTTP Authorization mechanisms for user authentication.

So, it has nothing to do with SSH2; it's more about HTTP/3-QUIC security theater: the hostname is still being sent during the TLS 1.3 negotiation.

insanitybit
0 replies
20h56m

To be clear, my reading of the parent post is that the grandparent doesn't like HTTP/3-QUIC making it harder to read data off of the wire (ie: for internal security analytics).

But I don't see how this is worse than SSHv2. In both cases retrieving the hostname / IP is obviously trivial since you just instrument DNS for the hostname and, of course, the IP is cleartext.

egberts1
0 replies
23h19m

More like an incomplete state machine for HTTP/3-QUIC.

NegativeK
0 replies
21h16m

The owning organization or user should already have full admin on all endpoints.

Malicious apps and attackers should not.

nixgeek
2 replies
1d1h

The standards bodies don’t seem to buy the “bad things” argument and appear resolute on making it harder to MITM traffic on the wire and attempting to force IDS/IPS to all be run on the client.

Is there a 5-10 year future where you just can’t do this as a middlebox?

jeroenhd
1 replies
1d

Protocols supported MITM with correct configurations and it led to complete ossification of said protocols because middleboxes suck at following standards.

It seems that at the time these features were dropped, most middleboxes had ignored features like exporting keys or configuring static RSA keys and had gone for CA-MitM attacks instead. You should expect these tools to break if they're actively trying to subvert protocols to do things they're not designed to support.

I don't really see what changed, though. I guess static keys were dropped to provide forward secrecy, but other than that running your own rogue CA is as possible as it was 20 years ago. Middleboxes lagging behind in support for features like HTTP/3 is probably annoying, but that's because of a lack of implementation more than anything.

You can still use your domain tools/MDM configuration/settings to configure an HTTPS proxy and firewall off the normal ports if you want to MitM your network reliably. If your proxy doesn't support HTTP/3, it will happily downgrade your connection to HTTP/1.1 for you. Android's insistence on not actually applying user-installed certificates is a pain for many apps, but other operating systems will happily and silently drop security measures like certificate transparency when they encounter a user-operated MitM CA.

The lack of MitMability comes down to Android being fussy, IoT devices you had no chance of ever controlling needing workarounds, and devices you don't have permissions to manage not being manageable. I really do wish Android would let MDM solutions inject certificates into the system store (though I can see why they don't with the wide range of stalkerware in the wild).

egberts1
0 replies
23h17m

Second paragraph: "most middleboxes ..."

It is the "others".

xorcist
0 replies
22h33m

Not sure why this is downvoted. HTTP3/QUIC is a lot more complex to implement than SSH.

SSL is a very well studied standard, but it is clearly a committee product with lots of features built on enterprise standards like X.509, while SSH was made by a few protocol engineers with a razor-sharp use case.

It is easy to see why someone who audits parsers for a living would be much more comfortable with SSH compared to something layered over HTTP/SSL/QUIC.

qwertox
0 replies
1d2h

I wish you would share some of your thoughts.

password4321
0 replies
1d1h

My understanding is limited, but HTTP/3 boils down roughly to HTTP/2+QUIC.

Major cloud providers were still shaking issues out of HTTP/2, like "Rapid Reset" just 2 months ago; the nesting and layering open gaps and new edge cases, and naive implementations were clearly not yet battle-hardened even against old attack families like amplification/resource exhaustion.

https://news.ycombinator.com/item?id=37830987 The novel HTTP/2 'Rapid Reset' DDoS attack

https://news.ycombinator.com/item?id=37837043 HAProxy is not affected by the HTTP/2 Rapid Reset Attack

znpy
12 replies
1d2h

Is this Wayland all over again? We do three or four new things well, and everybody is supposed to just stop doing everything else?

Is this "a secure shell" (as in, somebody's personal spin on the topic) or like a new "official" direction?

The readme isn't clear on these aspects.

amluto
8 replies
1d2h

What does “official” mean? The OpenSSH team? IETF?

Anyway, SSH authentication is extremely inflexible, and the protocol is not particularly performant, especially on links with a large bandwidth-delay product. Moving to HTTP3 seems like an excellent idea if it's implemented well.

(Although… we really need a way to do TLS/QUIC to an endpoint without a domain name.)

uxp8u61q
3 replies
1d2h

The RFCs for SSH2 were all published by IETF, so I would definitely expect IETF to be involved in a project that claims to be "SSH3". If some random person started an OS project and called it "Windows 12", people would rightfully be confused.

tlivolsi
0 replies
1d2h

Agreed. The name is terribly misleading.

lnxg33k1
0 replies
1d2h

Agreed. I also think we should leave naming stuff SSH to the IETF and OpenSSH to OpenBSD; in this case the maintainer seems to be unrelated to both.

ctz
0 replies
21h59m

The IETF SSH working group was disbanded in 2006. You will be waiting a long time if you expect anything else to come from there by magic.

the8472
1 replies
1d2h

(Although… we really need a way to do TLS/QUIC to an endpoint without a domain name.)

Generate a self-signed cert and let the client TOFU. And skip the HTTP part. Just like SSH. There's no reason to do things like a browser.

amluto
0 replies
1d

I think we could do much better than this with a small amount of creativity.

But it really ought to be possible to securely configure locally-connected devices with a browser, and this is not really possible today.

aseipp
1 replies
1d2h

Yeah, right now for auth, if you want to use e.g. OIDC I think the best you can do is to essentially shove everything in the square hole using very short-lived SSH certificates and OOB auth flow; e.g. "open this browser link to get a cert for the next 15 minutes/24 hours." So, you're basically treating short-lived certs like session tokens, more or less. I got this working with my own homegrown SSH CA infrastructure last year, but never took it out of prototype stage. Even slightly more flexible authentication would be very welcome.
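For reference, the mechanics underneath such a flow are small; the heavy lifting is the OIDC dance and key distribution (paths and principals below are hypothetical):

    # Servers trust the CA via sshd_config:
    #   TrustedUserCAKeys /etc/ssh/user_ca.pub
    # After the browser/OIDC step succeeds, the CA mints a 15-minute cert:
    ssh-keygen -s user_ca -I alice@example.com -n alice -V +15m id_ed25519.pub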

amluto
0 replies
1d2h

Cloudflare offers this as a service. It’s obviously a second class citizen (it’s intensely buggy, has low availability, doesn’t work especially well even on a good day, has incoherent configuration, and no support whatsoever).

On the other hand, all Cloudflare configuration seems incoherent, and it gets more so over time. I was recently highly entertained when I tried to access one of the Zero Trust [0] pages. The UI cheerfully informed me that only the new UI could configure Zero Trust, and it redirected me to a new domain that was IIRC “one.dash.cloudflare.com”. You can’t make this up — maybe it’s called One Trust internally? The new panel looked quite a lot like the old one except that the Zero Trust pages worked.

Well, “worked”. None of the Zero Trust config makes any sense.

[0] Is there any logic at all to what lives under the Zero Trust umbrella?

uxp8u61q
1 replies
1d2h

It's just someone's project. As far as I can tell it's unrelated to IETF, if that's what you mean by "official". In any case it's presumptuous for the author to call this "SSH3".

brunoqc
0 replies
1d2h

it's probably just ssh + http/3

mseepgood
0 replies
1d2h

It's obviously a personal project.

alphazard
11 replies
1d2h

This is pretty neat, definitely better to move to UDP, so that we can have the proper response to unauthorized contact--no response.

QUIC is fine as is though, no need to layer HTTP3 on top of it.

vlovich123
7 replies
1d2h

The reason they’re using HTTP is to allow for hiding the SSH server so that it pretends to be a dummy HTTP server that responds to 404 on all requests unless you know the special random URL that hosts the SSH capabilities. It’s a neat idea but overkill when you’re not using that capability (didn’t dig into the code so maybe it is bypassed if you don’t ask for a secret URL). It does make me hesitant as I don’t know how secure Go’s HTTP stack is since an exploit there could expose quite a bit and I don’t know that it’s been hardened to host directly, but it is an interesting idea. May be worth hand-rolling a custom server to do the routing but at the same time it makes it easier to fingerprint. I think it makes more sense to separate the routing secret to a standard reverse proxy that’s harder to fingerprint. One could imagine that the secret URL idea in a normal HTTP stack is susceptible to scanning techniques since there’s only one route to guess.

withinboredom
2 replies
1d1h

It's likely the URL can be discovered via basic timing attacks, as virtually zero HTTP routers do constant-time comparisons for route matching.

codetrotter
1 replies
23h53m

If the URL path is meant to be kept secret, as here, they should run the path of every incoming request through a password-hashing algorithm such as bcrypt or scrypt and compare hashes instead of comparing the path itself.

withinboredom
0 replies
17h29m

I'm unaware of any http router that does this out-of-the-box. It would have to be something custom.
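Something custom isn't much code, though. A minimal Go sketch of what that could look like (the path and handler are made up; hashing both sides to a fixed length and comparing with crypto/subtle removes the prefix-timing signal, and bcrypt/scrypt only add value when the secret is low-entropy):

    package main

    import (
        "crypto/sha256"
        "crypto/subtle"
        "net/http"
    )

    // Hypothetical hidden endpoint; in practice a long random string
    // generated when the server is provisioned.
    const secretPath = "/replace-with-a-long-random-secret"

    var secretDigest = sha256.Sum256([]byte(secretPath))

    func hiddenRoute(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Hash the request path so both operands have fixed length,
            // then compare the digests in constant time.
            got := sha256.Sum256([]byte(r.URL.Path))
            if subtle.ConstantTimeCompare(got[:], secretDigest[:]) == 1 {
                next.ServeHTTP(w, r)
                return
            }
            http.NotFound(w, r) // everything else gets the dummy 404
        })
    }

    func main() {
        ssh3 := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // ...hand off to the actual SSH3 session handler here...
        })
        http.ListenAndServe(":8443", hiddenRoute(ssh3))
    }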

gnyman
1 replies
22h51m

It's not exactly the same, but you can "hide" an ssh server on port 443 with haproxy.

It's not really hidden, in that if you initiate a connection with ssh it will reveal itself, but if you try https it will reply as a webserver.

It works by looking at the first few bytes and deciding what to do based on that.

At one point I was running https/ssh/openvpn/custom-tcp all on one port..

Here is someone else explaining how to do it, I don't think I ever wrote it up

https://news.ycombinator.com/item?id=8925938
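For the curious, a sketch of the haproxy variant (backends and ports made up). Note this demuxes TCP on 443, so it covers classic SSH, not SSH3's QUIC/UDP:

    frontend tls_ssh_mux
        bind :443
        mode tcp
        tcp-request inspect-delay 5s
        # A TLS ClientHello arrives within the inspection window:
        tcp-request content accept if { req.ssl_hello_type 1 }
        # SSH clients send their banner immediately; match "SSH-2.0" (hex):
        use_backend ssh if { payload(0,7) -m bin 5353482d322e30 }
        default_backend https

    backend ssh
        mode tcp
        server sshd 127.0.0.1:22

    backend https
        mode tcp
        server web 127.0.0.1:8443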

stjohnswarts
0 replies
22h35m

I never even thought of that, atlassian has a page on it: https://confluence.atlassian.com/bitbucketserver/setting-up-...

neild
0 replies
22h32m

HTTP/3 is almost indistinguishable from any other protocol running over QUIC, and QUIC itself is almost indistinguishable from random noise in UDP packets. If you want to masquerade as HTTP/3 traffic, just using UDP on port 443 will generally be sufficient.

(Only “almost” indistinguishable, because it’s possible to decrypt the first packets of the client’s handshake and examine the ALPN parameters used to negotiate an application protocol. And QUIC may be further distinguishable from other UDP traffic through statistical analysis of packet sizes and response latencies, as well as the few unencrypted header bits.)

jofla_net
0 replies
23h58m

Yeah it's overkill. It's like trying to hide a meth lab by just putting up a sign that says daycare (and if you do a special knock on the door you get into the secret room). Edit: vs having an invisible building which, if you knock on it the right way, materializes...

The best thing about QUIC, if it's UDP, is definitely that it could be made un-portscannable.

heyoni
2 replies
1d2h

Isn’t it necessary for oauth?

vlovich123
0 replies
1d2h

It might be easier to integrate, but you've got a custom server and client in this case, so it should be possible to do OAuth at the server/client layer without HTTP being involved. At least I think that's right, but it's been a long, long while since I wrote OAuth code.

Seems like it's primarily there to implement the masking feature: pretending to be a normal HTTP server on a port until the shared-secret URL is knocked.

the8472
0 replies
1d2h

The client and server need to talk http to the authorization server, that doesn't mean they need to talk http to each other.

wackget
10 replies
1d

What if you already run a web server that uses port 443? Strange that the readme doesn't mention that scenario, because it's extremely common.

Presumably you'd choose a different port, but then it'd be pretty obvious you're running something if your server has a random HTTPS server exposed on port 444 or whatever.

okasaki
3 replies
1d

You can run different hosts on one web server, like company.com goes to localhost:5555 (your app or whatever) and ssh.company.com goes to localhost:8443 (let's say you're running ssh3 on that port)

qingcharles
2 replies
1d

* depending on your web server/reverse proxy configuration

[for instance, I run Kestrel and it really isn't designed to target more than one site; I do it, but it's like bending that Lego brick to make it fit where it shouldn't go]

okasaki
1 replies
1d

Yes I suppose, although you can always put a web server that does support it (like nginx) in front of your web server that doesn't support it.

qingcharles
0 replies
20h9m

Yeah, this is what MS used to recommend, although they now say their own version YARP is better.

xyst
2 replies
1d

Front the service with an nginx server or load balancer…

nginx exposed on 443:

if route is /web then route to web service on port 1234

if route is /ssh3-secret-string -> route to ssh3 server service on port 1265

if route doesn’t exist then 404

extraduder_ire
1 replies
23h56m

Can nginx proxy-pass encrypted data now? I tried this before and failed pretty hard; I had to use HAProxy at the time and pass based on the hostname in the SNI header, and it was still pretty unreliable.

If so, I assume the encryption on the SSH is handled separately from the http headers.

xyst
0 replies
23h17m
xgbi
0 replies
1d

Can't you simply `proxy_pass` the traffic with any load balancer or reverse proxy (which you probably have anyway if you use TLS)?

eichin
0 replies
19h22m

That already has a (brutal) solution now: sslh https://www.rutschle.net/tech/sslh/README.html - the current version is more sophisticated, but it was originally just a perl script that would send the connection to sshd or the https web server, based on regex matching on an initial string (and probably timing out and going to sshd if it didn't see one? Something like that, I haven't dug out the old code to check).

ajross
0 replies
20h56m

SSHv2 is likewise trivially probable though ("nc $HOST 22" replies with "SSH-2.0-whatever"!), and that never hurt it. If you want to hide your services from attackers, there are many tools for that. I don't see why it needs to be part of the application protocol.

georgyo
8 replies
1d1h

People are perhaps being a little too overdramatic here.

Yes, this is not SSHv3 as defined by a standards body. It is very much SSHv2 over HTTP/3. (Which sorta sounds like how HTTP/3 is actually HTTP/2 over QUIC)

But there are lots of SSH servers and clients, such as Dropbear SSH, OpenSSH, libssh, libssh2 (which is very different from libssh, which also supports sshv2), and more. So I don't blame the creators for putting SSH in the name.

The code itself looks like mostly glue code to other more well established libraries. I'm not saying that they didn't introduce new flaws, just that they did not roll their own crypto here.

Their paper on their work is pretty interesting: https://arxiv.org/pdf/2312.08396.pdf

I kinda hope this succeeds. The faster connection time is nice, but really OpenSSH is so change-averse that it's painful.

E.g. I have to carry a pretty large set of patches to OpenSSH, one of them being HPN-SSH, to get any kind of reasonable throughput over high-latency links. This patch set is decades old and the problems are well known, but the OpenSSH maintainers do not care. Replacing the transport layer would force things like reasonable window scaling.

Another is load balancing and routing SSH connections. You cannot know where a client wants to connect until after they've done a full handshake. This is pretty painful. If we had something like SNI, we could route clients to the correct servers using only a single IP and port.

I fully welcome these ideas and am glad a group is working on testing these concepts.

Please don't dismiss things too hard too soon.

dang
4 replies
1d1h

Ok, I've put SSHv2 in the title to make that clearer. If there's a better way, we can change it again.

nixgeek
3 replies
1d1h

I don’t think this is SSHv2 though the GitHub talks about reimplementation on HTTP semantics, and the paper illustrates SSHv2 vs SSH3 as being extremely different for session setup.

In naming; Francois also explains SSH3 is a concatenation of SSH and HTTP/3 — we can not like that here on HN (due seemingly to the lack of IETF involvement?) but it’s what the project creators picked.

gchamonlive
1 replies
1d1h

Would SSHTTP3 have been more adequate for the project's name?

magicalhippo
0 replies
1d

SHT3 as in SSH over HTTP3

dang
0 replies
20h57m

Sure, but that project name is still in the title; what I changed was the description. I don't know enough to say if the description needs to be more accurate. Others here surely do?

wmf
2 replies
1d

I think something like QSH (QUIC shell) might be a better name.

e12e
1 replies
17h11m

Quiche? (Or quissh)

wmf
0 replies
16h54m
out_of_protocol
7 replies
1d2h

If you want SSH via UDP, try mosh. If you have it installed on both client and server side, it just works, re-using auth, sessions etc. from ssh itself and only replacing the sending of actual session bytes back and forth. It doesn't break on unstable connections and has way lower latency.

v3ss0n
4 replies
1d2h

Eternal Terminal is better than mosh: https://eternalterminal.dev/

password4321
0 replies
1d

I would probably choose between the two based mostly on their security track record, but I haven't needed the comparison yet.

ckwalsh
0 replies
1d1h

Do you happen to know where I can read about how ET and Mosh each establish their connections?

I have used Mosh for years and recently heard of ET, but when I tried it I experienced noticeable hangs that I don’t get with Mosh, and I went back.

I heard from several people that “ET is the new Mosh”, but it won’t be for me unless I can figure out/resolve those hangs

chungy
0 replies
1d

"ET uses TCP"

Right there, Eternal doesn't even try to cover the same use case as Mosh. It might be an alternative, same way regular SSH is an alternative, but there's no way it can be "better"

binkHN
0 replies
1d1h

Neat.

While mosh provides the same core functionality as ET, it does not support native scrolling nor tmux control mode (tmux -CC).
georgyo
1 replies
1d2h

Mosh and this project have fairly different goals.

Mosh uses regular TCP SSHv2 to authenticate and set up the UDP connection. As such your initial connection time is actually slower than plain v2, and you cannot auth with something like OAuth.

Mosh is heavily focused on interactive sessions. You could not use mosh for batch programs easily.

gabeio
0 replies
1d

Mosh is heavily focused on interactive sessions. You could not use mosh for batch programs easily.

Correct, the goals are better human interaction with a high delay internet or server. Effectively allowing the client side to guess a bit as to where your input went (it does decently at it). But the key thing that I've loved is even if my client machine goes to sleep and I go to a different building I'm still connected to the server. That is wonderful. Agreed the connection time is slower. Mosh = Mobile shell.

password4321
6 replies
1d2h

If you want to tunnel UDP (WireGuard) or TCP (SSH) over the WebSocket protocol, check out https://github.com/erebe/wstunnel

PrimeMcFly
3 replies
1d1h

Why would you tunnel WireGuard over SSH?

password4321
1 replies
1d1h

Although not relevant to my post above, tunneling WireGuard over SSH sounds like an interesting challenge...

Because WireGuard and SSH are at different layers of the network stack, it might be necessary (though slow) to bridge two WireGuard networks through a single TCP socket port-forwarded by SSH. I'm actually curious now what tools would best be used to accomplish this, how much effort would be needed to configure things, and how badly performance would suffer when faced with normal internet traffic congestion.

oarsinsync
0 replies
1d

I'm actually curious now what tools would best be used to accomplish this

“Tunnel UDP over SSH”

https://superuser.com/questions/53103/udp-traffic-through-ss... has some suggestions
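A rough sketch of the socat approach suggested there (ports and host made up). Worth noting that socat does not preserve datagram boundaries across the TCP leg, which is part of why this is slow and fragile:

    # Forward a TCP port over SSH:
    ssh -L 51820:localhost:51820 user@host
    # On the remote end, turn TCP back into UDP toward the WireGuard peer:
    socat TCP-LISTEN:51820,reuseaddr,fork UDP:localhost:51820
    # Locally, expose a UDP port that feeds the forwarded TCP port, then
    # point the local WireGuard endpoint at 127.0.0.1:51821:
    socat UDP-LISTEN:51821,reuseaddr,fork TCP:localhost:51820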

tssva
0 replies
1d1h

Perhaps reread the comment. It presents info for running WireGuard or SSH over websockets, not for running WireGuard over ssh.

kernel_cat
0 replies
1d1h

Also https://github.com/nnathan/monopiped. Just plugging my own project.

IAmLiterallyAB
0 replies
22h22m

Looks neat. Have you looked into using the new WebTransport protocol?

https://datatracker.ietf.org/doc/html/draft-ietf-webtrans-ov... https://datatracker.ietf.org/doc/html/draft-ietf-webtrans-ht...

Still early stages, but it looks promising! Notably it supports multiple streams and unreliable datagrams since it goes over QUIC

mmaunder
6 replies
1d2h

WARNING: This is not ssh3. This is someone’s project. Install at your own risk.

arp242
3 replies
1d1h

Anyone should treat any new crypto project which hasn't seen a lot of testing from others as such, no matter who it came from. Even if this was some sort of proposal of the OpenSSH people.

The project is associated with the Louvain university; I would rate the risk of outright malicious tomfoolery to be quite low.

tjoff
2 replies
1d1h

Well, if it was a proposal of the OpenSSH people you'd bet it would get a lot of testing from others real quick.

But to even consider calling it SSH3 is really quite silly, first impression doesn't exactly inspire confidence.

arp242
1 replies
1d1h

"Confidence" in what? Ability to name things? It's making a mountain out of a molehill. It's certainly not an issue with the crypto or code.

tjoff
0 replies
1d

Judgement? Having attention for detail? Being in touch with the community?

Just a bad look / first impression.

heyoni
1 replies
1d1h

They should have called it SSHTTP3

tsimionescu
0 replies
1d1h

Secure SHell -Text Transfer Protocol 3?

ironhaven
6 replies
23h35m

Why is there http/3 in the middle? SSH over QUIC makes a lot of sense and is something I'd thought about before.

The SSH protocol is designed to multiplex many “channels” over an encrypted tcp socket. Over each channel you can run things like a shell or SFTP.

It would need some engineering, but you could keep the same SSH features and replace the multiplexing of channels over TCP with QUIC streams over UDP. Where does HTTP/3 fit in, besides adding overhead?
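Conceptually the mapping is tiny. A hedged Go sketch (not SSH3's actual code) using github.com/quic-go/quic-go; the ALPN string and address are made up, and quic-go's signatures shift between releases:

    package main

    import (
        "context"
        "crypto/tls"

        "github.com/quic-go/quic-go"
    )

    func main() {
        ctx := context.Background()
        tlsConf := &tls.Config{NextProtos: []string{"ssh-over-quic"}}

        conn, err := quic.DialAddr(ctx, "example.com:4433", tlsConf, nil)
        if err != nil {
            panic(err)
        }

        // One QUIC stream per SSH "channel": a stalled bulk transfer no
        // longer head-of-line blocks the interactive shell, which
        // multiplexing channels over one TCP socket cannot avoid.
        shell, _ := conn.OpenStreamSync(ctx) // channel 1: interactive shell
        sftp, _ := conn.OpenStreamSync(ctx)  // channel 2: SFTP
        _, _ = shell, sftp
    }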

wmf
2 replies
21h9m

You can use HTTP for authentication.

ironhaven
1 replies
13h59m

Unless you can use an off-the-shelf HTTP client to make ssh connections, I can't think of a benefit to using HTTP headers over non-HTTP key-value pairs.

gonzo
0 replies
12h21m

Looks like http(s).

egberts1
2 replies
22h32m

Do you want to maintain a state machine for each QUIC-SSH session 1-N pairing? Or worse, M-N pairing?

anticensor
0 replies
9h16m

Why not have 1:1 relationship between QUIC connection IDs and open SSH sessions?

CoolGuySteve
0 replies
21h44m

Yes. The attack surface would be much smaller.

zaik
5 replies
1d2h

This is not related to OpenSSH or to the SSH RFCs published by the IETF. It is just someone's random project.

stjohnswarts
1 replies
22h42m

It still seems like a nice experiment if nothing else. If only it was done in rust ;)

say_it_as_it_is
0 replies
21h4m

Not if they wanted to complete the work in time

nextaccountic
1 replies
21h46m

I think it was a lot of hubris to call this ssh3, as if it were going to be adopted officially.

pdntspa
0 replies
15h41m

Yeah, it is really rude of them

say_it_as_it_is
0 replies
1d

2 phd students

gsu2
5 replies
20h47m

This seems bad?

- SSH3 is a bad name: this isn't a successor to SSHv2 and will only cause confusion

- The authors don't seem to understand that SSHv2 predates all of their chosen technologies, and provides "robust and time-tested mechanisms" they claim to be adding

- How is "hiding your server behind a secret link" a feature? This is, at best, security through obscurity, which can be layered on any network protocol (e.g. https://en.wikipedia.org/wiki/Port_knocking); this implies that the authors don't have much of a security background...?

- ...Which explains why they think something as complicated as OpenID Connect is a good thing to add to SSH (i.e. https://security.stackexchange.com/questions/148292/why-is-o...)

- The abstract in the linked paper seems to conflate SSHv1 and SSHv2; I couldn't really bring myself to read much past that

In summary: this seems bad.

janosdebugs
0 replies
20h16m

I concur. They seem to have reinvented a part of the protocol without actually addressing many of the issues of SSH. The paper also doesn't bother to go into detail on any of the advancements that have been made to SSH since the original RFC, such as keyboard-interactive, GSSAPI, etc.

Some SSH implementations such as OpenSSH or Tectia support other ways to authenticate users. Among them is the certificate-based user authentication: only users in possession of a certificate signed by a trusted certificate authority (CA) can gain access to the remote server [12]. Available for more than 10 years, this authentication method requires setting up a CA and distributing the certificates to new users and is still not commonly used nowadays.

Somebody had an agenda to make SSH look as bad as possible. You can implement OIDC authentication with keyboard-interactive, no need for HTTP/3 for that. However, it gets very tricky if you want automated / script access, so it doesn't solve the authentication problem.

As an aside, Tatu Ylonen, the original author of the SSH protocol, published a paper in 2019 titled "SSH Key Management Challenges and Requirements"[1], which is an interesting read. It would seem the authors of this paper should have at least read it.

[1] https://www.ylonen.org/papers/ssh-key-challenges.pdf

insanitybit
0 replies
19h39m

This is, at best, security through obscurity, which can be layered on any network protocol (e.g. https://en.wikipedia.org/wiki/Port_knocking); this implies that the authors don't have much of a security background...?

This isn't security through obscurity. The url would be a secret. This is a form of capability security, where to connect to the server you must be able to name the server.

A URL with a secret is, in my opinion, far more sane than port knocking, and will be much more efficient as well.

(i.e. https://security.stackexchange.com/questions/148292/why-is-o...)

Your link doesn't support your statement at all. No one there answers "here's why OIDC is less secure"; they say the opposite.

idlephysicist
0 replies
19h37m

I'd agree with you. The readme calls out "Significantly faster session establishment" and goes into greater detail later on.

Establishing a new session with SSHv2 can take 5 to 7 network round-trip times, which can easily be noticed by the user. SSH3 only needs 3 round-trip times. The keystroke latency in a running session is unchanged.

I, for one, can say that session establishment sometimes takes a little while, but not to the extent that it would be a selling point (so to speak) for me to adopt SSH3.

commandersaki
0 replies
19h38m

SSH over HTTP at a secret URL is a killer feature if you're working on hostile networks that block SSH and even go as far as trying to detect the protocol on the wire.

badrabbit
0 replies
19h52m

Your points are great but SSH is extensible so openid connect support doesn't mean much since you can do it with existing ssh.

"Security by obscurity" is only a thing if you're relying on that mechanism for security. People already configure SSH port knocking as you noted. It can be considered attack surface reduction and is a good feature given they're not using a secret link for any security control.

One benefit of their approach might be that you can use TLS PKI now instead of setting up ssh-CAs. Potentially you would need to manage less PKI.

But a criticism I have is that http* has many more vulns and new attack techniques being developed all the time, unlike ssh. I can imagine LFI or request smuggling on the same HTTP/2 web server causing RCE via their protocol.

r1ch
3 replies
23h45m

I feel like they're missing some benchmarks here; show off the benefit that QUIC brings! OpenSSH's fixed window size significantly bottlenecks throughput on long fat links. I'd love to see ssh+rsync running at 2+ Gbps.

pulpfictional
2 replies
23h23m

There is also: https://www.psc.edu/hpn-ssh-home/

HPN-SSH is a series of modifications to OpenSSH, the predominant implementation of the ssh protocol. It was originally developed to address performance issues when using ssh on high speed long distance networks (also known as Long Fat Networks: LFNs). By taking advantage of automatically optimized receive buffers HPN-SSH could improve performance dramatically on these paths. Later advances include; disabling encryption after authentication to transport non-sensitive bulk data, modifying the AES-CTR cipher to use multiple CPU cores, more detailed connection logging, and peak throughput values in the scp progress bar. More information can be found on HPN-SSH page on the PSC website.
formerly_proven
1 replies
22h2m

SSH started out with a maximum window size of 128K, which was bumped to 2M in the mid-2000s. It'd be entirely reasonable to bump this to the 64M to 128M range; it's not a fixed buffer allocated for each channel, and the peers explicitly manage the window size, so there really shouldn't be any compatibility issues. This would already solve most of these issues, the more complicated parts of HPN-SSH aren't really needed, and things like multithreaded crypto are entirely unnecessary with modern CPUs unless you need to saturate a 100G link with one connection.
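To put numbers on it: a 1 Gbit/s path with 100 ms RTT has a bandwidth-delay product of 125 MB/s × 0.1 s = 12.5 MB, so the current 2 MB window caps a single connection at about 2 MB / 0.1 s = 20 MB/s (~160 Mbit/s) regardless of link speed, while a 64 MB window covers that path with room to spare.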

nubinetwork
0 replies
21h49m

unless you need to saturate a 100G link with one connection

Maybe not 100gig, but I routinely transfer data over 10gig links. I used to be a heavy user of HPN, but Gentoo pretty much stopped supporting it because the multithreading is supposedly broken.

epaulson
2 replies
1d

I know this isn't an actual v3 of the SSH protocol, but if there ever is a version 3 of SSH, it really needs some kind of (encrypted) SNI, or at least a standardized metadata block that can be passed to any jumphost without having to know the specifics of the ProxyCommand on that middlebox.

qudat
0 replies
18h7m

SNI is absolutely needed. Over at https://pico.sh we have to request an IP for each ssh server even though from a resource perspective we really only need 1 VM. It increases the complexity of our deployments and overall makes us want to figure out how to merge all of our SSH apps into one.

idorosen
0 replies
1d

`ProxyJump` already exists, so you don’t need to know where netcat resides on the jumphost anymore.

SNI-like metadata might have some adverse security implications, but a fancier ProxyJump with session routing would be nice.

Bu9818
2 replies
16h2m

For faster session establishment in OpenSSH consider ControlMaster in ssh_config(5), which multiplexes multiple sessions in one connection instead of creating a new connection for each session.

mynameisnoone
1 replies
11h10m

    # ~/.ssh/config
    # Place at the *End-of-file*
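    # Note: the sockets directory is not created automatically; run
    # `mkdir -p ~/.ssh/sockets` first or connections will fail.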
    Host *
      ControlMaster auto
      ControlPath ~/.ssh/sockets/%C.sock
      ControlPersist 600

      ServerAliveInterval 60
      ServerAliveCountMax 10
      IPQoS throughput
      TCPKeepAlive yes

      # :: Security Exception :: Purposeful for UX usability of machine-to-machine hops
      ForwardAgent yes

      # ssh-audit recommendations https://www.ssh-audit.com/hardening_guides.html 
      #
      CASignatureAlgorithms sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256
      HostKeyAlgorithms sk-ssh-ed25519-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256
      HostbasedAcceptedAlgorithms sk-ssh-ed25519-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-256-cert-v01@openssh.com,rsa-sha2-256
      PubkeyAcceptedAlgorithms sk-ssh-ed25519-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-256-cert-v01@openssh.com,rsa-sha2-256
      KexAlgorithms sntrup761x25519-sha512@openssh.com,curve25519-sha256,curve25519-sha256@libssh.org
      MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com
      Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
      # GSSAPIKexAlgorithms gss-curve25519-sha256-,gss-group14-sha256-,gss-group16-sha512-

ColCh
0 replies
5h47m

looks like there is no Compression=yes ?

k__
1 replies
1d2h

How does it compare to Mosh?

mcfedr
0 replies
23h45m

Sounds like it's solving a completely different set of problems.

jedisct1
1 replies
23h7m

This is cool, but calling it SSH3 is not appropriate. It's an independent project, not a new version of the SSH protocol. Sure, it's "SSH3" and not "SSHv3". Still, inviting confusion with something that could be an official protocol is not nice.

netsharc
0 replies
17h53m

I'm surprised no one's opened an issue on their repo and that no brigade has opined with lots of comments and emojis...

zelly
0 replies
22h40m

Seems cool but mosh already exists

xg15
0 replies
3h8m

It's an interesting project, but I think a clarification would be important: is SSH3 supposed to be "SSH-over-HTTP" that happens to use QUIC as a transport, or is it "SSH-over-QUIC" that happens to use HTTP as an auth/addressing layer?

The difference is not just philosophical, it also has practical implications, in particular what part of the different protocols (SSH, HTTP, TLS and QUIC) clients, servers and intermediates are expected to implement and which can be left out.

E.g., if it's "SSH-over-HTTP", I'd expect the protocol to work well with HTTP proxies and application servers and I'd expect to be able to run a SSH3 server and a regular HTTP server on the same port. On the other hand, I'd expect features that require precise control over the low-level QUIC connection - like UDP port forwarding and session resumption - to be less reliable.

If it's "SSH-over-QUIC", the expectations would be the opposite: That you can treat QUIC (and TLS) as an integral part of the protocol, in the same way that the encryption, auth and transport layers in standard SSH are seen as an integral part of SSH. However then the server should generally be deployed as a standalone process on a separate port and should not be considered a fully compatible HTTP endpoint. That might diminish the "stealth" ability of the protocol a bit.

Or to sum it up, which parts of the protocol stack would an SSH3 client or server be expected to provide by themselves and which parts would be delegated to the OS/infrastructure/intermediaries etc?

wslh
0 replies
23h34m

Is there a security audit for this code?

throwawaaarrgh
0 replies
22h1m

Throwaway's maxim: no new protocol works without HTTP

tenebrisalietum
0 replies
1d

I'm going to use this to do UUCP over QUIC.

skywhopper
0 replies
21h47m

This is a really cool project and a great idea that seems to be decently implemented.

Calling it “SSH3”, however, is misleading at best, and misrepresents what the project is. Please consider choosing a better name.

rubyfan
0 replies
23h52m

Something about calling this SSH3 feels like when Comcast named a thing 10G.

password4321
0 replies
1d1h

SSH3 seems a bit of a clickbait project name, it's not clear to me that this project uses anything protocol-wise from SSH though it offers similar functionality.

A PhD project from Belgium that combines several Golang libraries to offer HTTP-based authentication on top of backwards compatibility with OpenSSH keys, configuration, agents, etc. -- it looks pretty solid but the associated paper titled "Towards SSH3" acknowledges "This article is a first step" in the conclusion.

kiitos
0 replies
13h53m

The interesting thing here isn't so much the improved latency or whatever; it's the ability to ssh from a client on a network that restricts access to anything other than 80/443.

jsiepkes
0 replies
1d1h

Missed chance not to call it SSHTTP3 ;-)

jhatemyjob
0 replies
1d

So let me get this straight: from reading the README, the only tangible benefit is faster session establishment? With the downsides being a more complicated protocol, which apparently has slower throughput?? I guess this is a cool experiment, but why would anyone use this over OpenSSH or libssh2?

ikiris
0 replies
1d1h

Aside from the weird claim of being SSH3, this project seems not to understand that ssh already supports cert auth.

hackernudes
0 replies
1d1h

Maybe similar to https://github.com/moul/quicssh

I've done ssh over websocket before (to bypass a corp proxy)... been thinking about it a lot lately. I would love it if mosh got support for transports other than just UDP, and it would be cool if the initial handshake could be done over http instead of ssh.

egberts1
0 replies
22h33m

SSH3 does not equate to SSHv2 over HTTP/3-QUIC.

cwillu
0 replies
14h52m

Not related to IETF, no RFC, entirely unrelated to the processes involved in the standard.

Protecting against the potential for confusion from and/or abuses like this is what trademarks are for.

cvalka
0 replies
23h30m

Why no support for mTLS? Certificate based authentication for clients is a must.

apatheticonion
0 replies
19h44m

Interesting. Having HTTP/3 layered over the top, which I presume allows SSL certificates to be applied to the connection, might result in the SSH connection appearing to observers as standard, uninteresting website traffic.

Assuming one could connect to an SSH server this way and tunnel ports, could this allow for a means to bypass China's GFW?

China's firewall allows http and https connections through; however, VPNs, SSH and similar are detected upon connection and blocked on demand.

Hiding a VPN connection by tunneling to a remote SSH server over HTTP/3, forwarding the VPN port and connecting to it might fly under the radar as it could be perceived as regular web traffic.

Would be an interesting thing to try.

anthk
0 replies
23h26m

Meh. OpenBSD does it fine. If not, Mosh works great on flakey connections.

Sleaker
0 replies
23h7m

What makes this any better than say, MOSH?

Jhsto
0 replies
1d2h

Sounds like something to try in an internal network where you want to do X11 or Wayland application forwarding!

JCharante
0 replies
23h6m

Oauth for ssh sounds annoying