I haven't seen anyone yet comment on the design constraint that SSH uses a lot of separation in its multiplexing for the purpose of sandboxing/isolation. You'd want the transport to be as straightforward as possible. SSH needs to be the reliable, secure tunnel that you can use to manage your high-performance gateways. It has a lot of ways to turn things off to avoid attack surface. HTTP protocols have a lot of base requirements. Different problems.
So with HTTP requests you can see the domain name in the header and forward the request to the correct host. That was never a thing you could do with SSH; does this allow that to work?
"It is often the case that some SSH hosts can only be accessed through a gateway. SSH3 allows you to perform a Proxy Jump similarly to what is proposed by OpenSSH. You can connect from A to C using B as a gateway/proxy. B and C must both be running a valid SSH3 server. This works by establishing UDP port forwarding on B to forward QUIC packets from A to C. The connection from A to C is therefore fully end-to-end and B cannot decrypt or alter the SSH3 traffic between A and C."
More or less; maybe, but not automatically like you suggest, I think. I don't see why you couldn't configure a generic proxy to set it up, though.
You can forward any ssh traffic based on the domain name with SNI redirection. You can also use that with, let's say, the nginx stream module, to run an ssh server and an http server on the same port.
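A minimal sketch of that nginx stream setup, assuming hypothetical names and ports, and assuming the ssh side is wrapped in TLS so there is an SNI to read:

```nginx
# Peek at the TLS ClientHello (without terminating TLS) and route by SNI.
stream {
    map $ssl_preread_server_name $backend {
        ssh.example.com   127.0.0.1:2222;   # TLS-wrapped sshd
        default           127.0.0.1:8443;   # regular https server
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}
```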
A ProxyJump directive in the SSH client config would make everything in that domain hop over that hop server. It's one extra connection, but with everything correctly configured that should be barely noticeable. Auth is also proxied through.
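Something like this in ~/.ssh/config (hostnames hypothetical):

```
# Everything under *.internal.example.com hops through the gateway.
Host *.internal.example.com
    ProxyJump gateway.example.com
```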
Is there a way to configure the jump (hop) server to reroute the request based on the value of %h and/or %p? Otherwise, it's going to be quite difficult to configure something like HTTP virtual hosts.
EDIT: Looking at the relevant RFC [1] and the OpenSSH sshd_config manual [2], it looks like the answer is that the protocol supports having the jump server decide what to do with the host/port information, but the OpenSSH server software doesn't present any relevant configuration knobs.
> the keystroke latency during a session remains unchanged
That’s a shame. Lowered latency (and persistent sessions, so you don’t pay the connection cost each time) are the best things about Mosh (https://mosh.org/).
I feel like this should really be SSH over QUIC, without the HTTP authorization mechanisms. Apart from the latter not really being used at all for users (only for API calls, Bearer auth), shell logins have a whole truckload of their own semantics. e.g. you'd be in a rather large amount of pain trying to wire PAM TOTP (or even just password+OTP) into HTTP auth…
I view it orthogonally: making it easier to use the single company identity we use for every other service for SSH as well would make it so much easier to handle authorization and RBAC properly for Linux server management. Right now, we have to juggle SSH keys; I always wanted to move to SSH certificates instead, but there's not a lot of software around that yet (anyone interested in building some? Contact me).
So having the peace of mind that when I block someone in Entra ID, they will also be locked out of all servers immediately—that would be great, actually.
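On the SSH certificates point: for what it's worth, the OpenSSH mechanics themselves are small; the missing piece is issuance tooling tied to the IdP. A sketch with hypothetical names:

```
# CA signs a user's public key into a short-lived certificate:
ssh-keygen -s user_ca -I alice@corp -n alice -V +8h id_ed25519.pub

# Each server then trusts the CA rather than individual keys (sshd_config):
#   TrustedUserCAKeys /etc/ssh/user_ca.pub
```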
> PAM TOTP (or even just password+OTP) into HTTP auth
But why would you? Before initiating a session, users will have to authenticate with the IdP, which probably includes MFA or passkeys anyway. No need for PAM anymore at all.
> use our single company identity we use for every single service for SSH as well
How would that even work? Do you open your browser, log in, and then somehow transfer the session into your ssh client in a terminal? Does the browser assimilate the terminal?
And let me remind you, HTTP authentication isn't a login form. It's the browser's built-in "HTTP username + password" form and its cousins. We're talking HTTP 401. The only places this is widely used are API bearer tokens and NTLM/Kerberos SSO.
> Before initiating a session, users will have to authenticate with the IdP, which probably includes MFA or passkeys anyway. No need for PAM anymore at all.
Unfortunately I need to pop your bubble, PAM also does session setup, you'd still need it. And the other thing here is — you're solving your problem. Hard-relying on HTTP auth for this SSH successor needs to solve everyone's problem. And it's an incredibly bad fit for a whole bunch of things.
Coincidentally, SSH's mechanisms are also an incredibly bad fit; password authentication is in there as a "hard" feature; it's not an interactive dialog and you can't do password+TOTP there either. For that you need keyboard-interactive auth, which I'm not sure but feels like it was bolted on afterwards to fix this. Going with HTTP auth would probably repeat history quite exactly here, with at some point something else getting bolted to the side…
> Do you open your browser, log in, and then somehow transfer the session into your ssh client in a terminal?
You start the ssh client in the terminal, it opens a browser to authenticate, and once you're logged in you go back to the terminal. The usual trick to exfiltrate the authentication token from the browser is that the ssh client runs an HTTP server on localhost to which you get redirected after authenticating.
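A minimal Go sketch of that localhost-callback trick (port, path, and the surrounding OAuth details are hypothetical; the token exchange is omitted):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	codeCh := make(chan string, 1)

	// The IdP redirects the browser here after login, e.g.
	// redirect_uri=http://127.0.0.1:8765/callback
	http.HandleFunc("/callback", func(w http.ResponseWriter, r *http.Request) {
		codeCh <- r.URL.Query().Get("code") // authorization code from the IdP
		fmt.Fprintln(w, "Logged in; you can close this tab.")
	})
	go http.ListenAndServe("127.0.0.1:8765", nil)

	code := <-codeCh
	fmt.Println("authorization code:", code)
	// Next step: exchange the code for a token at the IdP's token endpoint,
	// then present that token to the server in an Authorization header.
}
```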
That, or the SSH client opens a separate connection to the authorization server and polls for the session state until the user has completed the process; that would be the device code grant, which would solve this scenario just fine.
> How would that even work? Do you open your browser, log in, and then somehow transfer the session into your ssh client in a terminal? Does the browser assimilate the terminal?
That's pretty well covered in RFC8628 and doesn't even require a browser on the same device where the SSH client is running.
> And let me remind you, HTTP authentication isn't a login form. It's the browser built-in "HTTP username + password" form and its cousins. We're talking HTTP 401. The only places this is widely used is API bearer tokens and NTLM/Kerberos SSO.
That depends entirely on the implementation. It could also be a redirect response which the client chooses to delegate to the user's web browser for external authentication. It's just the protocol. How the client interprets responses is entirely up to the implementation.
> Unfortunately I need to pop your bubble, PAM also does session setup, you'd still need it.
I don't see why, really. It might just as well be an opaque part of a newer system to reconcile remote authorization with local identity, without any interaction with PAM itself necessary at all.
> And the other thing here is — you're solving your problem. Hard-relying on HTTP auth for this SSH successor needs to solve everyone's problem. And it's an incredibly bad fit for a whole bunch of things.
But isn't that the nice part about HTTP auth, that it's so extensible it can solve everyone's problems just fine? At least it does so on the web, daily, for billions of users.
Note the OAuth listed there is OAuth 1.0. Support for "native" HTTP authentication was removed in OAuth 2.0.
This discussion is about using HTTP authentication. I specifically said HTTP authentication in the root post. If you want to do SSH + web authentication, that's a different thread.
Rule of thumb: if you need HTML in any step of it —and that includes as part of generating a token— it's web auth, not HTTP.
I hate that web-enshittification of SSH is considered the solution to this problem, and many other modern application-level problems.
It's done because the web stack exists and is understood by the web/infrastructure folks, not because it represents any kind of local design optima in the non-web space.
Using the web stack draws in a huge number of dependencies on protocols and standards that are not just very complex, but far more complex than necessary for a non-web environment, because they were designed around the constraints and priorities of the web stack. Complicated, lax, text-based formats easily parsed by javascript and safe to encode in headers/json/query parameters/etc, but a pain to implement anywhere else.
Work-arounds (origin checks, CORS, etc) for the security issues inherent in untrusted browsers/javascript being able to make network connections/etc.
We've been using Kerberos and/or fetching SSH keys out of an LDAP directory to solve this problem for literal decades, and it worked fine, but if that won't cut it, solving the SSH certificate tooling problem would be a MUCH lighter-weight solution here than adopting OAuth and having to tie your ssh(1) client implementation to a goddamn web browser.
I see your point, but I think you're missing the broader picture here. Web protocols are not just used because they are there, but because the stack is very elegantly layered and extensible, well understood and tested, and offer strong security guarantees. It's not like encryption hasn't been tacked onto HTTP retroactively, but at least that happened using proper staples instead of a bunch of duct tape and hope as with other protocols.
All of that isn't really important, though. What makes a major point for using HTTP w/ TLS as a transport layer is the ecosystem and tooling around it. You'll get authorization protocols like OIDC, client certificate authentication, connection resumption and migration, caching, metadata fields, and much more, out of the box.
> the stack is very elegantly layered and extensible
I have to disagree pretty strongly on this one. Case in point: WebSockets. That protocol switch is "nifty" but breaks fundamental assumptions about HTTP and to this day causes headaches in some types of server deployments.
I guess it didn't get traction… whether that happens honestly feels like a fickle, random thing.
To be fair, a Go project as the sole implementation (I assume that's what this is?) is a no-go; for example, we couldn't even deploy it on all our systems, since last I checked Go doesn't support ppc64 (BE, not ppc64le).
I also don't see a protocol specification in there.
[edit] actually, no, this is not SSH over QUIC. This is SSH over single bidi stream transport over QUIC, it's just a ProxyCommand. That's not how SSH over QUIC should behave, it needs to be natively QUIC so it can take advantage of the multi-stream features. And the built-in TLS.
TFA says "port scanning attacks", but in my opinion it's not that. It's barely jiggling the door knob. Securing SSH isn't hard to do properly, and port scans or connection attempts aren't something anyone needs to be concerned about whatsoever.
I am concerned however about the tedious trend of cramming absolutely everything into HTTP. DNS-over-HTTP is already very dumb, and I'm quite sure SSH-over-HTTP is not something I'm going to be interested in at all.
Don't get me wrong, this might likely be a fantastic tool. But something as essential as a secure connection would definitely need a good pair of eyes for audit before I'd use that for anything in production.
But it's a good start. Props to exploring that kind of space that needs improvement but is difficult to get a foothold in.
This is actually a common/desirable feature to permit a group of people to access an ephemeral machine (e.g. engineers accessing a k8s node, etc.). Authorizing "engineers" via OAuth is much more ergonomic and safe vs traditional unix auth which is designed more for non-transient or non-shared users.
I recently looked into this and it looks like the IETF(?) RFC draft for SSH3 was abandoned? It's great this exists, but I think the standard needs to be done as well.
Firstly, I love the satirical name of tempaccount420; I was also just watching memes and this post is literally me (Ryan Gosling).
I was also thinking about this literally yesterday, being a bit delusional in hoping to create a better ssh using http/3 or some minor improvement, because I made a comment about tor routing and linking it to things like serveo; I was thinking of enhancing that idea or something, lol.
Actually, it seems that I had already starred this project but forgotten about it. This is primarily the reason why I star github projects, and it might be where I got the inspiration for http/3 with SSH in the first place.
Seems like a really great project (I think)
Now, one question that I have: could SSH be made modular, in the sense that we can split the transport layer apart from SSH as this project does, without too many worries?
Like, say I want to create an SSH-ish something with, let's say, something like iroh as the transport layer; are there any libraries or resources which can do something like that? (I won't do it for iroh, but I always like mixing and matching, and I am thinking of some different ideas like SSH over matrix/xmpp/signal too; the possibilities could be limitless!)
Written in Go. Terrible name, already discussed in various other comments, and the author acknowledges it.
The secret path (otherwise giving 404) would need brute-force protection (at the HTTPd level?). I think it is easier to run SSH on a non-standard port on IPv6, but it remains true that anyone with network read access between the endpoints can figure it out.
What isn't explained is why one would care about 100 ms latency during auth. I'd rather have mosh, which has resume support and works on high latency (though IIRC it won't work over Tor?). But even then, with LTE and NG, my connections over mobile have become very stable here in NL (YMMV).
I thought the same until I read the page and realized that ssh is quite broken if you think about it.
With ssh everybody does TOFU or copies host fingerprints around, vs https where setting up letsencrypt is a no-brainer and you're a weirdo if you even think about self-signed certs. Now you can do the same with ssh, but do you?
For authentication, ssh relies on long-lived keys rather than short-lived tokens. Yes, I know about ssh certificates, but again, they're a hassle to set up compared to using any of a million IdPs with oauth2 support. This enables a central place to manage access and mandate MFA.
Finally, you better hope your corporate IT has not blocked the SSH port as a security threat.
Telnet, FTP and rlogin weren't broke, either. They had their own encrypted variants before SSH came along.
Listing all the deficiencies of something, and putting together a thing that fixes all of them, is the kind of "designed by committee" project that everyone hates. Real progress requires someone to put together a quick project, with new features they think are useful, and letting the public decide if it is useful or not.
Sure, someone paranoid about his SSH server being continuously probed by bots is going to excitedly jump to a new HTTP-SSH server that is going to be continuously probed by even more bots for HTTP exploits (easily an order of magnitude more traffic) AND whatever new-fangled "HTTP-SSH" exploits appear.
I do hate the name ssh3. I was glad to see this at the top of the repo:
> SSH3 is probably going to change its name. It is still the SSH Connection Protocol (RFC4254) running on top of HTTP/3 Extended connect, but the required changes are heavy and too distant from the philosophy of popular SSH implementations to be considered for integration. The specification draft has already been renamed ("Remote Terminals over HTTP/3"), but we need some time to come up with a nice permanent name.
Same - this feels like the equivalent of some rando making a repo called "Windows 12" or "Linux 7".
LDAP2 or nextVFS... but point awarded. Feels that way because it is. Though my examples aren't great. These things just are; not really versioned. I don't know if major differences would call for ++
A better 'working name' would be something like sshttp3, lol. Obviously not the successor to SSH2
Cf. “JSON5”.
Eh. JSON forfeited version numbers, and if this analogy ran all the way through then we'd be looking at a scenario where SSH is based on HTTP 1 or 2. In that situation calling the HTTP/3 version SSH3 would make a lot of sense.
You mean like cryptocurrency bros naming something "web 3.0"?
Secure Hypertext Interactive TTY
That sounds a bit crap
HITTY then.
Maybe SSH/3 instead (SSH + HTTP/3)?
Doesn't /3 mean v3? I mean, for HTTP itself, doesn't the HTTP/3 == HTTPv3? If so, I don't see how this is any better than SSH3 - both SSH3 and SSH/3 read to me like "SSH v3"
Yes, but HTTP is about the only thing that versions with a slash. By writing it SSH/3, it would emphasize its relationship with HTTP/3, instead of it being the third version of SSH.
> Doesn't /3 mean v3?
I've seen very little do that. Probably just HTTP, and it's using a slash specifically to emphasize a big change.
I like this idea!
Having SSH in the name helps developers quickly understand the problem domain it improves upon.
/* This is one proper bikeshedding thread if I ever saw one. */
sshhh ... don't sidetrack the productive comment generation. (also, SSHHH as a possible name)...
Quissh?
HTTPSSH.
Why not just SSH/QUIC, what does the HTTP/3 layer add that QUIC doesn’t already have?
QuickShell - it should be called
Quicshell*
That's already a project (library for building a desktop environment).
QSH?
The ability to use HTTP authentication methods, HTTP headers, etc?
SSHTTP
HTTPSS for more confusion
SecureHyperTextShell (SHTS)
I meant this in jest but now that I think about it, it actually could be a decent name (?)
I think it's too similar to HSTS: https://en.m.wikipedia.org/wiki/HTTP_Strict_Transport_Securi...
QUICSH/T
Hyper Secure Shell (HSS)
SSHTTP3
Secure Shell Hyper Text Transfer Protocol Version 3. Yikes.
remove the hyper text:
SSHTP/3 "Secure Shell Transfer Protocol Version 3"
or even:
SSHP/3 "Secure Shell Protocol Version 3"
pronounced: shoop
Why not HSH, or HTTPS Shell.
SSH2/3, maybe?
It's still largely SSH2, but runs on top of HTTP/3.
SSHoH
SSHoH3
Pronounced "Shoe"
qrs for Quic Remote Shell?
Or h3s for HTTP 3 Shell?
H3rs for http3 remote shell?
How about Tortoise Shell - a little joke because it's so fast
SSH/HTTP/3
That way, when you need to use sed for editing text containing it, your pattern can be more interesting: try escaping every slash, as in `s/SSH\/HTTP\/3/x/`.
At least with GNU sed, you can use different separators to dodge the need for escaping; | works as well, as in `s|SSH/HTTP/3|x|`.
https://github.com/francoismichel/ssh3/issues/79#issuecommen...
SSH over QUIC
so, maybe SSHoQ or SoQ
soq reads better for the CLI I suppose.
HTTP under SSH, or hussh for short.
h3sh | hush3 | qs | qsh | shh | shh3
Anything with a 3 in it is a nightmare to type quickly. shh looks like you typo'd ssh.
qsh might be taken by QShell
https://en.m.wikipedia.org/wiki/Qshell
There's a whole github issue where the name was bikeshedded to death.
SSQ
How about rthym or some variation?
Yeah, that's not cool.
HTTP3 Shell or H3S
Easy: hhs instead of ssh (since the even more obvious shh is essentially impossible to google). Stands for, idk, HTTP/3 Hardened Shell or something ("host shell"? sounds like windows)
hss? Http/3 Secure Shell?
Or h3ss, pronounced hess
SSH over 3: SO(3). Like the rotation group.
SSHoHTTP3
ussh (for udp)
Don't use it! Create your own thing and name it however you want.
Non-doers are the bottom rung of the ladder, don't ever forget that :).
No... They're one rung up from evil and dumb doers.
I was skeptical of the claim that it's faster than traditional SSH, but the README specifies that it is faster at establishing a connection, and that active connections are the same speed. That makes a lot of sense and seems like a reasonable claim to make.
It is not faster in this sense. However, an SSH connection can have multiple substreams, especially for port forwarding. Over a single classical connection, this can lead to head-of-line blocking, where an issue in one stream slows everything down. QUIC/HTTP3 protocol can solve this.
Does this implementation do that too, or does it just use a single h3 stream?
The answer is yes according to code and documentation [0]:
> The stream multiplexing capabilities of QUIC allow reducing the head-of-line blocking that SSHv2 encounters when multiplexing several SSH channels over the same TCP connection
....
> Each channel runs over a bidirectional HTTP/3 stream and is attached to a single remote terminal session
[0] https://www.ietf.org/archive/id/draft-michel-remote-terminal...
Although, dollars-to-donuts my bet is that this tool/protocol is much faster than SSH over high-latency links, simply by virtue of using UDP. Not waiting for ACKs before sending more data might be a significant boost for things like scp'ing large files from one part of the world to another.
SSH has low throughput on high-latency links, but not because it uses TCP. It is because SSH hardcodes a too-small maximum window size in its protocol, in addition to TCP's own.
This SSH window size limit is per ssh "stream", so it could be overcome by many parallel streams, but most programs do not make use of that (scp, rsync, piping data through the ssh command), so they are much slower than plain TCP as measured e.g. by iperf3.
I think it's silly that this exists. They should just let TCP handle this.
Yeah, the longstanding hpn-ssh fork started off by adjusting ssh’s window sizes for long fat pipes.
https://github.com/rapier1/hpn-ssh
Off the top of your head do you know of any file transfer tools that do utilize multiple streams?
I tend to use 'rclone', which does SSH and more. The '--transfers' arg is useful for handling several files, lol. A single file, if I recall correctly, isn't parallelized.
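For reference, a hedged example (the remote name is hypothetical):

```
# Copy a directory with up to 8 files in flight at once:
rclone copy myhost-sftp:/data ./data --transfers 8
```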
That's why mosh exists, as it is purpose built for terminals over high latency / high packet loss links.
Yeah, there’s a replacement for scp that uses ssh for setup and QUIC for bulk data transfer, which is much faster over high-latency paths.
https://github.com/crazyscot/qcp
Of course it has ACKs. There are protocols without ACKs but they are exotic and HTTP3 is not one of them.
He said not waiting for ACKs.
That makes even less sense; unless we are talking about XMODEM, every protocol uses windowing to avoid getting stuck waiting for ACKs.
Of course you need to wait for ACKs at some point though, otherwise they would be useless. That's how we detect, and potentially recover from, broken links. They are a feature. And HTTP3 has that feature.
Is it better implemented than the various TCP algorithms we use underneath regular SSH? Perhaps. That remains to be seen. The use case of SSH (long-lived connections with shorter-lived channels) is vastly different from the short-lived bursts of many connections that QUIC was intended for. My best guess is that it could go both ways, depending on the actual implementation. The devil is in the details, and there are many details here.
Should you find yourself limited by the default buffering of SSH (10+ Gbit intercontinental links), that's called a "long fat network" in network lingo, and is not what TCP was built for. Look at pages like this Linux tuning guide for high-latency networks: https://fasterdata.es.net/host-tuning/linux/
There is also the HPN-SSH project, which increases the buffers of SSH even more than what is standard. It is seldom needed anymore, since both Linux and OpenSSH have improved, but it can still be useful.
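The gist of pages like that es.net one is raising the kernel's TCP buffer ceilings so windows can actually grow on long fat networks; a sketch, with illustrative values:

```
# /etc/sysctl.d/90-lfn.conf
net.core.rmem_max = 268435456
net.core.wmem_max = 268435456
net.ipv4.tcp_rmem = 4096 87380 268435456
net.ipv4.tcp_wmem = 4096 65536 268435456
```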
Well, you could peruse the code. Then see what it does and explain it.
Not really that relevant - anybody regularly using SSH over high latency links is using SSH+mosh already anyway.
The huge downside of mosh is that it handles its own rendering and destroys the scrollback buffer. (Yes, I know I can add tmux for a middle ground.)
But it's still irrelevant here; specifically called out in README:
> The keystroke latency in a running session is unchanged.
"huge downside" (completely mitigated by using tmux)
The YouTube and social media eras made everyone so damn dramatic. :/
Mosh solves a problem. tmux provides a "solution", for some, to a design decision that can impact some user workflows.
I guess what I'm saying here is: if you NEED mosh, then running tmux is not even a hard ask.
No it’s not completely mitigated by tmux. mosh has two main use cases (that I know of)
1. High latency, maybe even packet-dropping connections;
2. You’re roaming and don’t want to get disconnected all the time.
For 2, sure, tmux is mostly okay; it's not as versatile as the native buffer if you use a good terminal emulator, but whatever. For 1, using tmux in mosh gives you an awful, high-latency scrollback buffer compared to the local one you get with regular ssh. And you were specifically talking about 1.
For read-heavy, reconnectable workloads over high latency connections I definitely choose ssh over mosh or mosh+tmux and live with the keystroke latency. So saying it’s a huge downside is not an exaggeration at all.
Should be genuinely faster over many VPNs, because it avoids the "TCP inside TCP" tar pit.
It also tracks with HTTP/3 and QUIC as a whole, as one of the main "selling points" has always been reduced round trips leading to faster connection setup.
> Establishing a new session with SSHv2 can take 5 to 7 network round-trip times, which can easily be noticed by the user. SSH3 only needs 3 round-trip times. The keystroke latency in a running session is unchanged.
Bummer. From a user perspective, I don't see the appeal. Connection setup time has never been an annoyance for me.
SSH is battle-tested. This feels risky to trust, even once they end up declaring it production-ready.
UDP tunnels are the main feature, way lighter than WireGuard; also OpenID auth.
> also OpenID auth
Wait, what? Does it actually work?
If yes, this is a huge deal. This potentially solves the ungodly clusterfuck of SSH key/certificate management.
(I don't know how OpenID is supposed to interact with private keys here.)
> Connection setup time has never been an annoyance for me.
It has always bothered me somewhat. I sometimes use ssh to directly execute a command on a remote host.
If you're doing repeated connections to the same host to run one-off commands, SSH multiplexing would be helpful for you. SSH in and it'll open up a local unix domain socket. Point additional client connections to the UDS and they'll just go over the existing connection without requiring round trips or remote authentication. The socket can be configured to keep itself alive for a while and then close after inactivity. Huge, huge speed boost over repeated fresh TCP connections.
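In ~/.ssh/config that looks something like (the socket path is just a common convention):

```
# One master connection per host, reused by later clients, kept for 10 minutes.
Host *
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```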
Head-of-line blocking is likely fully addressed by ssh3 where multiplexing several ports/connections over a single physical ssh3 connection should be faster.
Calling anything here "physical" is strange and confusing to me. Surely you don't mean the physical layer?
I've seen it a lot with communication protocols for some reason; I guess it's just relatively clear that it means "the non-virtualized one", even though it's clearly a misnomer. E.g. with VRRP, a ton of people just say "the physical IP" when talking about the address that's not the VIP, even though the RFC refers to it as "the primary IP". Arguably "primary IP" is more confusing as to which is being referred to, even though it's more technically accurate.
Of course, maybe there's a perfectly obvious word which can apply to all of those kinds of situations just as clearly without being a misnomer I've just never thought to mention in reply :D.
If you are looking for a smoother UX: https://mosh.org/
Sadly this project looks dead.
still works great though; there's a lot of great software I use that hasn't had an update in years or even decades
Is it dead or just mature?
A mature project should still be fixing bugs, and something like mosh is bound to keep running into them. From that perspective, it doesn't seem like it's just mature. There doesn't seem to be a clear all-in-one successor fork taking the reins either. E.g. https://github.com/mobile-shell/mosh/issues/1339, as a random sample.
I don't know why it makes me a little sad that every application layer protocol is being absorbed into http.
If this were really the case, it would indeed be sad, as the standard HTTP request/response model is both too restrictive and too overengineered for many usecases.
But both HTTP/2 and QUIC (the "transport layer" of HTTP/3) are so general-purpose that I'm not sure the HTTP part really has a lot of meaning anymore. At least QUIC is relatively openly promoted as an alternative to TCP, with HTTP its primary usecase.
Indeed. "Using quic with a handshake that smells like http3" is hardly "using http" imo
This is actually good, because ideally every protocol should look the same, to make traffic shaping and censorship harder: either a random stream of bytes, or HTTP.
If you are designing a protocol, unless you have a secret deal with telcos, I suggest you masquerade it as something like HTTP so that it is more difficult to slow down your traffic.
It's been known they throttle HTTP too.
So your super speedy HTTP SSH connection then ends up being slower than if you just used ssh. Especially if your http traffic looks rogue.
At least when it's its own protocol, you can come up with strategies to work around the censorship.
Yeah, we have those good old network people, or their corporate overlords (who don't know much about tech), to thank for that.
If you've ever used wifi in an airport, or even in some hotels with work suites around the world, you will notice that Apple Mail can't send or receive emails. It is probably some company-wide policy to first block port 25 (that is even the case with some hosting providers), all in the name of fighting SPAM. Pretty soon, 143, 587, 993, 995... are all blocked. I guess 80 and 443 are the only ones that can get through any firewall nowadays. It is a shame, really. Hopefully v6 will do better.
So there you go. And now the EU wants to do ChatControl! Please stop this nonsense; listen to the people who actually know tech.
Port 25 is insecure and unencrypted; the EU doesn't even need ChatControl to hoover up that data, and you'd better believe anything going through an airport wifi router unencrypted is being hoovered by someone, no matter what jurisdiction you're in. Apple Mail prefers 587 for secure SMTP and 993 for secure IMAP.
People were (wisely) blocking port 25 twenty years ago.
> People were (wisely) blocking port 25 twenty years ago.
20 years ago (2005) STARTTLS was still widely in use. Clients can be configured to call it when STARTTLS isn't available. But clients can also be served bogus or snake oil TLS certs. Certificate pinning wasn't widely in use for SMTP in 2005.
Seems STARTTLS has been deprecated since 2018 [1]
Quote: For email in particular, in January 2018 RFC 8314 was released, which explicitly recommends that "Implicit TLS" be used in preference to the STARTTLS mechanism for IMAP, POP3, and SMTP submissions.
[1] https://serverfault.com/questions/523804/is-starttls-less-sa...
Ah, thanks for the correction. Just changed my post above to 587. What I mean is: why block all the ports? Just keep them open and let the user decide if they want to use them. And Linux people can always use ufw on their side to be safe. Back in the dot-com days, there were people still using telnet, but that got changed to ssh.
Is it because it is hard to detect what type of request is being sent? Stream vs. non-stream, etc.?
Having all protocols look the same makes traffic shaping harder. If you develop a new protocol, do not make your protocol stand out; you won't win anything from it. Ideally all protocols should look like a stream of random bytes without any distinctive features.
It’s a necessary evil resulting from misguided corporate security teams blocking and intercepting everything else.
Looking at you, teams who run Zscaler with tls man in the middle attack mode enabled.
It feels a little like a kludge as long as we keep calling it http. The premise makes sense -- best practices for connection initialization have become very complex and a lot of protocols need the same building blocks, so it's beneficial to piggyback on the approach taken by one of the most battle-tested protocols -- but it's not really hypertext we're using it to transfer anymore, so it feels funny.
Yeah, building it on top of QUIC is reasonable, but trying to shoehorn SSH into HTTP semantics feels silly.
It's on top of HTTP CONNECT, which is intended for converting an existing request (QUIC stream) into a transparent byte stream. This removes the need to deal with request/response semantics.
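Roughly, an extended CONNECT request carries these pseudo-headers (RFC 9220 style; the exact :protocol token and path are whatever the draft and deployment pick, so treat these values as illustrative):

```
:method    = CONNECT
:protocol  = ssh3
:scheme    = https
:authority = proxy.example.org
:path      = /hidden-ssh3-endpoint
```

After the server answers 200, the stream is just bytes in both directions.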
The reasons stated for using http3 and not QUIC directly make sense, with little downside - you can run it behind any standard http3 reverse proxy, under some subdomain or path of your choosing, without standing out to port scanners. While security through obscurity is not security, there's no doubt that it reduces the CPU overhead that many scanners might incur if they discover your SSH server and try a bunch of login attempts.
Running over HTTP3 has an additional benefit: it becomes harder to block. If your ssh traffic just looks like you're on some website with lots of network traffic, e.g. Google Meet, then it becomes a lot harder to block it without blocking all web traffic over http3. Even if you do that, you could likely still get a working but suboptimal emulation over http1 CONNECT.
> you can run it behind any standard http3 reverse proxy
As long as said proxy supports an http CONNECT to a bi-directional connection. Most of the ones I know of do, but it may require additional configuration.
Another advantage of using http/3 is that it makes it easier to authenticate using something like oauth 2, oidc, saml, etc., since it can use the normal http flow instead of needing to copy a token from the http flow to a different flow.
Google Cloud’s identity aware proxy underpinning the gcloud compute ssh command works the same way, as an http CONNECT upgrade.
It also gives you two authenticated protocol layers, which helps them because most standard protocols don’t support multiple authenticated identities. Their zero trust model uses it to authenticate each time you make a connection that your machine has authorization to connect to that endpoint via a client certificate, and then the next protocol layer authenticates the user.
Is there some indication that this is going to be adopted? The linked ietf submission is an expired individual draft (which anyone can send in) and not from the ssh spec working group; it sounds like this is from some researchers that used SSH3 as an optimistic name.
QUIC is more layer 4, closer to a TCP reimplementation. Far from HTTP at layer 7.
I hear you that it feels like something is off. The lack of diversity feels like we're losing robustness in the ecosystem. But it can be a good thing too. A lot of security issues are concentrated into one stack that is very well maintained. So that means everything built on top of it shares the same attack surface. Which yes means it can all come crashing down at once, but also that there are many eyes looking for vulnerabilities and they'll get fixed quickly. Similarly, perf optimizations are all shared, and when things get this popular they can even get pushed down into hardware.
It's not like we saw a lot of downsides when the world collectively agreed on TCP/IP over IPX/SPX or DECnet or X.25. Or that the Linux kernel is everywhere.
Humbug. I feel an urge to implement token ring over fiber. Excuse me while I yell at clouds.
> SSH3 is a complete revisit of the SSH protocol
so, new undiscovered vulnerabilities
SSH is slow, but in my experience the primary cause of slowdown is session setup.
Be it PAM, or whatever OpenBSD is doing, the session setup kills performance, whether you're re-using the SSH connection or not, every time you start something within that connection.
Now obviously for long-running stuff, that doesn't matter as much as the total overhead. But if you're doing long-running ssh, you're probably using SSH for its remote terminal purposes, and you don't care if it takes 0.5 seconds or 1 second before you can do anything. And if you want file transfer, we already had an HTTP/3 version of that - it's called HTTP/3.
Ansible, for example, performs really poorly in my experience precisely because of this overhead.
Which is why I ended up writing my own mini-ansible, which instead runs a remote command executor that can be used to run commands remotely without the session cost.
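A hypothetical sketch of that executor idea: start it once on the remote end (over a single ssh session) and feed it commands on stdin, so each command skips the per-session setup cost:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() { // one shell command per line
		out, err := exec.Command("/bin/sh", "-c", sc.Text()).CombinedOutput()
		if err != nil {
			fmt.Fprintln(os.Stderr, "error:", err)
		}
		os.Stdout.Write(out)
	}
}
```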
It's cool that SSH is getting some love but I'm a little sad they're not being a little more ambitious with regard to new features, considering it seems like they're more or less creating a new thing. Looks like they're going to support connection migration but it would be cool (to me anyway) if they supported some of the roaming/intermittent connectivity of Mosh[1].
1: https://mosh.org/
One of the things I really like about Mosh is the responsiveness - there's no lag when typing text; it feels like you're really working on a local shell.
I'm guessing SSH3 doesn't do anything to improve that aspect? (although I guess QUIC will help a bit, but isn't quite the same as Mosh is it?)
AIUI connection migration (as well as multipath handling) is a QUIC feature. And how would that roaming feature differ from "built-in tmux"? I'm not sure the built-in part there would really be an advantage…
Mosh connections don't drop from merely wifi flipping around; you get replies back to the address and port the last uplink packet came from. You can just continue typing and a switch between Wi-Fi and mobile data (for example on a phone while sitting on public transit) shows as merely a lag spike during which typed characters will be predictive echoed by underlining them after an initial delay that serves to avoid flickering from rapidly retracted/changed predictions (predictions are underlined) during low-latency steady-state.
Mosh is like VNC or RDP for terminal contents: natively variable frame rate, plus somewhat adaptive predictive local echo to reduce perceived latency. Think client-side cursor handling in VNC; with RDP I'd even assume there's capability for client-side text echo rendering.
If you haven't tried mosh on a mobile device, in situations where the connection changes while you're using it, you don't know just how much better it is than "mere tmux over ssh".
I honestly don't know of a more resilient protocol in regular usage than mosh, other than possibly link-layer 802.11n, aka "the Wi-Fi that got these 150 Mbit and those 300 Mbit and some 450 Mbit speed claims advertised onto the market". There, link-layer retransmissions, adaptive negotiation of coding parameters, actively-multipath-exploiting MIMO-OFDM, and AES crypto from WPA2 combine into a setup that hides radio interference from higher-level protocols, beyond the unavoidable jitter of the retransmissions and the varying throughput of varying radio conditions.
Oh, and if we're talking computers rather than the congestion-control schemes adjusting individual connection speeds, there's also BitTorrent with DHT and PEX, which only needs an infohash: with 160 bits of hash, a client seeded into the (mainline) DHT swarm can go and retrieve a file (or folder of files) from the infohash-specific swarm, as long as that swarm is at least partially connected to the DHT (PEX takes care of broadening connectivity among the peers that care about the specific infohash).
In the realm of digital coding schemes that are widely used but aren't of the "transmission" variety, there's also Red Book CD audio, which starts off easy with lossless error correction, followed by perceptually effective lossy interpolation to cover severe scratches on the disc's surface.
I'm not sure why you're explaining mosh (I know what it is and have used it before), I was asking what there is other than migration (= handled by QUIC) and resumption (= tmux).
Local line editing, I guess. Forgot about that.
Given the last commit was 1+ year ago, does anyone know what the status of the project is?
I wonder what the current plans are with the project; it's been over a year since the last release, let alone commits or other activity on GitHub. As they started the project with a paper, I guess they might be continuously working on other associated aspects?
Thanks for pointing that out. I'm gonna assume it's a dead project. It has only 239 commits, basically a proof of concept. Nothing to take seriously. OpenBSD on the other hand is extremely active, there's no way OpenSSH will be dethroned anytime soon.
https://github.com/openbsd/src/commits/master/
I sincerely don't understand this obsession with short names and aliases. I absolutely dislike it. Names should be long and descriptive. I understand that in the past we needed short names because every character cost space and space was precious, but that isn't the case anymore.
Please don't give things short abbreviated names. Use full names for commands. Teach full names. When you present something, show full names. If this project used a full name like `remote-terminals-over-http3`, we would not be having this debate about ssh3.
Of course, end users and system administrators and even package managers/distributions are free to add abbreviations but we should be teaching people to use full names.
Prefer things like Set-Location over cd. Prefer npm install --global over npm i -g. Prefer remote-terminals-over-http3 over ssh3.
I like the idea, especially if it can be proxied by a regular H3 proxy.
If it also solves connection multipath/migration and fixes the TCP-related head-of-line blocking issues, that'd already be amazing.
Previously: https://news.ycombinator.com/item?id=38664729
I haven’t seen anyone yet comment on the design constraint that SSH uses a lot of separation in its multiplexing for the purpose of sandboxing/isolation. Would want transport to be as straightforward as possible. SSH needs to be the reliable, secure tunnel that you can use to manage your high performance gateways. It has a lot of ways to turn things off to avoid attack surface. HTTP protocols have a lot of base requirements. Different problems.
This thread is a classic example of what we care about: names. Is there technical merit to doing ssh over http3? Who cares? Who knows?
So with HTTP requests you can see the domain name in the header and forward it to the correct host. That was never a thing you could do with SSH; does this allow that to work?
"Proxy jump
"It is often the case that some SSH hosts can only be accessed through a gateway. SSH3 allows you to perform a Proxy Jump similarly to what is proposed by OpenSSH. You can connect from A to C using B as a gateway/proxy. B and C must both be running a valid SSH3 server. This works by establishing UDP port forwarding on B to forward QUIC packets from A to C. The connection from A to C is therefore fully end-to-end and B cannot decrypt or alter the SSH3 traffic between A and C."
More or less, maybe, but not automatically like you suggest, I think. I don't see why you couldn't configure a generic proxy to set it up, though.
You can forward any ssh traffic based on the domain name with SNI redirection. You can also use that with, let's say, the nginx stream module, to run an ssh and an http server on the same port.
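(For the curious, roughly how SNI peeking works, sketched in Go. This assumes the SSH traffic is wrapped in TLS so there is an SNI to read, and the route table is made up; the trick of aborting a throwaway handshake just to read the ClientHello is the same one generic SNI proxies use:)

    // Route incoming TLS connections by SNI: record the bytes of the
    // ClientHello while a throwaway TLS server parses it, grab the SNI,
    // abort the handshake, then replay the recorded bytes to the backend.
    package main

    import (
        "crypto/tls"
        "errors"
        "io"
        "net"
        "time"
    )

    var routes = map[string]string{
        "ssh.example.com": "127.0.0.1:2222", // e.g. a TLS-wrapped sshd
        "www.example.com": "127.0.0.1:8443",
    }

    func handle(c net.Conn) {
        defer c.Close()
        rec := &recorder{r: c}
        var sni string
        // The handshake is aborted on purpose; we only want the SNI.
        tls.Server(readOnlyConn{rec}, &tls.Config{
            GetConfigForClient: func(h *tls.ClientHelloInfo) (*tls.Config, error) {
                sni = h.ServerName
                return nil, errors.New("peek only")
            },
        }).Handshake()
        backend, ok := routes[sni]
        if !ok {
            return
        }
        up, err := net.Dial("tcp", backend)
        if err != nil {
            return
        }
        defer up.Close()
        up.Write(rec.buf) // replay the ClientHello we consumed
        go io.Copy(up, c)
        io.Copy(c, up)
    }

    // recorder remembers everything read so it can be replayed.
    type recorder struct {
        r   io.Reader
        buf []byte
    }

    func (r *recorder) Read(p []byte) (int, error) {
        n, err := r.r.Read(p)
        r.buf = append(r.buf, p[:n]...)
        return n, err
    }

    // readOnlyConn satisfies net.Conn for the throwaway handshake; the
    // server's writes (its abort alert) are swallowed silently.
    type readOnlyConn struct{ io.Reader }

    func (readOnlyConn) Write(p []byte) (int, error)      { return len(p), nil }
    func (readOnlyConn) Close() error                     { return nil }
    func (readOnlyConn) LocalAddr() net.Addr              { return nil }
    func (readOnlyConn) RemoteAddr() net.Addr             { return nil }
    func (readOnlyConn) SetDeadline(time.Time) error      { return nil }
    func (readOnlyConn) SetReadDeadline(time.Time) error  { return nil }
    func (readOnlyConn) SetWriteDeadline(time.Time) error { return nil }

    func main() {
        ln, err := net.Listen("tcp", ":443")
        if err != nil {
            panic(err)
        }
        for {
            c, err := ln.Accept()
            if err != nil {
                return
            }
            go handle(c)
        }
    }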
But that wasn't really a thing that was an issue with SSH.
    Host *.internal.example.com
        # "hopserver" being your gateway host
        ProxyCommand ssh -W %h:%p hopserver

in the SSH client config would make everything in that domain hop over that hop server. It's one extra connection, but with everything correctly configured that should be barely noticeable. Auth is also proxied through.

Is there a way to configure the jump (hop) server to reroute the request based on the value of %h and/or %p? Otherwise, it's going to be quite difficult to configure something like HTTP virtual hosts.
EDIT: Looking at the relevant RFC [1] and the OpenSSH sshd_config manual [2], it looks like the answer is that the protocol supports having the jump server decide what to do with the host/port information, but the OpenSSH server software doesn't present any relevant configuration knobs.
[1]: https://www.rfc-editor.org/rfc/rfc4254.html#section-7.2
[2]: https://man7.org/linux/man-pages/man5/sshd_config.5.html
Yes, but it's not in the sshd config, it's in the ssh config. See ssh_config(5), search for Remote to find the most relevant sections.
If you don't need to do anything complicated, ProxyJump is easier to remember.
ProxyJump was implemented a decade ago to replace that specific string.
I'm aware of proxy jump and other client side config but I'd rather that not every single client need to do this configuration.
Newer versions of ssh support ProxyJump
> the keystroke latency during a session remains unchanged
That’s a shame. Lowered latency (and persistent sessions, so you don’t pay the connection cost each time) are the best things about Mosh (https://mosh.org/).
Lowered perceived latency.
Mosh uses UDP in addition to optimistic updates, so there is an actual latency improvement.
I feel like this should really be SSH over QUIC, without the HTTP authorization mechanisms. Apart from the latter not really being used at all for users (only for API calls, Bearer auth), shell logins have a whole truckload of their own semantics. e.g. you'd be in a rather large amount of pain trying to wire PAM TOTP (or even just password+OTP) into HTTP auth…
I view it orthogonally: making it easier to use the single company identity we already use for every other service for SSH as well would make it much easier to handle authorization and RBAC properly for Linux server management. Right now we have to juggle SSH keys; I always wanted to move to SSH certificates instead, but there's not a lot of software around that yet (anyone interested in building some? Contact me).
So having the peace of mind that when I block someone in Entra ID, they will also be locked out of all servers immediately: that would be great, actually.
> PAM TOTP (or even just password+OTP) into HTTP auth
But why would you? Before initiating a session, users would have to authenticate to the IdP, which probably involves MFA or passkeys anyway. No need for PAM anymore at all.
> use our single company identity we use for every single service for SSH as well
How would that even work? Do you open your browser, log in, and then somehow transfer the session into your ssh client in a terminal? Does the browser assimilate the terminal?
And let me remind you, HTTP authentication isn't a login form. It's the browser's built-in "HTTP username + password" dialog and its cousins. We're talking HTTP 401. The only places this is widely used are API bearer tokens and NTLM/Kerberos SSO.
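(To make the distinction concrete, this is more or less the entirety of HTTP authentication, as a minimal Go handler with hard-coded demo credentials. The 401 plus WWW-Authenticate challenge is exactly what pops the browser's built-in dialog:)

    // Minimal HTTP (Basic) authentication: a 401 challenge plus an
    // Authorization header. No HTML form anywhere. Demo credentials only.
    package main

    import (
        "crypto/subtle"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            user, pass, ok := r.BasicAuth()
            if !ok ||
                subtle.ConstantTimeCompare([]byte(user), []byte("alice")) != 1 ||
                subtle.ConstantTimeCompare([]byte(pass), []byte("s3cret")) != 1 {
                // The challenge: this is what triggers the browser's
                // built-in username/password dialog.
                w.Header().Set("WWW-Authenticate", `Basic realm="example"`)
                http.Error(w, "unauthorized", http.StatusUnauthorized)
                return
            }
            w.Write([]byte("hello, " + user + "\n"))
        })
        http.ListenAndServe(":8080", nil)
    }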
> Before initiating a session, users will have to authorise to the IdP, which probably includes MFA or Passkeys anyway. No need for PAM anymore at all.
Unfortunately I need to pop your bubble: PAM also does session setup, so you'd still need it. And the other thing here is that you're solving your problem. An SSH successor that hard-relies on HTTP auth needs to solve everyone's problem. And it's an incredibly bad fit for a whole bunch of things.
Coincidentally, SSH's own mechanisms are also an incredibly bad fit; password authentication is in there as a "hard" feature: it's not an interactive dialog, and you can't do password+TOTP there either. For that you need keyboard-interactive auth, which (I'm not sure, but it feels like) was bolted on afterwards to fix this. Going with HTTP auth would probably repeat history quite exactly here, with something else getting bolted onto the side at some point…
> Do you open your browser, log in, and then somehow transfer the session into your ssh client in a terminal?
You start the ssh client in the terminal, it opens a browser to authenticate, and once you're logged in you go back to the terminal. The usual trick to exfiltrate the authentication token from the browser is that the ssh client runs an HTTP server on localhost to which you get redirected after authenticating.
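(A rough sketch of that loopback trick; the IdP URL and client id are placeholders, state/PKCE checks and the token exchange are omitted, and xdg-open assumes Linux:)

    // Loopback-redirect login: the CLI starts a local HTTP listener,
    // sends the browser to the IdP with a 127.0.0.1 redirect_uri, and
    // waits for the IdP to redirect back with the authorization code.
    package main

    import (
        "fmt"
        "net"
        "net/http"
        "os/exec"
    )

    func main() {
        ln, err := net.Listen("tcp", "127.0.0.1:0") // random free port
        if err != nil {
            panic(err)
        }
        code := make(chan string, 1)

        mux := http.NewServeMux()
        mux.HandleFunc("/callback", func(w http.ResponseWriter, r *http.Request) {
            code <- r.URL.Query().Get("code")
            fmt.Fprintln(w, "Logged in, you can close this tab.")
        })
        go http.Serve(ln, mux)

        authURL := fmt.Sprintf(
            "https://idp.example.com/authorize?client_id=my-ssh-client&redirect_uri=http://%s/callback",
            ln.Addr())
        exec.Command("xdg-open", authURL).Start() // hand off to the browser

        fmt.Println("authorization code:", <-code)
        // ...exchange the code for a token, then present it to the server.
    }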
That, or the SSH client opens a separate connection to the authorization server and polls for the session state until the user has completed the process; that would be the device code grant, which would solve this scenario just fine.
You're both talking about web authentication, not HTTP authentication. cf. https://news.ycombinator.com/item?id=45399594
> How would that even work? Do you open your browser, log in, and then somehow transfer the session into your ssh client in a terminal? Does the browser assimilate the terminal?
That's pretty well covered in RFC8628 and doesn't even require a browser on the same device where the SSH client is running.
> And let me remind you, HTTP authentication isn't a login form. It's the browser built-in "HTTP username + password" form and its cousins. We're talking HTTP 401. The only places this is widely used is API bearer tokens and NTLM/Kerberos SSO.
That depends entirely on the implementation. It could also be a redirect response which the client chooses to delegate to the user's web browser for external authentication. It's just the protocol. How the client interprets responses is entirely up to the implementation.
> Unfortunately I need to pop your bubble, PAM also does session setup, you'd still need it.
I don't see why, really. It might just as well be an opaque part of a newer system to reconcile remote authorization with local identity, without any interaction with PAM itself necessary at all.
> And the other thing here is — you're solving your problem. Hard-relying on HTTP auth for this SSH successor needs to solve everyone's problem. And it's an incredibly bad fit for a whole bunch of things.
But isn't that the nice part about HTTP auth, that it's so extensible it can solve everyone's problems just fine? At least it does so on the web, daily, for billions of users.
Everything you've said is true for web authentication, and almost nothing of what you said is true for HTTP authentication.
This is HTTP authentication: https://httpd.apache.org/docs/2.4/mod/mod_auth_basic.html
https://github.com/francoismichel/ssh3/blob/5b4b242db02a5cfb...
https://www.iana.org/assignments/http-authschemes/http-auths...
Note the OAuth listed there is OAuth 1.0. Support for "native" HTTP authentication was removed in OAuth 2.0.
This discussion is about using HTTP authentication. I specifically said HTTP authentication in the root post. If you want to do SSH + web authentication, that's a different thread.
Rule of thumb: if you need HTML in any step of it —and that includes as part of generating a token— it's web auth, not HTTP.
I hate that web-enshittification of SSH is considered the solution to this problem, and many other modern application-level problems.
It's done because the web stack exists and is understood by the web/infrastructure folks, not because it represents any kind of local design optimum in the non-web space.
Using the web stack draws in a huge number of dependencies on protocols and standards that are not just very complex, but far more complex than necessary for a non-web environment, because they were designed around the constraints and priorities of the web stack. Complicated, lax, text-based formats easily parsed by javascript and safe to encode in headers/json/query parameters/etc, but a pain to implement anywhere else.
Work-arounds (origin checks, CORS, etc) for the security issues inherent in untrusted browsers/javascript being able to make network connections/etc.
We've been using Kerberos and/or fetching SSH keys out of an LDAP directory to solve this problem for literal decades, and it worked fine. But if that won't cut it, solving the SSH certificate tooling problem would be a MUCH lighter-weight solution here than adopting OAuth and having to tie your ssh(1) client implementation to a goddamn web browser.
I see your point, but I think you're missing the broader picture here. Web protocols are not just used because they are there, but because the stack is very elegantly layered and extensible, well understood and tested, and offer strong security guarantees. It's not like encryption hasn't been tacked onto HTTP retroactively, but at least that happened using proper staples instead of a bunch of duct tape and hope as with other protocols.
All of that isn't really important, though. What makes a major point for using HTTP w/ TLS as a transport layer is the ecosystem and tooling around it. You'll get authorization protocols like OIDC, client certificate authentication, connection resumption and migration, caching, metadata fields, and much more, out of the box.
> the stack is very elegantly layered and extensible
I have to disagree pretty strongly on this one. Case in point: WebSockets. That protocol switch is "nifty" but breaks fundamental assumptions about HTTP and to this day causes headaches in some types of server deployments.
That has been around for years:
https://github.com/moul/quicssh
I guess it didn't get traction… whether that happens honestly feels like a fickle, random thing.
To be fair, a Go project as the sole implementation (I assume it is that?) is a no-go; for example, we couldn't even deploy it on all our systems, since last I checked Go doesn't support ppc64 (BE, not ppc64le).
I also don't see a protocol specification in there.
[edit] actually, no, this is not SSH over QUIC. This is SSH over a single bidirectional stream over QUIC; it's just a ProxyCommand. That's not how SSH over QUIC should behave: it needs to be natively QUIC so it can take advantage of the multi-stream features. And the built-in TLS.
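(For contrast, roughly what "natively QUIC" buys you, sketched with quic-go: one bidirectional stream per channel, so loss on one stream doesn't stall the others the way one shared TCP connection does. Treat it as a sketch, not the SSH3 wire protocol; quic-go's API differs between versions, the ALPN string is made up, and certificate verification setup is omitted:)

    // One QUIC stream per SSH-style channel. A stalled or lossy stream
    // no longer head-of-line-blocks its siblings.
    package main

    import (
        "context"
        "crypto/tls"
        "log"

        "github.com/quic-go/quic-go"
    )

    func main() {
        ctx := context.Background()
        conn, err := quic.DialAddr(ctx, "server.example.com:4433",
            &tls.Config{NextProtos: []string{"ssh-over-quic"}}, // made-up ALPN
            nil)
        if err != nil {
            log.Fatal(err)
        }
        // One stream per channel: terminal, port forward, file transfer...
        for _, channel := range []string{"session", "direct-tcpip", "sftp"} {
            stream, err := conn.OpenStreamSync(ctx)
            if err != nil {
                log.Fatal(err)
            }
            go func(name string, s quic.Stream) {
                defer s.Close()
                s.Write([]byte(name)) // channel setup would go here
            }(channel, stream)
        }
        select {} // keep the demo alive
    }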
An alternative way to hide your SSH server from portscanners is to put it inside a WireGuard VPN.
TFA says "port scanning attacks", but in my opinion it's not. It's barely jiggling the door knob. Securing SSH isn't hard to do properly, and port scans or connection attempts aren't something anyone needs to be concerned about whatsoever.
I am concerned, however, about the tedious trend of cramming absolutely everything into HTTP. DNS-over-HTTPS is already very dumb, and I'm quite sure SSH-over-HTTP is not something I'm going to be interested in at all.
Don't get me wrong, this might likely be a fantastic tool. But something as essential as a secure connection would definitely need a good pair of eyes for audit before I'd use that for anything in production.
But it's a good start. Props to exploring that kind of space that needs improvement but is difficult to get a foothold in.
built in OIDC authentication - YES, love it!
What is the use case for using OAuth or "my github account" to log in to a linux/unix machine?
This is actually a common/desirable feature to permit a group of people to access an ephemeral machine (e.g. engineers accessing a k8s node, etc.). Authorizing "engineers" via OAuth is much more ergonomic and safe vs traditional unix auth which is designed more for non-transient or non-shared users.
I recently looked into this and it looks like the IETF(?) RFC draft for SSH3 was abandoned? It's great that this exists, but I think the standard needs to be done as well.
idk if you should call it that
Does this still support standard SSH encryption and authentication (on both client and server)?
The proposed architecture uses TLS for encryption/secure channel but can use SSH connection establishment/authentication.
https://www.ietf.org/archive/id/draft-michel-ssh3-00.html
However, it can also use HTTP mechanisms for authentication/authorization.
Yes, Yes, Yes.
Firstly, I love the satirical name of tempaccount420. I was also just watching memes, and this post is literally me (ryan gosling).
I was also thinking about this exact thing literally yesterday, being a bit delusional and hoping to create a better ssh using http/3 or some minor improvement, because I had made a comment about tor routing linking it to things like serveo, and I was thinking of enhancing that idea or something lol.
Actually, it seems that I had already starred this project but had forgotten about it. This is primarily the reason why I star github projects, and it might be where I got the inspiration of http/3 with SSH in the first place.
Seems like a really great project (I think)
Now, one question that I have: could SSH be made modular, in the sense that we can split the transport layer apart from SSH as this project does, without too many worries?
Like, say I want to create an SSH-ish something with, let's say, something like iroh as the transport layer: are there any libraries or resources that can do something like that? (I won't do it for iroh, but I always like mixing and matching, and I'm thinking of some different ideas like SSH over matrix/xmpp/signal too; the possibilities could be limitless!)
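(For what it's worth, Go's golang.org/x/crypto/ssh already speaks the SSH protocol over any net.Conn, so at least in Go the transport is swappable today. A sketch, with a plain TCP dial standing in for whatever transport you'd plug in; host, user, and password are placeholders:)

    // SSH over an arbitrary transport: anything satisfying net.Conn
    // (a QUIC stream wrapper, a matrix/xmpp byte pipe, ...) could be
    // passed to NewClientConn instead of this TCP connection.
    package main

    import (
        "log"
        "net"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Stand-in transport: replace with your own net.Conn-shaped pipe.
        transport, err := net.Dial("tcp", "example.com:22")
        if err != nil {
            log.Fatal(err)
        }

        conf := &ssh.ClientConfig{
            User:            "demo",
            Auth:            []ssh.AuthMethod{ssh.Password("hunter2")},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only!
        }
        // The SSH handshake runs over whatever conn you hand it.
        c, chans, reqs, err := ssh.NewClientConn(transport, "example.com:22", conf)
        if err != nil {
            log.Fatal(err)
        }
        client := ssh.NewClient(c, chans, reqs)
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        out, _ := session.CombinedOutput("uname -a")
        log.Printf("%s", out)
    }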
Can it tunnel arbitrary TCP ports?
Shouldn’t this be called SSH over HTTP/3?
Written in Go. Terrible name, already discussed in various other comments, and the author acknowledges it.
The secret path (anything else giving a 404) would need brute-force protection (at the HTTPd level?). I think it is easier to run SSH on a non-standard port on IPv6, but it remains true that anyone with network read access between the endpoints can figure it out.
What isn't explained is why one would care about 100 ms of latency during auth. I'd rather have mosh, which supports resuming and works on high latency (tho IIRC it won't work over Tor?). But even then, with LTE and NG, my connections over mobile have become very stable here in NL (YMMV).
Yes, but in NL I'm sometimes just on the edge of a wifi network's coverage, and then mosh can be handy. It's an edge case though!
The ssh3 name feels like cloutchasing
YAMP
Knee-jerk reaction: if it aint broke ...
I thought the same until I read the page and realized that ssh is quite broken if you think about it.
With ssh everybody does TOFU or copies host fingerprints around, vs https where setting up letsencrypt is a no-brainer and you're a weirdo if you even think about self-signed certs. Now you can do the same with ssh, but do you?
For authentication, ssh relies on long-lived keys rather than short-lived tokens. Yes, I know about ssh certificates, but again, it's a hassle to set up compared to using any of a million IdPs with oauth2 support. That would enable a central place to manage access and mandate MFA.
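(The certificate-minting side is actually small; here's a sketch of what an IdP-backed CA service could do after a successful OIDC login, using golang.org/x/crypto/ssh. Key loading and the OIDC plumbing are elided; servers would then trust the CA via TrustedUserCAKeys in sshd_config:)

    // Mint a short-lived SSH user certificate after the IdP says OK.
    package sshca

    import (
        "crypto/rand"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // issueCert signs userPub with the CA key, valid for one hour.
    func issueCert(ca ssh.Signer, userPub ssh.PublicKey, login string) (*ssh.Certificate, error) {
        now := time.Now()
        cert := &ssh.Certificate{
            Key:             userPub,
            CertType:        ssh.UserCert,
            KeyId:           login, // audit trail: who the IdP says this is
            ValidPrincipals: []string{login},
            ValidAfter:      uint64(now.Add(-time.Minute).Unix()), // clock skew
            ValidBefore:     uint64(now.Add(time.Hour).Unix()),
            Permissions: ssh.Permissions{
                Extensions: map[string]string{"permit-pty": ""},
            },
        }
        if err := cert.SignCert(rand.Reader, ca); err != nil {
            return nil, err
        }
        return cert, nil
    }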
Finally, you better hope your corporate IT has not blocked the SSH port as a security threat.
Telnet, FTP and rlogin weren't broke, either. They had their own encrypted variants before SSH came along.
Listing all the deficiencies of something, and putting together a thing that fixes all of them, is the kind of "designed by committee" project that everyone hates. Real progress requires someone to put together a quick project, with new features they think are useful, and letting the public decide if it is useful or not.
Faster SSH in Rust when?
If it doesn’t fully implement SOAP, what’s the point?
Feels like a spinning hammer meant to drive screws because somebody has never seen a drill before.
> It also supports new authentication methods such as OAuth 2.0 and allows logging in to your servers using your Google/Microsoft/Github accounts.
Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.
X.509 certificates & PKI....
Hopefully provides a way to pin certs or at least pin certificate authorities && has PFS.
My conspiracy hat doesn't trust all the cert auths out there.
> SSH3: Faster and rich secure shell using HTTP/3
Maybe they should teach naming projects in CS.
Why not Windows 12 ? /s
Sure, someone paranoid about his SSH server being continuously probed by bots is going to excitedly jump to a new HTTP-SSH server that is going to be continuously probed by even more bots for HTTP exploits (easily an order of magnitude more traffic) AND whatever new-fangled "HTTP-SSH" exploits appear.