
Show HN: 30ms latency screen sharing in Rust

Sean-Der
15 replies
15h53m

I wrote this to solve a few things I cared about.

* I want to show people that native WebRTC players can be a thing. I hope this encourages Hangouts/Discord/$x to implement WHIP and WHEP; it would let people do so much more

* I wanted to make low latency sharing easier. I saw the need for this while working on adding WebRTC to OBS and Broadcast Box[0]

* I wanted to show devs what a great ecosystem exists for WebRTC. Lots of great implementations in different languages.

* Was a bit of a ‘frustration project’. I saw a company claiming only their proprietary protocol can do latency this low. So I thought ‘screw you I will make an open source version!’

[0] https://github.com/glimesh/broadcast-box

tamimio
1 replies
13h5m

I saw a company claiming only their proprietary protocol

Did the company have a “ripple” in its name? Curious

Sean-Der
0 replies
6h24m

Let me find it again! I saw it on LinkedIn and it was such a bullshit promo thing

slashink
1 replies
12h59m

Hey Sean, we both worked at Twitch Video but I left just as you were joining. I currently work on the Discord video stack and am somewhat curious about how you imagine Discord leveraging WHIP/WHEP. Do you see it as a way for these clients to broadcast outwards to services like Twitch or more as an interoperability tool?

Sean-Der
0 replies
6h30m

Users want to send WHIP into Discord. The lack of control over screen sharing today is frustrating. Users want to capture via another tool and control bitrate/resolution.

Most Broadcast Box users tell me that's their reason for switching off Discord.

———

With WHEP I want to see easier co-streaming. I should be able to connect a room to my OBS instance and have everyone's video automatically show up.

I don't have this figured out yet. Would love your opinion and feedback. Comment on the doc, or I'd love to talk 1:1! siobud.com/meeting

namibj
1 replies
4h45m

Any plans on integrating L4S with e.g. Tetrys-based FEC, and having the congestion feedback from L4S act on the quantizer/rate factor instead of directly on bitrate?

It's much more appropriate to do perceptual fairness than strict bitrate fairness.

Happy to have a chat on this btw; you can best catch me on discord.

stefan_
0 replies
4h42m

Depends on the network, surely? Lots of applications for low latency video where you are not sharing the channel, but it has a fixed bandwidth.

mwcampbell
1 replies
5h45m

Is the restriction to NVIDIA necessary for the low latency?

Sean-Der
0 replies
5h15m

Nope! I want to add all the other flows.

NVIDIA is the absolute lowest latency, I believe. I wanted to do it first to know if it was worth building.

cchance
1 replies
4h8m

Question: 30ms latency sounds amazing, but how does it actually compare to "the standard" desktop sharing tools? Do you know the latency of, say, MS RDP or VNC as a comparison?

Sean-Der
0 replies
3h9m

I doubt the protocol itself makes a big difference. I bet you can get 30ms with VNC. The difference with BitWHIP:

* It can play WebRTC in the browser. That makes things easier to use.

* Simpler/hackable software. BitWHIP is simple and uses NVENC etc. If you use NVENC with VNC, I bet you can get the same experience.

Sean-Der
1 replies
15h27m

Another thing in this realm: I am adding native co-streaming/conferencing to OBS [0]. OBS can send WebRTC; next I want to make receiving work well.

Between that and Simulcast I hope to make real-time video dramatically cheaper and easier

[0] https://docs.google.com/document/d/1Ed2Evze1ZJHY-1f4tYzqNZfx...

ta988
0 replies
3h57m

This would be fabulous, thank you so much for working on that. What kind of latency does dual encoding (on the client, then on the receiver again) add? Are there codecs that can have multiple streams on the same image (as in zones of independent streams on the video surface)?

mcwiggin2
0 replies
5h50m

This is awesome. I would love it if you had some examples of how to use AntMedia as a source. I am mostly in video engineering, so reading the source comes slower to me. This would be really handy in many cases.

Karrot_Kream
0 replies
10h52m

Thanks, the code is really useful to read through.

daghamm
14 replies
12h22m

What is the reason for using "just" here?

I understand people have their tooling preferences, but this looks like something that build.rs or a plain makefile could have handled?

mijoharas
12 replies
11h51m

I was also wondering if anyone could chime in on advantages of using just.

I'm familiar with makefiles, is there a particular advantage to using just over makefiles or is it personal preference? (which is a totally valid answer! I'm just wondering if I'm missing something)

aerzen
10 replies
11h40m

I think that the appeal of just is that it is simpler than make. It does not check file timestamps; it executes a DAG of tasks unconditionally.
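As a concrete illustration, here is a minimal justfile sketch (recipe names are hypothetical, not taken from BitWHIP's actual justfile):

```
# `just build` runs this every time; just never checks file timestamps
build:
    cargo build --release

# `just run` executes the dependency DAG: build first, then this recipe
run: build
    ./target/release/bitwhip
```

Unlike make, both recipes run unconditionally even when nothing has changed since the last invocation.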

mijoharas
5 replies
10h12m

My first thought was that that was dropping one of the main features of make.

On reflection though, the timestamp-dependent part isn't really something used much nowadays apart from compiling C.

It'd be cool if it was an opt-in feature for justfiles so that it could actually function as a replacement for make in all cases.

I went looking in the docs and found this[0] which I'd missed last time I looked into justfiles.

[0] https://github.com/casey/just?tab=readme-ov-file#what-are-th...

daghamm
4 replies
10h5m

I don't really buy his justification that ".PHONY: xxx" is hard to remember so we should have a completely new tool instead.

Make has its issues, but it also has two big advantages: it's simple and everyone already has it.

IshKebab
2 replies
9h3m

Everyone already has it... on Linux and Mac. It's pretty rare for it to be available on Windows.

That said I kind of agree. I like the idea of `just` but it does seem like they have just created a complicated DSL.

I think it is better to just write your infra scripting in a real language. I generally use Deno or Rust itself and a thin wrapper that `cargo run`'s it. Using Rust eliminates a dependency.

mort96
1 replies
4h54m

Anyone who's halfway serious about software development on Windows surely has make there too, and it's not like non-developers are the target audience for 'just' scripts

IshKebab
0 replies
1h10m

Anyone who's halfway serious about software development on Windows surely has make there too

Not even remotely. I know it might be hard to imagine if you only program on Linux/Mac but there's a whole world out there that isn't built on janky shell scripts and Makefiles. If you use C# or Java or Visual C++ or Qt on Windows it's pretty unlikely that you'd have Make. It's kind of a pain to install and you don't need it.

galdosdi
0 replies
27m

I agree, and even more strongly: you don't even need to remember .PHONY as long as your target names don't overlap with actual filenames, which is usually easy.

In fact, I didn't even know about .PHONY and have used make for a long time. That's what's great about it, even if you stick to the most basic features make is incredibly easy and straightforward. Dare I say, it "just" works lol.

I hate the proliferation of new tools that are the same as a tool that's been around for 20 years and are no different in any significant way except being trendy. Just unnecessary entropy. Our job is to manage and reduce entropy, not maximize it.
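For readers who haven't hit this: the failure mode .PHONY guards against only appears when a target name collides with a real file. A minimal Makefile sketch (target names are hypothetical):

```make
# fine as long as no file or directory named "build" exists
build:
	cargo build --release

# if a ./test directory exists, make would report "'test' is up to
# date" and skip the recipe; .PHONY forces it to always run
.PHONY: test
test:
	cargo test
```

Which is why sticking to target names that never match filenames lets you ignore .PHONY entirely.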

daghamm
3 replies
10h9m

Wouldn't a shell script work just as well then?

I'm not against new better tooling, but I also want to keep my dev machine reasonably clean.

IshKebab
2 replies
9h8m

Shell scripts don't work well on Windows.

hughesjj
1 replies
1h57m

Even powershell sometimes with execution policies

galdosdi
0 replies
27m

I would just use WSL then, if native windows dev tooling is such a shit show

mharrig1
0 replies
2h36m

I recently switched my (small) company over to using just files within our codebases and it's been going over very well thus far.

We're building a set of apps that need to run on Linux, macOS, and Windows, so having a consistent solution for each is better than shell scripting, and I personally have never felt great about make and its weirdness.

It also helps that we have a pretty big monorepo so that anyone can bounce from one app to another and `just run` to use any of them, no matter the platform.

Either way the justification for me came from COSMIC[0].

[0] https://github.com/pop-os/cosmic-epoch/blob/master/justfile

Sean-Der
0 replies
3h46m

John did all the work on this.

Just is nice as a Windows user. When I started committing, everything already worked really well. Editing the just stuff is also really easy. Much nicer to read than scripts, I think.

krick
6 replies
16h25m

I don't get what it does, exactly? This doesn't seem to be an OBS alternative (judging by the description), but… I mean, isn't it exactly the same as just running OBS directly?

notjoemama
4 replies
15h59m

Looks like a LAN tele…er, screen sharing server/client. Presumably you could serve over the internet, but it will not get the 30ms latency. Aside from the streaming (I only spent a few minutes reviewing the source) it's a live-JPEG kind of thing.

I built something similar to screen share with my kids when we played Minecraft together. It was really for me, because once we got in game they would take off and in 5 minutes be screaming for help 10 chunks away in some zombie- and skeleton-infested cave at or near bedrock. Being kids, they never gave me good enough directions to help them in time. Anyway, it was a fun project. I used CUDA and could get 60fps per client on CAT5 and 45-ish over WiFi, dropping to 10-15fps when I walked in and out of rooms with the laptop. At 60fps a frame is ~16ms, so 20ms per frame is 50fps.

imtringued
3 replies
11h14m

Presumably you could serve over the internet but it will not get the 30ms latency.

Indeed, you'll have to live with something like 80ms to 100ms latency over the internet and a horrifying 160 ms if you want to have things respond to keyboard and mouse inputs.

jeffhuys
2 replies
9h17m

Then how does something like Moonlight, Parsec, or GeForce Now work? Sub-10ms latency, sometimes even sub-5ms depending on time of day and network congestion.

notjoemama
0 replies
2h52m

Ever heard of the Akamai network? Netflix might be a good example. Traceroutes show latency between network hops. To reduce latency you either buy better network hardware, buy better cabling, or reduce hops in the network. Since the first two are more expensive than the third, if your service must have very fast response between server and client, move the server closer to the client. Large corporations run cache servers in multiple data centers everywhere geographically, so the response time for clients is better than their competition's. Why new video services struggle to compete with YouTube is in part because YouTube can afford this kind of architecture where a startup cannot. Even if it's the best code money can buy, it will never provide the same level of experience to users as local cache servers. Kind of sucks nobody can compete.

geraldwhen
0 replies
7h51m

That is only ever theoretically possible with a direct fiber connection and <= 200 miles between the two nodes.

So the answer is “there’s a data center in your city.”

Sean-Der
0 replies
15h46m

It is also a player!

You can either pull the video from a WHEP source or run in a P2P mode. I wanted to demonstrate the flexibility and hackability of it all :)

eigenvalue
2 replies
14h36m

Couldn't get it to work on Windows 11. I was able to run the just install script only after editing it to use the full path to the 7zip binary. It said it installed correctly, but then when I tried `just run play whip` I got this:

  cargo:rustc-cfg=feature="ffmpeg_7_0"
  cargo:ffmpeg_7_0=true

  --- stderr
  cl : Command line warning D9035 : option 'o' has been deprecated and will be removed in a future release
  thread 'main' panicked at C:\Users\jeffr\.cargo\registry\src\index.crates.io-6f17d22bba15001f\bindgen-0.69.4\lib.rs:622:31:
  Unable to find libclang: "couldn't find any valid shared libraries matching: ['clang.dll', 'libclang.dll'], set the `LIBCLANG_PATH` environment variable to a path where one of these files can be found (invalid: [])"
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

mintplant
1 replies
13h47m

Looks like you need libclang for the ffmpeg bindings.
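For anyone else hitting this: bindgen loads libclang at build time. One possible fix on Windows, assuming LLVM's default install location (the path may differ on your machine):

```shell
# install LLVM, which ships libclang.dll (assumes winget is available)
winget install LLVM.LLVM

# point bindgen at the directory containing libclang.dll, then re-run
# the build in a fresh terminal so the new variable is picked up
setx LIBCLANG_PATH "C:\Program Files\LLVM\bin"
```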

warkdarrior
0 replies
40m

Looks like the install script is incomplete and fails to check for and install all prerequisites.

kiririn
1 replies
8h57m

As someone who set up a Discord-streaming-like service to use alongside Mumble, this is very exciting. I couldn't get anything involving WebRTC working reliably, and the only broadcasting clients I found were web browsers and OBS, so I am interested to see how this compares!

What I eventually settled on was https://github.com/Edward-Wu/srt-live-server with OBS and VLC player, which gives robust streaming at high bitrate 4k60, but latency is only 1-2 seconds

Sean-Der
0 replies
6h22m

Excited to hear what you think! If there is anything I can change/improve tell me and will make it better :)

jmakov
1 replies
11h45m

Can this be used as remote desktop?

Sean-Der
0 replies
6h17m

Yes! I want to add remote control features to it. Lots of things left to do

Any interest in getting involved? Would love your help making it happen

Tielem
1 replies
11h8m

Always a bit skeptical when it comes to latency claims, especially in the sub-100ms space, but screen sharing 1:1 or video ingest should be a great use case for WebRTC.

WebRTC is a great technology, but it still suffers from a scaling problem that is harder to resolve. On top of that, the protocol itself does not define things like adaptive bitrate switching or stalling recovery.

Curious to hear what you think of some (proprietary) options for low latency playback like LL-HLS, LL-DASH, WebRTC, or HESP.

Sean-Der
0 replies
3h38m

WebRTC has congestion control and simulcast/SVC; what is missing for adaptive bitrate switching? And what is stalling recovery? I believe NACK/PLI handles this?

WebRTC doesn’t have a scaling problem. I think it was a software problem! Twitch, Tencent, Agora, Phenix all do 100k+ these days

I like WebRTC because of the open-ness of it. I also like that I only need one system for ingest and playback. I am HEAVILY biased though, way over invested in WebRTC :) I tend to care about greenfield/unique problems and not enough about scaling and making money

1oooqooq
1 replies
16h24m

Mostly glues two libraries? FFmpeg for capture, play, and WHIP?

Sean-Der
0 replies
15h50m

Yep! It glues FFmpeg, str0m[0], and SDL together. I hope bitwhip doesn't need to exist someday. When WHIP/WHEP has enough traction it will be easier to land in FFmpeg.

[0] https://github.com/algesten/str0m

tamimio
0 replies
13h7m

Amazing work! The most I could achieve was ~40ms on video streams, although that was over a cellular network from a drone. But 30ms is a new milestone! I will see if I can repurpose this and test a real-time video stream from a robot if I get some spare time.

synthoidzeta
0 replies
14h9m

vdo.ninja is another excellent alternative but I'll definitely check this out!

comex
0 replies
14h55m

Ooh, I’ve been looking for a good solution for this for years. Currently I use Parsec, but it’s closed source and not compatible with direct streaming from OBS etc. I’ll definitely check this out.

Dwedit
0 replies
13h45m

How does this compare with Moonlight?