Clever idea. It raises the question of why you would use HTTP if you already have a bidirectional WebRTC connection, but I guess it depends on the application.
Is there a way to do this without the signaling server?
Webtorrent
Use a free signalling server from the webtorrent community. You can skip the torrent part of the implementation and just use the signalling; it's awesome. You can use libraries like:
https://github.com/webtorrent/bittorrent-tracker
https://github.com/subins2000/p2pt
to get started. For me, I found the protocol is simple enough that I just use a small vanilla JavaScript implementation to talk to the WebSocket servers to generate the signalling messages. I wish more people knew about this and realized how easy it can be to bring WebRTC to their applications.
List of some free webtorrent trackers:
wss://tracker.openwebtorrent.com
wss://tracker.files.fm:7073
wss://tracker.webtorrent.dev
---> Usage stats for the last one: https://tracker.webtorrent.dev
Some free STUN servers for NAT traversal:
stun:stun.cloudflare.com
stun:stun.l.google.com:19302
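For a sense of how little code this takes, here's a rough sketch using p2pt for signaling-only discovery over those trackers. The API names are from my reading of the p2pt README, so double-check against the docs; the identifier string is just an example:

  import P2PT from 'p2pt'

  // Announce URLs of public WebTorrent trackers (see the list above)
  const trackers = [
    'wss://tracker.openwebtorrent.com',
    'wss://tracker.webtorrent.dev'
  ]

  // The identifier is hashed into an info-hash, so peers using the
  // same string find each other via the trackers
  const p2pt = new P2PT(trackers, 'my-app-identifier')

  p2pt.on('peerconnect', peer => {
    // The trackers only relayed the signaling (offer/answer/ICE);
    // from here on, data flows peer to peer over WebRTC
    p2pt.send(peer, 'hello')
  })

  p2pt.on('msg', (peer, msg) => console.log('got', msg))

  p2pt.start()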
This is super cool and almost makes it possible to build PWAs that only need a dumb HTTP server to deliver the app as a bunch of static files, while still allowing users to synchronize data between their devices. It still depends on the tracker, but if the user could change the tracker, it sounds like it's currently the best way to get clients to communicate with each other without depending on a server provided by the PWA.
Thank you for this! I knew this shit was done by someone already and I've spent two years resisting the urge to re-invent this wheel. p2pt is exactly what I knew was possible and have been looking for!
You might be interested in my side project Trystero https://github.com/dmotz/trystero
It abstracts away the work of signaling and connects peers via open decentralized networks like BitTorrent, Nostr, MQTT, IPFS, etc.
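A rough sketch of what usage looks like, going by the README (treat the exact names as approximate and check the docs; the appId and room name here are made up):

  // BitTorrent strategy; 'trystero/nostr', 'trystero/mqtt', etc. also exist
  import {joinRoom} from 'trystero/torrent'

  // appId namespaces your app; the room name scopes which peers meet
  const room = joinRoom({appId: 'my-demo-app'}, 'lobby')

  room.onPeerJoin(peerId => console.log(peerId, 'joined'))
  room.onPeerLeave(peerId => console.log(peerId, 'left'))

  // makeAction returns a paired sender/receiver for a named message type
  const [sendChat, getChat] = room.makeAction('chat')
  getChat((text, peerId) => console.log(peerId, 'says', text))
  sendChat('hello everyone')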
Not had time to read through the docs properly, but would this work for CLI apps (Node compiled to an executable?) or similar?
The problems this solves look interesting for a few CLI tools I want to build :)
I am also curious about the progress on this feature; see https://github.com/dmotz/trystero/issues/24 for more info.
check out NAT hole-punching in libp2p: https://docs.libp2p.io/concepts/nat/hole-punching/
scroll down a bit for the STUNless/TURNless bit
I think the closest you might get is something like the bittorrent dht. There are still bootstrap servers for the first few connections, but there's really no getting away from that, right?
Not with the three major browsers and NAT unfortunately.
https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API/...
Yep! If one of the WebRTC agents is non-browser: https://github.com/pion/offline-browser-communication
This could be possible browser to browser! I don’t think that has ever been an official use case
Doing HTTP over WebRTC is how https://camect.com works to let one access one's cameras' own private server via their UI. They have a centralized bit for auth and then use WebRTC and a physical NVR to serve your videos maximally efficiently... so there is low risk of their cloud becoming a financial burden that they cancel, a la Google Nest cams.
It's a super nice architecture
have a centralized bit for auth
Theoretically they could get the camera itself to do auth, and then the server becomes fully 'dumb' and need not even be service specific.
Oauth is nice and convenient.
So from your explanation I get that they use webrtc for videos. But then what do they use http over webrtc for? Do they serve the UI as well over webrtc?
Their UI is also hosted on the NVR; they serve UI assets over WebRTC.
Sounds kind of similar to Frigate but packaged up ready to go, neat.
I am an early backer and have been using it for a few years now. Camect is great.
How soon do you think IT will clamp down on WebRTC?
It's advertising that it's secure E2E even behind a firewall etc., but that's not true, because WebRTC will fall back to using a TURN server to relay when other methods fail, which will break the encryption, just FYI.
You can pass configuration to disable ICE entirely.
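A sketch with the plain browser API: if you never hand the connection a TURN server there are no relay candidates to fall back to, and you can go further and pass no ICE servers at all (STUN only discovers your public address; media and data never pass through it):

  // STUN only: no TURN server configured, so no relay is possible
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.cloudflare.com' }]
  })

  // Or rely on host candidates only, which works on a LAN or with
  // publicly routable / IPv6 addresses
  const lanOnly = new RTCPeerConnection({ iceServers: [] })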
Looks like it's using PeerJS, which defaults to a config of using a Google STUN server and no TURN servers. Not sure if using a STUN server compromises the E2E in some way?
Why would STUN compromise e2e? STUN just returns your IP
I just didn't want to speculate, as I'm not familiar with the security considerations here.
But, thinking about it a bit, couldn't a compromised STUN server establish a MITM by lying to you about your IP, and then relaying to you? This old HN comment describes it: https://news.ycombinator.com/item?id=11192610
I don't know if this would break the E2EE here (although if it wouldn't, I'm not sure how a TURN server would either, as that's just a baked in MITM).
I was wrong actually, it doesn't weaken security as long as the data is encrypted either using DTLS or application layer encryption, please ignore my comment lol.
WebRTC won’t use TURN unless it’s explicitly configured with a TURN server. Even if it did use a TURN server webrtc is still e2e encrypted.
You need to trust the signalling server though.
This library seems to do a few other things, which maybe reduces the trust in the signalling server, but I didn’t really read it in enough detail to comment on it.
Connection is E2E encrypted when using TURN. Using TURN has no negative impact on security.
The TURN server can see the size/src/dst so that has a privacy implication!
(Aside) speaking of WebRTC, but is there any solution to record videos that is done by webRTC?
There are already more than enough tools that can record HLS and DASH, but I haven't found anything, not even a PoC, that can record video streams transported via WebRTC (e.g. agora.io).
Do you want this in the browser or a specific language?
https://recordrtc.org/ should be able to do it.
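In the browser you can also do it with the built-in MediaRecorder API on whatever MediaStream the peer connection gives you. A minimal sketch, assuming remoteStream came from the peer connection's 'track' event (or from an SDK like agora's):

  const chunks = []
  const recorder = new MediaRecorder(remoteStream, { mimeType: 'video/webm' })

  recorder.ondataavailable = e => { if (e.data.size) chunks.push(e.data) }
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: 'video/webm' })
    // e.g. offer the recording as a download
    console.log('recording ready at', URL.createObjectURL(blob))
  }

  recorder.start(1000) // emit a chunk every second
  // later: recorder.stop()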
It is really annoying when someone posts an interesting project and HN has a big discussion, but when I go to try the lib, it is unmaintained and the last update was 3 years ago.
There were great recommendations in this thread tho, thanks a lot! This one looks good: https://github.com/subins2000/p2pt
If we're talking about great p2p WebRTC libraries, try Trystero: https://github.com/dmotz/trystero
Wow, really impressive project! Thanks.
I don't get it. Where is the signaling server and how is it working?
got excited but the repo hasn't been updated for over 3 years.
Nice!
Although, an alternative is something like Tailscale.
I have tried this idea before, combining it with a Service Worker to implement a decentralized website.
You can securely serve a webpage from a server behind NAT without creating a VPN and without HTTPS certificates.
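The core trick is roughly this: the Service Worker intercepts fetches and answers them from data that arrives over an RTCDataChannel instead of the network. A hand-wavy sketch; fetchOverDataChannel() is a hypothetical placeholder for your own request/response framing (in practice the data channel usually lives in a page and the worker reaches it via postMessage, since WebRTC isn't available inside service workers):

  // sw.js
  self.addEventListener('fetch', event => {
    const url = new URL(event.request.url)

    // Only proxy requests meant for the peer-served "site"
    if (url.pathname.startsWith('/peer/')) {
      event.respondWith(
        fetchOverDataChannel(event.request).then(({ status, headers, body }) =>
          new Response(body, { status, headers })
        )
      )
    }
    // everything else falls through to the normal network
  })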
I suppose the persistence of IPv4 has broken all of our brains, but with IPv6 you can just Not Have NAT, and just have normal end-to-end connectivity to any random box in your home from outside.
(And yes, I do this. Works great.)
There's something nice about being anonymous behind a communal v4 gateway.
Also, can you get a TLS cert for an IPv6 numeric address? Or are you punching through using only SSH or unencrypted stuff?
IPv6 lets you do this -- nearly every client will use privacy addressing, so your (default) source address rotates daily. However you can still connect to the machine on its main (non-privacy protected) IPv6 address.
The /64 doesn't change; it's unique to your network. It's broadly equivalent to the /32 you get with IPv4.
CGNAT adds a layer of privacy that a public /32 (IPv4) or /64 (IPv6) doesn't give.
Tangentially, these “privacy” addresses are such an ipv6-ism of small theoretical value at the expense of extra complexity and noise. If ipv6 had been “ipv4 but now with 100% more bits”, I suspect we would have come a lot further in global deployments.
IPv6 literally is that, plus a few pretty minor changes.
SLAAC? Literally a hack that accidentally caught on because some vendor implemented it sooner than DHCPv6 for some reason. It was intended that everyone would use DHCP just like before. And that's the biggest difference from v4 other than the address format.
The history I recall is very much academics designing theoretical standards vs. operators who actually implement the damn things.
The academics designed the standards around the use of SLAAC deliberately and intentionally. DHCPv6 was the ‘hack’ that operators implemented after the fact.
I’m sure there’s an RFC somewhere that’ll prove this one way or another, if anyone else reading this cares enough to determine for sure (Duty Calls).
RFC 4862 (IPv6 Stateless Address Autoconfiguration)
RFC 3315 (Dynamic Host Configuration Protocol for IPv6 (DHCPv6))
Bless you sir
They might sound minor but in practice they violate assumptions that are really crucial for implementations. Everyone who deals with addresses must make decisions about how and what to do in face of these quirks.
Another example is the zone identifier string. So how do you store them efficiently in memory or a db? Golang did a really clever thing with netip but the implementation was not easy. Oh well maybe we can always ignore and strip it? Maybe, depends on the use case.
The point is, going from exactly 32 bits to 128 bits plus sometimes maybe a variable-length string (max length, encoding, allowed chars?) is not a small change for something as important and ubiquitous as IP.
In most cases, you don't. Zone identifiers are OS-specific, contain whatever the OS says they do, and may be only valid in the short term (e.g. Linux interface number). You only need them if you are doing lower level networking things. As a web app you just don't, because they aren't part of an internet address. As a web browser you pass whatever the user typed through to the operating system.
Okay, but that's not a minor change. Regardless of why it caught on, SLAAC completely changes how addresses are handed out, and is in many/most environments a requirement if for no other reason than that Android explicitly refuses to implement DHCPv6 ( https://issuetracker.google.com/issues/36949085 ). And once SLAAC is in play, suddenly privacy problems come up and you kind of need to jump through the extra hoops to avoid, y'know, putting your MAC address in every single packet you send over the public internet.
Until you can access ISP logs, you can never tell anything more than 'this client accessed from $ISP, probably, maybe, from $CITY'.
With CGNAT, yes.
With ipv6 you can match IPs realistically to a single household across multiple sites.
Access porn.com from 12.34.56.78 and you are one of dozens of households. Just because Bob Bobson who logged into Netflix is on the same IP it doesn’t mean that house was on porn.com.
Access from 2100:1234:5678:abcd:: and you are accessing from one specific household, even if the lower 64 bits differ. You'll likely keep the same IP for longer than with CGNAT anyway (and by design you keep it until your ISP changes it).
I think they mean CGNAT. My mobile phone connection goes through CGNAT so it's impossible to identify my individual phone by its IPv4 address, whereas my home address uniquely identifies my home, at least for a limited period of time. Sometimes this is good and sometimes this is bad. Sometimes you want to be anonymous and sometimes you want to be delineated from the people who are being anonymous.
You miss the point: that /56 or /64 is still assigned to you, while a NAT gw might serve 1000s of people.
Would you be willing to share a few details on how you do this? And how do you prevent someone spamming your devices, or is the risk so low you don't care?
Unfortunately most ISPs in my area don't dish out IPv6 addresses without ridiculous monthly charges. I hope one day it becomes more commonplace.
You just plug a device into your network. The device acquires an address. You can type that address into another device on the Internet to attempt a connection to your device. If your device is running a web server that allows access from the whole Internet, this brings up the home page. If you have a firewall, tell the firewall to enable connections to that web server from the whole internet.
What do you mean by spamming? People are scanning the Internet the whole time to see what's there, and it isn't a threat unless you are doing something terribly insecure. Scanning IPv6 is impossible in practice anyway, due to the high number of available addresses.
Thanks for your response. Spamming was a poor choice of words on my part. I really meant DDoS, or just generally people sending erroneous requests or being a nuisance, wasting data/resources once they know the address, say if it was leaked.
How do you already stop them from doing that today?
There’s a lot of work that has been done on address space reduction for IPv6 scanning. It’s not “impossible”, it’s just very very hard :)
They need to find them first.
If you've got an IPv4 address that responds to ICMP, HE's https://tunnelbroker.net/ offers free IPv6 ranges (a bunch of /64s and a /48). You can configure a tunnel to work through many routers, but with some setup you could also have something like a Raspberry Pi announce itself as an IPv6 router.
Sites like Netflix treat HE tunnels as VPNs, though, so if you run into weird playback errors, consider configuring your device's DNS server/network not to use IPv6 for that.
As for your questions:
Open port 8888 to (prefix):abcd:ef01:2345:56, or whatever IP your device obtains, in your firewall. It's the same process as with IPv4, except you can use the same port on multiple devices.
While some services have started scanning IPv6, a home network from a semi-competent ISP will contain _at least_ 2^64 IPv6 addresses. Scanning the entire IPv6 network is unfeasible for most automated scanners.
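(For scale: 2^64 is about 1.8 x 10^19 addresses. Even at a million probes per second, sweeping a single /64 would take over half a million years.)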
That's the job of a firewall and is unchanged between IPv4 and IPv6. They're both equally vulnerable to denial of service attacks.
This supports UDP/unreliable data streams in the browser tho.
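For reference, that's opt-in per data channel; roughly:

  const pc = new RTCPeerConnection()

  // ordered: false + maxRetransmits: 0 gives UDP-like, unordered,
  // best-effort delivery (still over SCTP/DTLS)
  const lossy = pc.createDataChannel('game-state', {
    ordered: false,
    maxRetransmits: 0
  })

  lossy.onmessage = e => console.log('got', e.data)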
IPv6 can get rid of NAT, which is one of the most annoying hurdles. It unlocks the type of use case where technical people can host something from home for fun, although many can't access it because both parties need IPv6.
But if you set your sights higher and want to build true p2p apps for non-techies, or if you want “roaming” servers (say an FTP server on your laptop), there are more obstacles than NAT, in practice:
- Opening up ports in both a residential router and sometimes the OS or 3p firewall. Most people don’t know what a port is.
- DNS & certs which require a domain name and a fixed connection (if the peer moves around across networks, eg a laptop or phone, DNS is not responsive enough)
There’s lots of ipv6 available. But it’s not everywhere yet
Though WebRTC works great with IPv6 too. Then the use case would be running it on a server that has incoming connections firewalled.
I don't think that's possible without a jump server. If all peers are NATed, there is no way to do p2p without a jump server. WebRTC is a giant rabbit hole itself.
This idea makes me want cjdns and/or yggdrasil over websockets.
Don't you want static peering setup for them?
Provide a server on-prem at the customer, but allow them hybrid access to the system.
Via cloud when necessary, "local" (by WebRTC) when possible. While we could just open a local port, using the cloud to arbitrate gives us a common product vision, and proper authN/authZ.
Also allows us to pull the latency down to single digit milliseconds. The regional relays are double digit. When we use relays that aren't regional it's a couple hundred.