Some tests were done on Comcast's network, on the cable plant. The slide deck below explains it:
https://datatracker.ietf.org/meeting/118/materials/slides-11...
Not sure where this leads, but I guess ISPs will start charging a toll for express lanes.
This thing is cool. I saw a live demo at IETF 118 in Prague last month. It totally eliminates buffer bloat, which makes it awesome for video chat. I saw the demo and was like "woah... I didn't think this would ever be possible."
It requires an additional bit to be inserted into IP packets, to carry information about when buffers are full (I think?), but it actually works. It feels like living in the future!
That bit is already there. L4S changes the meaning of the bit to allow a more accurate signal.
More particularly, L4S is an advancement to the existing ECN (Explicit Congestion Notification) extension to TCP/IP, allowing for more advanced algorithms to cut down latency further.
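If it helps, here's a minimal sketch of the two-bit ECN field in the IP header and how L4S reinterprets its last codepoint (codepoint values per RFC 3168 and RFC 9331; the classifier function is my own illustration, not from the specs):

    # The 2-bit ECN field in the IP header (RFC 3168, reused by L4S in RFC 9331).
    NOT_ECT = 0b00  # transport does not support ECN
    ECT_1   = 0b01  # RFC 3168: equivalent to ECT(0); RFC 9331 (L4S):
                    # "this flow uses a scalable congestion control"
    ECT_0   = 0b10  # ECN-capable transport, classic congestion control
    CE      = 0b11  # Congestion Experienced: set by a router instead of dropping

    def classify(ecn_bits: int) -> str:
        """Rough classification a dual-queue L4S bottleneck might perform."""
        if ecn_bits in (ECT_1, CE):
            return "L4S queue (shallow target, frequent CE marks)"
        return "classic queue (deeper target, drops or infrequent CE marks)"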
The main problem with ECN was the remarkably widespread behaviour by middleboxes that either cleared that bit or straight up dropped the packets. Maybe that situation has improved now?
It has definitely changed. Looking at measurements on this it seems there is only 1 major transit network doing that, and they are working to fix it.
Yes, thanks for the clarification. IIRC it was explained to me as "we put the last unused bit in IP packets to use, and get this great feature from it."
Yeah. Since it's the last bit (really the last codepoint in a 2-bit field), there was a big argument over it:
https://datatracker.ietf.org/meeting/interim-2020-tsvwg-01/s...
https://mailarchive.ietf.org/arch/msg/tsvwg/rXWRHAyGOuu_qOGM...
Actually somewhat better, even: you can let it control the rate factor of your video encoder directly, getting perceptual fairness instead of simple naive bandwidth fairness.
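As a toy illustration of that idea (the control law and constants here are mine, not from any L4S spec): drive the encoder's target bitrate from the fraction of CE-marked packets the receiver reports, so quality degrades smoothly instead of oscillating:

    def update_encoder_bitrate(current_kbps: float, marked: int, total: int,
                               floor_kbps: float = 300.0,
                               ceiling_kbps: float = 20000.0) -> float:
        """Toy control law: back off in proportion to the CE-mark fraction
        (as a scalable congestion control would), probe gently otherwise."""
        mark_fraction = marked / total if total else 0.0
        if mark_fraction > 0:
            # Proportional decrease: a 10% mark rate trims the rate by ~5%.
            current_kbps *= 1.0 - mark_fraction / 2.0
        else:
            # No congestion signal: additively probe for more bandwidth.
            current_kbps += 50.0
        return max(floor_kbps, min(ceiling_kbps, current_kbps))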
It took a bit of searching, but I assume this is it?
https://youtube.com/watch?t=4900&v=RWjbrXxpzVU
(1hr21m, for anyone for whom the time link doesn't work)
EDIT: Never mind, that's their hackathon recap. Still searching, this is not an easy conf to find talks for!
While it's a step in the right direction, there's a problem if there's at least one 'malicious' actor who ignores the congestion feedback and just wants a larger share of bandwidth. Then all other actors will retreat and the unfair actor gets what it wants. Unfortunately it is hard for a good actor to know whether the other actors are playing nicely or not. Only if a good actor knows that there's fair queuing can they trust L4S to treat them fairly.
This can be solved by complementing L4S with fair queuing (e.g. fq_codel) and by making sure that congestion control can detect the presence of fair queuing (https://github.com/muxamilian/fair-queuing-aware-congestion-...).
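For reference, the core of fair queuing is just per-flow queues served round-robin, so one greedy flow can't starve the rest. A minimal sketch (not fq_codel itself, which additionally runs CoDel's delay-based dropping/marking on each queue):

    from collections import defaultdict, deque

    class FairQueue:
        """Minimal per-flow round-robin scheduler (a sketch, not fq_codel)."""
        def __init__(self):
            self.queues = defaultdict(deque)  # flow id -> queued packets
            self.rr = deque()                 # round-robin order of active flows

        def enqueue(self, flow_id, packet):
            if not self.queues[flow_id]:
                self.rr.append(flow_id)       # flow just became active
            self.queues[flow_id].append(packet)

        def dequeue(self):
            if not self.rr:
                return None                   # link idle
            flow_id = self.rr.popleft()
            packet = self.queues[flow_id].popleft()
            if self.queues[flow_id]:          # still backlogged: rejoin the round
                self.rr.append(flow_id)
            return packet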
To clarify this - most ISPs implement per customer bandwidth allocation, so a malicious actor should not be able to take share from other customers.
The FQ thing is part of a larger dispute. Without FQ it is already the case that, irrespective of L4S, fairness is implemented by end hosts, and an end host (e.g. a server) can ignore congestion signals and take more than a fair share. This is not an issue which L4S introduces, but some argue that L4S "makes it easier" to take a larger share.
The people behind FQ argue that the network should guarantee fair sharing, but not everyone believes they have chosen the right fairness metric. In particular one of the main proponents of L4S does not, as can be seen from his paper linked here: https://news.ycombinator.com/item?id=38598023
most ISPs implement per customer bandwidth allocation
This really should have an asterisk (*). There is generally a difference between what an ISP advertises and what they will actually provision (usually ~110% of advertised).
However, it's also extremely common that they oversubscribe segments of their network.
In the case of a coax network like Comcast or Spectrum, the actual last-mile capacity is oversubscribed so that _most_ times of the day you'll receive your ~110% of advertised speed, but during peak (mid-evening) it's extremely unlikely that you'll receive even your advertised speed, usually only ~70%.
In the case of L4S, it would absolutely help "perceptually" resolve these kinds of congestion points, but the "evil take" is that it lets ISPs stretch out their network upgrades even further.
That's a different issue, though. I should have been more precise with my terminology. The way it usually works is that there is a scheduler at the bottleneck. For simplicity, assume the customers at a particular bottleneck all have the same advertised rate, but the bottleneck is less than the sum of those rates. Say there are 10 customers with 100Mbps each but the bottleneck is only 500Mbps. Then if all 10 customers are maxing out their usage, they will each get only 50Mbps, which is less than the advertised rate of their service. What I meant was: playing games with congestion control won't reduce any other customer below that. (There are different options for how the scheduler could work if the customers have different limits; it could just cap them to their limit, or it could weight their share according to their limit.)
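To make the scheduler arithmetic concrete, here's a hedged sketch of a max-min fair ("water-filling") allocation; with 10 customers all demanding their full 100Mbps behind a 500Mbps bottleneck it yields 50Mbps each, and anything lighter users don't take is redistributed:

    def max_min_fair(capacity_mbps, demands_mbps):
        """Water-filling: split leftover capacity equally among unsatisfied
        flows, capping each at its demand, until capacity or demand runs out."""
        alloc = [0.0] * len(demands_mbps)
        remaining = capacity_mbps
        active = set(range(len(demands_mbps)))
        while active and remaining > 1e-9:
            share = remaining / len(active)
            for i in list(active):
                grant = min(share, demands_mbps[i] - alloc[i])
                alloc[i] += grant
                remaining -= grant
                if demands_mbps[i] - alloc[i] <= 1e-9:
                    active.discard(i)
        return alloc

    # 10 customers maxing out 100 Mbps plans behind a 500 Mbps bottleneck:
    print(max_min_fair(500, [100] * 10))  # -> 50.0 each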
I guess you are right that bufferbloat problems could pressure ISPs to avoid oversubscription, and any solution to bufferbloat could take that pressure off. But you can also get bufferbloat and other latency issues without oversubscription, so that doesn't seem to me to be a good reason to hold off implementing solutions.
Where do they implement this?
Sorry, I've referred to multiple things in that comment - which are you asking about?
How can you differentiate between L4S and non-L4S traffic at the network level, especially in mixed traffic environments?
you'd never guess: a new heater bit
it reuses an existing header bit, but yeah
so you know when things get hot...
I wrote a simplified summary of the IETF specs FWIW: https://github.com/jlivingood/IETF-L4S-Deployment/blob/main/...
I'm confused. Everyone here is talking about improvements to video conferencing and streaming, but those applications use UDP instead of TCP, so I don't understand how this will change anything.
I think the key point is the bottleneck link is a shared resource. Many TCP flows traversing the link will drive it to a relatively high queue occupancy which causes higher delay for all traffic regardless of protocol.
Only skimmed the proposal but looks like it isolates traffic using the new protocol by giving it a dedicated buffer, and the explicit congestion notification protocol would then keep the size of this queue much smaller at steady state when the link is saturated.
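A much-simplified sketch of that dual-queue idea (in the spirit of RFC 9332's Dual-Queue Coupled AQM; the function names and the 1ms target here are my own shorthand):

    ECT_1, CE = 0b01, 0b11  # L4S identifiers in the IP ECN field (RFC 9331)

    def route_to_queue(pkt_ecn: int) -> str:
        """L4S-marked traffic gets its own shallow queue; all other
        traffic goes to the classic (deeper) queue."""
        return "l4s" if pkt_ecn in (ECT_1, CE) else "classic"

    def should_ce_mark(queue_delay_ms: float, target_ms: float = 1.0) -> bool:
        """The L4S queue CE-marks early and often, keeping queuing delay
        near ~1 ms instead of letting a deep buffer fill before signalling."""
        return queue_delay_ms > target_ms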
You ever get that robotic latency thing? That's because of UDP and the stream allowing dropped packets at all. It's a horrible experience.
This standard is a change to IP, which TCP and UDP (and transports implemented on top of UDP) are both implemented on. So it applies to all of them. Each transport has to implement its own way of using it.
I’m having trouble determining if my 3.1 cable modem supports the draft spec. Is there a way to tell based on serial number? Are there hardware limitations that would prevent older 3.1 modems from receiving a software update to enable support?
It is quite rare that a modem or home router has support for draft specs. I'm sorry to disappoint you.
Thanks. I’ll be on fiber instead of Comcast by the time ISPs are actually deploying this, so I was just curious about my current hardware.
Several D3.1 modems support it now but most will need to be updated. Many of the vendors have been testing at quarterly L4S interop events, so I would expect them all to have production grade s/w next year.
How does it compare to μTP (Micro Transport Protocol)?
uTP is intended to run at less-than-best-effort priority. This is about all the apps trying to share queues at best effort (one level up).
It is independent of that. The L4S standards change the IP layer to provide a more accurate ECN congestion signal; any transport protocol can then take advantage of it. There are versions of TCP and QUIC that do so, and in theory a version of uTP could be made to do so as well.
However, from a brief look, uTP is designed for background transfers for which latency is not important, so there is no particular need to do so.
How does this interact with eg BBR?
BBRv1 doesn't take ECN into account. BBRv2/v3 do, and it's mentioned in the RFC:
"Scalable variants are under consideration for more recent transport protocols (e.g., QUIC), and the L4S ECN part of BBRv2 [BBRv2] [BBR-CC] is a Scalable congestion control intended for the TCP and QUIC transports, amongst others."
BBR is a congestion control algorithm; L4S provides a congestion signal. So BBR can be updated to take advantage of the L4S signal. Apparently there are some plans to do so.
What does this mean in practice as a user? Will e.g. video calls be closer to real-time? There's usually about a 0.5-1 second delay, which leads to a lot of hiccups and interruptions when speaking with each other. What other applications will be significantly improved?
It makes new cloud-based apps realistically & reliably workable - think cloud gaming and cloud AR. It also makes interactive stuff like gaming and video conferencing perform a lot better w/o lag. But really anything interactive (user & device) should be better, given how many round trips it currently takes to paint a web page, stream video, or handle an AI assistant (Alexa) interaction.
How does the feedback loop work? I.e., the routers need to tell the source (upstream) to back off, but this uses an IP header bit, so there is no guaranteed return stream...
With TCP the receiver has to send an ACK back to the sender. If the receiver sees that the congestion bit is set on a packet it gets from the sender, then it sets a corresponding flag (ECN-Echo) on the ACK packet it sends back to acknowledge that the packet was received. This ACK is sent anyway, since it's part of how TCP's sliding window is designed.
There are built-in ways for the TCP protocol to handle congestion, but they don't allow a router to signal congestion. The router just has to hope that the sender detects the congestion fast enough.
Bob Briscoe has been on this line of thought for a long time. I'd recommend reading a couple of his classics on the topic, including:
http://www.sigcomm.org/sites/default/files/ccr/papers/2007/A...
Thank you for the links. I will read over them.
In case anyone else was curious, I found a brief demo of this in use with a video feed from an RC car: https://www.youtube.com/watch?v=RZmS10djDEg
Like diffserv? Allowing you to tell the ISP about low-latency traffic?
Ofc, ISPs would have to aggressively limit this type of traffic, as it would be abused otherwise (video game gameplay traffic and voice call streams).
An RFC which simply sells two other RFCs... sigh
"Data Center TCP (DCTCP) [RFC8257] and a Dual-Queue Coupled AQM [RFC9332]"
This only exists to ask that cable modems (and maybe mobile phones?) use those too.
I was wondering how the receiver tells the sender that there was congestion. So I tried to figure it out, but it wasn't the easiest thing to find.
Essentially the details are documented in https://www.rfc-editor.org/info/rfc3168
The simple answer is that there is more than just one flag. From what I gather there are three: one flag that the sender sets to inform the routers that it can handle ECN, a second flag used by a router to tell the recipient that the router was congested, and a third flag set by the recipient when it sends an ACK packet back to the sender.
For more details, here is the relevant section:
* An ECT codepoint is set in packets transmitted by the sender to indicate that ECN is supported by the transport entities for these packets.
* An ECN-capable router detects impending congestion and detects that an ECT codepoint is set in the packet it is about to drop. Instead of dropping the packet, the router chooses to set the CE codepoint in the IP header and forwards the packet.
* The receiver receives the packet with the CE codepoint set, and sets the ECN-Echo flag in its next TCP ACK sent to the sender.
* The sender receives the TCP ACK with ECN-Echo set, and reacts to the congestion as if a packet had been dropped.
* The sender sets the CWR flag in the TCP header of the next packet sent to the receiver to acknowledge its receipt of and reaction to the ECN-Echo flag.
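A compressed sketch of that exchange (flag names per RFC 3168; the packet dictionaries and helper function are invented purely for illustration):

    def reduce_congestion_window():
        print("sender halves cwnd, as if a packet had been dropped")

    # Sender marks its packet as coming from an ECN-capable transport.
    pkt = {"ecn": "ECT", "data": b"..."}

    # A congested router marks the packet instead of dropping it.
    if pkt["ecn"] == "ECT":
        pkt["ecn"] = "CE"

    # The receiver echoes the signal on the TCP ACK it was sending anyway.
    ack = {"ECE": pkt["ecn"] == "CE"}

    # The sender reacts, then sets CWR so the receiver stops echoing ECE.
    if ack["ECE"]:
        reduce_congestion_window()
        next_pkt = {"ecn": "ECT", "CWR": True}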
I just hope they pronounce it "L-Force"
If you are interested in learning more about L4S, there is a webinar series starting today on understandinglatency.com. Some of the authors of L4S, the head of Comcast's L4S field trial, and some critical voices are speaking.
Essentially, L4S shrinks the latency feedback loop. The second half of this video explains it quite nicely: https://youtu.be/tAVwmUG21OY?si=lydbqfNL80Y8Uxvp
what in the Pied Piper?
L4S is not really an express lane. It is a way for applications to know when their traffic is congested, enabling them to scale DOWN their traffic to alleviate the congestion. Less congestion means less latency.
How is that different to TCP congestion control?
TCP congestion control relies on packets being dropped to signal that a link is congested.
L4S actually includes an extra bit of information in IP packets that routers can mutate to explicitly say when they are congested.
This means that you (a) don't need to play exponential backoff games, (b) don't need to re-send redundant packets, and (c) don't need big buffers in routers.
You need big buffers in routers because otherwise exponential backoff goes crazy. But when you add big buffers, you get latency, which is another kind of suck.
In order to avoid latency, you need to avoid buffers, which is hard unless you avoid exponential backoff. To avoid exponential backoff, you need routers to actually communicate their congestion, by sending more information. L4S does that by using a previously unused codepoint in the IP header's ECN field.
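To put rough shape on the difference (a hedged sketch: the classic response is modelled on Reno, the scalable one on DCTCP-style per-RTT marking; the constants are illustrative):

    def classic_response(cwnd: float, packet_lost: bool) -> float:
        """Loss-based control: halve the window on loss, else probe slowly."""
        return cwnd / 2 if packet_lost else cwnd + 1

    def scalable_response(cwnd: float, mark_fraction: float) -> float:
        """L4S-style control: back off in proportion to the fraction of
        CE-marked packets this RTT, so mild congestion gives a mild nudge."""
        return cwnd * (1 - mark_fraction / 2) if mark_fraction else cwnd + 1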
I'll need to read up on this, but one potential misuse of this is to just always/often set that bit on traffic you want to suppress.
Which feels much easier and much less heavy-handed than what you can do today. Which technically is a great thing, but I'm just wondering about the misuse aspect.
A router could pretend to drop packets too, but that would result in higher latency. With L4S can a router cheat and get lower latency?
TCP congestion control can use this new signal, if present. An update to the TCP protocol which allows it to do so is going through the IETF at present: https://datatracker.ietf.org/doc/draft-ietf-tcpm-accurate-ec...
Can anyone guess a timeline when this will be available in OSes, middleboxes and whatnot? So when can we reap the benefits?
This signals congestion explicitly, by a device declaring the link congested and asking others to slow down. TCP congestion control works by detecting when packets are dropped because devices can't keep up.
Also, when the congestion signal disappears you can try to push the transfer speed up immediately, rather than slowly ramping back up like with TCP.
Congestion control with TCP will eventually still need to send the same number of bytes down a pipe, albeit with added latency. After a while an application could notice and make a change, but it would be long enough for a user to notice poor service.
I guess ISPs will start charging a toll for congestionless lanes...
They already did that, L4S or no. "Fast lanes" usually come in the form of peering links or colocated cache servers, both of which involve actual new capacity. Prioritizing individual flows of traffic over ordinary transit links based on monetary value is something IP is uniquely ill-suited to do.
See my comment above - I doubt this will happen - rather, it will become a differentiator like throughput.
100% agree - read my IETF Internet Draft for more on that. :-) https://www.ietf.org/archive/id/draft-livingood-low-latency-...
Doubtful IMO. I think latency becomes another competitive differentiator, much like throughput/speed is today. (this is a personal comment but I work at Comcast)
Oh wow, I did not know this is what’s behind their low latency trials I’ve seen on dslreports.