Low Latency, Low Loss, and Scalable Throughput (L4S) Internet Service: RFC 9330

vkdelta
15 replies
12h17m

Some tests were done on Comcast networks on cable plant.

Slide deck below explains it:

https://datatracker.ietf.org/meeting/118/materials/slides-11...

Not sure where this leads but I guess ISPs will start charging toll for express lanes

jesperwe
12 replies
11h53m

L4S is not really an express lane. It is a way for applications to know when their traffic is congested, enabling them to scale DOWN their traffic to alleviate the congestion. Less congestion means less latency.

Phelinofist
7 replies
10h28m

How is that different from TCP congestion control?

toomim
2 replies
9h10m

TCP congestion control relies on packets being dropped to signal that a link is congested.

L4S actually includes an extra bit of information in IP packets that routers can mutate to explicitly say when they are congested.

This means that you (a) don't need to play exponential backoff games, (b) don't need to re-send redundant packets, and (c) don't need big buffers in routers.

You need big buffers in routers because otherwise exponential backoff goes crazy. But when you add big buffers, you get latency, which is another kind of suck.

In order to avoid latency, you need to avoid buffers, which is hard unless you avoid exponential backoff. To avoid exponential backoff, you need routers to actually communicate their congestion, by sending more information. L4S does that by using an unallocated bit in IP packets.
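A toy Python sketch of that marking behavior, assuming a hypothetical queue with an invented threshold (real L4S AQMs are probabilistic and time-based, not a fixed packet count):

```python
# Toy sketch (not a real router): instead of dropping packets when the
# queue grows, an L4S-style AQM sets a congestion bit on them and still
# forwards them. Threshold and capacity here are made up for illustration.

MARK_THRESHOLD = 5   # queue depth at which marking starts (illustrative)

def enqueue(queue, packet, capacity=20):
    """Enqueue a packet, marking it instead of dropping early."""
    if len(queue) >= capacity:
        return False  # hard drop only when the buffer is truly full
    if len(queue) >= MARK_THRESHOLD and packet.get("ect"):
        packet["ce"] = True  # Congestion Experienced: tell senders to slow down
    queue.append(packet)
    return True

queue = []
for i in range(10):
    enqueue(queue, {"seq": i, "ect": True, "ce": False})

marked = [p["seq"] for p in queue if p["ce"]]
print(marked)  # packets 5..9 carry the congestion signal instead of being dropped
```

The point of the sketch: every packet still gets delivered, so nothing has to be re-sent, and the sender learns about congestion before the buffer fills.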

tjoff
1 replies
6h23m

I'll need to read up on this, but one potential misuse of this is to just always/often set that bit on traffic you want to suppress.

Which feels much easier and much less heavy-handed than what you can do today. That's technically a great thing, but I'm just wondering about the misuse aspect.

0xdeadbeefbabe
0 replies
2h19m

A router could pretend to drop packets too, but that would result in higher latency. With L4S can a router cheat and get lower latency?

ajb
1 replies
9h12m

TCP congestion control can use this new signal, if present. An update to the TCP protocol which allows it to do so is going through the IETF at present: https://datatracker.ietf.org/doc/draft-ietf-tcpm-accurate-ec...

Phelinofist
0 replies
5m

Can anyone guess a timeline when this will be available in OSes, middleboxes and whatnot? So when can we reap the benefits?

sznio
0 replies
9h45m

This signals congestion explicitly, by a device declaring the link congested and asking others to slow down. TCP congestion control works by detecting when packets are dropped because devices can't keep up.

Also, when the congestion signal disappears you can try to push the transfer speed up immediately, rather than slowly ramping back up like with TCP.
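A rough sketch of that difference in sender behavior, with all the numbers invented for illustration (real algorithms like TCP Prague are considerably more careful):

```python
# Illustrative only: a classic sender halves its rate on any congestion
# signal and creeps back up; a scalable (L4S-style) sender reduces in
# proportion to the fraction of marked packets and can push back up as
# soon as the marks stop. Constants here are made up.

def classic_update(rate, congested):
    # multiplicative decrease, slow additive increase
    return rate / 2 if congested else rate + 1

def scalable_update(rate, mark_fraction):
    if mark_fraction > 0:
        return rate * (1 - mark_fraction / 2)  # small, proportional step down
    return rate * 1.5  # marks gone: ramp back up immediately

rate = 100.0
rate = scalable_update(rate, 0.1)  # 10% of packets marked -> ~5% reduction
print(round(rate, 1))  # 95.0
rate = scalable_update(rate, 0.0)  # congestion cleared -> push up fast
print(round(rate, 1))  # 142.5
```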

flumpcakes
0 replies
10h12m

Congestion control with TCP will eventually still need to send the same number of bytes down a pipe, albeit with added latency. After a while an application could notice and make a change, but it would be long enough for a user to notice poor service.

polonbike
2 replies
11h5m

I guess ISPs will start charging toll for congestionless lanes...

kmeisthax
0 replies
1h33m

They already did that, L4S or no. "Fast lanes" usually come in the form of peering links or colocated cache servers, both of which involve actual new capacity. Prioritizing individual flows of traffic over ordinary transit links based on monetary value is something IP is uniquely ill-suited to do.

jlivingood
0 replies
3h26m

See my comment above - doubt this will happen - rather it will become a differentiator like throughput.

jlivingood
0 replies
3h27m

100% agree - read my IETF Internet Draft for more on that. :-) https://www.ietf.org/archive/id/draft-livingood-low-latency-...

jlivingood
0 replies
3h28m

"Not sure where this leads but I guess ISPs will start charging toll for express lanes"

Doubtful IMO. I think latency becomes another competitive differentiator, much like throughput/speed is today. (this is a personal comment but I work at Comcast)

fotta
0 replies
11h40m

Oh wow, I did not know this is what’s behind their low latency trials I’ve seen on dslreports.

toomim
8 replies
11h12m

This thing is cool. I saw a live demo at IETF 118 in Prague last month. It totally eliminates buffer bloat, which makes it awesome for video chat. I saw the demo and was like "woah... I didn't think this would ever be possible."

It requires an additional bit to be inserted into IP packets, to carry information about when buffers are full (I think?), but it actually works. It feels like living in the future!

ajb
5 replies
9h27m

That bit is already there. L4S changes the meaning of the bit to allow a more accurate signal.

fragmede
2 replies
9h12m

More particularly, L4S is an advancement to the existing ECN (Explicit Congestion Notification) extension to TCP/IP, allowing for more advanced algorithms to cut down latency further.

xorcist
1 replies
5h18m

The main problem with ECN was the remarkably widespread behaviour by middleboxes that either cleared that bit or straight up dropped the packets. Maybe that situation has improved now?

jlivingood
0 replies
3h30m

It has definitely changed. Looking at measurements on this it seems there is only 1 major transit network doing that, and they are working to fix it.

toomim
1 replies
9h16m

Yes, thanks for the clarification. IIRC it was explained to me as "we put the last unused bit in IP packets to use, and get this great feature from it."

ajb
0 replies
9h7m

Yeah. Since it was the last bit (really the last codepoint in a 2-bit field), there was a big argument over it:

https://datatracker.ietf.org/meeting/interim-2020-tsvwg-01/s...

https://mailarchive.ietf.org/arch/msg/tsvwg/rXWRHAyGOuu_qOGM...

namibj
0 replies
6h22m

Actually somewhat better, even: you can let it control the rate factor of your video encoder directly, getting perceptual fairness instead of simple naive bandwidth fairness.

averageRoyalty
0 replies
6h54m

It took a bit of searching, but I assume this is it?

https://youtube.com/watch?t=4900&v=RWjbrXxpzVU

(1hr21m for anyone whom the time link doesn't work for)

EDIT: Never mind, that's their hackathon recap. Still searching, this is not an easy conf to find talks for!

muxamilian
5 replies
6h1m

While it's a step in the right direction, there's a problem if there's at least one 'malicious' actor who ignores the congestion feedback and just wants a larger share of bandwidth. Then all the other actors will back off and the unfair actor gets what it wants. Unfortunately, it is hard for a good actor to know whether the other actors are playing nicely or not. Only if a good actor knows that there's fair queuing can they trust L4S to treat them fairly.

This can be solved by complementing L4S with fair queuing (e.g. fq_codel) and by making sure that congestion control can detect the presence of fair queuing (https://github.com/muxamilian/fair-queuing-aware-congestion-...).
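The fair-queuing idea can be sketched in a few lines, far simpler than fq_codel (flow names and packet counts here are invented for illustration):

```python
from collections import deque, defaultdict

# Minimal round-robin fair-queuing sketch: each flow gets its own queue,
# and the link serves one packet per flow per round. A flow that ignores
# congestion signals only grows its own queue instead of crowding out
# well-behaved flows.

def fair_dequeue(flows, budget):
    """Serve up to `budget` packets, one per flow per round."""
    served = []
    while budget > 0 and any(flows.values()):
        for flow_id, q in flows.items():
            if q and budget > 0:
                served.append((flow_id, q.popleft()))
                budget -= 1
    return served

flows = defaultdict(deque)
for i in range(8):
    flows["greedy"].append(f"g{i}")  # flow that never backs off
flows["polite"].append("p0")         # well-behaved flow with little queued

print(fair_dequeue(flows, 4))
# [('greedy', 'g0'), ('polite', 'p0'), ('greedy', 'g1'), ('greedy', 'g2')]
```

The polite flow's packet gets through in the first round despite the greedy flow's backlog, which is the fairness guarantee the comment is asking the network to provide.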

ajb
4 replies
5h43m

To clarify this - most ISPs implement per customer bandwidth allocation, so a malicious actor should not be able to take share from other customers.

The FQ thing is a part of a larger dispute. Without FQ it is already the case that, irrespective of L4S, fairness is implemented by end hosts, and an end host (eg a server) can ignore congestion responses and take more than a fair share. This is not an issue which L4S introduces, but some argue that L4S "makes it easier" to take a larger share.

The people behind FQ argue that the network should guarantee fair sharing, but not everyone believes they have chosen the right fairness metric. In particular one of the main proponents of L4S does not, as can be seen from his paper linked here: https://news.ycombinator.com/item?id=38598023

qmarchi
1 replies
1h23m

most ISPs implement per customer bandwidth allocation

This really should have an asterisk (*). There is generally a limit on what an ISP will advertise, and what they will provide (usually ~110% of advertised).

However, it's also extremely common that they overprovision segments on their network.

In the case of a Coax network like Comcast, or Spectrum, they will overprovision the actual last-mile capacity so that _most_ times of the day, you'll receive your ~110% of advertised speeds, but during peak (mid-evening), it's extremely unlikely that you're going to receive even your advertised speeds, usually only ~70%.

In the case for L4S, it would absolutely help "perceptively" resolve these kinds of congestion points, but the "evil take" would be that ISPs can extend their network upgrades further.

ajb
0 replies
1h1m

That's a different issue though. I should have been more precise with my terminology. The way it usually works is that there is a scheduler at the bottleneck. Let's for simplicity assume that the customers at a particular bottleneck all have the same advertised rate, but the bottleneck is less than the sum of these. Say there are 10 customers with 100Mbps each but the bottleneck is only 500Mbps. Then if each of the 10 customers is maxing out their usage, they will each only get 50Mbps, which is less than the advertised rate on their service. What I meant was, playing games with congestion control won't reduce any other customer below that. (There are different options for how the scheduler could work if the customers have different limits; it could just cap them to their limit, or it could weight their share according to their limit.)
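The arithmetic above, as a back-of-the-envelope helper (the even-split-with-cap rule is the simplified equal-plans case described in the comment, not any ISP's actual scheduler):

```python
# Equal-rate customers sharing a bottleneck smaller than the sum of
# their plans: everyone gets an even split, capped at their own demand.

def fair_share(bottleneck_mbps, demands_mbps):
    """Even split of the bottleneck, capped at each customer's demand."""
    share = bottleneck_mbps / len(demands_mbps)
    return [min(d, share) for d in demands_mbps]

# 10 customers on 100 Mbps plans, all maxing out, behind a 500 Mbps bottleneck:
print(fair_share(500, [100] * 10))  # each gets 50.0 Mbps, regardless of cheating
```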

I guess you are right that buffer bloat problems could pressure ISPs to avoid overprovisioning, and any solution to bufferbloat could take the pressure off. But you can also get bufferbloat and other latency issues without overprovisioning, so it doesn't seem to me to be a good reason to hold off implementing solutions to them.

Hikikomori
1 replies
52m

Where do they implement this?

ajb
0 replies
33m

Sorry, I've referred to multiple things in that comment - which are you asking about?

tamarlikesdata
4 replies
10h31m

How can you differentiate between L4S and non-L4S traffic at the network level, especially in mixed traffic environments?

ksjskskskkk
2 replies
9h46m

you'd never guess: a new heater bit

fragmede
0 replies
9h17m

it reuses an existing header bit, but yeah

fabrixxm
0 replies
9h16m

so you know when things get hot...

jlivingood
0 replies
3h22m

I wrote a simplified summary of the IETF specs FWIW: https://github.com/jlivingood/IETF-L4S-Deployment/blob/main/...

callalex
3 replies
1h37m

I'm confused. Everyone here is talking about improvements to video conferencing and streaming, but those applications use UDP instead of TCP, so I don't understand how this will change anything.

thehappysellout
0 replies
1h20m

I think the key point is the bottleneck link is a shared resource. Many TCP flows traversing the link will drive it to a relatively high queue occupancy which causes higher delay for all traffic regardless of protocol.

Only skimmed the proposal but looks like it isolates traffic using the new protocol by giving it a dedicated buffer, and the explicit congestion notification protocol would then keep the size of this queue much smaller at steady state when the link is saturated.

asylteltine
0 replies
1h36m

You ever get that robotic latency thing? That’s because of udp and the stream allowing dropped packets at all. It’s a horrible experience.

ajb
0 replies
57m

This standard is a change to IP, which TCP and UDP (and transports implemented on top of UDP) are both implemented on. So it applies to all of them. Each transport has to implement its own way of using it.

1123581321
3 replies
11h4m

I’m having trouble determining if my 3.1 cable modem supports the draft spec. Is there a way to tell based on serial number? Are there hardware limitations that would prevent older 3.1 modems from receiving a software update to enable support?

evilmonkey19
1 replies
8h46m

It is quite rare that a modem or home router has support for draft specs. I'm sorry to disappoint you.

1123581321
0 replies
8h8m

Thanks. I’ll be on fiber instead of Comcast by the time ISPs are actually deploying this, so I was just curious about my current hardware.

jlivingood
0 replies
3h19m

Several D3.1 modems support it now but most will need to be updated. Many of the vendors have been testing at quarterly L4S interop events, so I would expect them all to have production grade s/w next year.

ncruces
2 replies
9h45m

How does it compare to μTP (Micro Transport Protocol)?

https://en.wikipedia.org/wiki/Micro_Transport_Protocol

jlivingood
0 replies
3h21m

UTP is intended to run at less-than-best-effort priority. This is about all the apps trying to share queues at best effort (one level up).

ajb
0 replies
9h23m

It is independent of that. The L4S standards change the IP layer to provide a more accurate ECN congestion signal; any transport protocol can then take advantage of it. There are versions of TCP and QUIC that do so, and in theory a version of uTP could be made to do so as well.

However, from a brief look, uTP is designed for background transfers for which latency is not important, so there is no particular need to do so.

eru
2 replies
8h44m

How does this interact with eg BBR?

the8472
0 replies
6h5m

BBRv1 doesn't take ECN into account. BBRv2/v3 do, and it's mentioned in the RFC:

       Scalable variants are
       under consideration for more recent transport protocols (e.g.,
       QUIC), and the L4S ECN part of BBRv2 [BBRv2] [BBR-CC] is a
       Scalable congestion control intended for the TCP and QUIC
       transports, amongst others.

ajb
0 replies
7h49m

BBR is a congestion control algorithm, and L4S provides a congestion signal. So BBR can be updated to take advantage of the L4S signal. Apparently there are some plans to do so.

dozaa
1 replies
8h31m

What does this mean in practicality as a user? Will e.g. video calls be closer to real-time? There's usually about 0.5-1 second delay which leads to a lot of hiccups and interruptions when speaking with each other. What other application uses will be significantly improved?

jlivingood
0 replies
3h23m

It makes new cloud-based apps realistically & reliably workable - think cloud gaming and cloud AR. It also makes interactive stuff like gaming and video conferencing perform a lot better w/o lag. But really anything interactive (user & device) should be better, given how many round trips it currently takes to paint a web page, stream video, or handle an AI assistant (Alexa) interaction.

ddalex
1 replies
2h44m

How does the feedback loop work? I.e., the routers need to tell the source (upstream) to back off, but this uses an IP header bit, so there is no guaranteed return stream...

hmottestad
0 replies
2h29m

With TCP the receiver has to send an ACK back to the sender. If the receiver sees that the congestion bit is set on a packet it gets from the sender, then it will set the same bit on the ACK packet it sends back to the sender, acknowledging that the packet was received. This ACK is sent anyway, since it's part of how the sliding window is designed in TCP.

There are built-in ways for the TCP protocol to handle congestion, but they don't allow a router to signal congestion. The router just has to hope that the sender detects the congestion fast enough.

barathr
1 replies
12h33m

Bob Briscoe has been on this line of thought for a long time. I'd recommend reading a couple of his classics on the topic, including:

http://www.sigcomm.org/sites/default/files/ccr/papers/2007/A...

https://dl.acm.org/doi/pdf/10.1145/1080091.1080124

monkburger
0 replies
8h54m

Thank you for the links. I will read over them.

virgildotcodes
0 replies
2h29m

In case anyone else was curious, I found a brief demo of this in use with a video feed from an RC car: https://www.youtube.com/watch?v=RZmS10djDEg

sylware
0 replies
5h5m

Like diffserv? Allowing to tell the ISP about low latency traffic?

Ofc, ISPs would have to aggressively limit this type of traffic as it would be abused otherwise (video game gameplay traffic, and voice call streams).

ksjskskskkk
0 replies
9h43m

an RFC which simply sells two other RFCs... sigh

Data Center TCP (DCTCP) [RFC8257] and a Dual-Queue Coupled AQM [RFC9332]

this only exists to ask that cable modems (and maybe mobile phones?) use that too

hmottestad
0 replies
1h24m

I was wondering how the receiver tells the sender that there was congestion. So I tried to figure it out, but it wasn't the easiest to find.

Essentially the details are documented in https://www.rfc-editor.org/info/rfc3168

The simple answer is that there is more than one flag. From what I gather there are three: one flag that the sender sets to inform the routers that it can handle ECN, a second that the router uses to tell the recipient that it was congested, and a third that the recipient sets when it sends an ACK packet back to the sender.

For more details, here is the relevant section:

* An ECT codepoint is set in packets transmitted by the sender to indicate that ECN is supported by the transport entities for these packets.

* An ECN-capable router detects impending congestion and detects that an ECT codepoint is set in the packet it is about to drop. Instead of dropping the packet, the router chooses to set the CE codepoint in the IP header and forwards the packet.

* The receiver receives the packet with the CE codepoint set, and sets the ECN-Echo flag in its next TCP ACK sent to the sender.

* The sender receives the TCP ACK with ECN-Echo set, and reacts to the congestion as if a packet had been dropped.

* The sender sets the CWR flag in the TCP header of the next packet sent to the receiver to acknowledge its receipt of and reaction to the ECN-Echo flag.
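Those five steps can be walked through end to end in a toy sketch, with the signals modeled as dict flags (ECT/CE on the IP header, ECN-Echo/CWR on the TCP header; the halving in step 4 is the classic RFC 3168 reaction, not the gentler L4S one):

```python
# Toy walkthrough of the RFC 3168 steps listed above.

def sender_emit():
    return {"ect": True, "ce": False}      # step 1: sender marks ECN-capable

def router_forward(packet, congested):
    if congested and packet["ect"]:
        packet["ce"] = True                # step 2: mark instead of drop
    return packet

def receiver_ack(packet):
    return {"ecn_echo": packet["ce"]}      # step 3: echo CE back in the ACK

def sender_react(ack, cwnd):
    if ack["ecn_echo"]:
        cwnd //= 2                         # step 4: react as if a loss occurred
        cwr = True                         # step 5: acknowledge with CWR
    else:
        cwr = False
    return cwnd, cwr

pkt = router_forward(sender_emit(), congested=True)
ack = receiver_ack(pkt)
print(sender_react(ack, cwnd=10))  # (5, True): window halved, CWR set
```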

danr4
0 replies
10h46m

I just hope they pronounce it "L-Force"

cepholdapod
0 replies
9h53m

If you are interested in learning more about L4S, there is a webinar series starting today on understandinglatency.com. Some of the authors of L4S, the head of Comcast's L4S field trial, and some critical voices are speaking.

apienx
0 replies
7h3m

Essentially, L4S shrinks the latency feedback loop. The second half of this video explains it quite nicely: https://youtu.be/tAVwmUG21OY?si=lydbqfNL80Y8Uxvp

Ostatnigrosh
0 replies
1h41m

what in the Pied Piper?