Researchers develop the fastest possible flow algorithm

nabla9
13 replies
21h34m

The algorithm is asymptotically near-linear, in the limit as n -> inf.

At the end of the video they say there is no way any implementation of their algorithm gets close to beating existing algorithms in the real world.

https://cacm.acm.org/research/almost-linear-time-algorithms-...
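
For context, the problem being sped up is the classic maximum-flow / minimum-cost-flow computation. A textbook baseline for it (Edmonds-Karp, roughly O(V*E^2), nothing like the paper's near-linear method) might look like this Python sketch, with a dict-of-dicts capacity representation chosen just for illustration:

    from collections import deque

    def max_flow(capacity, source, sink):
        # Edmonds-Karp: repeatedly push flow along shortest augmenting paths.
        # `capacity` is a dict of dicts: capacity[u][v] = remaining capacity.
        # Make sure reverse edges exist so residual capacities can be tracked.
        for u in list(capacity):
            for v in list(capacity[u]):
                capacity.setdefault(v, {}).setdefault(u, 0)

        flow = 0
        while True:
            # BFS for a shortest augmenting path in the residual graph.
            parent = {source: None}
            queue = deque([source])
            while queue and sink not in parent:
                u = queue.popleft()
                for v, cap in capacity[u].items():
                    if cap > 0 and v not in parent:
                        parent[v] = u
                        queue.append(v)
            if sink not in parent:
                return flow  # no augmenting path left
            # Find the bottleneck along the path, then push that much flow.
            path, v = [], sink
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            bottleneck = min(capacity[u][v] for u, v in path)
            for u, v in path:
                capacity[u][v] -= bottleneck
                capacity[v][u] += bottleneck
            flow += bottleneck

    # Example: two source edges of capacity 2 and 1, so the max flow is 3.
    print(max_flow({'s': {'a': 2, 'b': 1}, 'a': {'t': 2}, 'b': {'t': 2}}, 's', 't'))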

poincaredisk
5 replies
17h48m

I imagine the point of this algorithm, like a lot of algorithms research, is to prove an upper bound on the complexity of this problem, not to be used in practice (despite what this article seems to suggest).

On a similar note, there's a lot of work put into optimal matrix multiplication algorithms. We know the lower bound is N^2, the obvious upper bound is N^3, and the best current algorithm (complexity-wise, not practical at all) is around N^2.37, but we don't know how fast it can really get. Is an N^2 algorithm possible? We don't know.
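
For reference, the "obvious" N^3 upper bound is just the schoolbook three-nested-loops method; a plain Python sketch (names are illustrative):

    def matmul_naive(A, B):
        # Schoolbook matrix multiplication: ~N^3 scalar multiplications for
        # N x N inputs. This is the "obvious" upper bound mentioned above;
        # Strassen-style algorithms beat it asymptotically, with the best
        # known exponent currently around 2.37.
        n, m, p = len(A), len(B), len(B[0])
        assert all(len(row) == m for row in A), "inner dimensions must match"
        C = [[0] * p for _ in range(n)]
        for i in range(n):
            for k in range(m):
                a_ik = A[i][k]
                for j in range(p):
                    C[i][j] += a_ik * B[k][j]
        return C

    # 2x2 check: multiplying by the identity leaves the matrix unchanged.
    print(matmul_naive([[1, 0], [0, 1]], [[5, 6], [7, 8]]))  # [[5, 6], [7, 8]]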

nnoremap2
4 replies
9h58m

I mean nobody is stopping me from writing an exponential time algorithm.

Dylan16807
2 replies
7h16m

Sure? If you're slow on purpose that doesn't affect the upper bound set by the "obvious" method.

jakeinspace
1 replies
5h15m

I think they’re replying to the claimed upper bound of n^3. I’m not actually sure what that means.

adrianN
0 replies
4h50m

Knowing an upper bound means you know that the best solution for the problem takes at most that much work. It does not mean that you can’t find worse solutions.

s_Hogg
0 replies
7h50m

From that page:

Available computational power may catch up to the crossover point, so that a previously impractical algorithm becomes practical.

Are there examples of Galactic Algorithms that now aren't Galactic Algorithms?

edit: turns out some matmul algorithms are?

klysm
0 replies
1h29m

I was about to ask if there was a term for this

ddtaylor
0 replies
14h3m

Thank you for introducing a term and concept I was not familiar with.

torginus
0 replies
10h12m

Yeah, and another caveat with these kinds of cases is that you rarely need the absolute optimum - something that gets you 99% of the way there in 1% of the time tends to be much more practical.

julianeon
0 replies
2h56m

I have to say that when I saw the headline I was extremely skeptical. "Fastest possible" is a very, very bold claim.

fromMars
0 replies
4h3m

That's a let down after all the hype in the opening pages.

Sniffnoy
9 replies
23h6m

The abstract just says the time is m^(1+o(1))... anyone know if a more specific bound is stated anywhere?

Maxatar
3 replies
19h18m

That is the specific bound. The little o denotes a function that approaches 0 as n approaches infinity, i.e. one that is asymptotically negligible.
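
Spelled out (a paraphrase, not a quote from the paper): a bound of m^(1+o(1)) means the running time is

    m^(1 + f(m))   where   lim (m -> infinity) f(m) = 0

so any concrete vanishing function, e.g. f(m) = c/log(log(m)), would be consistent with the headline bound.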

Sniffnoy
2 replies
16h30m

I know what the notation means. I'm wondering what the actual function is. Using little o is a way of abstracting that information away.

Rarebox
1 replies
9h59m

I could be wrong but I think many times the researchers don't care about the exact function. It could be something like 1/log(log(n)).

Sniffnoy
0 replies
25m

Yes, I am very aware that many times they don't, but that doesn't mean they shouldn't!

Fortunately, in many cases, even when the detail is omitted from the headline theorem, they did in fact do the work and it is in fact stated lower down in the body of the paper; or in other cases they fail to state it but it can be easily assembled from a few parts. That's why I was asking.

But sometimes it's a big complicated thing and they were just like, eh, let's not bother figuring out exactly what it is. To which I say, laziness! You're just making other people do a proof-mine later! You're doing this much, so do the work to get a concrete bound, because you can do it better than later proof-miners can!

I won't say that in ideally functioning mathematics proof-mining would never be necessary (that'd be like saying that in well-written mathematics one would never need to refactor), but c'mon, mathematicians should at least do what they can to reduce the necessity of it.

JohnKemeny
2 replies
20h29m

It means that you can choose constants such that the algorithm is as close to O(m) as you'd like.

In other words, it's an algorithm scheme that allows you to get an algorithm running in time O(m^ɛ) for any ɛ>1.

Sniffnoy
1 replies
19h43m

Sorry, where's that stated? I'm pretty doubtful of that claim because if that's what they meant they would say that -- they'd say it was O(m^(1+ɛ)), that would be well-understood notation. But what they wrote is that it's O(m^(1+o(1))), which, read as written, means it's a single bound that they're just not being very specific about.

I'm not asking for help decoding the notation; I'm asking for if anyone knows what the more detailed bound is that O(m^(1+o(1))) is abstracting.

vitus
0 replies
17h25m

That's because even the ACM link is an abbreviation of the actual paper.

Preprint at https://arxiv.org/abs/2203.00671

(Pages 68-75 build up the full details of the bound, which looks something like Õ(mκ⁻²α⁻²ϵ⁻¹). There are enough details over the preceding dozens of pages that I can't tell at a glance exactly what all the variables stand for.)

Technically this captures any logarithmic factors, such as the exp(O(log^(7/8) m log log m)) presented on page 75.
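
Assembling the pieces (my reading of the statement quoted above, so treat the exact form as approximate), the headline bound unpacks to something like

    m^(1+o(1)) = m * exp(O(log^(7/8)(m) * log(log(m))))

i.e. the o(1) in the exponent hides a factor that is subpolynomial in m, but still grows faster than any fixed power of log m.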

c-smile
3 replies
23h30m

Almost-Linear-Time Algorithm

From O(mn) to O(m) ... thus dropping n (the number of vertices) from the bound ...

Too good to be true?

progbits
2 replies
20h56m

The constant factor is so large that it's going to be slower than existing (asymptotically worse) algorithms on any practical input.

Still, a neat theoretical result.

8474_s
1 replies
13h7m

The constant factors could be optimized, or even accelerated with special-purpose hardware. There could be a simplification, or even something like reuse/caching/memoization, that in the real world would reduce the constant factor significantly.

Jabbles
0 replies
10h2m

Maybe, but that would be a different research project.

The constant factors are currently so large that even multiple orders of magnitude speedups would not make this practical.
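
As a back-of-the-envelope illustration (the constants below are invented for the sake of the example, not taken from the paper), even a modest asymptotic edge cannot overcome a huge constant factor until the inputs are astronomically large:

    # Hypothetical cost models; the constants are made up for illustration.
    def cost_old(m):
        return 10 * m**1.33      # an m^1.33-style algorithm, small constant

    def cost_new(m):
        return 1e20 * m          # a near-linear algorithm, enormous constant

    # Find the first power of ten where the "new" algorithm actually wins.
    m = 10
    while cost_new(m) >= cost_old(m):
        m *= 10
    print(f"crossover somewhere below m = {float(m):.0e}")   # around 1e58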

squarerootof-1
2 replies
23h57m

Where is the paper / code for this?

elwell
0 replies
21h43m

In 2030, this algorithm will be expected to optimally solve some leetcode interview question.

josefrichter
2 replies
12h7m

I don’t want to spam, but I’ve been using the rome2rio website/app to find complex connections. They’re definitely not using this algorithm, but I’ve always been fascinated that you get complex results almost immediately. I don’t know how they do it, but for me it’s one of the most fascinating pieces of work on the internet. Great job. [I’m not affiliated with them in any way]

smokel
1 replies
5h15m

Rome2Rio seems to find an optimal route, and the problem discussed in this post is about finding an optimal flow.

Both are fascinating problems, but quite different. Finding shortest paths was typically solved with Dijkstra's algorithm [1], until someone discovered an amazing optimization scheme by precalculating some information that speeds up the search algorithm dramatically [2]. Thanks to this breakthrough, one can now interactively drag routes on Google Maps, for instance. And have Rome2Rio.

[1] https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm

[2] https://en.wikipedia.org/wiki/Contraction_hierarchies
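
For anyone curious, the plain-Dijkstra baseline that these hierarchy techniques accelerate is short. A minimal sketch, assuming a dict-of-dicts graph with non-negative edge weights (contraction hierarchies precompute shortcuts so that queries touch only a tiny fraction of the nodes this version visits):

    import heapq

    def dijkstra(graph, source):
        # Shortest-path distances from `source`.
        # `graph[u]` maps each neighbour v to a non-negative edge weight.
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry, a shorter path was already found
            for v, w in graph.get(u, {}).items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # Example: distances from 'a' in a small weighted graph.
    print(dijkstra({'a': {'b': 1, 'c': 4}, 'b': {'c': 2}}, 'a'))  # {'a': 0, 'b': 1, 'c': 3}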

Sesse__
0 replies
2h53m

Note that there was not a step directly from Dijkstra to contraction hierarchies (CH); in particular, you could route using Highway hierarchies (HH) before CH came along. Both assume a fairly static network, though.

ecstrema
2 replies
7h43m

Maybe someone could clarify something for me here:

o(n) seems like a stronger statement to me than O(n), since all o(n) algorithms are O(n), but the reverse is not true.

Also if o(n) applies to all n, however small, whereas O(n) applies only when n -> inf,

(From the Wikipedia page example: 2n = O(n) but 2n != o(n))

Then doesn’t that mean this algorithm should be applicable even to small n? Then it would be the opposite of a galactic algorithm, as someone above suggested, wouldn’t it?

Or am I missing something?

dbaupp
0 replies
6h55m

Little o is still an asymptotic statement: it doesn’t have to apply for small n. A definition of f(n) = o(g(n)) is something like

   lim (n -> infinity) f(n)/g(n) = 0
Or, in other words, for sufficiently large n, g grows faster than f.

For instance, the following function is o(n), because 1e1000/n goes to 0 as n grows.

   f(n) = 10**n if n < 1000 else 1e1000
(Pseudo-Python for a piecewise function that grows exponentially to 10**1000 at n = 1000 and then remains constant after that.)

JohnKemeny
0 replies
59m

If the complexity of an algorithm is 3↑↑64*n^0.999, the algorithm is o(n) but can safely be said to be galactic.

* PS: 3↑↑64 is a power tower of 64 threes. (It's not quite Graham's number, which is vastly larger still, but it's more than galactic enough.)

rowanG077
1 replies
17h1m

Sometimes I think we have lost the plot completely with complexity as a metric. Increasingly we are seeing algorithms which have optimized the complexity metric to an insane degree but which aren't actually useful.

jltsiren
0 replies
8h27m

That has been the case for decades. Once the low-hanging fruit were all picked, algorithms research became yet another highly specialized field. If you are not a researcher in a closely related area, most research papers are not worth your time.

imvetri
1 replies
13h4m

My words.

Solving a problem purely for computational efficiency is pointless.

Why?

Take a look at AI neural networks, where they simply throw computational resources at the problem.

Maybe one day this will help.

Reply to myself: appreciate this post, and get back to writing.

Appreciation: out of so many less interesting posts, this one caught my attention, and yet nowhere does it explain how the algorithm works or, most importantly, why it is needed.

imvetri
0 replies
13h3m

I'm not an expert; I'm speaking from experience.

nothrowaways
0 replies
13h15m

2022

ziofill
0 replies
18h13m

damn you constant factors [shakes fist in the air]

okintheory
0 replies
5h7m

Interestingly, the same guy also works on making ‘theory-only’ algorithms work well in practice [1]. But it seems like that takes about 20 years: [1] builds on a theory breakthrough from 2004 [2], yet these algorithms are only starting to work in practice in 2024, IIUC. I guess that means there's hope for practical min-cost flow algorithms in 2044.

[1] https://arxiv.org/pdf/2303.00709

[2] https://arxiv.org/abs/cs/0310051

jpster
0 replies
12h28m

From the article: "A glance at the raw figures shows just how far we have come: until the turn of the millennium, no algorithm managed to compute faster than m^1.5, where m stands for the number of connections in a network that the computer has to calculate, and just reading the network data once takes m time. In 2004, the computing speed required to solve the problem was successfully reduced to m^1.33. Using Kyng’s algorithm, the “additional” computing time required to reach the solution after reading the network data is now negligible."

TFA didn’t describe Kyng’s breakthrough in terms of this m-exponent score it considers so important. What’s up with that?

imtringued
0 replies
9h59m

I was hoping for some kind of evolutionary algorithm. Giving up optimality in exchange for being able to solve problem instances with billions of items would be worth it.

I_am_tiberius
0 replies
4h10m

Can this be used to improve the Bitcoin Lightning Network?