
2023 ACM Turing Award awarded to Avi Wigderson

jkaptur
17 replies
2d1h

The unreasonable effectiveness of randomness led him to think about the nature of randomness itself. He, like other researchers at the time, questioned how necessary it was for efficient problem-solving and under what conditions it could be eliminated altogether. “Initially, it was not clear if this was only our own stupidity, that we cannot remove randomness,” he said. “But the larger question was whether randomness can always be eliminated efficiently or not.” He realized that the need for randomness was intimately tied to the computational difficulty of the problem.

Does anyone have a sketch of how this works? My understanding of complexity classes is that they consider worst case performance. Even if you have a great pseudorandom number generator and a great randomized algorithm, how do you prove that no instance of RNG + seed + problem instance takes exponential time?

bo1024
8 replies
1d22h

BPP is the complexity class for decision problems that a randomized algorithm can solve in polynomial time, in the sense that on every input (worst-case), the algorithm is right with at least 2/3 probability (over its own randomness).

One example is deciding whether a given number is prime. For a long time, we knew randomized algorithms like the Miller-Rabin test that are correct with high probability. That is, we knew PRIMES is in BPP. But we didn't know a deterministic algorithm. In 2002, a deterministic polynomial-time algorithm (the AKS test) was discovered, proving that PRIMES is in P.
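
For a concrete feel, here is a minimal Python sketch of a Miller-Rabin-style test (the `rounds` parameter and function name are my own choices; this is illustrative, not a vetted implementation). Each round either proves n composite or leaves it "probably prime", and the error probability drops exponentially with the number of rounds, which is what makes this a BPP-style algorithm:

```python
import random

def is_probably_prime(n, rounds=20):
    """Miller-Rabin sketch: returns False if n is definitely composite,
    True if n is probably prime (error probability <= 4**-rounds)."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)  # random candidate witness
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True
```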

One of the central open problems in Computer Science is whether BPP=P. That is, for any problem we can solve with randomness (in the above sense), can we solve it without randomness? Among many other contributions, Wigderson's work has made a ton of progress toward this problem.

The general idea is to try to "derandomize" any randomized algorithm that runs in polynomial time and is correct with 2/3 probability. We feed it bits from a pseudorandom generator (instead of true random numbers). If the PRG is really, really good and its seed is short enough, we can get a deterministic algorithm by running the original algorithm on the PRG's output for every possible seed and taking a majority vote.
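
A toy sketch of that enumeration step, assuming we are handed a randomized decider and a PRG with a short seed (both arguments here are stand-ins, not any particular construction):

```python
from itertools import product

def derandomize(randomized_decider, prg, seed_len, x):
    """Run the decider on the PRG's output for every possible seed and take
    a majority vote. If seed_len is O(log n), there are only poly(n) seeds,
    so the whole loop is deterministic polynomial time, provided the PRG
    really does fool the decider."""
    yes_votes = 0
    for seed in product([0, 1], repeat=seed_len):
        pseudo_random_bits = prg(seed)  # expand the short seed
        if randomized_decider(x, pseudo_random_bits):
            yes_votes += 1
    return 2 * yes_votes > 2 ** seed_len  # strict majority says "yes"
```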

Now, if we can prove certain problems are hard, then we can also prove that such really really good PRGs exist. This is basically because telling apart the PRG from true randomness would be a computationally hard problem.

jvanderbot
5 replies
1d1h

Dumb question: There exist randomized approximation algorithms for Weighted Hitting Set, yet it is NP-hard. Assuming I haven't said anything dumb yet, surely that satisfies the definition above: why does this not prove BPP != P?

Perhaps: Am I incorrect in assuming that randomized 2-approximation is somehow transformable to 'correctness in the decision problem 2/3 of the time'?

JohnKemeny
3 replies
1d

We don't know whether P = NP or not.

jvanderbot
2 replies
22h47m

I realize that, but can you help me see why that's relevant here? The question as I understood it was BPP ?= P, and I read that as "Do all problems that seem hard but have randomized algorithms actually live in P?", for which a counterexample would be an NP-hard problem with a good randomized solution, right?

bo1024
1 replies
21h41m

(Not who you asked, but) one important point here is definitely that BPP is about yes/no decision problems, and approximation algorithms are generally a different beast.

jvanderbot
0 replies
17h51m

That makes sense! Thanks, I feel like I'm perpetually a beginner when it comes to computational complexity theory.

bo1024
0 replies
21h42m

Perhaps: Am I incorrect in assuming that randomized 2-approximation is somehow transformable to 'correctness in the decision problem 2/3 of the time'?

Yes, that's the issue. For another example, Max-Cut is NP-hard. We can easily get a randomized 2-approximation by taking a cut in the graph uniformly at random (don't even look at the graph structure). But it doesn't look possible to go from this totally random 2-approximation to a correct guess about the size of the true max cut.
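
Here is roughly what that random cut looks like in Python (the edge-list representation is just my choice for illustration). Knowing this cut is within a factor of 2 of optimal in expectation still tells you nothing exact about the true max-cut size, which is the yes/no decision question that's NP-hard:

```python
import random

def random_cut(vertices, edges):
    """Put each vertex on side 0 or 1 uniformly at random, ignoring the graph
    structure entirely. Each edge is cut with probability 1/2, so the expected
    number of cut edges is |E|/2, which is at least half the optimum."""
    side = {v: random.randrange(2) for v in vertices}
    cut_size = sum(1 for (u, v) in edges if side[u] != side[v])
    return side, cut_size
```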

pulisse
1 replies
1d2h

BPP is the complexity class for decision problems that a randomized algorithm can solve in polynomial time, in the sense that on every input (worst-case), the algorithm is right with at least 2/3 probability (over its own randomness).

What's the theoretical significance of the 2/3 threshold?

thehumanmeat
0 replies
1d2h

None; it can actually be any constant > 1/2, because you can always rerun the algorithm polynomially many more times and take a majority vote, pushing the success probability as close to 1 as you like. 2/3 is just convention.
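
A sketch of that amplification step (the decider passed in is a stand-in for any BPP algorithm; by a Chernoff bound the majority vote's error probability shrinks exponentially in the number of repetitions):

```python
def amplify(randomized_decider, x, repetitions=101):
    """Majority vote over independent runs. If one run is correct with
    probability p > 1/2, the majority is wrong with probability that drops
    exponentially in the (odd) number of repetitions."""
    yes_votes = sum(1 for _ in range(repetitions) if randomized_decider(x))
    return 2 * yes_votes > repetitions
```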

hinkley
6 replies
2d

Randomization prevents the worst degenerate cases from being repeatable and thus exploitable (by an attacker, or as a self-own).

Using randomness that way really belongs to applied CS rather than theoretical CS. The problem can still happen; it just no longer happens consistently to the same customer or at the same time every day.

Eliminating the randomized solution was not on my radar, so I've got some homework to do.

pca006132
5 replies
2d

No, there are complexity classes for randomized algorithms. And randomness is also crucial for some cryptographic protocols: typically you want to show that the probability that the adversary can do what they want is very slim, e.g. 2^-n for some large n.

hinkley
4 replies
2d

In crypto we are worried about the best case scenario (from the attacker’s perspective).

Randomization in algorithms, to avoid worst-case behavior, would be things like shuffling the input, XORing a random per-run or per-table seed into hash values (e.g. for query-parameter hash tables), or using random tiebreakers.
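
A familiar example in the same spirit (my own choice of illustration, not one mentioned above): picking quicksort's pivot at random means no fixed input can reliably trigger the quadratic worst case; only unlucky coin flips can.

```python
import random

def randomized_quicksort(items):
    """Quicksort with a uniformly random pivot. The expected running time is
    O(n log n) on every input; an adversary would have to predict the random
    choices to force the O(n^2) worst case."""
    if len(items) <= 1:
        return items
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```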

kadoban
1 replies
1d23h

In crypto we are worried about the best case scenario (from the attacker’s perspective).

That seems mistaken to me. The best case in most crypto, from an attacker's perspective, is "I tried one key and just happened to get it right". Aren't we more interested in expected outcomes?

hinkley
0 replies
1d20h

In the sense of big O you’re right and I misspoke.

When we are deciding to retire an algorithm it’s often down to how many unaddressed weaknesses there are and assuming that they compose. That’s the best case I was referring to.

wbl
0 replies
1d20h

That's completely untrue.

For example, there's an easy randomized algorithm to find a quadratic nonresidue modulo a prime, with success probability 1/2 per Legendre-symbol evaluation. In expectation that's 2 iterations, independent of the prime. The best known deterministic algorithm takes on the order of log(p)^2 such evaluations in the worst case, even assuming the generalized Riemann hypothesis.
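
A sketch of that randomized search, using Euler's criterion in place of a separate Legendre-symbol routine (a simplification of mine: for an odd prime p, a is a nonresidue exactly when a^((p-1)/2) ≡ -1 mod p):

```python
import random

def random_nonresidue(p):
    """Find a quadratic nonresidue modulo an odd prime p. Each draw succeeds
    with probability about 1/2, so the expected number of iterations is
    about 2, regardless of how large p is."""
    while True:
        a = random.randrange(2, p)
        # Euler's criterion: a^((p-1)/2) mod p is 1 for residues and
        # p - 1 (i.e. -1) for nonresidues.
        if pow(a, (p - 1) // 2, p) == p - 1:
            return a
```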

pca006132
0 replies
2d

Yeah, randomization in algorithms and in crypto serve different purposes. But overall, introducing randomness can let you do amazing things that deterministic algorithms/schemes cannot.

sn41
0 replies
1d17h

An important idea is to use what are called worst-case-to-average-case reductions. You convert a boolean function f that is worst-case hard to another function g which is average-case hard. In other words, if g is average-case easy, then f is worst-case easy.

These reductions are a major achievement in complexity theory. Constructions use tools like combinatorial designs, list-decoding of error-correcting codes, expander graphs, extractors, etc. A good overview of these methods is in Arora and Barak's "Computational Complexity", in, I believe, Chapters 19 through 21.

gaws
1 replies
1d17h

Correction: April 10, 2024

The original version of this article said Wigderson attended the University of Haifa. He actually graduated from the Technion, in Haifa, Israel.

How did the reporter mess that up?

chaosite
0 replies
1d15h

Haifa is the only city in Israel to have two public universities (excluding branches of the Open University).

I can see the reporter reading that he attended university in Haifa and misconstruing that to mean the University of Haifa...

sandspar
0 replies
1d21h

That "sitting in chair looking out the window" pose is a Martin Scorsese or Sopranos type pose. The elderly gangster in the old folk's home etc.

srvmshr
10 replies
2d5h

From ACM:

ACM has named Avi Wigderson as recipient of the 2023 ACM A.M. Turing Award for foundational contributions to the theory of computation, including reshaping our understanding of the role of randomness in computation, and for his decades of intellectual leadership in theoretical computer science.

Wigderson is the Herbert H. Maass Professor in the School of Mathematics at the Institute for Advanced Study in Princeton, New Jersey. He has been a leading figure in areas including computational complexity theory, algorithms and optimization, randomness and cryptography, parallel and distributed computation, combinatorics, and graph theory, as well as connections between theoretical computer science and mathematics and science.

Also of interest: he won the Abel Prize in 2021, making this a rather unique combination of top honors in both theoretical/abstract math and CS.

polygamous_bat
4 replies
2d3h

Also of interest: he won the Abel Prize in 2021, making this a rather unique combination of top honors in both theoretical/abstract math and CS.

The overlap between theoretical CS and math is way larger than most people know. For a simple example, check out the theoretical CS course catalog at MIT: https://catalog.mit.edu/subjects/6/ and see how many of the courses are cross-listed as course 18 (math) classes.

vecter
2 replies
2d3h

Theoretical CS is basically a branch of applied math

waldrews
0 replies
2d2h

Perhaps 'an applied branch of math' since the term 'applied math' is claimed by something usually disconnected from the discrete math subjects CS tends to study.

Kranar
0 replies
2d1h

Theoretical CS is not at all applied math. Ordinary computer science overlaps with applied math, but theoretical CS is very abstract.

jchonphoenix
4 replies
2d2h

Technically the Fields Medal is the top honor in mathematics.

But who am I to talk.

srvmshr
1 replies
2d1h

True; for the last decade or so it has been considered one of the two top honors in math. Previously it was the only prize of that stature.

There was a growing need for another award that bridged a few gaps. The Fields Medal, awarded every four years with a 40-year age cutoff, failed to honor several prominent mathematical breakthroughs that came only after decades of painstaking research.

As the field has progressed, monumental breakthroughs are harder to come by early in a career. Much of the ingenuity comes from cross-study of disciplines, e.g. the Riemann hypothesis being approached via algebraic geometry and topology rather than number theory. That requires years of mastery - not just prodigy. Also, the prize money offered by the Abel Foundation is a good incentive for research into pure math.

sn9
0 replies
21h1m

Yes, the Fields is much more similar to the MacArthur genius grants.

The Abel is much more similar to the Nobel, though both the Abel and Fields are Nobel-caliber in prestige.

Kranar
1 replies
2d1h

There's no formal hierarchy, so there's nothing technical about it. The Fields Medal is certainly prestigious, but it's not widely applicable: it has a bunch of restrictions that have nothing to do with the math itself, including an age limit. For example, no one who has ever been awarded an Abel Prize would qualify for a Fields Medal, strictly due to age.

math_dandy
0 replies
2d

Six Abel laureates (out of 22) had previously won the Fields Medal: Serre, Atiyah, Thompson, Milnor, Deligne, and Margulis.

myth_drannon
5 replies
2d

I looked at the book and it's more for graduate or advanced-undergrad students.

Can someone recommend a more basic book on the topic of computation for someone with a rusty comp-sci/math undergrad background?

sn9
0 replies
1d18h

Sipser is the canonical text for undergraduates.

cryptoxchange
0 replies
1d21h

Introduction to the Theory of Computation by Michael Sipser

tinhhuynh_97
0 replies
2d1h

Agreed

teleforce
0 replies
2d1h

Final draft version of the book available here for personal research and education:

https://www.math.ias.edu/avi/book

hinkley
5 replies
2d

Computer scientists have discovered a remarkable connection between randomness and computational difficulty (i.e., identifying natural problems that have no efficient algorithms). Working with colleagues, Wigderson authored a highly influential series of works on trading hardness for randomness. They proved that, under standard and widely believed computational assumptions, every probabilistic polynomial time algorithm can be efficiently derandomized (namely, made fully deterministic). In other words, randomness is not necessary for efficient computation. This sequence of works revolutionized our understanding of the role of randomness in computation, and the way we think about randomness.

How would I go about catching up with this aspect of his research? It’s not often that I’ve never heard of a Turing winner, but this guy is completely off of my radar.

layer8
3 replies
2d

I wonder what the “standard and widely believed computational assumptions” are. Presumably, probabilistic approximations to NP-complete problems are not polynomial-time? Or the derandomized versions would still be just approximations?

reader5000
1 replies
2d

I think his argument assumes the existence of pseudorandom generators that map a small number of "true" random bits to a large number of bits that look random to any polytime observer. The "derandomization" is that we just have to check all possible values of the seed bits; the seed will hopefully be logarithmic in the size of the problem, so exhaustive checking stays polynomial.

polymipisexp
0 replies
1d22h

Nisan and Wigderson prove many different corollaries of their construction in their seminal 'Hardness vs Randomness' paper, but their requirement for general derandomization (P = BPP) is that there is some function f computable in time 2^O(n) such that, for some e > 0, every circuit of size 2^(en) has sufficiently low correlation with f (if I understand correctly).

bo1024
0 replies
1d22h

They are generally esoteric conjectures similar in spirit to P != NP. For example, the third paper uses the assumption "E requires exponential circuits". Here E is the class of problems solvable in exponential time, i.e. 2^O(n) time, on a Turing Machine. Another model of computation besides Turing Machines is Boolean circuits, i.e. AND, OR, and NOT gates. The conjecture is that not every problem in E can be solved by "small" (subexponential-sized) circuits.

The basic idea of the work is that if these problems are hard, then we can use them to build pseudorandom generators that are "just as good" as true random, which we can use to turn truly random algorithms into pseudorandom algorithms with the same performance.
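
To show the shape of that idea (and only the shape), here is a toy, Nisan-Wigderson-style generator in Python. Everything concrete in it is a placeholder: parity stands in for a genuinely hard function (parity is not hard), and the handful of index sets stands in for a real combinatorial design. The point is just that each output bit is the "hard" function applied to a different, mostly non-overlapping slice of the seed.

```python
def nw_style_generator(hard_f, subset_family, seed_bits):
    """Output one bit per designated subset of seed positions: the 'hard'
    function applied to that restriction of the seed. With a real
    combinatorial design and a genuinely hard f, the output is supposed to
    fool all small circuits."""
    return [hard_f([seed_bits[i] for i in subset]) for subset in subset_family]

# Toy instantiation (NOT hard, NOT secure; illustration only):
def parity(bits):                      # placeholder for a hard function
    return sum(bits) % 2

subsets = [(0, 1, 2), (0, 3, 4), (1, 3, 5), (2, 4, 5)]  # pairwise overlap of 1
seed = [1, 0, 1, 1, 0, 1]
print(nw_style_generator(parity, subsets, seed))  # 4 output bits from a 6-bit seed
```

A real construction stretches a short seed into polynomially many pseudorandom bits; this toy obviously doesn't, it only shows where the hard function and the design enter.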

thedatamonger
3 replies
1d22h

From the related article: https://www.quantamagazine.org/avi-wigderson-complexity-theo...

... if a statement can be proved, it also has a zero-knowledge proof.

Mind blown.

Feeding the pseudorandom bits (instead of the random ones) into a probabilistic algorithm will result in an efficient deterministic one for the same problem.

This is nuts. AI is a probabilistic computation ... so what they're saying - if i'm reading this right - is that we can reduce the complexity of our current models by orders of magnitude.

If I'm living in noobspace someone please pull me out.

ilya_m
1 replies
1d22h

AI is a probabilistic computation ... so what they're saying - if i'm reading this right - is that we can reduce the complexity of our current models by orders of magnitude.

Unfortunately, no. First, the result applies to decision problems, not search problems. Second, the resulting deterministic algorithm is much less efficient than the randomized algorithm, although it still belongs to the same complexity class (under some mild assumptions).

mxkopy
0 replies
1d21h

Can’t you build search from decision by deciding on every possible input?

IshKebab
0 replies
1d22h

I don't know exactly what it's saying but it definitely isn't that. AI already uses pseudorandom numbers and is deterministic. (Except some weird AI accelerator chips that use analogue computation to improve efficiency.)

ode
2 replies
2d

Two of Wigderson's major papers mentioned in the announcement are co-authored with Noam Nisan, one of the professors behind the well-known online course 'From Nand to Tetris'.

sn9
0 replies
21h2m

There's a book, too!

It recently came out in a second edition.

YouWhy
0 replies
1d14h

Prof. Nisan is also remarkable for having had first-rate achievements in CS theory and then shifting to make a major impact in the rather distinct field of algorithmic game theory. It's heartwarming both that a single person living today could be so versatile and that the system allowed him that flexibility.

unethical_ban
0 replies
2d3h

Congratulations.

This was a pretty well written summary by ACM.

tu7001
0 replies
1d8h

Anyone have some good Avi YouTube videos?

tialaramex
0 replies
1d6h

That headline phrasing reads weird to me. Obviously people from various allied disciplines might be expected to win the Turing Award, so it's not completely redundant, but it's not far from "Movie wins Best Picture Oscar" or "Sprinter wins Olympic 100m medal".

rramadass
0 replies
1d13h

Any study recommendations (beginner-friendly to advanced) on this topic, i.e. probability/randomness and computation?

Google brings up Probability and Computing: Randomization and Probabilistic Techniques in Algorithms and Data Analysis by Michael Mitzenmacher and Eli Upfal, but I can't find any beginner/introductory books/articles/videos.

Daub
0 replies
1d6h

I love this section of the article (edited for clarity):

"I'm not motivated by application, but I know that we can find uses for fundamental work. Think about Alan Turing. He wrote a mathematical paper in logic in an obscure journal about Entscheidungsproblem. It was not motivated by application."

This is similar to Feynman's plate story, which started off as a casual response to an observation he made in a university canteen, and which ended in a Nobel prize.

To extend my point, it is precisely this kind of curiosity-based enquiry that modern academia discourages.