Here's a great Quanta article as well:
https://www.quantamagazine.org/avi-wigderson-complexity-theo...
One thing I enjoyed was the variety of poses they had Wigderson in. It just looks so awkward. "Here, sit in this chair and look out the window longingly".
Does anyone have a sketch of how this works? My understanding of complexity classes is that they consider worst case performance. Even if you have a great pseudorandom number generator and a great randomized algorithm, how do you prove that no instance of RNG + seed + problem instance takes exponential time?
BPP is the complexity class for decision problems that a randomized algorithm can solve in polynomial time, in the sense that on every input (worst-case), the algorithm is right with at least 2/3 probability (over its own randomness).
One example is deciding whether a given number is prime. For a long time, we knew randomized algorithms like the Miller-Rabin test that are correct with high probability. That is, we knew PRIMES is in BPP. But we didn't know a deterministic algorithm. In 2002, a deterministic polynomial-time algorithm (AKS) was discovered, proving that PRIMES is in P.
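As a concrete illustration, here's a minimal Python sketch of the Miller-Rabin idea (the function name and parameters are mine, not from the thread; real implementations are more careful):

    import random

    def is_probable_prime(n: int, rounds: int = 20) -> bool:
        # Miller-Rabin: a composite n slips through as "probably prime"
        # with probability at most 4**(-rounds).
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # write n - 1 as d * 2**r with d odd
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # a witnesses that n is composite
        return True  # probably prime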
One of the central open problems in Computer Science is whether BPP=P. That is, for any problem we can solve with randomness (in the above sense), can we solve it without randomness? Among many other contributions, Wigderson's work has made a ton of progress toward this problem.
The general idea is to try to "derandomize" any randomized algorithm that runs in polynomial time and is correct with 2/3 probability. We feed it bits from a pseudorandom generator instead of true random bits. If the PRG is really, really good (it fools the algorithm and its seed is only logarithmically long), then we get a deterministic algorithm by running the original algorithm on the PRG's output for every possible seed and taking a majority vote.
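A toy sketch of that seed-enumeration idea, assuming a hypothetical prg(seed, m) that stretches a short seed into m pseudorandom bits and an algorithm(x, random_bits) interface (all names are mine, purely illustrative):

    def derandomize(algorithm, x, prg, seed_bits, m):
        # Run the randomized algorithm once per PRG seed and take a majority vote.
        # If the PRG fools the algorithm and seed_bits = O(log n), there are only
        # polynomially many seeds, so the whole loop is deterministic polynomial time.
        yes = 0
        total = 2 ** seed_bits
        for seed in range(total):
            random_bits = prg(seed, m)       # pseudorandom string for this run
            if algorithm(x, random_bits):    # deterministic once its coins are fixed
                yes += 1
        return 2 * yes > total               # majority vote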
Now, if we can prove that certain problems are hard, then we can also prove that such really, really good PRGs exist. This is basically because distinguishing the PRG's output from true randomness would itself be a computationally hard problem.
Dumb question: there exist randomized approximation algorithms for Weighted Hitting Set, yet it is NP-hard. Assuming I haven't said anything dumb yet, surely that should settle it: why does this not prove BPP != P?
Perhaps: am I incorrect in assuming that a randomized 2-approximation is somehow transformable into "correctness on the decision problem 2/3 of the time"?
We don't know whether P = NP or not.
I realize that, but can you help me see why that's relevant here? The question as I understood it was BPP ?= P, and I read that as "do all problems that seem hard but have randomized algorithms actually live in P?", for which a counterexample would be an NP-hard problem with a good randomized solution, right?
(Not who you asked, but) one important point here is definitely that BPP is about yes/no decision problems, and approximation algorithms are generally a different beast.
That makes sense! Thanks, I feel like I'm perpetually a beginner when it comes to computational complexity theory.
Yes, that's the issue. For another example, Max-Cut is NP-hard. We can easily get a randomized 2-approximation by picking a cut of the graph uniformly at random (without even looking at the graph structure): each edge is cut with probability 1/2, so in expectation we cut at least half as many edges as the true max cut. But it doesn't look possible to go from this totally random 2-approximation to a correct guess about the size of the true max cut.
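For concreteness, a minimal sketch of that random cut (graph given as an edge list over vertices 0..n-1; names are mine):

    import random

    def random_cut(num_vertices, edges):
        # Put each vertex on side 0 or 1 uniformly at random. Each edge is cut
        # with probability 1/2, so the expected cut size is |E|/2, which is at
        # least half the max cut -- a 2-approximation in expectation, found
        # without looking at the graph structure at all.
        side = [random.randrange(2) for _ in range(num_vertices)]
        cut_size = sum(1 for u, v in edges if side[u] != side[v])
        return side, cut_size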
What's the theoretical significance of the 2/3 threshold?
None; it can actually be any constant strictly greater than 1/2 (even 1/2 + 1/poly(n)), because you can always rerun the algorithm a polynomial number of times and take a majority vote, pushing the error probability toward 0. 2/3 is just convention.
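A sketch of that amplification trick, assuming some run_once(x) that answers correctly with probability a bit above 1/2 (hypothetical name):

    from collections import Counter

    def amplified(run_once, x, repetitions=101):
        # Repeat and take a majority vote. By a Chernoff bound the error
        # probability drops exponentially in the number of repetitions, so any
        # constant success probability above 1/2 (like 2/3) can be boosted
        # arbitrarily close to 1 with polynomially many runs.
        votes = Counter(bool(run_once(x)) for _ in range(repetitions))
        return votes[True] > votes[False]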
Randomization prevents the worst degenerate cases from being repeatable and thus exploitable. (By an attacker or self-own).
Using them is really an applied-CS concern rather than a theoretical-CS one. The problem can still happen; it just no longer happens consistently to the same customer or at the same time every day.
Eliminating the randomized solution was not on my radar so I’ve got some homework to do.
No, there are complexity classes for randomized algorithms. And randomness is also crucial for some cryptographic protocols: typically you want to show that the probability the adversary can do what they want is very slim, e.g. 2^-n for some large n.
In crypto we are worried about the best case scenario (from the attacker’s perspective).
Randomization in algorithms, to avoid worst-case behavior, would be things like shuffling the input, salting/XORing hashes per run or per hash table, or using random tiebreakers.
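For instance, a tiny sketch of the shuffle idea applied to quicksort (my own toy example: a fixed first-element pivot goes quadratic on already-sorted input, but shuffling makes the expected time O(n log n) on every input):

    import random

    def randomized_quicksort(items):
        # Shuffling first means no fixed adversarial input can trigger the
        # O(n^2) case reliably; the worst case still exists, it just can't be
        # aimed at the same customer every time.
        items = list(items)
        random.shuffle(items)

        def qsort(xs):
            if len(xs) <= 1:
                return xs
            pivot, rest = xs[0], xs[1:]
            return (qsort([y for y in rest if y < pivot])
                    + [pivot]
                    + qsort([y for y in rest if y >= pivot]))

        return qsort(items)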
That seems mistaken to me. The best case in most crypto, from an attacker's perspective, is "I tried one key and just happened to get it right". Aren't we more interested in expected outcomes?
In the sense of big O you’re right and I misspoke.
When we're deciding to retire an algorithm, it's often down to how many unaddressed weaknesses there are, assuming that they compose. That's the best case I was referring to.
That's completely untrue.
For example, there's an easy randomized algorithm to find a quadratic nonresidue modulo a prime with success probability 1/2 per Legendre symbol evaluation. In expectation that's 2 iterations, independent of the prime. The best known deterministic algorithm takes on the order of log(p)^2 such evaluations in the worst case, even assuming the generalized Riemann hypothesis.
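A sketch of that randomized search in Python, using Euler's criterion for the Legendre symbol (assumes p is an odd prime; the function name is mine):

    import random

    def random_nonresidue(p):
        # Exactly (p - 1) / 2 of the nonzero residues mod an odd prime p are
        # quadratic nonresidues, so each draw succeeds with probability about 1/2
        # and the expected number of draws is about 2, independent of p.
        while True:
            a = random.randrange(2, p)
            # Euler's criterion: a^((p-1)/2) == p - 1 (i.e. -1 mod p) iff a is a nonresidue
            if pow(a, (p - 1) // 2, p) == p - 1:
                return a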
Yeah, randomization in algorithms and in crypto serve different purposes. But overall, introducing randomness can let you do amazing things that deterministic algorithms/schemes cannot.
An important idea is to use what are called worst-case-to-average-case reductions. You convert a boolean function f that is worst-case hard to another function g which is average-case hard. In other words, if g is average-case easy, then f is worst-case easy.
These reductions are a major achievement in complexity theory. The constructions use tools like combinatorial designs, list-decoding of error-correcting codes, expander graphs, extractors, etc. A good overview of these methods is in Arora and Barak's "Computational Complexity", in, I believe, Chapters 19 through 21.
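As a toy illustration of the flavor (much simpler than the constructions above): if f is linear over GF(2), i.e. f(x) = a·x mod 2, then a function g that merely agrees with f on most inputs can be used to compute f correctly everywhere, via a random self-reduction (names are mine):

    import random

    def query_f_via_g(g, x, n):
        # Toy worst-case-to-average-case flavor for a GF(2)-linear f on n-bit inputs.
        # Linearity gives f(x) = f(x XOR r) XOR f(r). If g agrees with f on, say,
        # 90% of inputs, then since x XOR r and r are each uniformly distributed,
        # a union bound says both queries are answered correctly with probability
        # >= 0.8. Repeating with fresh r and majority-voting drives the error down.
        r = random.getrandbits(n)
        return g(x ^ r) ^ g(r)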
How did the reporter mess that up?
Haifa is the only city in Israel to have two public universities (excluding branches of the Open University).
I can see the reporter reading that he attended university in Haifa and misconstruing that to mean the University of Haifa...
That "sitting in chair looking out the window" pose is a Martin Scorsese or Sopranos type pose. The elderly gangster in the old folk's home etc.