Wasn't expecting my question to hit top of HN. I guess I'll give some context for why I asked it.
I work in quantum error correction, and was trying to collect interesting and quantitative examples of repetition codes being used implicitly in classical systems. Stuff like DRAM storing a 0 or 1 via the presence or absence of 40K electrons [1], undersea cables sending X photons per bit (don't have that number yet), some number of electrons involved in a transistor switching (haven't even decided what to count for that one yet), etc.
A key reason quantum computing is so hard is that by default repetition makes things worse instead of better, because every repetition is another chance for an unintended measurement. So protecting a qubit tends to require special physical properties, like the energy gap of a superconductor, or complex error correction strategies like surface codes. A surface code can easily use 1000 physical qubits to store 1 logical qubit [2], and I wanted to contrast that with the sizes of implicit repetition codes used in classical computing.
Can you elaborate on this a bit? My intuition is that, by default, statistical models benefit from larger N. But I have no experience in quantum physics.
It's because unintended measurement is a type of error in a quantum computer. Like, if an electron passing near your qubit would get pushed left if your qubit was 0 and right if it was 1, then you will see errors when electrons pass by. Repeating the 0 or 1 a thousand times just means there are 1000x more places where a passing electron would cause a problem. That kind of redundancy makes that kind of error mechanism worse instead of better.
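To make the contrast concrete, here's a back-of-the-envelope sketch of my own (the per-copy error probability is made up, not a real device number): the chance that at least one copy gets "measured" by the environment grows with the number of copies, while the chance that a classical majority vote fails shrinks.

    # Sketch of the two scalings (illustrative probability, not a real device number).
    from math import comb

    p = 0.01  # assumed per-copy error probability
    for n in [1, 3, 7, 101]:
        # Unintended measurement: a single leaking copy is already a problem.
        p_any_leak = 1 - (1 - p) ** n
        # Classical bit flips: majority vote only fails if more than half the copies flip.
        p_majority_fail = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                              for k in range(n // 2 + 1, n + 1))
        print(f"n={n:4d}  P(some copy leaks)={p_any_leak:.3f}  "
              f"P(majority vote fails)={p_majority_fail:.1e}")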
There are ways of repeating quantum information that protect against accidental measurement errors. For example, if your logical 0 is |000> + |110> + |011> + |101> and your logical 1 is |111> + |001> + |100> + |010>, then you can recover from one accidental measurement. And there are more complex states that protect against both bitflip errors and accidental measurements simultaneously. They're just more complicated to describe (and implement!) than "use 0000000 instead of 0 and 1111111 instead of 1".
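Here's a quick numpy check of those states (my own sketch): for either logical state, the reduced state of any single qubit is maximally mixed, so an accidental single-qubit measurement in the computational basis reveals nothing about which logical value was encoded.

    import numpy as np

    def ket(bits):
        """Computational basis state |bits> as a length-8 vector."""
        v = np.zeros(8)
        v[int(bits, 2)] = 1.0
        return v

    logical0 = (ket("000") + ket("110") + ket("011") + ket("101")) / 2
    logical1 = (ket("111") + ket("001") + ket("100") + ket("010")) / 2

    def reduced_qubit0(state):
        """Density matrix of the first qubit, tracing out the other two."""
        psi = state.reshape(2, 4)          # split: qubit 0 vs qubits 1,2
        return psi @ psi.conj().T

    print(reduced_qubit0(logical0))        # -> 0.5 * identity
    print(reduced_qubit0(logical1))        # -> 0.5 * identity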
Is this the correct interpretation?
Classical systems: You measure some state, with the measurement containing some error. Averaging over repeated measurements usually gets you closer to the actual value.
Quantum systems: Your measurement influences (or can influence) the state, which can cause an error in the state itself. Multiple measurements mean more possible influence.
If there's interference, could you do something like use 7 repetitions for each bit and take a majority, say 5 of 7, e.g. 1111100 is 1 and 1100000 is 0?
It actually depends on how that sentence is intended. There do exist quantum repetition codes: the Shor code is the simplest example, using 9 physical qubits per logical qubit. Since the information is quantum, it needs majority voting over two independent bases (hence 3x3 = 9 qubits to encode one logical qubit).
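For reference, here's a small numpy sketch (mine, not from the comment) of the Shor code's logical states, built from three blocks of three qubits each:

    import numpy as np

    zero = np.array([1.0, 0.0])
    one = np.array([0.0, 1.0])

    def kron_all(*vecs):
        out = vecs[0]
        for v in vecs[1:]:
            out = np.kron(out, v)
        return out

    # One block of three qubits: (|000> +/- |111>) / sqrt(2)
    block_plus = (kron_all(zero, zero, zero) + kron_all(one, one, one)) / np.sqrt(2)
    block_minus = (kron_all(zero, zero, zero) - kron_all(one, one, one)) / np.sqrt(2)

    # Shor code: three copies of the +/- block, i.e. 9 qubits (dimension 512).
    shor_logical0 = kron_all(block_plus, block_plus, block_plus)
    shor_logical1 = kron_all(block_minus, block_minus, block_minus)

    print(shor_logical0.shape)                   # (512,)
    print(np.dot(shor_logical0, shor_logical1))  # ~0: the logical states are orthogonal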
You might be making the mistake of thinking that quantum mechanics runs on probabilities, which work in the way you are used to, when in fact it runs on amplitudes, which work quite differently.
Very cool. It’s interesting to realize that at some level, every system is a quantum system if you “zoom in” enough.
I think the point is the model, though - if a system's behavior can be modeled/described classically, it's a bit silly to call it a "quantum" system, in the same way that it's reductive to say biology is just applied particle physics. Sure, but that's not a very useful level of abstraction.
If you want to understand the transition between a fundamental theory and its effective description in some limiting regime, you need to be able to describe a system in the limiting regime using the fundamental theory. It's not "silly" to talk about an atom having a gravitational field even if it's currently unmeasurably small.
If we consider "quantum" to mean our current quantum theory, then gravity, at the level of general relativity, is not a quantum system. And whether the qualifier "yet" applies is also not known.
My spontaneous response would be that you're right, while at the same time I'd have no problem if someone explained to me why that's not so.
Subsea cables don't use repetition codes (they are very much suboptimal), but typically use large-overhead (~20%) LDPC codes, as do satellite comms systems for that matter (the DVB-S2 standard is a good example). Generally, to get anywhere close to Shannon capacity we always need sophisticated coding.
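To put a number on "very much suboptimal", here's an illustrative sketch of mine (the crossover probability is made up): over a binary symmetric channel a length-n repetition code only achieves rate 1/n, while the Shannon capacity at the same flip probability is far higher.

    # Sketch comparing repetition-code rate with BSC capacity (assumed example p).
    from math import comb, log2

    p = 0.05                                              # assumed bit-flip probability
    capacity = 1 + p * log2(p) + (1 - p) * log2(1 - p)    # 1 - H(p)
    print(f"BSC capacity at p={p}: {capacity:.3f} bits per channel use")

    for n in [3, 7, 15]:
        p_fail = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                     for k in range(n // 2 + 1, n + 1))
        print(f"repetition n={n:2d}: rate={1/n:.3f}, decoded error={p_fail:.2e}")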
Regarding the sensitivity of subsea systems: they are still significantly above 1 photon/bit. The highest-sensitivity experiments have been done for optical space comms (look e.g. for the work from MIT Lincoln Laboratory; David Geisler, David Kaplan, and Bryan Robinson are some of the people to look for).
I think you're picturing a different level of the network stack than I had in mind. Yes, above the physical level they will be explicitly using very sophisticated codes. But I think physically it is the case that messages are transmitted using pulses of photons, where a pulse will contain many photons and will lose ~5% of its photons per kilometer when travelling through fiber (which is why amplifiers are needed along the way). In this case the "repetition code" is the number of photons in a pulse.
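Taking that ~5%/km figure at face value, a quick back-of-the-envelope (the starting photon count is an assumed example, not a measured figure) shows how fast a pulse decays and why amplifiers are needed along the way:

    # Sketch: decay of photons per pulse in fiber, assuming ~5% loss per km.
    photons = 1e4          # assumed photons per pulse at the transmitter (made up)
    loss_per_km = 0.05

    for km in [0, 20, 40, 60, 80, 100]:
        remaining = photons * (1 - loss_per_km) ** km
        print(f"{km:3d} km: ~{remaining:,.0f} photons per pulse")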
But we are classical, so I think it's wrong (or at least confusing) to talk about the many photons as repetition codes. Then we might as well start to call all classical phenomena repetition codes. Also, how would you define SNR when doing this?
Repetition codes have a very clearly defined meaning in communication theory, using them to mean something else is very confusing.
All classical phenomena are repetition codes (e.g., https://arxiv.org/abs/0903.5082 ). And this is perfectly compatible with the meaning in communication theory, except that the symbols we're talking about are the states of the fundamental physical degrees of freedom.
In the exact same sense, the von Neumann entropy of a density matrix is the Shannon entropy of its spectrum, and no one says "we shouldn't call that the Shannon entropy because Shannon originally intended to apply it to macroscopic signals on a communication line".
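In code form, that identification is literally just this (my own sketch): diagonalize the density matrix and take the Shannon entropy of its eigenvalues.

    import numpy as np

    def shannon_entropy(probs):
        probs = probs[probs > 1e-12]
        return float(-np.sum(probs * np.log2(probs)))

    def von_neumann_entropy(rho):
        eigenvalues = np.linalg.eigvalsh(rho)   # the spectrum of the density matrix
        return shannon_entropy(eigenvalues)

    # Example: an equal mixture of |0><0| and |+><+|
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    rho = 0.5 * np.diag([1.0, 0.0]) + 0.5 * np.outer(plus, plus)
    print(von_neumann_entropy(rho))   # ~0.60 bits: the Shannon entropy of rho's spectrum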
Yeah, I agree it's unusual to describe "increased brightness" as "bigger distance repetition code". But I think it'll be a useful analogy in context, and I'd of course explain that.
Isn't sending more than one photon always "repetition" in that sense? Classical systems probably don't do that because of the engineering complexity of sending a single photon at a time -- we had oscillators and switches, not single photon emitters.
Yes. But regardless of whether it's feasible to send single quanta in any given circumstance, the redundant nature of the signal is key to understanding its much higher degree of robustness relative to quantum signals.
And to be clear, you can absolutely send a classical signal with individual quanta.
wouldn't you also want to know how many photons are transmitted and how many bits transmitted are received?
All transmitted bits are also received, at least when everything works as intended.
I believe that a classical radio receiver is measuring a coherent state. This is a much lower level notion than people normally think about in QEC since the physical DoF are usually already fixed (and assumed to be a qubit!) in QEC. The closest analogue might be different choices of qubit encodings in a bosonic code.
In general, I'm not sure that the classical information theory toolkit allows us to compare a coherent state with some average occupation number N to, say, M (not necessarily coherent) states with average occupation number N' such that N' * M = N. For example, you could use a state that is definitely not "classical" / a coherent state, or you could use photon-number-resolving measurements.
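One small concrete piece of that comparison (my own sketch, using only the standard fact that a coherent state has Poissonian photon statistics): the probability of a pulse registering zero photons, for one pulse of mean photon number N versus M pulses of mean N/M each. The numbers are illustrative.

    # Sketch: coherent-state photon statistics (Poissonian), one pulse of mean N
    # versus M pulses of mean N/M. All numbers are made-up examples.
    from math import exp

    N = 10.0               # assumed mean photon number for the single-pulse case
    for M in [1, 2, 5, 10]:
        per_pulse_mean = N / M
        p_dark_per_pulse = exp(-per_pulse_mean)      # P(0 photons) for one coherent pulse
        p_all_dark = p_dark_per_pulse ** M           # every one of the M pulses is dark
        print(f"M={M:2d}: mean/pulse={per_pulse_mean:4.1f}, "
              f"P(pulse dark)={p_dark_per_pulse:.2e}, P(all dark)={p_all_dark:.2e}")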
A tangential remark: The classical information theory field uses this notion of "energy per bit" to be able to compare more universally between information transmission schemes. So they would ask something like "How many bits can I transmit with X bandwidth and Y transmission power?"
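The textbook version of that question is the Shannon-Hartley formula, C = B * log2(1 + S/N); here's a tiny sketch with made-up numbers (bandwidth, power, and noise density are assumptions for illustration, not real system figures):

    # Sketch: "how many bits/s with X bandwidth and Y received power?" via Shannon-Hartley.
    from math import log2

    bandwidth_hz = 1e9            # assumed channel bandwidth
    signal_power_w = 1e-6         # assumed received signal power
    noise_psd_w_per_hz = 1e-17    # assumed noise power spectral density

    noise_power_w = noise_psd_w_per_hz * bandwidth_hz
    snr = signal_power_w / noise_power_w
    capacity_bps = bandwidth_hz * log2(1 + snr)
    energy_per_bit_j = signal_power_w / capacity_bps

    print(f"SNR = {snr:.1f}")
    print(f"Capacity = {capacity_bps / 1e9:.2f} Gbit/s")
    print(f"Energy per bit = {energy_per_bit_j:.2e} J")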