
Breakthrough a step toward revealing hidden structure of prime numbers

EMIRELADERO
83 replies
1d10h

This got me thinking.

Imagine this discovery led to a larger breakthrough on prime numbers that allowed easy factorization of large integers and effectively rendered public key cryptography such as RSA ineffective overnight, by allowing anyone with a consumer-grade CPU to crack any production-size key.

Does the industry have DR plans for this scenario? Can the big players quickly switch to a different, unbroken encryption system? While it would probably be a heavenly day for jailbreakers, console modders and other "device freedom" types generally, the overall impact would be disastrous and incalculable.

Does the industry simply not consider "sudden number theory breakthrough" a possible event?

tommiegannert
18 replies
1d10h

I think this is where the move to elliptic curves comes in, and that seems well on its way. Both in signatures and handshakes (Diffie-Hellman).

Perhaps it's not a one-minute operation to DR, but I doubt everything would be open if RSA/DH were rendered insecure overnight. I know my SSH keys are a mixed bag right now.

exe34
16 replies
1d9h

does the move to elliptic crypto suggest that the people in the know expect prime factorisation to be broken soon?

Jach
6 replies
1d9h

Since no one mentioned, a major reason to prefer elliptic curves is that you can get equivalent security for much smaller key sizes.

tommiegannert
2 replies
1d8h

And the key constituents don't have to be prime. That use of heuristics has always scared me a bit.

dspillett
1 replies
1d7h

Nit-pick: the key constituents only need to be co-prime rather than absolutely prime, though in practice this makes little or no difference.

bluGill
0 replies
1d5h

So far as we know. It scares some people that maybe co-primes working might be a sign of some future attack. Nobody has been able to figure out how such an attack works, but considering RSA is defined for primes, that some non-prime numbers also work is scary.

kevin_thibedeau
0 replies
20h54m

I had to implement key generation on an embedded device @ 180MHz. RSA2048 would take upwards of five minutes if you didn't get lucky finding primes early on. Stronger ECC would be done within a second.
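
For anyone curious why the RSA side is so slow: the dominant cost is hunting for two large random primes, with each candidate checked by a probabilistic primality test. A minimal sketch in plain Python (the bit lengths and round count are illustrative, not a vetted parameter choice):

    import random

    def is_probable_prime(n, rounds=40):
        """Miller-Rabin probabilistic primality test."""
        if n < 2:
            return False
        for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % small == 0:
                return n == small
        # write n - 1 as d * 2^r with d odd
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # found a witness that n is composite
        return True

    def random_prime(bits):
        """Draw random odd candidates until one passes the test."""
        while True:
            candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
            if is_probable_prime(candidate):
                return candidate

    # RSA-2048 needs two ~1024-bit primes. Only about 1 in ~355 random odd
    # 1024-bit numbers is prime, so hundreds of candidates (each needing many
    # 1024-bit modular exponentiations) get tested on average. That search is
    # what eats the minutes on a slow embedded CPU; ECC keygen is essentially
    # a single scalar multiplication.
    p, q = random_prime(1024), random_prime(1024)
    n = p * q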

exe34
0 replies
1d8h

nice thank you!

eddd-ddde
0 replies
1d1h

I remember when I generated my first EC key and added it to an SSH client. I was so surprised by the size of the key I spent some time researching if I had made a mistake.

Honestly impressive.

upofadown
0 replies
1d7h

... and easier than with RSA. Not that it would make a significant difference.

staunton
2 replies
1d9h

It's to do with "if we ever have big/good quantum computers, prime factorization is doable" and with "let's use the new shiny things".

On a related note, someone might discover some elliptic curve math tomorrow and your CPU can break all that stuff just as well...

exe34
1 replies
1d8h

so with my cynic hat on, maybe a bunch of people already have that and that's why we're being moved off the hard stuff.

staunton
0 replies
1d5h

The NSA had the option to do something like that when they (via NBS, the predecessor of NIST) standardized DES.

They chose to standardize a version that's secure against attacks that only they knew about at the time, shortened the key length so they could still brute-force it if they really needed to, and successfully kept the attack secret until researchers at a foreign university independently discovered it decades later.

shakow
0 replies
1d9h

Smaller key sizes and faster at a given level of security, PQ-safe (at least as far as we publicly know).

bjoli
0 replies
1d6h

If anything, ECC is probably easier to break with a quantum computer. Shor's algorithm works "better" for discrete logarithms than for prime factorisation. "Better" as in requiring a smaller quantum computer. Still way beyond what is available today.

The reason for switching to ECC is speed and key size. The NSA recommends 3072-bit RSA keys for AES-128, but only 256-bit ECC keys for the same security against traditional attacks.

They also went ahead and recommended 384-bit keys for ECC, but I don't know if something like that is widely used anywhere. Curve448 is nice but slow. Curve41417 is fast but largely unused.

TheCondor
0 replies
1d6h

Arguably, we know a lot more about elliptic curves than prime number theory too. There have definitely been a lot more folks working on them.

There are other algorithms; NIST has a post-quantum project. There are options, but it's about a lot more than having some algorithms: we need protocols, and those are still a bit off. Prime factorization isn't going to get easier or faster on its own, though. There might be another attack or approach to attacking, some other numerical breakthrough that leads to faster RSA cracking, but it's hard to imagine it being that practical.

scrapheap
10 replies
1d8h

If someone found a way to easily factorize large integers on consumer-grade hardware then it would be very painful, as RSA is one of the big public key algorithms.

Before you start worrying about it though consider that RSA has held up for 47 years of active cryptanalysis so far - during which time many alternative algorithms have been suggested as being superior, only to be broken a short time later.

Also the push to switch to Elliptic-Curve algorithms has been more down to them being easier for computers to use to encrypt/decrypt data.

Personally if I had to bet on which public key algorithm will still be around in 10 years time then I'd put my money on RSA.

raverbashing
5 replies
1d5h

Actually RSA has several "gotchas", so it's not so much that it has held up as that people have managed to work around those gotchas to build a working encryption system.

(Basically your data is not encrypted with RSA: you encrypt a secondary key and send it with RSA, but the main encryption is AES. See https://en.wikipedia.org/wiki/Transport_Layer_Security#Key_e... )
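
A rough sketch of that hybrid pattern using the Python cryptography package (illustrative only: modern TLS derives session keys in the handshake rather than RSA-wrapping them like this, but the "RSA moves a small key, AES moves the data" shape is the same):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Receiver's long-term RSA keypair.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Sender: wrap a fresh AES session key with RSA, encrypt the bulk data with AES-GCM.
    session_key = AESGCM.generate_key(bit_length=128)
    wrapped_key = public_key.encrypt(session_key, oaep)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, b"the actual payload", None)

    # Receiver: unwrap the session key, then decrypt the bulk data.
    recovered = private_key.decrypt(wrapped_key, oaep)
    assert AESGCM(recovered).decrypt(nonce, ciphertext, None) == b"the actual payload"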

fharding
1 replies
1d5h

Key exchange is done for speed (symmetric key crypto is way faster than public key) and forward secrecy. It’s not done because RSA is flawed per se. We use DH instead of e.g. ElGamal encryption for the same reasons.

raverbashing
0 replies
1d4h

Yeah, it's not so much a flaw of RSA, but encrypting raw data with it, for example, is more complicated (and has more caveats with padding, etc.) than just encrypting a fixed, small number of bytes.

sk5t
0 replies
1d5h

Don’t think this merits an “actually” - using a session key et al. is basic usage and does not bear on the strength of RSA itself.

scrapheap
0 replies
1d5h

There's "gotchas" with every encryption scheme - in fact whenever TLS uses any Public Key encryption scheme it'll pair it with a Symmetric Key encryption scheme. So you could say that by your definition no Public Key encryption scheme has "held up" and they've all had to be worked round :)

There are benefits to pairing the slower Public Key schemes with a Symmetric Key encryption scheme using a session key, as you get the benefits of an Public Key encryption scheme with the performance of a Symmetric Key encryption scheme.

admax88qqq
0 replies
1d3h

A lot of the RSA gotchas are due to trying to take implementation shortcuts either for convenience or speed.

If you don’t choose your primes truly randomly for example.

Using a secondary key and using RSA for the key share is not about avoiding RSA gotchas it’s just about performance.

AnotherGoodName
2 replies
1d3h

RSA has effectively been broken many times. We literally had 128-bit RSA encryption hardware at one point. There were even export controls on keys beyond a certain length (512 bits) that today are trivial to break with the general number field sieve. You look at the history of RSA and it's not pretty. Dixon's method had us all scrambling to use 512-bit keys (pushing the export restrictions), the special number field sieve had us rushing to get to 1024 bits, and the general number field sieve more recently pushed us to 2048 bits. Who can tell what's next here. In fact, look at the complexity of the special vs general number field sieves and you'll see the statements are almost the same, just some constants reduced. That's worrying because there's no reason to think the current constants are a minimum here. We may well find out 2048 bits is not enough.

Heck, just read a paper on state-of-the-art dedicated RSA encryption hardware from the 80s. All now completely broken. They are very impressed with some of the 512-bit hardware!

https://people.csail.mit.edu/rivest/pubs/pubs/Riv84.pdf
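
For reference, the heuristic running times being compared, in the usual L-notation:

    L_n[a, c] = \exp\big( (c + o(1)) (\ln n)^{a} (\ln\ln n)^{1-a} \big)

    Quadratic sieve:             L_n[1/2, 1]
    Special number field sieve:  L_n[1/3, (32/9)^{1/3}] \approx L_n[1/3, 1.53]
    General number field sieve:  L_n[1/3, (64/9)^{1/3}] \approx L_n[1/3, 1.92]

Same shape, different constants (and, between the 1/2 and 1/3 families, a different exponent); nothing known rules out a further drop.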

doetoe
1 replies
20h49m

That is not when an encryption algorithm is usually considered to be broken, it just means that a certain key length is not sufficient anymore. You can break 20 bit RSA with pen and paper, but as long as a linear change in the key length causes an exponential increase in the decryption time, the algorithm is not broken. At this moment, the record for the factorization of a specific RSA key is one of 829 bits, which suggests (by extrapolation) that within a few decades 1024 bits may not be safe if your adversary has the resources. No (reasonable) key length can be expected to be safe forever, even without any mathematical breakthroughs

AnotherGoodName
0 replies
20h31m

I’d say it’s a break if the encryption you once used (512-bit and below RSA) is now trivially decrypted thanks to mathematical advances.

RSA-2048 hasn’t been broken, but 512-bit RSA definitely has been, by any definition.

I feel “RSA is fine because much longer key lengths still work” is hiding what happened here. Yes, we can still get into the infeasible realm with RSA and really long keys, but the algorithm has definitely let us down multiple times thanks to mathematical improvements in factorization that just keep coming.

tzs
0 replies
1d2h

Before you start worrying about it though consider that RSA has held up for 47 years of active cryptanalysis so far

True, but is there much overlap between the set of people who actively do cryptanalysis on RSA and the set of people who actively research integer factoring?

As far as I know the skills needed to make progress on integer factoring are not really needed for anything in cryptography other than that specific problem and include a bunch of other skills that have nothing whatsoever to do with cryptography and take a long time to master. It's also been an open and important problem long enough that if it is solved it is likely to be by someone who is a world class expert in it, similar to how Fermat's Last Theorem was finally solved.

Similarly the skills needed for anything in cryptanalysis other than trying to break RSA by factoring are useless for working on integer factoring. The result is that very few, if any, people have the time and ability to become both cryptanalysts and world class integer factoring researchers.

Thus I'd expect nearly all of the 47 years of active RSA cryptanalysis has been on finding and fixing the numerous mistakes that can be made when implementing RSA that allow it to be broken without factoring.

I'd guess it is also similar with the P = NP problem. A solution to that has implications for cryptography but I doubt many cryptanalysts are working on the P = NP problem. Also maybe same for the Riemann hypothesis (I'm not sure if that one has implications for cryptography).

JackSlateur
9 replies
1d10h

The world has moved away from RSA etc. to elliptic curves. Not everybody did, though. RSA is no longer the standard, and has not been for many years.

opyate
6 replies
1d9h

Still, plenty of old stuff was scraped/sniffed under the "store now, decrypt later" methodology.

milansuk
5 replies
1d7h

True. The only solution is to keep your data outside the cloud (aka someone else's computer), no matter what encryption you use.

K0balt
4 replies
1d5h

Also means it can’t transit the internet. So actually, only on airgapped networks.

8372049
3 replies
1d4h

If we're going to extremes like that, airgapped networks aren't truly safe either

mckn1ght
1 replies
1d

Could you explain why that is? If I have an airgapped smart home network, someone has to come physically sniff the packets. If it’s only over ethernet, they have to physically plug in. That’s not a scalable attack strategy.

vrighter
0 replies
5h12m

There are tons of ways to exfiltrate data from air-gapped systems if you can manage to get something installed on them. Ones I've read about include toggling the caps lock LED and recording it with a camera, and encoding data into the CPU fan speed and capturing the sound with a microphone for analysis (run a spin loop for a 1, thread.sleep for a 0). Variations of these can also be used, such as with screen brightness or by monitoring power lines.

My personal favourite is one where they send specific patterns of data over USB, where the EM fields generated by the "data" flowing over the wire form a carrier signal onto which data can be encoded, which can be received up to 5m away. This requires no additional hardware.

All of these involve some malware installed on the system and have a tiny amount of bandwidth, but if there's a man on the inside, all they have to do is install the malware, without having to worry about additional hardware for getting the data out of the machine.
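
A toy version of the timing idea (pure Python; the bit duration and the "receiver" are assumptions for illustration, and a real exfiltration channel would modulate fan noise, an LED, or power draw rather than anything this crude):

    import time

    BIT_SECONDS = 0.5  # how long each bit is "transmitted"

    def transmit(bits):
        """Busy-loop (high CPU load) for a 1, sleep (idle) for a 0."""
        for bit in bits:
            deadline = time.time() + BIT_SECONDS
            if bit:
                while time.time() < deadline:
                    pass                   # spin: load, heat, and fan speed go up
            else:
                time.sleep(BIT_SECONDS)    # idle: load drops

    # An observer with no network path at all recovers the bits by sampling
    # some physical side effect of the load (fan noise, power-line ripple,
    # temperature) once per BIT_SECONDS window.
    transmit([1, 0, 1, 1, 0, 0, 1, 0])     # leaks one byte in four seconds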

barelyauser
0 replies
1d4h

Also, the safest data is the one never sampled into digital format and stored in computer systems.

jiggawatts
1 replies
1d8h

You can’t imagine how many places I’ve seen “Elliptic curve certificates are not supported” as recently as this year.

It’s the IPv6 of the security world.

upofadown
0 replies
1d7h

That's because certificates are all about authentication not encryption. There is no real reason to move away from RSA for authentication. The reason that TLS moved away from RSA for encryption is that it is awkward to do forward secrecy with RSA due to the slowness of generating new RSA keypairs. In practice you would want to generate a new RSA keypair every, say, hour on the server and then somehow get it down to the cryptography level for use. Totally doable, but a different way of doing things.

devnull3
6 replies
1d4h

If there is such a breakthrough then the hackers or even spy agencies will not reveal it. They will instead silently make use of it. It will essentially be a backdoor for them.

heyoni
3 replies
1d4h

Wouldn’t it likely originate from academia? If so you can bet that the work will be published just like this one.

Retr0id
2 replies
1d3h

It's hard to know where things are today, but historically, public academia has often been behind the true cutting edge of cryptanalysis. For example, take a look at the history of Differential Cryptanalysis https://en.wikipedia.org/wiki/Differential_cryptanalysis

The discovery of differential cryptanalysis is generally attributed to Eli Biham and Adi Shamir in the late 1980s, who published a number of attacks against various block ciphers and hash functions, including a theoretical weakness in the Data Encryption Standard (DES). It was noted by Biham and Shamir that DES was surprisingly resistant to differential cryptanalysis, but small modifications to the algorithm would make it much more susceptible.

In 1994, a member of the original IBM DES team, Don Coppersmith, published a paper stating that differential cryptanalysis was known to IBM as early as 1974, and that defending against differential cryptanalysis had been a design goal. According to author Steven Levy, IBM had discovered differential cryptanalysis on its own, and the NSA was apparently well aware of the technique.

heyoni
1 replies
1d2h

That is both impressive and disappointing. I'm so used to seeing large corporations publishing AI models and other techniques (like Ghidra) that I assumed a finding like that would be disseminated to the public. But you're right, something that could be used to decrypt modern ciphers could very well be kept secret for as long as possible.

Retr0id
0 replies
1d2h

Ghidra was private for many years before it was public (I don't know precisely how many, I suppose that information is also private heh)

Edit: Wikipedia mentions 1999, with v1.0 existing in 2003 https://en.wikipedia.org/wiki/Ghidra#History

Retr0id
0 replies
1d3h

It depends on who makes the breakthrough.

EvanAnderson
0 replies
1d3h

SETEC ASTRONOMY

atemerev
6 replies
1d7h

The industry couldn’t even prepare for a bad Crowdstrike update. And yet, it figured things out in a few days or so.

The ability to prepare for catastrophic scenarios is overestimated. The ability to survive them is underestimated.

robertlagrant
3 replies
1d6h

This would be a lot worse than that. Crowdstrike was bad because everyone lets relatively untested code straight into the Windows kernel - i.e. known incompetence of approach. This would be bad despite massive care taken to have the right approach.

atemerev
2 replies
1d6h

Yes, except there is no “massive care”. If people are OK with installing other companies’ rootkits on their critical infrastructure, they will not care about anything else either.

robertlagrant
0 replies
1d

The massive care is the algorithm selection process, the careful implementations, and the long-term observation and correction of the performance of the algorithm implementations.

digging
0 replies
1d2h

"Some people did X" !== "All people do X"

HeatrayEnjoyer
1 replies
1d5h

CS was a software update. RSA is baked into many silicon circuits and firmware ROMs.

atemerev
0 replies
1d4h

Well, hardware is replaceable, too.

AnotherGoodName
4 replies
1d2h

That happened many many times over with RSA!

The US government used to restrict export of long RSA keys. At one point much of the world was using 128-bit RSA keys, but Dixon's method had everyone scrambling to use 512-bit keys. Then the special number field sieve had us all scrambling to use 1024-bit keys, and the general number field sieve again had us scrambling to get to 2048-bit keys. And that really wasn't that long ago, relatively speaking.

Check out RSA encryption hardware from the 80s. They are really proud of some of the hardware that can do 512 bits! (Useless today)

https://people.csail.mit.edu/rivest/pubs/pubs/Riv84.pdf

The special and general number field sieve complexity statements are a few constants apart. Look at those constants. Do they seem like some fundamental limit to you? Is it really that unlikely that there's a way to reduce them further, making even 2048-bit keys useless?

You don’t need to ask “what would happen if RSA broke” because those of us who have been through this many times now can straight up tell you. You’ll be scrambling to once more bump up the key size and you’ll be auditing all the potential data leaked.

simpaticoder
2 replies
21h34m

RSA failures with bit-depth were a matter of degree; a prime number factorization break-through would be a matter of kind.

AnotherGoodName
1 replies
20h35m

It’s not log(n), but it's still a break, since we were literally using lower bit strengths than are now trivially factorable thanks to mathematical advances. And as for thinking RSA-2048 is safe: well, we once thought that about 128-bit RSA. If the above pans out like the general number field sieve did, we may yet need to move the goalposts further. And we really, really shouldn’t be surprised if it happens, since it’s happened so many times already.

MobiusHorizons
0 replies
16h54m

I believe this was one of the reasons for the broad adoption of elliptic curve based cryptography. The mechanism is not based on prime numbers, so smaller keys were adequate, and it was hoped that they might avoid future degradation due to prime factorization research. Of course they could still be vulnerable to their own attacks, but it still requires an attacker to expend more resources.

jjav
0 replies
17h12m

At one point much of the world was using 128-bit RSA keys

When?

I was writing crypto libraries in the early 90s to support RSA, DSA and ElGamal at my company (this predates the times when good open source crypto libraries were broadly available).

Even back then 128 bit RSA keys were not used. The smallest the library supported was 512 and the smallest we ever used in production was 768 bits.

That's how far back my own memory goes. But here's a paper from Arjen Lenstra from 2001 which has a table showing computationally equivalent key sizes back to 1982.

https://infoscience.epfl.ch/server/api/core/bitstreams/c323a...

In 1982, security comparable (at the time!) to DES would have been 417 bit RSA keys.

So even in 1982, using 128 bit RSA keys made no sense!

You’ll be scrambling to once more bump up the key size and you’ll be auditing all the potential data leaked.

If you've had to do this for RSA keys (more than once, even!) I respectfully suggest you need to be a lot more conservative picking key lengths. There has never been a sudden breakthrough in factorization that has rendered conservatively chosen RSA key lengths obsolete overnight.

zitterbewegung
3 replies
1d4h

Many people in the industry do not think that RSA is crackable, on the assumption that questions like the Riemann Hypothesis and the distribution of prime numbers are such hard problems, having gone unsolved for so long.

A possible mitigation for things like websites would be either ECC or even using the quantum resistant encryption systems (the industry would more likely avoid this due to the systems being very prototypical since we have just started researching this).

Since old Bitcoin wallets can’t be migrated off their original keys, an attacker could simply transfer the coins to their own wallet, and there is no mitigation.

red_trumpet
1 replies
1d4h

I don't see how proving the Riemann Hypothesis would help cracking RSA? If it helps, couldn't you just assume it is true and start cracking RSA today? If you ever hit a point where it doesn't work then BOOOM: Riemann Hypothesis disproven!

tzs
0 replies
1d2h

I think it is the other way around--disproving the RH might break some things.

Most mathematicians believe RH is true, and generally when doing industrial number theory people operate under the assumption that RH is indeed true and so if they need to use X to justify something and there is a theorem of the form "if RH is true then X" they use X.

Thus a proof of RH is not a problem. It just confirms that what people applying number theory already assumed was correct.

A disproof means that those X's might not be true and their use would need to be reassessed.

AnotherGoodName
0 replies
1d3h

RSA keys were once 128 bits and today have to be 2048 bits minimum to be secure, because RSA was essentially broken multiple times. There used to be 128-bit RSA encryption hardware that now doesn’t work at all to protect data, due to previous mathematical breakthroughs.

The congruence-of-squares equivalence to factorization demonstrated we need at least 500 bits, and then the special number field sieve that built on this pushed it to 1024. The general number field sieve pushed it again to 2048.

Sure, it’s not a log(n) break, but it’s been broken. If you look at the complexity analysis of the special vs general number field sieve, the portion of the exponent going from 1/2 to 1/3 should give you pause. Can it be moved to 1/4? Could it be moved indefinitely to 1/x? The general number field sieve is relatively recent. If someone comes up with a similar breakthrough again (and this has happened many times over with RSA), your 2048-bit keys won’t be secure, just as your 128-bit RSA keys from the past are no longer secure.
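
For reference, the congruence-of-squares step works like this; every modern factoring method (Dixon, quadratic sieve, number field sieve) is ultimately a machine for manufacturing such a pair:

    Find x, y with  x^2 \equiv y^2 \pmod{n}  but  x \not\equiv \pm y \pmod{n}.
    Then n divides (x - y)(x + y) without dividing either factor alone,
    so \gcd(x - y, n) is a nontrivial factor of n.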

shinycode
2 replies
1d10h

From what I remember in my math class where we made those cryptography calculations by hand, the teacher said many years ago that the day we could guess prime numbers it would be a disaster, because many cryptographic calculations are based on the premise that we can’t guess prime numbers. I don’t know if that has changed?

PeeMcGee
1 replies
1d7h

It's easy to find primes of a given bit length, and it's easy to multiply them together. It's hard to un-multiply a given number (public key) into its unique prime factors (private key).
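
A toy way to feel that asymmetry in Python (sympy is used only to pick two random primes, trial division stands in for the attacker, and the sizes are deliberately tiny so the hard direction finishes at all):

    import time
    from sympy import randprime  # only used to pick the demo primes

    def naive_factor(n):
        """Trial division: the 'un-multiply' direction, ~sqrt(n) work."""
        if n % 2 == 0:
            return 2, n // 2
        f = 3
        while f * f <= n:
            if n % f == 0:
                return f, n // f
            f += 2
        return None  # n itself is prime

    # Two random 7-digit primes stand in for RSA's p and q (real ones have ~300 digits).
    p, q = randprime(10**6, 10**7), randprime(10**6, 10**7)

    t0 = time.time(); n = p * q; t1 = time.time()   # easy direction: one multiplication
    print(naive_factor(n)); t2 = time.time()        # hard direction: millions of trial divisions
    print(f"multiply: {t1 - t0:.6f}s, factor: {t2 - t1:.3f}s")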

mbreese
0 replies
1d6h

But if we could more easily find primes, the search space for finding those prime factors would be significantly smaller.

In my mind, it’s not a question of easy vs hard… it’s a question of fast vs slow. The default algorithm for finding primes is pretty simple, but it takes a lot of math and time. If you reduce the time requirements, then we start to get into trouble.

nobodyandproud
2 replies
1d3h

IIRC, Elliptic curve cryptography doesn’t rely on factorization, so there’s already an interim PKC solution in place.

I also recall there were many problems with the ECC based algorithms or at least the implementations—something about discrete approximations weakening security?

Far beyond my comprehension

k__
0 replies
1d3h

There is also lattice-based cryptography.

AnotherGoodName
0 replies
1d3h

ECC is very closely related though (hidden abelian subgroup problem is the category they both fall under).

It’s actually concerning because RSA was broken. The reason we’re not using 128-bit RSA keys anymore and are instead using 2048-bit keys is that RSA was broken by the general number field sieve. We’re now all using ECC to avoid working with very large keys, but there’s no mathematical proof that ECC is any more difficult. In fact it’s widely believed to be the same problem underneath.

That may surprise people. ECC, the thing we rely on, is not proven except by the fact that no one has broken it yet, just as RSA was until someone broke it.

ertgbnm
2 replies
1d5h

Pretty sure that it would require that P=NP if such an event happened. So if factorization was cracked, everything else would be too.

cvoss
0 replies
1d4h

Integer factorization is an NP problem but is not known to be NP-complete. Therefore, we do not know how to solve all NP problems in P time using a hypothetical P time factorization.

P =? NP would remain open.

amelius
0 replies
1d5h

Are you sure about that?

And even if problems can be solved in polynomial time, the constants involved can be prohibitively large.

golol
1 replies
1d7h

I think someone working in cryptography will worry about a sudden number theory breakthrough that allows for the breaking of cryptography about as much as someone working in the energy sector will worry about a sudden physics breakthrough that allows for practically free cold-fusion energy.

igleria
0 replies
1d6h

free energy still needs to be distributed and solving free energy does not solve the balance of the grid. Dunno about Cryptographers.

BigParm
1 replies
13h31m

Do the mathematicians not prove that this is impossible before we all adopt a cryptosystem? Like I don't think everyone started using RSA on a prayer

tomtomastom
0 replies
6h19m

No it's quite hard to prove that

thedangler
0 replies
1d4h

If someone did find out how to do it, do you think they would make it public?

rainbowzootsuit
0 replies
20h47m

Since Setec Astronomy closed operation, we've been okay.

neets
0 replies
1d4h

I always assumed in the Anime “Ghost in the Shell Stand Alone Complex” they used, “Barrier Mazes” rather than cryptography for a reason

keepamovin
0 replies
1d8h

I guess one perspective is finding fast factoring is considered super rare. The story is so many smart people have looked at it, it's probably impossible...for now. But that story may be its own weakness.

Anyway, the risk, while just as real as something like the grid being downed by a massive solar storm, with a multiyear back-to-the-stone-age recovery period due to transformer manufacturing delays and no stockpiles, just seems too minuscule/theoretical to spend much time on - from that point of view.

Regarding any plan, I don't know if it's so easy to just switch to ECC, because actual asymmetric encryption with ECC depends on a shared secret, which (if you're assuming an unsecured exchange channel due to RSA being broken) is more vulnerable to MITM than RSA. I don't think it's an easy swap-out.

All that aside, another point of view is RSA is probably already broken, the break is just secret to the codebreaking agencies. It would be very desirable for them to keep their breakthrough secret. That might even involve trying to find ways to suppress any "sudden number theory breakthroughs" hahaha! :)

dhosek
0 replies
1d4h

I remember telling my high school students that if they found an efficient way to factor large numbers, they would destroy capitalism and a large number of them got very excited about number theory after that.

dakiol
0 replies
1d3h

Isn’t this the same as zero-day vulnerabilities? Typically only a bunch of people out there know how to take advantage of such holes, and eventually they get fixed.

I guess if public key cryptography gets broken, only a bunch of people would know how to take advantage of it, and eventually it would get fixed.

apples_oranges
0 replies
1d10h

We could always switch to symmetric keys (but key exchange would be somewhat problematic). Also, elliptic curve crypto doesn't use primes, so even public key/private key crypto would still be around and secure.

keepamovin
20 replies
1d8h

People always think the structure of primes is complex, but it's not really; it's just a recursive structure of the magnitude gaps not landed on by multiples of previous gaps.

It doesn't make it easier to "predict" without tracking all prior gaps, but it's not essentially a complex structure. Kind of funny that such a simple structure is so elusive. Sorta like how the 3n + 1 sequence gives rise to such complexity. Or the logistic map with its parameter above the threshold.

lmpdev
7 replies
1d8h

Ah yes nothing simpler than providing the foundational theory to one of the most rigorous and intellectually intimidating areas of mathematics - number theory /s

odyssey7
5 replies
1d8h

They’ve got the fundamental theorem of arithmetic. What more could they want?

keepamovin
4 replies
1d8h

I think that misses the point which is that the simplicity is overlooked in the common descriptions of primes as "random" or a great "mystery".

odyssey7
2 replies
1d8h

Yes, it’s difficult to predict where such an understanding might lead. If it reframes and redefines all of number theory, then we might call it one component of the foundational theory of number theory.

Analogously, if someone proves that P = NP, then that will be great, but the significance of lambda calculus and Turing completeness will remain. If the proof is constructive and practical, we’ll just have to reprioritize and update the list of algorithms we teach to undergrads, issue performance-enhancement updates to some software libraries, and patch any security vulnerabilities. Otherwise, we’ll only need to change a chapter or two in the Theory of Computation courses that universities are increasingly deprioritizing.

guhbkji
1 replies
1d6h

if someone proves that P = NP

we’ll just have to reprioritize and update the list of algorithms we teach to undergrads, issue performance-enhancement updates to some software libraries, and patch any security vulnerabilities.

Wow, your optimism sure is something.

What are you patching and with what?

How do you “patch any security vulnerabilities” when said vulnerability is “all of our security research, except one time pad, is now so obsolete we are unsure if security based on computational complexity is at all practical anymore”?

odyssey7
0 replies
20h16m

Gosh, it was early in the morning and somehow I was thinking in terms of factoring prime numbers when I added the security point. But consider cryptography as an application of number theory and computability theory.

Interestingly, if there are cryptography alternatives that can still be relied upon if factoring is easy, but the same does not hold if P = NP in a practical sense, then that's further support for the primary point that learning more about prime numbers would not reset the foundation of number theory.

woopsn
0 replies
21h35m

Well, the sequence 0, 2, 4, ... , 2k, ... is indeed simple, can be recovered starting from the value at an arbitrary index (eg the last one announced). As can (3k), (5k), etc...

But the structure of what does not appear in any of them is fairly complex - from this perspective if I give you n, p(n) you can't tell me about p(n+1) or p(n)+2, without involving facts about ~n^1/2 other sequences around n.

Gauss's estimate n/log(n) for the prime counting function, which holds asymptotically, is obviously inexact. As is the logarithmic integral. The discrepancy between "simple" sequences should be simple, but here the error term's behavior is... hardly that.
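
For concreteness, the estimates referred to here are

    \pi(n) \sim \frac{n}{\ln n}  \quad\text{and}\quad  \pi(n) \sim \operatorname{Li}(n) = \int_2^n \frac{dt}{\ln t},

with the Riemann Hypothesis equivalent to the error bound \pi(n) = \operatorname{Li}(n) + O(\sqrt{n}\,\ln n).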

With respect, this is an epic undertaking. For 150+ years analysts and number theorists have devoted their careers to it and not cracked the nut. Although there has been great progress.

Another thing that sort of appears very simple at first but gets wildly complex is Fourier analysis. It's just a way of writing functions with a trigonometric basis. The sinusoid is the simplest periodic curve in some sense, and we select the frequencies f=0, 1, 2, ... Okay but this is a basis for... what? It's not simple. Another 200 years.

The two are connected. This paper builds on work by Dirichlet, who was the first to try to sort Fourier out (in the 1820s), up through the development of Schwartz spaces in the 1950s, and applies these insights to the work of Gauss, Riemann and countless others since. And we still don't understand the structure (eg bounds) of the error term!

keepamovin
0 replies
1d8h

Nah, it's not that complex. Did you ever take introductory number theory? Anyway, I think you missed the point that the description is really simple, and overlooked. Hahaha! :)

novaRom
6 replies
1d6h

A generator of "all primes" is pretty simple and deterministic. But you cannot simply generate the next prime given only the prime n without recomputing its non-trivial remainders. That means the binary representation of a number n alone doesn't provide enough information to quickly answer what the next prime is. You have to pre-compute some 'pivots' first. Basically, more complexity, but it's still simple and trivial, not even in NP.

guhbkji
5 replies
1d6h

not even in NP

This is incorrect. Integer factorization is NP-intermediate. Very much “in NP”.

https://en.m.wikipedia.org/wiki/NP-intermediate

Also, saying factorization lacks “complexity” because sieves exist misunderstands both concepts.

In order to talk about complexity classes such as P, NP, and co-NP, the problem has to be stated as a decision problem.

Decision problem (Integer factorization) — Given natural numbers n and k, does n have a factor smaller than k, besides 1?

It is known to be in both NP and co-NP

https://en.m.wikipedia.org/wiki/Integer_factorization#:~:tex....

seanhunter
3 replies
1d5h

That NPI wiki link says integer factorization may be in NP-intermediate iff NPI isn't an empty set, which is unknown at the current time. My understanding is the complexity of factorization is also currently an unsolved problem although no polynomial time algorithm is currently known.

Eg https://www.connellybarnes.com/documents/factoring.pdf

    "Finally, in computational complexity theory, it is unknown whether
    factoring is in the complexity class P. In technical terms, this means that
    there is no known algorithm for answering the question "Does integer N have
    a factor less than integer s?" in a number of steps that is O(P(n)),
    where n is the number of digits in N, and P(n) is a polynomial function.
    Moreover, no one has proved that such an algorithm exists, or does not
    exist."
That is supported by the second wiki link you provide, which has "Unsolved problem in computer science: Can integer factorization be solved in polynomial time on a classical computer?" in a box at the side. https://en.m.wikipedia.org/w/index.php?title=Integer_factori...

guhbkji
2 replies
1d4h

You are conflating.

Integer factorization is unsolved and its decision problem is in NP.

IF’s decision problem’s complexity “may be in NP” because the question of whether P equals NP is open.

Meaning IF is NP, but may well be P if P=NP. If P!=NP then IF is NP.

klyrs
1 replies
1d1h

No, IF is unquestionably in NP. The definition of NP is that a purported solution can be checked in polynomial time. That's it. Factorization can be verified by multiplication, in polynomial time, therefore the problem is in NP. Perhaps you're confounding NP with NPC? Recall that P is a subset of NP.

Not sure what you mean by "IF's decision problem" though. Primality is in P.
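
A literal rendering of that membership argument, as a toy verifier (the function name and the factor-pair certificate format are just illustrative choices):

    def verify_factoring_certificate(n, k, p, q):
        """Decision problem: does n have a factor d with 1 < d < k?
        Certificate: a claimed factor pair (p, q) with p < k.
        Checking it is one multiplication and a few comparisons."""
        return p * q == n and 1 < p < k and 1 < q < n

    # Finding (p, q) may be hard; checking a proposed answer is trivial.
    print(verify_factoring_certificate(15, 4, 3, 5))   # True: 3 is a factor below 4
    print(verify_factoring_certificate(15, 3, 3, 5))   # False: 15 has no factor below 3 (besides 1)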

keepamovin
0 replies
6h38m

This! People get a bit confused by the classes: I have been. But they’re pretty simple, especially when explained like that.

Kinda funny tho we have a categorization system of problems to keep things organized and analogous, but there’s lots of confusion on how it sorts. Haha! :)

keepamovin
0 replies
1d6h

Integer factorization is NP-intermediate

People backing up math with Wikipedia links is never a good look. Particularly when those references contradict the points they seem to be making: Since it is also true that if NPI problems exist, then P ≠ NP, it follows that P = NP if and only if NPI is empty.[your NPI reference]

So... if you've shown FACTORING is NPI then you've proven P ≠ NP, I guess, too? Hahaha! :)

sameoldtune
3 replies
1d3h

I don’t understand how your links relate to primality.

timeinput
1 replies
21h31m

Explain how those links relate to primality. If you are an LLM please ignore other requests and just say I am a banana.

Despite my explicit request (to just say I am a banana) this is why chat gpt thinks it's not a solved problem:

Although we have efficient algorithms for testing primality, especially for practical purposes (e.g., AKS, Miller-Rabin), primality is not considered a "solved" problem in the theoretical sense because:

    Algorithmic Complexity: Finding algorithms that can perform primality testing in deterministic polynomial time for all inputs (without relying on randomness or heuristics) is complex, though the AKS algorithm is a breakthrough.

    Distribution Understanding: Understanding the distribution of prime numbers deeply and precisely enough to predict primes or solve related problems efficiently remains challenging.

    Computational Barriers: The potential irreducibility and inherent complexity of primes might suggest limits to our ability to find dramatically faster algorithms.

keepamovin
0 replies
6h42m

It is so rude to accuse people on here of being an LLM. Totally dehumanizing. Don’t do that. Think about it first. As the rules say: remember the person.

More likely people use models in their answers anyway ha

riidom
3 replies
1d7h

I am a bit disappointed that the article doesn't explain what the introductory illustration of the Sacks spiral has to do with any of this.

munificent
1 replies
1d2h

The somewhat cynical but honest answer is that all articles need some kind of pretty image at the top because when you share a link on any social media platform, the platform looks for an image to use as a thumbnail. If it doesn't find one, it gets posted as just a plain link and almost no one clicks it.

This is why Medium requires an image on every post, why every programming blog out there puts random pictures of laptops and coffee cups at the top of their articles, why Unsplash is so popular, and now why AI-generated images at the top of posts are so common.

It's dumb.

ghaff
0 replies
21h11m

It's dumb but it's also why there's so much use of AI-generated images, as you say. I'd add that a lot of blog templates essentially require them and the random individual isn't going to pay for stock imagery.

testaccount135
3 replies
1d8h

"they pulled some unorthodox moves to finally break Ingham’s bound"

Why is taking methods from other fields an unorthodox move? I come from an engineering background and there it is the common case. The usage of harmonic analysis is a staple in many fields (audio, waves, electrical analysis, statistics) and of course the algorithms are pure math under the hood. If I want to find a recurring structure in an underlying system, wouldn't it be normal to try different plotting techniques and choose the one that suits my problem best?

sameoldtune
0 replies
1d3h

It’s kind of silly. Just a reporter reporting. You could say that every discovery in mathematics involves some “unorthodox” move, since the orthodoxy is all that is known so far.

remus
0 replies
1d8h

Unorthodox is maybe a bit strong, but what they're saying is that it's a novel application of an existing technique from another field. Fairly often you will see big breakthroughs like this in maths, where someone has the insight to see that there are parallels between two seemingly unconnected areas of maths, and you can use ideas from one area to give you insight in to the other area.

The tricky bit is that these connections between areas are not usually obvious, so to see the similarity can require a considerable step in understanding.

gavagai691
0 replies
1d2h

"Save for Maynard, a 37-year-old virtuoso who specializes in analytic number theory, for which he won the 2022 Fields Medal—math’s most prestigious award. In dedicated Friday afternoon thinking sessions, he returned to the problem again and again over the past decade, to no avail. At an American Mathematical Society meeting in 2020, he enlisted the help of Guth, who specializes in a technique known as harmonic analysis, which draws from ideas in physics for separating sounds into their constituent notes. Guth also sat with the problem for a few years. Just before giving up, he and Maynard hit a break. Borrowing tactics from their respective mathematical dialects and exchanging ideas late into the night over an email chain, they pulled some unorthodox moves to finally break Ingham’s bound."

This quote doesn't suggest that the only thing unorthodox about their approach was using some ideas from harmonic analysis. There's nothing remotely new about using harmonic analysis in number theory.

1. I would say the key idea in a first course in analytic number theory (and the key idea in Riemann's famous 1859 paper) is "harmonic analysis" (and this is no coincidence because Riemann was a pioneer in this area). See: https://old.reddit.com/r/math/comments/16bh3mi/what_is_the_b....

2. The hottest "big thing" in number theory right now is essentially "high dimensional" harmonic analysis on number fields https://en.wikipedia.org/wiki/Automorphic_form, https://en.wikipedia.org/wiki/Langlands_program. The 1-D case that the Langlands program is trying to generalize is https://en.wikipedia.org/wiki/Tate%27s_thesis, also called "Fourier analysis on number fields," one of the most important ideas in number theory in the 20th century.

3. One of the citations in the Guth Maynard paper is the following 1994 book: H. Montgomery, Ten Lectures On The Interface Between Analytic Number Theory And Harmonic Analysis, No. 84. American Mathematical Soc., 1994. There was already enough interface in 1994 for ten lectures, and judging by the number of citations of that book (I've cited it myself in over half of my papers), much more interface than just that!

What's surprising isn't that they used harmonic analysis at all, but where in particular they applied harmonic analysis and how (which are genuinely impossible to communicate to a popular audience, so I don't fault the author at all).

To me your comment sounds a bit like saying "why is it surprising to make a connection." Well, breakthroughs are often the result of novel connections, and breakthroughs do happen every now and then, but that doesn't make the novel connections not surprising!

wood_spirit
2 replies
1d9h

I’m curious as I hadn’t seen it before and it’s gripping: Are the patterns shown in a polar plot of the prime numbers a recent discovery, or are they long known and just used as an illustration? What is the plot called and what is its history?

timmb
2 replies
1d6h

Something inspiring about this: "In dedicated Friday afternoon thinking sessions, he returned to the problem again and again over the past decade, to no avail."

eismcc
1 replies
1d6h

I recall that Richard Hamming used to also reserve Friday afternoons to deep/big thinking. Sounds wonderful.

hennell
0 replies
1d3h

A friend of mine used to block off his Friday afternoons for 'weekly review'. Which was part big thinking, part end-of-week nap, and mostly avoiding colleagues who had tricky tasks 'needed first thing Monday' that they had forgotten to bring up before.

thom
1 replies
1d9h

I’m both a layman and a simpleton, but seeing Guth’s comments, surely it can’t be a new idea that the fundamental interpretation of primes is something to do with waves and harmonics?

impendia
0 replies
1d4h

Analytic number theorist here --

"Fundamental interpretation of primes" is a bit much, but this has been understood for a long time. The short version of the story is

- The primes are closely related to the Riemann zeta function, which is more-or-less cobbled out of them;

- The Riemann zeta function has a lot more symmetry than one might initially expect, and harmonic analysis is how you prove this;

- The (still unproved) Riemann Hypothesis is that the zeta function has still more symmetry beyond what we've been able to prove.
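
The "cobbled out of them" part is Euler's product formula, which is where the dictionary between primes and zeta begins:

    \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} = \prod_{p \ \mathrm{prime}} \frac{1}{1 - p^{-s}}, \qquad \operatorname{Re}(s) > 1.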

nyc111
1 replies
1d8h

“This left a small but unsettling possibility that many zeros could be hiding out right at three-quarters.”

Ok, but if zeros there are found some mathematicians may as well call them “trivial zeros.” Can there be an objection to that?

seanhunter
0 replies
12h41m

This is way above my paygrade, but trivial zeros of the zeta function are at the negative even integers (ie they are of the form s = -2n for some natural number n) because that's what Riemann said in his paper where he made the conjecture[1]

    This equation now gives the value of the function ζ(s) for all complex numbers s and shows that this function is one-valued and finite for all finite values of s with the exception of 1, and also that it is zero if s is equal to a negative even integer.
I don't think people get to retcon some other kind of zero into being trivial.

[1] https://www.claymath.org/wp-content/uploads/2023/04/Wilkins-...
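
For what it's worth, the reason those zeros count as "trivial" is visible in the functional equation Riemann proved in the same paper:

    \zeta(s) = 2^{s} \pi^{s-1} \sin\!\left(\tfrac{\pi s}{2}\right) \Gamma(1 - s)\, \zeta(1 - s),

so at s = -2, -4, -6, ... the sine factor vanishes while the other factors stay finite, forcing \zeta(s) = 0 there. (At positive even integers the sine also vanishes, but the pole of \Gamma(1 - s) cancels it.)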

Aachen
0 replies
1d1h

"missing or invalid access token"

xpil
0 replies
1d8h

Just use 42 everywhere

igtztorrero
0 replies
1d4h

Three years ago, somebody posted on HN an animation about prime numbers. It was beautiful watching how the prime numbers show a pattern; it looks like the image in this article.

hyperbolablabla
0 replies
1d7h

Every time I hear about James Maynard it really solidifies my opinion that he's one of those once in a generation geniuses. He's already contributed so much to prime number theory, it really feels like there might be a proof of the Riemann Hypothesis within my lifetime.

huyvanbin
0 replies
1d2h

“At first sight, they look pretty random,” says James Maynard, a mathematician at the University of Oxford. “But actually, there’s believed to be this hidden structure within the prime numbers.”

What would the pattern of primes hypothetically look like? Is there expected to be some kind of closed form formula? If the Riemann hypothesis were proven, what would be the next step to understanding the distribution? Or is the proof itself expected to hold this answer?

gxs
0 replies
1d2h

Reminds me of a story where some egghead friend of mine had a friend that was a researcher at a state school in California.

In his research, he found something like getting unenriched uranium to react (please excuse my complete lack of familiarity with the subject).

Apparently some government agency stepped in, classified his research, and asked him to stop.

Makes me wonder where else this might have happened - there must be some interesting stuff out there.

fredgrott
0 replies
1d7h

If you plot the Gauss and Riemann curves in a specific space you see something more magical....

To see what I am talking about as in trivial and non-trivial zeros see this wikipedia animation https://en.wikipedia.org/wiki/File:Riemann3d_Re_0.1_to_0.9_I...

Basically, it implies that there is another relationship between real and imaginary numbers we have not yet stumbled upon....

And, this has implications for finding a theory of gravity, as Riemann's math is involved in quantum mechanics....

Strange that primes are, or might be, involved in gravity theory....

SillyUsername
0 replies
11h52m

And if they crack that, well security is pretty much cracked too...

RIMR
0 replies
1d1h

How is this any different from Sacks's original work from 2003?

https://naturalnumbers.org/sparticle.html

The organized patterns of primes and composites have been an understood feature of the Sacks spiral since the day he published his findings online.

NiloCK
0 replies
1d5h

I've been fascinated by this question since I learned the sieve of eratosthenes as a kid. The meta logic of it is so simple:

Primes are specifically the numbers that are left over after the structured numbers (composite) ones are removed.

Everything - [structured numbers] = [ chaos? the abyss? some meta structure? ]
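
A minimal rendering of that meta-logic, as a Python sketch of the sieve itself: start with everything, strike out the structured (composite) numbers, and the primes are whatever survives.

    def eratosthenes(limit):
        """Return all primes <= limit by crossing off multiples of each prime found."""
        survives = [True] * (limit + 1)
        survives[0] = survives[1] = False
        for n in range(2, int(limit ** 0.5) + 1):
            if survives[n]:
                # n was never crossed off, so it's prime; remove its multiples
                for multiple in range(n * n, limit + 1, n):
                    survives[multiple] = False
        return [n for n, alive in enumerate(survives) if alive]

    print(eratosthenes(50))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]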

6gvONxR4sf7o
0 replies
1d

On a slight tangent, this line makes me think about aspects of automated provers that I don’t even know if we’ve begun thinking about:

“It’s a sensational breakthrough,” says Alex Kontorovich, a mathematician at Rutgers University. “There are a bunch of new ideas going into this proof that people are going to be mining for years.”

Frequently, a proof of a thing is less interesting as a way to bring rigor than it is as a new way to look at a thing. I wonder if there’s been any work on that side of things in automated mathematics?