
Elliptic curve 'murmurations' found with AI

robertk
15 replies
4d21h

Very cool result, but the title is overselling the "AI" contribution. It seems like they trained a few standard binary classifiers (Naive Bayes, decision trees, kNN). The novelty is that the independent variables come from an attribute precomputed for many known elliptic curves in the LMFDB database, namely the Dirichlet coefficients of the associated L-function, and the dependent variable is whether or not the elliptic curve has complex multiplication (CM), an important theoretical property: lots of flashy theorems begin by assuming the curve does or does not have CM. They go on to train another binary classifier (and a separate k-class classifier) to determine a curve's Sato-Tate identity component using the Euler coefficients and group-theoretic information about the Sato-Tate group (constructed by randomly sampling elements and representing the two non-trivial coefficients of their characteristic polynomials as independent variables in the classifier). They also run a PCA: https://arxiv.org/pdf/2010.01213.pdf
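
For a sense of scale, here's a minimal sketch of what that kind of experiment looks like in practice (a plain kNN on a_p features; the file and column names are hypothetical stand-ins for an LMFDB export, not the authors' actual pipeline):

  import pandas as pd
  from sklearn.model_selection import train_test_split
  from sklearn.neighbors import KNeighborsClassifier
  from sklearn.metrics import accuracy_score

  # Hypothetical export: one row per curve, columns a_2, a_3, a_5, ...
  # holding the L-function coefficients, plus a 0/1 "has_cm" label.
  df = pd.read_csv("lmfdb_curves.csv")
  X = df.filter(like="a_").to_numpy()
  y = df["has_cm"].to_numpy()

  X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
  clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
  print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))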

The cool part is that they then stepped back and scratched their heads wondering why the classifier was so good at achieving separation for these dependent variables in the first place, and plotting the points showed them to be (non-linearly) separable due to a visually clear pattern! The punchline, and the reason it's so important to understand these data points (the Euler coefficients of elliptic curves), is that they contain all the relevant number-theoretic information about the curve. With some major handwaving, understanding them perfectly would lead to things like the Langlands program (and some analogues of the Riemann hypothesis) getting resolved. These wide-reaching conjectures are ultimately structural assertions about L-functions, and L-functions are uniquely specified by their Euler coefficients (the a_p term in their Euler factors). Will murmurations help with that? Who knows, but the more patterns the better for forming precise conjectures.
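
For concreteness (standard definitions, nothing specific to the paper), the a_p are exactly the data in the Euler product of the curve's L-function:

  L(E, s) = \prod_{p\,\text{good}} \frac{1}{1 - a_p p^{-s} + p^{1-2s}} \cdot \prod_{p\,\text{bad}} \frac{1}{1 - a_p p^{-s}}, \qquad a_p = p + 1 - \#E(\mathbb{F}_p)

so knowing all the a_p pins the L-function down completely.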

Relevant intersectional credentials: I have led ML engineering teams in industry and also did my doctoral work in this area of math, including using the LMFDB database referenced in the article for my research (it was much smaller back then and has grown a lot, so it's very neat to see it's still a force for empirical findings!).

frakt0x90
7 replies
4d19h

This is something I've been thinking about a lot lately. Especially in combinatorics and number theory, there are databases like OEIS, LMFDB, etc. that contain tons of data, with the ability to generate more algorithmically (sometimes easier said than done). Using ML to get heuristics and really good guesses on where the next opportunities lie, and then formalizing it once you have a good guess, would be SO cool.

Is there a name for that? Or groups working on that stuff that I could follow?

My own little pet project: I scraped OEIS and built a graph of sequences where two were connected if one mentioned the other in its related-sequences section. You got these huge clusters around prime powers and other important sequences. Then I thought maybe you could use a GNN to do link prediction, estimating relationships that should exist but haven't been discovered yet.
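
For anyone who wants to play with the same idea, here's a rough sketch (using a classical link-prediction heuristic as a stand-in for the GNN; "xrefs.tsv" is a hypothetical scrape of cross-reference pairs like "A000040 A000961"):

  import networkx as nx

  # Build the cross-reference graph: one node per OEIS sequence,
  # one edge whenever one sequence cites the other.
  G = nx.Graph()
  with open("xrefs.tsv") as f:
      for line in f:
          a, b = line.split()
          G.add_edge(a, b)

  # Score non-adjacent pairs with the Adamic-Adar index; high scores
  # suggest a relationship that "should" exist but isn't recorded yet.
  scored = nx.adamic_adar_index(G)
  for u, v, score in sorted(scored, key=lambda t: t[2], reverse=True)[:10]:
      print(f"{u} -- {v}: {score:.2f}")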

ykonstant
3 replies
4d8h

The Lean 4 Focused Research Organization has ML interoperability in its roadmap. Since Lean 4 is shaping up to be a capable general-purpose language as well, I can imagine a Lean project that retrieves and formats LMFDB data, uses it to train and test an NN, gets Lean 4 proof code from it, verifies or rejects it (possibly with more detailed feedback), and loops this like a "conversation".
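
A very rough sketch of that loop (every name here is a hypothetical stub, not a real Lean or LMFDB API):

  def propose(statement, feedback):
      # Stand-in for the trained network emitting candidate Lean 4 source.
      return "-- candidate proof of " + statement

  def verify(lean_source):
      # Stand-in for invoking the Lean checker, e.g. via `lake build`.
      return False, "unsolved goals"

  def search(statement, max_rounds=10):
      feedback = None
      for _ in range(max_rounds):
          candidate = propose(statement, feedback)
          ok, feedback = verify(candidate)
          if ok:
              return candidate
      return None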

However, Lean 4 still has a long way to go in terms of speed and library features, and I at least have given up on writing optimized code until we get the new compiler (whose timeline seems optimistic to me, but Leo de Moura knows much better).

knotthebest
2 replies
3d21h

At which point would mathematicians become obsolete? Something like this seems like it could automate a lot of mathematics research, no?

ykonstant
1 replies
3d8h

We would be interested in actual automation of theorem production, but this pipeline would automate approximately 0% of (interesting) mathematics research. It does have the potential to automate some boring parts and enable mathematicians to make better conjectures faster.

knotthebest
0 replies
3d3h

I think I may be missing something. Why would you be interested in the automation of theorem production? Wouldn’t this make mathematicians obsolete? How far away do you think we are from that?

I ask as a newbie in math; math is a passion of mine. I am genuinely reconsidering going into math research as I fear just being automated away.

jononor
0 replies
4d17h

In the area of physics-informed machine learning this is referred to as "discovering new physics". Probably there are analogues in computational mathematics, biology, chemistry, etc.

joachimma
0 replies
4d10h

I am not a mathematician but have some interest at a pop-sci level. I believe this presentation at G-Research by Alex Davies would be of interest: https://www.youtube.com/watch?v=Mp_skPK-X9M

djbusby
3 replies
4d16h

Suppose someone understands 0% of that. What would I type into DDG or Wikipedia to start?

Like, elliptic curves are part of libsodium/NaCl - does it mean something "big"?

tanvach
1 replies
4d11h

I highly recommend the PeakMath (https://youtube.com/@PeakMathLandscape?si=zQg6bbp2SvfqzKYm) RH saga video series on YouTube for this topic.

They are excellent, and don't require more than high-school maths knowledge to get quite deep into the mysterious connections between prime numbers, the Riemann hypothesis, elliptic curves and L-functions.

ykonstant
0 replies
4d8h

I second this recommendation; it is serious material made very accessible. The channel is great, and this series is truly a marvel.

However, while it does not require more knowledge than high school math, it does require more maturity and certainly lots of patience.

couchand
0 replies
4d16h

As someone who understands about 2% of the GP but maybe 85% of TFA, I'd suggest diving into the various topics explored there. Galois Fields, for instance, are a rich topic for Wikipedia research and have intuitive and surprising properties that make them fun to learn about.

This will lead you deeper into study of abstract algebra concepts like groups and rings. If you haven't done much set theory you will probably go deep on that and develop an opinion on the Axiom of Choice.

Then you'll probably surface a bit to look at elliptic curves and consider their many applications in abstract and concrete topics like cryptography and the elusive proof of Fermat's Last Theorem.

By then you'll have caught up to me. In the meantime I'll be reading up on modular forms and L-functions.

brabel
1 replies
4d10h

  Very cool result but the title is overselling the "AI" contribution. It seems like they trained a few standard binary classifiers (Naive Bayes, decision trees, kNN).

But it seems they would never have even suspected there were such patterns if the "AI" had not provided evidence for them?

By the way: the tools mentioned, like decision trees, Bayes, and kNN, were all taught in the AI course I attended a decade and a half ago... AI was basically ML at the time, but nowadays it seems that ML has become "just statistics", and AI only includes LLMs.

radicalbyte
0 replies
4d10h

There are plenty of companies using ML methods (DT, Bayes, kNN), normal NNs, etc. now that the AI money spigot is wide open, if only as part of the "shit in, shit out" process.

weebull
0 replies
4d5h

Sounds like it's far more about "big" data analysis, and recognising that elliptic curve encryption has a statistically apparent signature. AI/ML was just the analysis that exposed it.

galkk
5 replies
4d21h

I love the story, in spite of the article above.

Speech synthesis was also attempted as modeling of human biology: computer models of the throat, vocal cords, and how air moves through the mouth.

In the end, computational power won there too. No need for any of that.

acer4666
2 replies
4d21h

The article is talking about deep learning winning, i.e. neural networks. Surely modelling of human biology is part of that?

nomel
0 replies
4d21h

Maybe emergent, to some extent, but not explicit.

bigyikes
0 replies
4d18h

Neural nets do not model human biology; they are models inspired by human biology.

bbor
2 replies
4d20h

  The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.

It's nice when an author includes a sentence up top that betrays their standpoint so that I can stop reading. I'm sure this person is very nice and has lots of stuff to say, but this is the same old Scruffy v. Neat fight, except now the former side thinks they're empirically completely right. Which doesn't even make sense: they're not mutually exclusive claims, and to say that the result of 70 years of expert systems is any kind of failure is just revisionist.

For the same reason, I don't read many papers about Realism vs. Idealism, Nature vs. Nurture, etc.

SonOfLilit
1 replies
4d18h

I recommend that you continue. It's a very short and highly influential piece by a guy at the top of his field, you'll take a thing or two from the one-minute read even if you agree with nothing he says.

bbor
0 replies
4d17h

Why thanks for the suggestion, that’s very kind. I just read it - in fact I think I read this early on in my research on scruffies v neats.

My takeaway now is the same, sadly. It's not so much that I disagree with his premises as that I find his whole attitude and conclusion to be a preposterous artifact of ego inflation after helping found a line of research that was much more productive than people thought it would be. I get it, that's very exciting, but I need way more evidence than that to completely abandon self-conscious structured reasoning in my conception of a good AGI, much less the human mind. Like this:

  This is a big lesson. As a field, we still have not thoroughly learned it, as we are continuing to make the same kind of mistakes.

This is just arrogance. You don't see this among philosophers or social scientists, who recognize that rhetoric is more than the cherry on top of science, and that this sort of confidence is dangerous. To see "designing things by hand" as a categorical "mistake" is just... that's a hot take.

But either way we're all on the same side. Despite my harsh words, I'm glad he helped pave the way for LLMs, which are the biggest unexpected breakthrough in our lifetimes IMO. Which understandably makes people confident.

couchand
2 replies
4d21h

This is a great story that highlights how human beings working together can reveal new insights. I love how the author covers each individual's contribution to the discovery.

It's also interesting to see how critical the human element of this story is, and how incidental the "AI" piece is. A computer system employed statistics to exploit (but not comprehend) a pattern in a high-dimensional dataset. This led researchers to examine the relevant dimensions using traditional data visualization tools.

Once the nature of the pattern was characterized, other mathematicians were able to use their insight to find deep connections to other areas. These interconnections are now blossoming.

naasking
0 replies
4d4h

  A computer system employed statistics to exploit (but not comprehend) a pattern in a high-dimensional dataset.

Nitpick: I don't like this phrasing because there are degrees of comprehension, and understanding that two or more things are correlated in specific ways is a form of comprehension. "Exploit but not explain" is a better phrasing IMO.

bbor
0 replies
4d20h

The title is obviously clickbait, but the idea is a good one that I hope some of the scientists here take away from this: "using AI" is about identifying things it can do that you could never hope to, usually for reasons of scale or complexity. LLM-based systems will revolutionize the day-to-day of science IMO, but that doesn't mean they're replacing human reasoning faculties.

notfed
1 replies
4d15h

"found with AI" is the new "made in Rust"?

weebull
0 replies
4d5h

"Made in rust." is dying off?

lwansbrough
1 replies
4d11h

So might this be a precursor to cracking ECC?

Assuming they go on to find a formula that defines the relationship between a_p and rank, what does that actually achieve?

omidHeravi
0 replies
4d8h

Next thing you know, it’s solved the discrete log problem and the rest of the millennium problems.

billiam
1 replies
4d17h

The article is actually a great illustration of how far ahead of machine learning humans remain in their ability to collaborate and make intuitive connections (the AI contribution was minimal). Which LLM is going to say, hey this pattern looks like the birds out my window, or the problem I worked on years ago and never got anywhere, or I must send a competing LLM a preprint of my paper?

naasking
0 replies
4d4h

  Which LLM is going to say, hey this pattern looks like the birds out my window, or the problem I worked on years ago and never got anywhere, or I must send a competing LLM a preprint of my paper?

They might never say that, but the model has a good chance of containing that association because learning is compression. If two things have the same patterns, they will likely be tied to the same networks in the model, because that's just how good compression works.

carlossouza
0 replies
4d14h

  Sutherland was impressed by the significant dose of luck that had led to the discovery of murmurations.

Talented people + hard work… + LUCK!

  Even then, the murmurations were only found because of Pozdnyakov’s inexperience.

Also, fresh inexperienced eyes to see what experts would dismiss!

What a great read :)