
Llama 3 implemented in pure NumPy

blharr
6 replies
22h0m

So is it the case that the information is in the data set? Or is the code just very carefully written to be so small? As an outsider it's surprising that such a capable model can be so "simple".

jacobn
2 replies
21h17m

The training code is presumably quite a bit more complex than what they've open sourced, but part of the beauty of the GPT-based LLMs is their structural simplicity.

Now, that simplicity can be deceiving - there is a lot of conceptual interconnectedness within these models. They've been put together "just so", if you will.

If you look at the source code to nanoGPT and compare it to Llama3, the most remarkable thing (when you look past the superficial name changes) is just how similar they are.

If I recall correctly the primary differences are:

  - The MLP: Llama3 uses SwiGLU vs the more "traditional" x = x + proj(gelu(expand(x))) in GPT2 (see the sketch after this list)
  - The token encoders, which is arguably external to the model
  - Attention: Llama3 uses Grouped Query Attention, vs full Multi-Head Attention in GPT2
  - Normalization: Llama3 uses RMSNorm, vs LayerNorm for GPT2
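
To make the first bullet concrete, here is a minimal NumPy sketch of the two MLP variants (names, shapes, and the tanh GELU approximation are illustrative, residual connections are omitted, and nothing is taken verbatim from either repo):

  import numpy as np

  def gelu(x):
      # tanh approximation of GELU, the GPT-2 flavor
      return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

  def silu(x):
      # SiLU / "swish": x * sigmoid(x)
      return x / (1 + np.exp(-x))

  def gpt2_mlp(x, w_expand, w_proj):
      # GPT-2 style: expand -> GELU -> project back down
      return gelu(x @ w_expand) @ w_proj

  def llama3_mlp(x, w_gate, w_up, w_down):
      # SwiGLU: a SiLU-gated branch multiplied elementwise with a linear "up" branch
      return (silu(x @ w_gate) * (x @ w_up)) @ w_down

  rng = np.random.default_rng(0)
  x = rng.normal(size=(4, 8))                          # (tokens, d_model)
  print(gpt2_mlp(x, rng.normal(size=(8, 32)),
                 rng.normal(size=(32, 8))).shape)      # (4, 8)
  print(llama3_mlp(x, rng.normal(size=(8, 32)),
                   rng.normal(size=(8, 32)),
                   rng.normal(size=(32, 8))).shape)    # (4, 8)
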
They were published more than five years apart. On the one hand progress has been breathtaking, truly astounding. On the other hand, it's almost exactly the same model.

Goes to show just how much is in the training data.

novaRom
0 replies
12h5m

beauty of the GPT-based LLMs is their structural simplicity

human brain's structure is also encoded in a short DNA sequence

jacobn
0 replies
16h1m

Forgot one: the positional encoding also changed. Llama3 uses RoPE, while GPT2 uses a learned embedding.
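
For reference, a minimal NumPy sketch of RoPE in its generic textbook form, as it would be applied to query/key vectors (an illustration, not the repo's exact code):

  import numpy as np

  def rope(x, base=10000.0):
      # x: (seq_len, n_heads, head_dim); rotate channel pairs by a
      # position-dependent angle instead of adding a learned position vector
      seq_len, _, head_dim = x.shape
      inv_freq = base ** (-np.arange(0, head_dim, 2) / head_dim)  # (head_dim/2,)
      angles = np.arange(seq_len)[:, None] * inv_freq             # (seq_len, head_dim/2)
      cos = np.cos(angles)[:, None, :]
      sin = np.sin(angles)[:, None, :]
      x1, x2 = x[..., 0::2], x[..., 1::2]
      out = np.empty_like(x)
      out[..., 0::2] = x1 * cos - x2 * sin
      out[..., 1::2] = x1 * sin + x2 * cos
      return out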

SpaceManNabs
1 replies
20h50m

300 lines of this code are a bit different from 300 lines of typical code where you read files, set up a backend/frontend, or parse data. In the latter case, there are a lot of tedious operations. Sure, the former also has some of that, with reshaping and asserts or whatever.

But in a sense, the 300 lines of Llama code are essentially just lines of math. And reading through any math proof will show you that any particular line can hide large amounts of complexity.

This can be true with code with more tedious operations, but those lines are a smaller fraction of the overall code base by definition.

Even the "tedious" parts of the llama code can hide large complexity. Setting a learning rate with a schedule might require reading a paper or two for your particular architecture.

But yes, once you parse all the math and the theory, the lines are kinda just simple matmuls and a forward pass, lol.

ffriend
0 replies
19h23m

Sure, knowing the basics of LLM math is necessary. But it's also _enough_ to know this math to fully grasp the code. There are only 4 concepts - attention, feed-forward net, RMS-normalization and rotary embeddings - organized into a clear structure.
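
As an example of how small each of those pieces is, a generic RMSNorm is only a few lines (a sketch of the standard formulation, not Meta's exact code):

  import numpy as np

  def rms_norm(x, weight, eps=1e-5):
      # scale each row by its root-mean-square; no mean subtraction and
      # no bias term, which is the main difference from LayerNorm
      rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
      return x / rms * weight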

Now compare it to the Hugging Face implementation [1]. In addition to the aforementioned concepts, you need to understand the hierarchy of `PreTrainedModel`s, 3 types of attention, 3 types of rotary embeddings, HF's definition of the attention mask (which is not the same as the mask you read about in transformer tutorials), several types of cache classes, dozens of flags to control things like output format or serialization, etc.

It's not that Meta's implementation is good and HF's implementation is bad - they pursue different goals in their own optimal way. But if you just want to learn how the model works, Meta's code base is great.

[1]: https://github.com/huggingface/transformers/blob/main/src/tr...

moritzwarhier
0 replies
20h59m

I think with LLMs in general, the algorithms are very refined and require lots of research, despite being "simple" in terms of entropy, or an imagined Kolmogorov complexity for defining the algorithms.

So "simple" is a fuzzy term here, but yes, the entropic complexity is in the data, not the algorithms.

Related to the so-called "Bitter lesson".

Edit: the sister comment pointed out what I failed to express: RLHF and training are also algorithms, and their applications and implementations are probably much more complex than the code that evaluates a given prompt.

So basically, "models" (trained NNs) are also an example of the equivalence of code and data.

Fixed data used by code (the trained model) is code in itself, even when it is not directly written by humans or in a human-readable language.

Edit edit: don't forget to count the imported maths code :) but I assume this is not relevant to the "it's just matrix multiplications" overall argument

kureikain
3 replies
20h7m

Do you know why these are so short? What is the algorithm/magic in all of these?

I tried to make sense of it but cannot

chpatrick
0 replies
19h16m

The magic is the structure of the model, and the real magic is the billions of weights.

Hugsun
0 replies
19h46m

Architecturally, LLMs are very simple compared to many software projects.

The crux of their behavior comes from their learned weights which are gigabytes and can cost millions to obtain via training.

DavidSJ
0 replies
20h2m

The magic is in the billions of learned weights (~synapses). This is just the scaffolding that runs them.

danielheath
1 replies
18h34m

What's the operator precedence in python?

Is it `assert(0 <= (1 < ndim))` or `assert((0 <= 1) < ndim)`, or something even stranger like `assert(0 <= 1) < ndim`?

__s
0 replies
18h32m

Python actually does something pretty neat: it chains comparisons, so that `x < y <= z` is like `x < y and y <= z`, except that y is only evaluated once.

In the linked code we can be confident that `0 <= 1` always holds, so only `1 < ndim` should matter. In fact, I'd expect peephole optimization to remove most of the code for `0 <= 1`.
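
A quick illustration of the chaining rule (toy values, not the repo's actual variables):

  ndim = 3
  assert 0 <= 1 < ndim      # parsed as (0 <= 1) and (1 < ndim)

  x = 5
  print(1 < x < 10)         # True: equivalent to (1 < x) and (x < 10)
  print((1 < x) < 10)       # also True, but only because (1 < x) is True == 1, and 1 < 10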

bloaf
0 replies
18h32m

I am a reasonably competent python coder, yet when I see stuff like this I regard it with the same suspicion as a switch in the "more magic" position.

https://www.catb.org/jargon/html/magic-story.html

mkolodny
0 replies
17h40m

That's just the default. You can set max_seq_len to 8k. From the readme [0]:

All models support sequence length up to 8192 tokens, but we pre-allocate the cache according to max_seq_len and max_batch_size values. So set those according to your hardware.

[0] https://github.com/meta-llama/llama3/tree/14aab0428d3ec3a959...
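
The pre-allocation described there is essentially a pair of zero-filled key/value buffers sized up front, roughly this pattern (a sketch with made-up sizes, not the repo's exact code):

  import numpy as np

  max_batch_size, max_seq_len = 1, 8192
  n_kv_heads, head_dim = 8, 128

  # memory grows linearly with max_seq_len, hence "set those according to your hardware"
  cache_k = np.zeros((max_batch_size, max_seq_len, n_kv_heads, head_dim), dtype=np.float32)
  cache_v = np.zeros((max_batch_size, max_seq_len, n_kv_heads, head_dim), dtype=np.float32)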

hongspike
0 replies
13h41m

The NumPy code can seem more accessible and easier to understand. Torch can look scary even though it's similar to NumPy.

blt
0 replies
22h10m

The simplicity of the transformer is quite refreshing, especially in vision, where the Vision Transformer with linear patch encodings replaces complex, intertwined decisions about filter size, striding, pooling, #filters, depth, etc. with the simpler decision of how to allocate your FLOPS between dimensionality, #heads, and #layers.
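
For the curious, the "linear patch encoding" really is just a reshape plus one matmul; a generic NumPy sketch with the usual ViT-Base-ish sizes (illustrative, not from any particular repo):

  import numpy as np

  def patch_embed(img, w, patch=16):
      # img: (H, W, C) -> non-overlapping patches -> one linear projection
      H, W, C = img.shape
      x = img.reshape(H // patch, patch, W // patch, patch, C)
      x = x.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * C)  # (n_patches, patch*patch*C)
      return x @ w                                                   # (n_patches, d_model)

  img = np.random.rand(224, 224, 3)
  w = np.random.rand(16 * 16 * 3, 768)
  print(patch_embed(img, w).shape)  # (196, 768)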

aeyes
3 replies
1d2h

Well, it supports Llama3.

But the other question I have is about the license. The tokenizer.py file is identical, and the rest is very similar, with just minor adjustments here and there.

Can they just take this Apache 2-licensed code, change it a bit, and offer it as MIT? They are clearly not the original author.

Scaevolus
2 replies
1d1h

Unfortunately, licenses are only worth as much as your lawyers.

yjftsjthsd-h
1 replies
1d1h

DMCA takedowns are free.

not2b
0 replies
22h44m

A less aggressive approach would be to file an issue and let the maintainer correct the license issue.

Zambyte
3 replies
17h47m

The description says GPT-like, but it is just a GPT, right?

p1esk
2 replies
17h15m

GPT refers to the specific family of models developed at OpenAI.

Zambyte
1 replies
16h8m

It also stands for generative pretrained transformer, which this seems to be.

p1esk
0 replies
14h43m

It’s like saying an SSD is a YOLO. Both are single-shot object detectors, but only YOLO is “a YOLO”.

xchip
2 replies
1d1h

Nice but the tricky part is the training data.

whereismyacc
0 replies
23h13m

there are a lot of tricky parts.

swader999
0 replies
21h6m

The tricky part is getting big enough that no one can successfully sue you for using "your" training data.

lnyan
2 replies
1d1h

`import jax.numpy as np`, and then we also get a JAX implementation after certain modifications: e.g. removing in-place index assignments, replacing unsupported functions, etc.
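
The in-place index assignment is the most mechanical of those changes: jax arrays are immutable, so updates go through `.at[...].set(...)`. A tiny before/after sketch (not the repo's actual code):

  import numpy as np
  import jax.numpy as jnp

  # NumPy: in-place index assignment
  cache = np.zeros((8, 4))
  cache[2] = np.ones(4)

  # JAX: the same update, expressed functionally
  jcache = jnp.zeros((8, 4))
  jcache = jcache.at[2].set(jnp.ones(4))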

cl3misch
0 replies
1d

...which should be much faster, even on CPU, I assume.

Scene_Cast2
2 replies
1d2h

The rotary embeddings bit is neat. I wonder if a complex representation would simplify vs complexify things (readability, performance, expressive power).

johndough
0 replies
1d2h

Some implementations use a complex rotary encoding, but it makes it a bit harder to port to platforms or frameworks which do not support complex numbers natively.

6gvONxR4sf7o
0 replies
1d2h

The tensor cores that do the bulk of the FLOPs on the bulk of the GPUs people use only deal in various sizes of floats, I think. We're in a funny position where progress in models and progress in hardware are kind of linked.

As far as expressive power goes, it shouldn't make a difference for the models in common use, but I could totally imagine models where it improves readability.

ulam2
1 replies
1d3h

I'll consider superintelligence achieved if AI can do such work faithfully.

sebzim4500
0 replies
1d2h

What? Lots of people could produce this repo, it hardly counts as superintelligence.

rhdunn
1 replies
23h57m

From the TinyStories dataset card [1] the dataset is generated by GPT-3.5 and GPT-4. Reading the discussions in the community tab [2] it looks like there are a lot of incomplete or misspelled words, incorrect grammar, and even Chinese characters in the dataset.

As such, I'd be wary of using that dataset to train or evaluate models.

[1] https://huggingface.co/datasets/roneneldan/TinyStories

[2] https://huggingface.co/datasets/roneneldan/TinyStories/discu...

nwoli
0 replies
23h31m

It’s just used for checking that the implementation is correct. It’s just a toy dataset; it doesn’t matter if it has misspelled words.

threatripper
0 replies
12h15m

np.sin(freqs)

Didn't we drop 2 pi somewhere?

buildbot
0 replies
1d

Cool, instant cuda acceleration via cupy! `import cupy as np`
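
If you want a single code path that still falls back to the CPU, the usual guarded-import pattern works (assuming the rest of the code sticks to the NumPy-compatible subset of the API):

  try:
      import cupy as np   # GPU arrays with a NumPy-compatible API
  except ImportError:
      import numpy as np  # CPU fallback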

AI_hacker
0 replies
1d

How does the performance of llama3.np compare to other implementations, especially considering it's a pure NumPy implementation?