It's also worth mentioning that the original implementation by Meta is only 300 lines of very readable code [1].
[1]: https://github.com/meta-llama/llama3/blob/main/llama/model.p...
What is the difference to the llama.np repository credited in the README? https://github.com/hscspring/llama.np
Well, it supports Llama3.
But the other question I have is about the license. The tokenizer.py file is identical, and the rest is very similar - just making minor adjustments here and there.
Can they just take this Apache 2 licensed code, change it a bit and offer it as MIT? They are clearly not the original author.
Unfortunately, licenses are only worth as much as your lawyers.
DMCA takedowns are free.
A less aggressive approach would be to file an issue and let the maintainer correct the license issue.
Trainable Llama-like transformer (with backpropagation) in numpy only (~600 lines)
The description says GPT-like, but it is just a GPT, right?
GPT refers to the specific family of models developed at OpenAI.
It also stands for generative pretrained transformer, which this seems to be.
It’s like saying SSD is a YOLO. Both are single shot object detectors, but only YOLO is “a YOLO”.
Nice but the tricky part is the training data.
there are a lot of tricky parts.
The tricky part is getting big enough that no one can successfully sue you for using "your" training data.
`import jax.numpy as np`, and then we also get a JAX implementation after certain modifications: e.g. removing in-place index assignment, replacing unsupported functions, etc.
JAX requires a bit more work to maintain fixed-size buffers as required by XLA, especially in case of caching and rotary embeddings. But yeah, overall the code can be pretty similar [1].
[1]: https://github.com/dfdx/fabrique/blob/main/fabrique/llama/mo...
...which should be much faster also on CPU, I assume.
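To make the in-place-assignment point above concrete, here's a tiny sketch of the kind of change involved (my own example, not from either repo): NumPy's slice assignment becomes JAX's functional `.at[...].set(...)` update.

```python
import jax.numpy as jnp

# NumPy-style in-place update of a KV-cache slice (not allowed on JAX arrays):
#   cache[:, start:start + seq_len] = new_kv
# JAX equivalent, which returns a new array instead of mutating:
cache = jnp.zeros((2, 8, 4))
new_kv = jnp.ones((2, 3, 4))
start, seq_len = 2, 3
cache = cache.at[:, start:start + seq_len].set(new_kv)
```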
The rotary embeddings bit is neat. I wonder if a complex representation would simplify vs complexify things (readability, performance, expressive power).
Some implementations use a complex rotary encoding, but it makes it a bit harder to port to platforms or frameworks which do not support complex numbers natively.
The tensor cores that do the bulk of the FLOPs on the bulk of the GPUs people use only handle various sizes of floats, I think. We're in a funny position where progress in models and progress in hardware are kind of linked.
As far as expressive power goes, it shouldn't make a difference for the models in common use, but I could totally imagine models where it improves readability.
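For anyone curious, here's a tiny numpy sketch (my own, not from the repo) showing that the real pairwise-rotation form and the complex-multiplication form of rotary embeddings compute the same thing:

```python
import numpy as np

rng = np.random.default_rng(0)
seq, dim = 4, 8                       # toy sizes
x = rng.normal(size=(seq, dim))

inv_freq = 1.0 / 10000.0 ** (np.arange(0, dim, 2) / dim)
angles = np.arange(seq)[:, None] * inv_freq[None, :]    # (seq, dim/2)

# Real form: rotate each (even, odd) feature pair by its angle.
cos, sin = np.cos(angles), np.sin(angles)
x1, x2 = x[:, 0::2], x[:, 1::2]
real_out = np.stack([x1 * cos - x2 * sin,
                     x1 * sin + x2 * cos], axis=-1).reshape(seq, dim)

# Complex form: view each pair as a complex number and multiply by e^{i*angle}.
xc = x1 + 1j * x2
rotated = xc * np.exp(1j * angles)
complex_out = np.stack([rotated.real, rotated.imag], axis=-1).reshape(seq, dim)

print(np.allclose(real_out, complex_out))   # True
```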
I'll consider superintelligence achieved if AI can do such work faithfully.
What? Lots of people could produce this repo, it hardly counts as superintelligence.
According to the TinyStories dataset card [1], the dataset was generated by GPT-3.5 and GPT-4. Reading the discussions in the community tab [2], it looks like there are a lot of incomplete or misspelled words, incorrect grammar, and even Chinese characters in the dataset.
As such, I'd be wary of using that dataset to train or evaluate models.
[1] https://huggingface.co/datasets/roneneldan/TinyStories
[2] https://huggingface.co/datasets/roneneldan/TinyStories/discu...
It’s just used for checking that the implementation is correct. The dataset is just a toy dataset; it doesn’t matter if it has misspelled words.
np.sin(freqs)
Didn't we drop 2 pi somewhere?
Obligatory mention of Recmo’s Llama1 implementation in numpy :)
We changed the URL from https://github.com/likejazz/llama3.np to the article it points to, which gives more background.
Cool, instant cuda acceleration via cupy! `import cupy as np`
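Roughly this pattern, if you want to keep a CPU fallback (a sketch, not something the repo ships):

```python
# Optional GPU acceleration: use CuPy if it's installed, otherwise plain NumPy.
try:
    import cupy as np      # same array API, arrays live on the GPU
except ImportError:
    import numpy as np
```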
How does the performance of llama3.np compare to other implementations, especially considering it's a pure NumPy implementation?
So is it the case that the information is in the dataset? Or is the code just very well factored, making it so small? As an outsider it's surprising that such a capable model can be so "simple".
The training code is presumably quite a bit more complex than what they've open sourced, but part of the beauty of the GPT-based LLMs is their structural simplicity.
Now, that simplicity can be deceiving - there is a lot of conceptual interconnectedness within these models. They've been put together "just so", if you will.
If you look at the source code to nanoGPT and compare it to Llama3, the most remarkable thing (when you look past the superficial name changes) is just how similar they are.
If I recall correctly the primary differences are: RMSNorm in place of LayerNorm, a SwiGLU feed-forward block instead of the GELU MLP, and grouped-query attention instead of plain multi-head attention.
They were published more than five years apart. On the one hand progress has been breathtaking, truly astounding. On the other hand, it's almost exactly the same model. Goes to show just how much is in the training data.
The human brain's structure is also encoded in a short DNA sequence.
Forgot one: the positional encoding also changed, llama3 uses RoPE, gpt2 uses a learned embedding.
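To make one of those differences concrete, here's a quick numpy sketch of LayerNorm (GPT-2) vs RMSNorm (Llama), using the standard textbook definitions rather than code from either repo:

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # GPT-2 style: subtract the mean, divide by the standard deviation,
    # then apply a learned scale and shift.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def rms_norm(x, gamma, eps=1e-5):
    # Llama style: skip the mean subtraction and the shift, and rescale
    # by the root-mean-square only. Cheaper, and works well in practice.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return gamma * x / rms
```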
300 lines of this code is a bit different than 300 lines of typical code where you read files, set up a backend/frontend, or parse data. In the latter case, there are a lot of tedious operations. Sure, the former also has that with reshaping and asserts or wtv.
But in a sense, the 300 lines of Llama code are essentially just lines of math. And reading through any math proof will show you that any particular line can hide large amounts of complexity.
This can be true with code with more tedious operations, but those lines are a smaller fraction of the overall code base by definition.
Even the "tedious" parts of the llama code can hide large complexity. Setting a learning rate with a schedule might require reading a paper or two for your particular architecture.
But yes, once you parse all the math and the theory, the lines are kinda simple matmul and forward lol.
Sure, knowing the basics of LLM math is necessary. But it's also _enough_ to know this math to fully grasp the code. There are only 4 concepts - attention, feed-forward net, RMS-normalization and rotary embeddings - organized into a clear structure.
Now compare it to the Hugging Face implementation [1]. In addition to the aforementioned concepts, you need to understand the hierarchy of `PreTrainedModel`s, 3 types of attention, 3 types of rotary embeddings, HF's definition of attention mask (which is not the same as mask you read about in transformer tutorials), several types of cache class, dozens of flags to control things like output format or serialization, etc.
It's not that Meta's implementation is good and HF's implementation is bad - they pursue different goals in their own optimal way. But if you just want to learn how the model works, Meta's code base is great.
[1]: https://github.com/huggingface/transformers/blob/main/src/tr...
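As a rough illustration of how little there is, here's a minimal single-head, no-cache numpy sketch of those four pieces composed into one block. It's my own simplification (no grouped-query attention, no batching), not Meta's code:

```python
import numpy as np

def rms_norm(x, w, eps=1e-5):
    # Scale by the inverse root-mean-square, then by a learned weight.
    return x * w / np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)

def rope(x, base=10000.0):
    # Rotate each (even, odd) feature pair by a position-dependent angle.
    seq, dim = x.shape
    inv_freq = 1.0 / base ** (np.arange(0, dim, 2) / dim)
    angles = np.arange(seq)[:, None] * inv_freq[None, :]   # (seq, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(x, wq, wk, wv, wo):
    # Single-head causal self-attention with rotary position encoding.
    q, k, v = rope(x @ wq), rope(x @ wk), x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    seq = len(x)
    mask = np.where(np.arange(seq)[None, :] > np.arange(seq)[:, None], -np.inf, 0.0)
    return (softmax(scores + mask) @ v) @ wo

def silu(x):
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, w1, w2, w3):
    # Gated feed-forward: silu(x @ w1) scaled elementwise by (x @ w3).
    return (silu(x @ w1) * (x @ w3)) @ w2

def block(x, p):
    # Pre-norm residual block: attention, then feed-forward.
    x = x + attention(rms_norm(x, p["ln1"]), p["wq"], p["wk"], p["wv"], p["wo"])
    x = x + swiglu_ffn(rms_norm(x, p["ln2"]), p["w1"], p["w2"], p["w3"])
    return x

# Toy usage with random weights.
rng = np.random.default_rng(0)
d, h = 16, 64
p = {name: rng.normal(size=(d, d)) * 0.02 for name in ("wq", "wk", "wv", "wo")}
p.update(w1=rng.normal(size=(d, h)) * 0.02, w3=rng.normal(size=(d, h)) * 0.02,
         w2=rng.normal(size=(h, d)) * 0.02, ln1=np.ones(d), ln2=np.ones(d))
x = rng.normal(size=(8, d))          # 8 tokens, 16-dim embeddings
print(block(x, p).shape)             # (8, 16)
```

The real model is basically this repeated for many layers, with an embedding table at the bottom and a projection back to the vocabulary at the top.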
I think with LLMs in general, the algorithms are very refined and require lots of research, despite being "simple" in terms of entropy, or an imagined Kolmogorov complexity for defining algorithms.
So "simple" is a fuzzy term here, but yes, the entropic complexity is in the data, not the algorithms.
Related to the so-called "Bitter lesson".
Edit: the sister comment pointed out what I failed to express: RLHF and training are also algorithms, and their applications and implementations are probably much more complex than the code that evaluates a given prompt.
So basically, "models" (trained NNs) are also an example for the equivalence of code and data.
Fixed data used by code (the trained model) is code in itself, even when it is not directly written by humans or in a human-readable language.
Edit edit: don't forget to count the imported maths code :) but I assume this is not relevant to the "it's just matrix multiplications" overall argument
Do you know why these are so short? What is the algorithm/magic in all of these?
I tried to make sense of it but cannot
The magic is the structure of the model, and the real magic is the billions of weights.
Architecturally, LLMs are very simple compared to many software projects.
The crux of their behavior comes from their learned weights which are gigabytes and can cost millions to obtain via training.
The magic is in the billions of learned weights (~synapses). This is just the scaffolding that runs them.
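For a rough sense of scale (ballpark numbers, assuming 16-bit weights for the 8B model):

```python
params = 8e9              # Llama 3 8B: ~8 billion parameters
bytes_per_param = 2       # fp16 / bf16
print(f"~{params * bytes_per_param / 1e9:.0f} GB of weights")   # ~16 GB
```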
On line 59, there is a less-than-or-equals comparison between 0 and 1. Curious https://github.com/meta-llama/llama3/blob/main/llama/model.p...
What's the operator precedence in python?
Is it `assert(0 <= (1 < ndim))` or `assert((0 <= 1) < ndim)`, or something even stranger like `assert(0 <= 1) < ndim`?
Python actually does something pretty neat: it chains comparisons so that `x < y <= z` is like `x < y and y <= z` except y is only evaluated once
In the linked code we can be confident that `0 <= 1` always holds, so only `1 < ndim` should matter. In fact I'd expect peephole optimization to remove most of the code for `0 <= 1`.
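A quick demo of what the interpreter does with it:

```python
ndim = 3

# Chained comparison: `0 <= 1 < ndim` means `(0 <= 1) and (1 < ndim)`,
# with the middle operand evaluated only once.
assert 0 <= 1 < ndim

# Written out explicitly:
assert (0 <= 1) and (1 < ndim)

# Note it is NOT `(0 <= 1) < ndim`; that would compare a boolean to ndim
# (True == 1, so it happens to be True here too, but for the wrong reason).
print((0 <= 1) < ndim)    # True
```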
I am a reasonably competent python coder, yet when I see stuff like this I regard it with the same suspicion as a switch in the "more magic" position.
https://www.catb.org/jargon/html/magic-story.html
Why is max_seq_len set to 2048 [1] when the model card says the context size is 8k [2]?
[1] https://github.com/meta-llama/llama3/blob/14aab0428d3ec3a959...
[2] https://github.com/meta-llama/llama3/blob/14aab0428d3ec3a959...
That's just the default. You can set max_seq_len to 8k; the readme shows the example invocation [0].
[0] https://github.com/meta-llama/llama3/tree/14aab0428d3ec3a959...
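If I remember the repo's API right (do check the example scripts, this is from memory), the knob gets passed roughly like this:

```python
from llama import Llama

# Hypothetical paths and values; the point is just that max_seq_len
# is a build-time argument rather than a hard limit.
generator = Llama.build(
    ckpt_dir="Meta-Llama-3-8B/",
    tokenizer_path="Meta-Llama-3-8B/tokenizer.model",
    max_seq_len=8192,       # default in the code is 2048; raise it up to the 8k context
    max_batch_size=4,
)
```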
The numpy code can seem more accessible and easier to understand. Torch can look scary even though it's similar to numpy.
The simplicity of the transformer is quite refreshing, especially in vision, where the Vision Transformer with linear patch encodings replaces complex intertwined decisions about filter size, striding, pooling, #filters, depth, etc., with the simpler decision of how to allocate your FLOPS between dimensionality, #heads, and #layers.
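The "linear patch encoding" really is just a reshape and a matmul; a toy numpy sketch (my own, standard ViT-style sizes assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 224; C = 3; P = 16; D = 768     # image size, channels, patch size, model dim

img = rng.normal(size=(H, W, C))
# Cut into non-overlapping P x P patches and flatten each one.
patches = img.reshape(H // P, P, W // P, P, C).transpose(0, 2, 1, 3, 4)
patches = patches.reshape(-1, P * P * C)          # (196, 768)
# One learned linear projection replaces the whole conv/stride/pool stack.
proj = rng.normal(size=(P * P * C, D)) * 0.02
tokens = patches @ proj                           # (196, D) patch tokens
print(tokens.shape)
```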