Building an LLM from Scratch: Automatic Differentiation (2023)

itissid
6 replies
1d19h

Everyone should go through this rite-of-passage work and get to the "Attention Is All You Need" implementation. It's a world where engineering and the academic papers are very close and reproducible, and working through it is a must if you want to progress in the field.

(See also Andrej Karpathy's Neural Networks: Zero to Hero series on YouTube; it's very good and similar in spirit to this work.)

wredue
1 replies
1d16h

Is this YouTube series also "from scratch (but not really)"?

Edit - it is. Not to put the series down; I'm sure it's good, but it is actually "LLM with PyTorch".

Edit - I looked again and I was not actually correct. He does ultimately use frameworks, but he spends some time early on explaining how they work under the hood.

deadfast
0 replies
1d5h

I appreciate you coming back and giving more details; it encourages me to look into it now. Maybe my expectations of the internet are just low, but I thought it was a virtuous act worth the effort. I wish more people would keep their skepticism but be willing to follow through and let their opinions change given solid evidence.

calebkaiser
1 replies
1d16h

I would also recommend going through Callum McDougall and Neel Nanda's fantastic Transformer from Scratch tutorial. It takes a different approach to conceptualizing the model (or at least, it implements it in a way that emphasizes different characteristics of Transformers and self-attention), which I found deeply satisfying when I first explored it.

https://arena-ch1-transformers.streamlit.app/%5B1.1%5D_Trans...

joshua11
0 replies
1d11h

Thanks for sharing. This is a nice resource

simfoo
0 replies
1d9h

That magic moment in Karpathy's first video when he gets to the loss function and calls backward() for the first time - that's when it clicked for me. Highly recommended!
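
For anyone who hasn't watched it yet, here's a rough sketch of what that moment looks like (my own heavily trimmed toy version, not Karpathy's actual code): a tiny scalar autograd value with add and mul, a squared-error "loss", and a backward pass that fills in gradients via the chain rule.

```python
class Value:
    """A scalar that remembers how it was computed so gradients can flow back."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topological order so each node's gradient is complete before it is used.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# loss(w) = (w*x - y)^2 with w=2, x=3, y=5  ->  loss = 1, dloss/dw = 2*(w*x - y)*x = 6
w, x, y = Value(2.0), Value(3.0), Value(5.0)
diff = w * x + y * (-1.0)
loss = diff * diff
loss.backward()
print(loss.data, w.grad)  # 1.0 6.0
```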

bschne
0 replies
1d18h

+1 for Karpathy, the series is really good

cafaxo
2 replies
1d20h

I did a similar thing for Julia: Llama2.jl contains vanilla Julia code [1] for training small Llama2-style models on the CPU.

[1] https://github.com/cafaxo/Llama2.jl/tree/master/src/training

andxor_
0 replies
1d16h

Great stuff. Thanks for sharing.

3abiton
0 replies
1d10h

How hard is it to find open-source training data nowadays? I saw that Books3 has already been made illegal to train on.

nqzero
1 replies
1d17h

Is there an existing SLM that resembles an LLM in architecture and includes the code for training it?

I realize the cost and time to train may be prohibitive and that quality on general English might be very limited, but is the code itself available?

sva_
0 replies
1d16h

Not sure what you mean by SLM, but https://github.com/karpathy/nanoGPT

asgraham
1 replies
1d21h

As a chronic premature optimizer, my first reaction was, "Is this even possible in vanilla Python???" Obviously it's possible, but can you train an LLM before the heat death of the universe? A perceptron, sure, of course. A deep learning model, plausible if it's not too deep. But a large language model? I.e., the kind of LLM necessary for "from vanilla Python to functional coding assistant."

But obviously the author already thought of that. The source repo has a great motto: "It don't go fast but it do be goin'" [1]

I love the idea of the project and I'm curious to see what the endgame runtime will be.

[1] https://github.com/bclarkson-code/Tricycle

gkbrk
0 replies
1d20h

Why wouldn't it be possible? You can generate machine code with Python and call into it with ctypes. All your deep learning code is still in Python, but at runtime it gets JIT-compiled into something faster.
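
A minimal sketch of that idea (my own toy example, not anything from the article): emit a tiny C kernel from Python, compile it to a shared library at runtime, and call it through ctypes. It assumes a C compiler is available as `cc` on PATH; the `saxpy` kernel and its signature are made up for illustration.

```python
import ctypes
import os
import subprocess
import tempfile

# A toy "kernel" generated and compiled at runtime.
C_SOURCE = r"""
void saxpy(float a, const float *x, float *y, int n) {
    for (int i = 0; i < n; i++) y[i] = a * x[i] + y[i];
}
"""

def build_kernel():
    tmpdir = tempfile.mkdtemp()
    src = os.path.join(tmpdir, "kernel.c")
    lib = os.path.join(tmpdir, "kernel.so")
    with open(src, "w") as f:
        f.write(C_SOURCE)
    # Compile to a shared library; assumes "cc" exists on this machine.
    subprocess.check_call(["cc", "-O3", "-shared", "-fPIC", src, "-o", lib])
    dll = ctypes.CDLL(lib)
    dll.saxpy.argtypes = [ctypes.c_float, ctypes.POINTER(ctypes.c_float),
                          ctypes.POINTER(ctypes.c_float), ctypes.c_int]
    dll.saxpy.restype = None
    return dll

dll = build_kernel()
n = 4
x = (ctypes.c_float * n)(1, 2, 3, 4)
y = (ctypes.c_float * n)(10, 20, 30, 40)
dll.saxpy(2.0, x, y, n)
print(list(y))  # [12.0, 24.0, 36.0, 48.0]
```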

revskill
0 replies
1d6h

The only problem is that it's implemented in Python. One reason is that I hate installing Python on my machine, and I don't know how to manage the dependencies. macOS required an upgrade just to install the native stuff. Such a pain.

andxor_
0 replies
1d16h

Very well written. AD is like magic, and this is a good exposition of the basic building block.

I quite like Jeremy's approach: https://nbviewer.org/github/fastai/fastbook/blob/master/17_f...

It shows a very simple "Pythonic" approach to assembling the gradient of a composition of functions from the gradients of the components.
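
Roughly the shape of it (a minimal sketch in the same spirit, not the notebook's actual code): each operation saves what it needs in forward() and returns its input gradient from backward(), so the chain rule for a composition is just a reversed loop over the parts.

```python
class Square:
    # f(x) = x^2, df/dx = 2x
    def forward(self, x):
        self.x = x
        return x * x
    def backward(self, grad_out):
        return grad_out * 2 * self.x

class Scale:
    # f(x) = a * x, df/dx = a
    def __init__(self, a):
        self.a = a
    def forward(self, x):
        return self.a * x
    def backward(self, grad_out):
        return grad_out * self.a

# g(x) = 3 * x^2, so dg/dx = 6x
ops = [Square(), Scale(3.0)]

x = 2.0
out = x
for op in ops:            # forward pass through the composition
    out = op.forward(out)

grad = 1.0
for op in reversed(ops):  # backward pass: chain rule in reverse order
    grad = op.backward(grad)

print(out, grad)  # 12.0 12.0  (g(2) = 12, dg/dx at x = 2 is 12)
```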

ESOLprof
0 replies
12m

Amazing! Thank you