I love this: something that seemed like sorcery a year ago is now being explained so well, almost in a way a child could follow.
I think GPT creating GPT... creating GPT will be a thing soon. GPTception.
GPT creating a better algo than itself is what’s even more interesting
It might. But also, all decisions and knowledge are pretty much based on a resampling of our own language, conversations and internet data. It might puzzle together some existing ideas that have never been combined. Or it might hallucinate a better solution by accident. But we're definitely not at the level yet where it will actively build something great.
Self-play GPT (by bots in a rich simulation) similar to Alpha Go Zero?
Games like Go have very limited (or known) end states, so reinforcement learning or similar methods work great. However, I wonder how AI will train itself to learn human languages without being judged by humans. It's just a matter of time before someone figures it out.
Right, a rich simulator with humans for feedback: an evolved version of online worlds with a mix of AI NPCs and real people, with the task: find the NPCs. The NPCs can train in NPC-only rooms or rooms mixed with people, without knowing which.
Self-play works for Go, because the "world" (for lack of a better term) can be fully simulated. Human language talks about the real world, which we cannot simulate, so self-play wouldn't be able to learn new things about the world.
We might end up with more regularised language, and a more consistent model of the world, but that would come at the expense of accuracy and faithfulness (two things which are already lacking).
Personally I like the AI dogman angle. AI trained to beat other AI (resumes tailored to beat ATS algorithms)
This is great. In a similar vein, I implemented GPT entirely in spreadsheet functions with accompanying video tutorials
Nice job. Spreadsheets are a natural way to explain an LLM. I suspect that you can show training well too, by calculating the derivatives for each parameter under each training example and showing them all explicitly mapped to the relevant parameters, and so on.
Thank you. Validating to hear others feel spreadsheets are a natural and accessible way to explain LLMs. Someone asks “what do they mean by a parameter in a model” or “what is the attention matrix” and you can just pull it up graphically laid out. Then they can fiddle with it and get a visceral feel for things. It also becomes easier for non-coders to do things like a logit lens, which is just a few extra simple spreadsheet functions.
I actually plan to do what you describe after I do the embeddings video (but only for a “toy” neural net as a proof-of-concept introduction to gradient descent).
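Something like this, presumably. As a rough Python sketch of the idea (illustrative only, not the actual spreadsheet or video): for a one-parameter toy net you can write out the per-example derivative explicitly and watch it nudge the parameter, which is exactly the kind of thing a spreadsheet can show cell by cell.

    # Toy example: one parameter w, model y = w * x, squared-error loss,
    # with the per-example derivative written out explicitly.
    examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, target) pairs, true w = 2
    w = 0.0
    learning_rate = 0.1

    for step in range(50):
        grad = 0.0
        for x, target in examples:
            prediction = w * x
            grad += 2 * (prediction - target) * x  # d/dw of (w*x - target)^2
        w -= learning_rate * grad / len(examples)

    print(w)  # converges toward 2.0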
Not only is that amazing, your video was so well done. Superb crowd work! Color me double impressed.
Thanks! Each one takes a surprisingly long time to make. Even figuring out how to make the explanation accessible and compelling yet still accurate takes a while, and then there's still the actual video to do.
Your first video is fantastic. As someone who thinks LLMs are pretty nifty but never had a professional need to learn how they actually work, that 10 minute video taught me more than several years of reading esoteric HN comments and fluffy mainstream media on the topic. Seeing a bazillion floating point numbers all stacked up ready to be calculated across also makes it much more intuitive why this tech devours GPUs, which never really occurred to me before. Thanks for sharing.
These marvels need to be preserved. Just posting the archive link here in case the blog is not maintained in the future.
Thanks, this is a fantastic article and it would be a shame for it to be lost.
This is very cool
I can feel the SQL-Force in this young Jedi; the midichlorian count is on another level.
Alex is a genius. He's worth a follow.
"It's full of jewels", as someone almost said.
E.g. The Sultan’s Riddle in SQL https://explainextended.com/2016/12/31/happy-new-year-8/
I’ve completely avoided GPT and LLMs. This looks like it would generate some level of fluidity in text output, but not be able to parse and answer a question.
Are there any simple blog posts / training courses that go through how they work, or expose a toy engine in Python or similar? All the training I've seen so far seems oriented at how to use the platforms rather than how they actually work.
Strap in, this is by far the best resource: https://www.youtube.com/watch?v=kCc8FmEb1nY
Jay Alammar has my favorite sequence of tutorials from basic neural network math to GPT2.
Particularly [0], [1], and [2]
[0] http://jalammar.github.io/illustrated-transformer/
[1] http://jalammar.github.io/illustrated-gpt2/
[2] https://jalammar.github.io/visualizing-neural-machine-transl...
In the tokenization example for "Mississippilessly", why is 'si' not part of the combined column? It appears twice in the text. My speculation is that it got collapsed out with 'iss' (a longer token). Is that right?
Yes. At step 1 there are two 'i','s' pairs and two 's','i' pairs; 'is' gets picked because it comes first. Then at step 7, because 'is' got formed, there are two instances of 'is','s' and only one instance of 's','i', so 'iss' gets formed.
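For anyone who wants to poke at it, here is a rough Python sketch of that kind of merge loop. It is not the article's SQL, and the article may count pairs or break ties differently, so the merge order won't necessarily line up step for step:

    from collections import Counter

    tokens = list("Mississippilessly")

    for step in range(1, 8):
        # Count adjacent pairs; max() returns the first-seen pair among ties,
        # since Counter preserves insertion order.
        pairs = Counter(zip(tokens, tokens[1:]))
        best = max(pairs, key=lambda p: pairs[p])
        merged = []
        i = 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == best:
                merged.append(tokens[i] + tokens[i + 1])
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
        print(step, best, tokens)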
Thanks.
This should be illegal
Can you elaborate?
Great write-up. I enjoyed reading the explanations for each piece and found them to be clear and quite thorough.
I did make the mistake though of clicking "+ expand source", and after seeing the (remarkable) abomination I can sympathize with ChatGPT's "SQL is not suitable for implementing large language model..." :)
I did that too and could not find a way to collapse it.
Serious question, how do I get this smart?
No doubt the author is a super genius ex-child prodigy whizzkid who emerged from the womb with the umbilical cord diagramming Euler's formula.
For real though, and knowing this is a leading question, the author has near-on 15 years of blog posts showing complex problems being solved in SQL. Is their brain bigger than yours and mine? Maybe a little bit. Do they have a ton of experience doing things like this? Most definitely.
Is this an accurate representation of the GPT driver loop?
    def generate(prompt: str) -> str:
        # Transforms a string into a list of tokens.
        tokens = tokenize(prompt)  # tokenize(prompt: str) -> list[int]

        while True:
            # Runs the algorithm.
            # Returns tokens' probabilities: a list of 50257 floats, adding up to 1.
            candidates = gpt2(tokens)  # gpt2(tokens: list[int]) -> list[float]

            # Selects the next token from the list of candidates
            next_token = select_next_token(candidates)
            # select_next_token(candidates: list[float]) -> int

            # Append it to the list of tokens
            tokens.append(next_token)

            # Decide if we want to stop generating.
            # It can be token counter, timeout, stopword or something else.
            if should_stop_generating():
                break

        # Transform the list of tokens into a string
        completion = detokenize(tokens)  # detokenize(tokens: list[int]) -> str
        return completion
because that looks a lot like a state machine implementing Shlemiel the painter's algorithm, which throws doubt on the intrinsic compute cost of the generative exercise.

I think the "context window" that people refer to with large language models means there's a maximum number of tokens that are retained, with the oldest being discarded. The window is a sliding window.
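Roughly, the sliding window amounts to something like this on top of the loop above (a sketch with a hypothetical truncate_to_context helper; GPT-2's actual limit is 1024 tokens):

    MAX_CONTEXT = 1024  # GPT-2's context window

    def truncate_to_context(tokens: list[int]) -> list[int]:
        # Keep only the most recent MAX_CONTEXT tokens; older ones fall out of
        # the model's view, though they remain in the generated output.
        return tokens[-MAX_CONTEXT:]

    # Inside the loop above, the call would become:
    #     candidates = gpt2(truncate_to_context(tokens))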
I keep reading that GPT is a "smarter", more complex Markov chain: in the end, just a function spitting out the next word with some probability.
But from my experience that cannot be true - it has to learn somehow. There is an easy example to make. Tell it something that happened today and contradicts the past (I used to test this with the Qatar World Cup), then ask questions that are affected by that event, and it replies correctly. How is that possible? How does a simple sentence (the information I provide) change the probabilities for the next token by that much?
There are two kinds of knowledge at play here:
1. The trained knowledge included in the parameters of the model
2. The context of the conversation
The 'learning' you are experiencing here is due to the conversation context retaining the new facts. Historically, context windows were very short, and as the conversation continued the model would quickly forget the new facts.
More recently context windows have grown to rather massive lengths.
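As an illustrative sketch (hypothetical helper names, reusing the generate() loop from upthread), the "memory" is just the earlier turns being concatenated into every new prompt; the weights never change:

    history: list[str] = []

    def chat(user_message: str) -> str:
        # The model's weights never change; the "memory" is just the earlier
        # turns being resent as part of every new prompt.
        history.append("User: " + user_message)
        prompt = "\n".join(history) + "\nAssistant:"
        reply = generate(prompt)  # the driver loop sketched upthread
        history.append("Assistant: " + reply)
        return reply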
It's a nice demo. Unfortunately the article mixes things up in the explanation of causal masking, because the author seems to conflate aspects of training and inference. Causal masking exists, first, to prevent the model from "peeking" at future tokens during training, and second (at least for GPT-like architectures) to enforce the autoregressive property during inference. During inference we only use the last token's output anyway, and it attends to the entire input sequence, so the next token is definitely not decided from the last token's embedding alone.
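For reference, the standard causal mask looks something like this (a generic NumPy sketch, not the article's SQL):

    import numpy as np

    def causal_mask(seq_len: int) -> np.ndarray:
        # Future positions (upper triangle) get -inf before the softmax,
        # so each token can only attend to itself and earlier tokens.
        upper = np.triu(np.ones((seq_len, seq_len)), k=1)
        return np.where(upper == 1, -np.inf, 0.0)

    print(causal_mask(4))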
This is a great read, I didn't expect to scroll right back to the top as soon as I finished it the first time.
Related:
A GPT in 60 Lines of NumPy - https://news.ycombinator.com/item?id=34726115 - February 2023 (146 comments)
This is a very good article and introduction.
Fantastic article. It kept my eyes on the screen for 2 hours, without interruption. The author is a genius.
Unexpectedly insightful; it answers some of the same questions I had early on: not just “how” questions, but “why” as well. You see the pattern with softmax quite often. I wish it were taught as “differentiable argmax” rather than by giving people a formula straight away. That’s not all it is, but that’s how it’s often used.
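A quick sketch of the "differentiable argmax" view (plain NumPy, nothing specific to the article): lowering the temperature pushes softmax toward a one-hot argmax.

    import numpy as np

    def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
        scaled = logits / temperature
        exps = np.exp(scaled - scaled.max())  # subtract max for numerical stability
        return exps / exps.sum()

    logits = np.array([2.0, 1.0, 0.5])
    print(softmax(logits))                    # ~[0.63, 0.23, 0.14]
    print(softmax(logits, temperature=0.1))   # ~[1.00, 0.00, 0.00] -- almost argmax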
This is beautiful. I'd actually been going down this same rabbit hole with SQLite, though I hadn't gotten far enough to bring a neural net into it.
I'd been inspired by the makemore lecture series[0]. At the 1-hour mark or so, he switches from counting to using a neural net, which is about as far as I've gotten. Breaking it down into a relational model is actually a really great exercise.
What is this sorcery?
Plain Unicode, however, doesn't really work well with neural networks.
That is not true. See ByT5, for example.
As an illustration, let's take the word "PostgreSQL". If we were to encode it (convert to an array of numbers) using Unicode, we would get 10 numbers that could potentially be from 1 to 149186. It means that our neural network would need to store a matrix with 149186 rows in it and perform a number of calculations on 10 rows from this matrix.
What the author calls alphabet here, is typically called vocabulary. And you can just use UTF-8 bytes as your vocabulary, so you end up with 256 tokens, not 149186. That is what ByT5 does.
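A quick illustration of the difference (plain Python, not from the article):

    text = "PostgreSQL"
    code_points = [ord(c) for c in text]     # Unicode code points, can go up to 1,114,111
    utf8_bytes = list(text.encode("utf-8"))  # always 0-255, as in ByT5's vocabulary
    print(code_points)  # [80, 111, 115, 116, 103, 114, 101, 83, 81, 76]
    print(utf8_bytes)   # identical here, since every character is ASCII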
Despite this impressive achievement, the author does not appear to have the most solid theoretical background.
Some explanations (e.g. regarding UTF encoding) are not entirely correct and use uncommon terminology (e.g. alphabet instead of vocabulary).
Also, about the non-determinism issue, there was a post some time ago and that comes from the way the GPU does the calculations, something something floating point something.
So of course the algorithm is deterministic, but the real-life implementation isn't.
Floating point addition, for example, is not associative, so the order of taking a sum affects the result. If the summation were sequential and single threaded, it would be deterministic. But it happens in parallel, so timing variations affect the result.
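A tiny demonstration of that non-associativity:

    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)  # 1.0
    print(a + (b + c))  # 0.0 -- the 1.0 is lost when added to -1e16 first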
But there is probabilistic sampling that happens (see "temperature").
In this sense, I don't think it's fair to say floating point math is non-deterministic, as much as parallel computation is non-deterministic. FP behaves in unexpected ways, but the same order of operations always yields the same unexpected results (except on Pentium 1).
When talking about a set of characters, "alphabet" is a commonly used term: https://cs.stackexchange.com/search?q=string++alphabet+size
Sure, but if you're doing work in machine learning, that's generally not the terminology used, hinting that this isn't the area the author specializes in (which isn't a bad thing, but take their explanations with a grain of salt).
Electricity, cars, and gas were once upon a time a luxury as well, reserved for those who could afford them or had unique access / credentials / skills. The people who were able to simplify and spread the advanced tech to the common person became billionaires.
No. You've got to have a solid background in computer science to even begin to fully understand this article.
Even the title itself is not accessible to 99% of humans.
Count me in as one of the 99%
This sorcery didn't start a year ago. The model being described in the article is GPT-2 which was released in early 2019.