GPT in 500 Lines of SQL

Alifatisk
10 replies
9h47m

I love this. Something that started off as some kind of sorcery a year ago is now being explained so well, almost in a childish way.

Hendrikto
6 replies
7h57m

Despite this impressive achievement, the author does not appear to have the most solid theoretical background.

Some explanations (e.g. regarding UTF encoding) are not entirely correct and use uncommon terminology (e.g. alphabet instead of vocabulary).

worewood
2 replies
5h12m

Also, about the non-determinism issue: there was a post some time ago explaining that it comes from the way the GPU does the calculations (something something floating point).

So of course the algorithm is deterministic, but the real-life implementation isn't.

gugagore
1 replies
3h38m

Floating point addition, for example, is not associative, so the order of taking a sum affects the result. If the summation were sequential and single threaded, it would be deterministic. But it happens in parallel, so timing variations affect the result.

But there is probabilistic sampling that happens (see "temperature").
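For the curious, temperature sampling is easy to sketch in plain Python (an illustrative snippet, not the article's code): the logits are divided by the temperature before softmax, so low temperatures sharpen the distribution toward the top token and high temperatures flatten it.

```python
import math
import random

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature, then normalize with softmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs, rng=random):
    """Draw one index according to the given probabilities."""
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1  # guard against floating-point round-off

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, temperature=0.1)  # nearly one-hot
hot = softmax_with_temperature(logits, temperature=10.0)  # nearly uniform
```

At temperature 0.1 virtually all probability mass lands on the largest logit; at temperature 10 the three options become almost equally likely, which is why high temperatures read as "more creative".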

marginalia_nu
0 replies
3h0m

> Floating point addition, for example, is not associative, so the order of taking a sum affects the result. If the summation were sequential and single threaded, it would be deterministic. But it happens in parallel, so timing variations affect the result.

In this sense, I don't think it's fair to say floating point math is non-deterministic, as much as parallel computation is non-deterministic. FP behaves in unexpected ways, but the same order of operations always yields the same unexpected results (except on Pentium 1).
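The non-associativity is easy to demonstrate with IEEE 754 doubles; this small snippet (mine, not from the article) shows two groupings of the same three numbers producing different results:

```python
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

# The groupings differ in the last bit, so a parallel reduction whose
# grouping depends on thread timing can return different sums for the
# same inputs, even though each individual addition is deterministic.
```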

ecnahc515
0 replies
3h26m

Sure, but if you're doing work in machine learning, that's generally not the terminology used, hinting that this isn't the area the author specializes in (which isn't a bad thing, but take their explanations with a grain of salt).

dzink
0 replies
4h52m

Electricity, cars, and gas were once upon a time a luxury as well, reserved for those who could afford them or had unique access / credentials / skills. The people who were able to simplify and spread the advanced tech to the common person became billionaires.

maelito
1 replies
8h43m

> and almost in a childish way.

No. You've got to have a solid background in computer science to even begin to fully understand this article.

Even the title itself is not accessible to 99% of humans.

bookofjoe
0 replies
7h25m

Count me in as one of the 99%

pedrosorio
0 replies
1h55m

This sorcery didn't start a year ago. The model being described in the article is GPT-2 which was released in early 2019.

brainless
7 replies
16h26m

I think GPT creating GPT... creating GPT will be a thing soon. GPTception.

codetiger
5 replies
15h59m

GPT creating a better algorithm than itself is what's even more interesting.

mavamaarten
4 replies
9h38m

It might. But also, all decisions and knowledge are pretty much based on a resampling of our own language, conversations, and internet data. It might puzzle together some existing ideas that have never been combined. Or it might hallucinate a better solution by accident. But we're definitely not at the level yet where it will actively build something great.

erwincoumans
3 replies
9h11m

Self-play GPT (by bots in a rich simulation) similar to Alpha Go Zero?

codetiger
1 replies
8h28m

Games like Go have a very limited (or known) end state, so reinforcement learning or similar methods work great. However, I wonder how AI will train itself to learn human languages without being judged by humans. It's just a matter of time before someone figures it out.

erwincoumans
0 replies
1h53m

Right, a rich simulator with humans for feedback: an evolved version of online worlds with a mix of AI NPCs and real people, with the task: find the NPCs. The NPCs can train in rooms with exclusively NPCs, or mixed with people, without knowing.

wizzwizz4
0 replies
6h32m

Self-play works for Go, because the "world" (for lack of a better term) can be fully simulated. Human language talks about the real world, which we cannot simulate, so self-play wouldn't be able to learn new things about the world.

We might end up with more regularised language, and a more consistent model of the world, but that would come at the expense of accuracy and faithfulness (two things which are already lacking).

incahoots
0 replies
16h8m

Personally I like the AI dogman angle. AI trained to beat other AI (resumes tailored to beat ATS algorithms)

ianand
5 replies
15h25m

This is great. In a similar vein, I implemented GPT entirely in spreadsheet functions, with accompanying video tutorials:

https://spreadsheets-are-all-you-need.ai/

danielmarkbruce
1 replies
15h4m

Nice job. Spreadsheets are a natural way to explain an LLM. I suspect you could show training well too, by calculating the derivatives for each parameter under each training example and showing them all explicitly mapped to the relevant parameters.

ianand
0 replies
13h23m

Thank you. It's validating to hear that others feel spreadsheets are a natural and accessible way to explain LLMs. Someone asks "what do they mean by a parameter in a model?" or "what is the attention matrix?" and you can just pull it up, graphically laid out. Then they can fiddle with it and get a visceral feel for things. It also becomes easier for non-coders to do things like logit lens, which is just a few extra simple spreadsheet functions.

I actually plan to do what you describe after I do the embeddings video (but only for a “toy” neural net as a proof-of-concept introduction to gradient descent).

airstrike
1 replies
15h0m

Not only is that amazing, your video was so well done. Superb crowd work! Color me double impressed.

ianand
0 replies
13h18m

Thanks! Each one takes a surprisingly long time to make. Even figuring out how to make the explanation accessible and compelling, yet still accurate, takes a while, and then there's still the actual video to do.

alisonatwork
0 replies
10h13m

Your first video is fantastic. As someone who thinks LLMs are pretty nifty but never had a professional need to learn how they actually work, that 10 minute video taught me more than several years of reading esoteric HN comments and fluffy mainstream media on the topic. Seeing a bazillion floating point numbers all stacked up ready to be calculated across also makes it much more intuitive why this tech devours GPUs, which never really occurred to me before. Thanks for sharing.

wanderingmind
3 replies
14h27m

These marvels need to be preserved. Just posting the archive link here in case the blog is not maintained in the future.

https://archive.is/VAGzF

zarkenfrood
0 replies
13h52m

Thanks, this is a fantastic article and it would be a shame for it to be lost.

rawgabbit
0 replies
14h6m

This is very cool

slt2021
2 replies
16h0m

I can feel the SQL-Force in this young Jedi; the midichlorian level is on another level.

swasheck
1 replies
15h53m

Alex is a genius. He's worth a follow.

hsbauauvhabzb
2 replies
15h45m

I’ve completely avoided GPT and LLMs. This looks like it would generate some level of fluidity in text output, but not be able to parse and answer a question.

Are there any simple blog posts / training courses that go through how they work, or expose a toy engine in Python or similar? All the training I’ve seen so far seems oriented at how to use the platforms rather than how they actually work.

deskamess
2 replies
7h22m

In the tokenization example for "Mississippilessly", why is 'si' not part of the combined column? It appears twice in the text. My speculation is that it got collapsed out with 'iss' (a longer token). Is that right?

remram
1 replies
2h58m

Yes. At step 1 there are two 'i','s' and two 's','i', 'is' gets picked because it comes first. Then at step 7, because 'is' got formed, there are two instances of 'is','s' and only one instance of 's','i' so 'iss' gets formed.
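This trace follows the article's specific merge ordering. A generic greedy BPE sketch (hypothetical code, using most-frequent-pair selection with first-occurrence tie-breaking) happens to merge 'ss' first instead, but the mechanics are the same and 'iss' still emerges after a few rounds:

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Return the most frequent adjacent pair; ties go to the pair seen first."""
    counts = Counter(zip(tokens, tokens[1:]))
    best = max(counts.values())
    for pair in zip(tokens, tokens[1:]):
        if counts[pair] == best:
            return pair

def merge(tokens, pair):
    """Replace non-overlapping occurrences of `pair` with a single merged token."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

tokens = list("Mississippilessly")
for _ in range(3):
    tokens = merge(tokens, most_frequent_pair(tokens))
# Merging never loses characters: the tokens still spell the original word.
```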

deskamess
0 replies
39m

Thanks.

seasonalnull
1 replies
17h15m

This should be illegal

fifilura
0 replies
9h42m

Can you elaborate?

ksarw
1 replies
12h28m

Great write-up. I enjoyed reading the explanations for each piece and found them to be clear and quite thorough.

I did make the mistake, though, of clicking "+ expand source", and after seeing the (remarkable) abomination, I can sympathize with ChatGPT's "SQL is not suitable for implementing large language model..." :)

deskamess
0 replies
7h27m

I did that too and could not find a way to collapse it.

cdelsolar
1 replies
6h42m

Serious question, how do I get this smart?

anthomtb
0 replies
2h58m

No doubt the author is a super genius ex-child prodigy whizzkid who emerged from the womb with the umbilical cord diagramming Euler's formula.

For real though, and knowing this is a leading question, the author has near-on 15 years of blog posts showing complex problems being solved in SQL. Is their brain bigger than yours and mine? Maybe a little bit. Do they have a ton of experience doing things like this? Most definitely.

JonChesterfield
1 replies
5h12m

Is this an accurate representation of the GPT driver loop?

    def generate(prompt: str) -> str:
      # Transforms a string into a list of tokens.
      tokens = tokenize(prompt) # tokenize(prompt: str) -> list[int]
    
      while True:
     
        # Runs the algorithm.
        # Returns tokens' probabilities: a list of 50257 floats, adding up to 1.
        candidates = gpt2(tokens) # gpt2(tokens: list[int]) -> list[float]
     
        # Selects the next token from the list of candidates
        next_token = select_next_token(candidates)
        # select_next_token(candidates: list[float]) -> int
     
        # Append it to the list of tokens
        tokens.append(next_token)
     
        # Decide if we want to stop generating.
        # It can be token counter, timeout, stopword or something else.
        if should_stop_generating():
          break
 
      # Transform the list of tokens into a string
      completion = detokenize(tokens) # detokenize(tokens: list[int]) -> str
      return completion

because that looks a lot like a state machine implementing Shlemiel the painter's algorithm, which throws doubt on the intrinsic compute cost of the generative exercise.

NortySpock
0 replies
4h41m

I think the "context window" that people refer to with large language models means there's a maximum number of tokens that are retained, with the oldest being discarded. The window is a sliding window.
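That is correct for GPT-2, whose context length is 1024 tokens. A minimal sketch of the clipping (hypothetical helper, not the article's code):

```python
WINDOW = 1024  # GPT-2's maximum context length in tokens

def clip_to_window(tokens, window=WINDOW):
    """Keep only the most recent `window` tokens; older ones fall out of context."""
    return tokens[-window:]

history = list(range(1500))        # stand-in token ids
context = clip_to_window(history)  # the model only ever sees the tail
```

This bounds per-step cost but does not remove the repeated work JonChesterfield points out; real implementations also cache each position's key/value activations so earlier tokens are not re-processed from scratch at every step.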

101008
1 replies
1h27m

I keep reading that GPT is a "smarter", more "complex" Markov chain: in the end, just a function spitting out the next word with some probability.

But from my experience that cannot be true: it has to learn somehow. There is an easy example to make. Tell it something that happened today and contradicts the past (I used to test this with the Qatar World Cup), and then ask questions that are affected by that event, and it replies correctly. How is that possible? How does a simple sentence (the information I provide) change the probabilities for the next token that much?

Lerc
0 replies
1h21m

There are two parts of the knowledge at play here.

1. The trained knowledge included in the parameters of the model

2. The context of the conversation

The 'learning' you are experiencing here is due to the conversation context retaining the new facts. Historically, context windows were very short, and as the conversation continued the model would quickly forget the new facts.

More recently context windows have grown to rather massive lengths.

sigmoid10
0 replies
12h5m

It's a nice demo. Unfortunately, the article mixes things up in the explanation of causal masking, because the author seems to conflate aspects of training and inference. Causal masking exists, first, to prevent the model from "peeking" at future tokens during training, and second (at least for GPT-like architectures) to enforce the autoregressive aspect during inference. During inference we only use the last token's output anyway, and it attends to the entire input sequence, so the next token is definitely not decided from the last token's embedding alone.
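The mask itself is simple either way: position i may attend only to positions j ≤ i. A plain-Python illustration (not the article's SQL):

```python
def causal_mask(n):
    """mask[i][j] is True when position i may attend to position j (j <= i)."""
    return [[j <= i for j in range(n)] for i in range(n)]

mask = causal_mask(4)
# A lower-triangular matrix: row i has i + 1 True entries.
# During training it hides future tokens from every position at once;
# at inference time the final row attends to the whole sequence so far.
```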

namnhocq8
0 replies
13h27m

Tài xỉu (Vietnamese: "over/under")

lagniappe
0 replies
13h47m

This is a great read, I didn't expect to scroll right back to the top as soon as I finished it the first time.

jakjak123
0 replies
10h42m

This is a very good article and introduction.

huqedato
0 replies
8h52m

Fantastic article. It kept my eyes on the screen for 2 hours, without interruption. The author is a genius.

ein0p
0 replies
2h57m

Unexpectedly insightful; it answers some of the questions I had early on, not just "how" questions but "why" as well. You see this pattern with softmax quite often. I wish it were taught as "differentiable argmax" rather than by giving people a formula straight away. That's not all it is, but that's how it's often used.
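The "differentiable argmax" framing can be made concrete in a few lines (illustrative code, my own): as the logits are scaled up, softmax converges to a one-hot vector at the argmax, while staying smooth and differentiable at every finite scale.

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

logits = [1.0, 3.0, 2.0]
soft = softmax(logits)                      # smooth: every entry is nonzero
sharp = softmax([x * 100 for x in logits])  # nearly one-hot at index 1
```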

chrsig
0 replies
17h6m

This is beautiful. I'd actually been going down this same rabbit hole with sqlite, I hadn't gotten far enough to bring a neural net into it.

I'd been inspired by the makemore lecture series[0]. At the 1hr mark or so, he switches from counting to using a nn, which is about as far as I've gotten. Breaking it down into a relational model is actually a really great exercise.

[0] https://www.youtube.com/watch?v=PaCmpygFfXo

behnamoh
0 replies
17h10m

What is this sorcery?

Hendrikto
0 replies
8h1m

> Plain Unicode, however, doesn't really work well with neural networks.

That is not true. See ByT5, for example.

> As an illustration, let's take the word "PostgreSQL". If we were to encode it (convert to an array of numbers) using Unicode, we would get 10 numbers that could potentially be from 1 to 149186. It means that our neural network would need to store a matrix with 149186 rows in it and perform a number of calculations on 10 rows from this matrix.

What the author calls an alphabet here is typically called a vocabulary. And you can just use UTF-8 bytes as your vocabulary, so you end up with 256 tokens, not 149186. That is what ByT5 does.
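The point is easy to verify in Python: UTF-8 bytes bound the vocabulary at 256 values, at the cost of longer sequences for non-ASCII text.

```python
word = "PostgreSQL"

codepoints = [ord(ch) for ch in word]    # Unicode code points, up to 0x10FFFF
utf8_bytes = list(word.encode("utf-8"))  # always in the range 0..255

# For pure ASCII the two encodings coincide; non-ASCII text grows as
# bytes, e.g. "héllo" is 5 code points but 6 UTF-8 bytes.
```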