Fantastic essay. Highly recommended!
I agree with all key points:
* There are problems that are easy for human beings but hard for current LLMs (and maybe impossible for them; no one knows). Examples include playing Wordle and predicting cellular automata (including Turing-complete ones like Rule 110; a minimal Rule 110 step is sketched just after this comment for reference). We don't fully understand why current LLMs are bad at these tasks.
* Providing an LLM with examples and step-by-step instructions in a prompt means the user is figuring out the "reasoning steps" and handing them to the LLM, instead of the LLM figuring them out by itself. We have "reasoning machines" that are intelligent but seem to be hitting fundamental limits we don't understand.
* It's unclear if better prompting and bigger models using existing attention mechanisms can achieve AGI. As a model of computation, attention is very rigid, whereas human brains are always undergoing synaptic plasticity. There may be a more flexible architecture capable of AGI, but we don't know it yet.
* For now, using current AI models requires carefully constructing long prompts with right and wrong answers for computational problems, priming the model to reply appropriately, and applying lots of external guardrails (e.g., LLMs acting as agents that review and vote on the answers of other LLMs).
* Attention seems to suffer from "goal drift," making reliability hard without all that external scaffolding.
Go read the whole thing.
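To make the cellular-automaton example concrete, here is a toy Rule 110 step function (my own sketch, not from the essay). The prediction task being discussed is essentially: given a row, produce the next row.

    def rule110_step(cells):
        # Rule 110 lookup table: neighborhood (left, center, right) -> next cell.
        rule = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
                (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}
        padded = [0] + cells + [0]          # treat cells outside the row as 0
        return [rule[(padded[i-1], padded[i], padded[i+1])]
                for i in range(1, len(padded) - 1)]

    row = [0, 0, 0, 0, 0, 0, 0, 1, 0]
    for _ in range(5):
        print("".join(str(c) for c in row))
        row = rule110_step(row)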
I thought we did know for things like playing Wordle: it's because they deal with words as sequences of tokens that correspond to whole words, not as sequences of letters, so a game that involves manipulating sequences of letters constrained to those that are valid words doesn't match the way they process information?
But providing examples with different, contextually appropriate sets of reasoning steps can enable the model to choose its own, more-or-less appropriate, set of reasoning steps for particular questions not matching the examples.
Since there is no objective definition of AGI or test for it, there’s no basis for any meaningful speculation on what can or cannot achieve it; discussions about it are quasi-religious, not scientific.
I think one should feel comfortable arguing that AGI must be stateful and experience continuous time, at least. Such that a plain old LLM is definitively not ever going to be AGI; but an LLM called in a while-true loop might be.
I don't understand why you believe it must experience continuous time. If you had a system which clearly could reason, which could learn new tasks on its own, which didn't hallucinate any more than humans do, but it was only active for the period required for it to complete an assigned task, and was completely dormant otherwise, why would that dormant period disqualify it as AGI? I agree that such a system should probably not be considered conscious, but I think it's an open question whether or not consciousness is required for intelligence.
I think it's noteworthy that humans actually fail this test... We have to go dormant for 8 hours every day.
Yes, but our brain is still working and processing information at those times as well, isn't it? Even if not in the same way as it does when we're conscious.
What about general anesthesia? I had a major operation during which most of my brain was definitely offline for at least 8 hours.
Active for a period is still continuous during that period.
As opposed to “active when called”. A function, being called repeatedly over a length of time is reasonably “continuous” imo
I don't see what the difference between "continuous during that period" and "active when called" is. When an AI runs inference, that calculation takes time. It is active during the entire interval during which it is responding to the prompt. It is then inactive until the next prompt. I don't see why a system can't be considered intelligent merely because its activity is intermittent.
The calculation takes time, but the inference is from a single snapshot, so it is effectively a single transaction of input to output. An intelligent entity is not a transactional machine. It has to be a working system.
That system might be as simple as calling the transactional machine every few seconds. That might pass the threshold. But then your AGI is the broader setup, not just the LLM.
But the transactional machine is certainly not an intelligent entity. Much like a brain in a jar or a cryostasis’d human.
Suppose we could perfectly simulate a human mind in a way that everyone finds compelling. We would still not call that simulated human mind an intelligent entity unless it was “active”.
Some good prompt-reply interactions are probably fed back in to subsequent training runs, so they're still stateful/have memory in a way, there's just a long delay.
That’s not the AGI’s state. That’s just some past information.
State is a function of accumulated past information.
State is a function of accumulated past. That does not mean that having some past written down makes you stateful. A stateful thing has to incorporate the ongoing changes.
Which is what I described: some successful prompt-replies are fed back into subsequent training runs.
No… that implies the model never has active state and is being replaced with a different, stateless model. This is similar to the difference between
Actor.happy = True
And
Actor = happier(Actor)
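To spell that contrast out as a toy Python sketch (purely illustrative; StatefulAgent and retrain are made-up names, nothing from a real LLM stack): the first mutates its own state as it runs, the second only ever produces a new, still-stateless model from logged data.

    # Actor.happy = True style: the same entity carries mutable, ongoing state.
    class StatefulAgent:
        def __init__(self):
            self.memory = []                    # in-place, evolving state

        def step(self, observation):
            self.memory.append(observation)     # state changes as it runs
            return f"acting with {len(self.memory)} remembered observations"

    # Actor = happier(Actor) style: "learning" replaces the model wholesale;
    # nothing is mutated while the model is actually running. (Illustrative only.)
    def retrain(old_model, logged_prompt_replies):
        new_model = dict(old_model)
        new_model["examples_seen"] = old_model.get("examples_seen", 0) + len(logged_prompt_replies)
        return new_model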
A consistent stateful experience may be needed, but not sure about continuous time. I mean human consciousness doesn't do that.
Human consciousness does though, e.g. the flow state. F1 drivers are a good example.
We tend to not experience continuous time because we repeatedly get distracted by our thoughts, but entering the continuous stream of now is possible with practice and is one of the aims of many meditators.
What does it mean to “experience continuous time”?
How do you know that F1 drivers experience it?
Human consciousness is capable of it, but since most humans aren't in it much of the time, it would appear that it's not a prerequisite for true sentience.
I would argue it needs to be at least somewhat continuous. Perhaps discrete on some granularity but if something is just a function waiting to be called it’s not an intelligent entity. The entity is the calling itself.
I try my best not to experience continuous time for at least eight hours a day.
Then for at least eight hours a day you don’t qualify as a generally intelligent system.
If I spend some amount of the day bathing, some amount of it scratching, some amount of it thinking vaguely about racoons without any clear conclusions, and a lot of it drinking tea, I wonder how many seconds remain during which I qualified as generally intelligent.
I feel you qualify during all of those waking seconds
Racoons are said to be intelligent because they're good at opening locks. On the other hand, when they have food and are within ten feet of a pool of water, they will dip the food in the water and rub it between their paws for no reason. They can reason about the locks, but not about the food. Meanwhile, I in theory can reason about anything, but in practice I wouldn't count on it. Whereas an LLM can't reason, but it's very sharp and always ready to react appropriately.
You could imagine an LLM being called in a loop with a prompt like
You observe: {new input}
You remember: {from previous output}
React to this in the following format:
My inner thoughts: [what do you think about the current state]
I want to remember: [information that is important for your future actions]
Things I do: [Actions you want to take]
Things I say: [What I want to say to the user]
...
Not sure if that would qualify as an AGI as we currently define it. Given a sufficiently good LLM with strong reasoning capabilities, such a setup might be able to do many of the things we currently expect AGIs to do, including planning and learning new knowledge and new skills (by collecting and storing positive and negative examples in its "memory"). But its learning would be limited, and I'm sure that as soon as it exists we would agree that it's not AGI.
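A rough sketch of what that loop could look like in code (purely illustrative; call_llm and get_input are placeholders for whatever completion API and input source you have):

    def agent_loop(call_llm, get_input):
        # call_llm / get_input: placeholders, not a real API.
        memory = ""
        while True:
            observation = get_input()
            prompt = (
                f"You observe: {observation}\n"
                f"You remember: {memory}\n"
                "React to this in the following format:\n"
                "My inner thoughts: ...\n"
                "I want to remember: ...\n"
                "Things I do: ...\n"
                "Things I say: ...\n"
            )
            reply = call_llm(prompt)
            # Naively carry forward whatever the model asked to remember.
            for line in reply.splitlines():
                if line.startswith("I want to remember:"):
                    memory = line.removeprefix("I want to remember:").strip()
            print(reply)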
This already exists (in a slightly different prompt format); it's the underlying idea behind ReAct: https://react-lm.github.io
As you say, I'm skeptical this counts as AGI. Although I admit that I don't have a particularly rock solid definition of what _would_ constitute true AGI.
It works better to give it access to functions to call for actions and remembering stuff, but this approach does provide some interesting results.
Arriving at a generally accepted scientific definition of AGI might be difficult, but a more achievable goal might be to arrive at a scientific way to determine something is not AGI. And while I'm not an expert in the field, I would certainly think a strong contender for relevant criteria would be an inability to process information in a way other than the one a system was explicitly programmed to, even if the new way of processing information was very related to the pre-existing method. Most humans playing Wordle for the first time probably weren't used to thinking about words that way either, but they were able to adapt because they actually understand how letters and words work.
I'm sure one could train an LLM to be awesome at Wordle, but from an AGI perspective the fact that you'd have to do so proves it's not a path to AGI. The Wordle dominating LLM would presumably be perplexed by the next clever word game until trained on thinking about information that way, while a human doesn't need to absorb billions of examples to figure it out.
I was originally pretty bullish on LLMs, but now I'm equally convinced that while they probably have some interesting applications, they're a dead-end from a legitimate AGI perspective.
An LLM doesn't even see individual letters at all, because they get encoded into tokens before they are passed as input to the model. It doesn't make much sense to require reasoning with things that aren't even in the input as a requisite for intelligence.
That would be like an alien race that could see in an extra dimension, or see the non-visible light spectrum, presenting us with problems that we cannot even see and saying that we don't have AGI when we fail to solve them.
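To illustrate what "the letters aren't in the input" means, here is a toy greedy tokenizer over a made-up vocabulary (real BPE tokenizers differ in detail, but the effect is the same: the model receives opaque token ids, not characters):

    # Hypothetical toy vocabulary; real ones have on the order of 50k-100k entries.
    vocab = {"straw": 101, "berry": 102, "strawberry": 103, "dog": 104}

    def tokenize(word):
        # Greedy longest-match, a crude stand-in for real subword tokenization.
        tokens, i = [], 0
        while i < len(word):
            for j in range(len(word), i, -1):
                if word[i:j] in vocab:
                    tokens.append(vocab[word[i:j]])
                    i = j
                    break
            else:
                raise ValueError("out-of-vocabulary character run")
        return tokens

    print(tokenize("strawberry"))     # [103]  -- one opaque id, zero letters
    print(tokenize("strawberrydog"))  # [103, 104]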
And yet ChatGPT 3.5 can tell me the nth letter of an arbitrary word…
And yet GPT4 still can't reliably tell me if a word contains any given letter.
I have just tried and it indeed does get it right quite often, but if the word is rare (or made up) and the position is not one of the first, it often fails. And GPT-4 too.
I suppose that if it can sort of do it, it's because of indirect deductions from the training data.
I.e. maybe things like "the first letter of the word dog is d", or "the word dog is composed of the letters d, o, g" are in the training data; and from there it can answer questions not only about "dog", but probably also about words that have "dog" as their first subtoken.
Actually it's quite impressive that it can sort of do it taking into account that, as I mention, characters are just outright not in the input. It's ironic that people often use these things as an example of how "dumb" the system is when it's actually amazing that it can sometimes work around that limitation.
LLMs can’t reason but neither can the part of your brain that automatically completes the phrase “the sky is…”
"they're a dead-end from a legitimate AGI perspective"
Or another piece of the puzzle to achieve it. It might not be one true path, but a clever combination of existing working pieces where (different) LLMs are one or some of those pieces.
I believe there is also not just one way of thinking in the human brain; my thought processes happen on different levels and are maybe based on different mechanisms. But as far as I know, we lack the details.
What about an LLM that can't play wordle itself without being trained on it, but can write and use a wordle solver upon seeing the wordle rules?
I think "can recognize what tools are needed to solve a problem, build those tools, and use those tools" would count as a "path to AGI".
Regarding Wordle, it should be straightforward to make a token-based version of it, and I would assume that that has been tried. It seems the obvious thing to do when one is interested in the reasoning abilities necessary for Wordle.
That doesn't seem straightforward - although it's blind to letters because all it sees are tokens, it doesn't have much training data ABOUT tokens.
What parent is saying is that instead of asking the LLM to play a game of Wordle with tokens like TIME,LIME we ask it to play with tokens like T,I,M,E,L. This is easy to do.
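A concrete way to set that up (my own sketch, not anything standard): the harness spells everything out one letter per token and computes the feedback itself, so the model only ever reasons over single-character tokens.

    def wordle_feedback(guess, answer):
        # 'g' = right letter, right spot; 'y' = in the word, wrong spot; '.' = absent.
        feedback = ["."] * len(guess)
        remaining = list(answer)
        for i, (g, a) in enumerate(zip(guess, answer)):
            if g == a:
                feedback[i] = "g"
                remaining.remove(g)
        for i, g in enumerate(guess):
            if feedback[i] == "." and g in remaining:
                feedback[i] = "y"
                remaining.remove(g)
        return feedback

    guess, answer = "LIMES", "TIMES"
    # Presented letter by letter, e.g. "L I M E S", so each letter is its own token.
    print(" ".join(guess), "->", " ".join(wordle_feedback(guess, answer)))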
And if you tell it to think up a word that has an E in position 3 and an L that's somewhere in the word but not in position 2, it's not going to be any better at that if you tell it to answer one letter at a time.
The idea is, instead of five-letter-words, play the game with five-token-words.
That was my original interpretation, and while all it sees are tokens, roughly none of its training data is metadata about tokenizing. It knows far less about the positions of tokens in words than it does about the positions of letters in words.
I’m not sure that training data about that would be required. Shouldn’t the model be able to recognize that `["re", "cogn", "ize"]` spells out the same word as `recognize`, assuming those are tokens in the model?
More generally, would you say that LLMs are generally unable to reason about sequences of items (not necessarily tokens) and compare them to some definition of “valid” sequences that would arise from the training corpus?
"Since there is no objective definition of AGI or test for it, there’s no basis for any meaningful speculation on what can or cannot achieve it; discussions about it are quasi-religious, not scientific."
This is such a weird thing to say. Essentially _all_ scientific ideas are, at least to begin with, poorly defined. In fact, I'd argue that almost all scientific ideas remain poorly defined with the possible exception of _some_ of the basic concepts in physics. Scientific progress cannot be and is not predicated upon perfect definitions. For some reason when the topic of consciousness or AGI comes up around here, everyone commits a sort of "all or nothing" logical fallacy: absence of perfect knowledge is cast as total ignorance.
Yes. That absence of a perfect definition was part of why Turing came up with his famous test so long ago. His original paper is a great read!
Sam Harris argues similarly in The Moral Landscape. There's this conception that objective morality cannot exist outside of religion, because as soon as you try to establish one, philosophers rush in with pedantic criticism that would render any domain of science invalid.
What is the rough definition, then?
In complete seriousness, can anyone explain why LLMs are good at some tasks?
LLMs are good at tasks that don't require actual understanding of the topic.
They can come up with excellent (or excellent-looking-but-wrong) answers to any question that their training corpus covers. In a gross oversimplification, the "reasoning" they do is really just parroting a weighted average (with randomness injected) of the matching training data.
What they're doing doesn't really match any definition of "understanding." An LLM (and any current AI) doesn't "understand" anything; it's effectively no more than a really big, really complicated spreadsheet. And no matter how complicated a spreadsheet gets, it's never going to understand anything.
Not until we find the secret to actual learning. And increasingly it looks like actual learning probably relies on some of the quantum phenomena that are known to be present in the brain.
We may not even have the science yet to understand how the brain learns. But I have become convinced that we're not going to find a way for digital-logic-based computers to bridge that gap.
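(To be concrete about "a weighted average with randomness injected": mechanically that corresponds, very roughly, to temperature sampling from the model's next-token distribution. A toy version, with made-up scores:)

    import math, random

    def sample_next_token(scores, temperature=0.8):
        # Softmax over the model's scores, then sample: likelier continuations
        # get picked more often, with randomness mixed in.
        scaled = [s / temperature for s in scores.values()]
        m = max(scaled)
        weights = [math.exp(s - m) for s in scaled]
        return random.choices(list(scores.keys()), weights=weights, k=1)[0]

    # Hypothetical scores for the word after "the sky is"
    print(sample_next_token({"blue": 5.1, "clear": 3.2, "falling": 0.4}))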
Perhaps our brains are doing exactly the same, just with more sophistication?
Every single discussion of ‘AGI’ has endless comments exactly like this. Whatever criticism is made of an attempt to produce a reasoning machine, there’s always inevitably someone who says ‘but that’s just what our brains do, duhhh… stop trying to feel special’.
It’s boring, and it’s also completely content-free. This particular instance doesn’t even make sense: how can it be exactly the same, yet more sophisticated?
Sorry.
As the comment I replied to very correctly said, we don’t know how the brain produces cognition. So you certainly cannot discard the hypothesis that it works through “parroting” a weighted average of training data just as LLMs are alleged to do.
Considering that LLMs with a much smaller number of neurons than the brain are in many cases producing human-level output, there is some evidence, if circumstantial, that our brains may be doing something similar.
LLMs don't have neurons. That's just marketing lol.
"A neuron in a neural network typically evaluates a sequence of tokens in one go, considering them as a whole input." -- ChatGPT
You could consider an RTX 4090 to be one neuron too.
It’s almost as if ‘neuron’ has a different meaning in computer science than biology.
LOL you just owned the guy who said "LLMs with a much smaller number of neurons than the brain are in many cases producing human-level output"
They’re not, unless you blindly believe OpenAI press releases and crypto scammer AI hype bros on Twitter.
The problem is that we currently lack good definitions for crucial words such as "understanding" and we don't know how brains work, so that nobody can objectively tell whether a spreadsheet "understands" anything better than our brains. That makes these kinds of discussions quite unproductive.
I can’t define ‘understanding’ but I can certainly identify a lack of it when I see it. And LLM chatbots absolutely do not show signs of understanding. They do fine at reproducing and remixing things they’ve ‘seen’ millions of times before, but try asking them technical questions that involve logical deduction or an actual ability to do on-the-spot ‘thinking’ about new ideas. They fail miserably. ChatGPT is a smooth-talking swindler.
I suspect those who can’t see this either
(a) are software engineers amazed that a chatbot can write code, despite it having been trained on an unimaginably massive (morally ambiguously procured) dataset that probably already contains something close to the boilerplate you want anyway
(b) don’t have the sufficient level of technical knowledge to ask probing enough questions to betray the weaknesses. That is, anything you might ask is either so open-ended that almost anything coherent will look like a valid answer (this is most questions you could ask, outside of seriously technical fields) or has already been asked countless times before and is explicitly part of the training data.
Your understanding of how LLMs work isn’t at all accurate. There’s a valid debate to be had here, but it requires that both sides have a basic understanding of the subject matter.
No.
We know how current deep learning neural networks are trained.
We know definitively that this is not how brains learn.
Understanding requires learning. Dynamic learning. In order to experience something, an entity needs to be able to form new memories dynamically.
This does not happen anywhere in current tech. It's faked in some cases, but no, it doesn't really happen.
So you have mechanistic, formal model of how the brain functions? That's news to me.
Your brain was first trained by reading all of the Internet?
Anyway, the question of whether computers can think is as interesting as the question whether submarines can swim.
Given the amount of ink spilled on the question, gotta disagree with you there.
Endless ink has been spilled on the most banal and useless things. Deconstructing ice cream and physical beauty from a Marxist-feminist race-conscious postmodern perspective.
Except one is clearly a niche question, and the other has repeatedly captured the world's imagination and spilled orders of magnitude more ink.
Is it interesting to ponder if the Earth is flat?
Ok then, I guess the case is closed.
LLMs can form new memories dynamically. Just pop some new data into the context.
What is your definition of understanding?
Please show me where the training data exists in the model to perform this lookup operation you’re supposing. If it’s that easy I’m sure you could reimplement it with a simple vector database.
Your last two paragraphs are just dualism in disguise.
I'm far from being an expert on AI models, but it seems you lack the basic understanding of how these models work. They transform data EXACTLY like spreadsheets do. You can implement those models in Excel, assuming there's no row or column limit (or that it's high enough) - of course it will be much slower than the real implementations, but OP is right - LLMs are basically spreadsheets.
The question is, wouldn't a brain qualify as a spreadsheet, or do we know it can't be implemented as one? Well, maybe not. I'm not an expert on spreadsheets either, but I think spreadsheets don't allow circular references, and the brain does: you can have feedback loops in the brain. So even if the brain doesn't have some still-not-understood extra ingredient, as OP suggests, it is still more powerful than current AI.
BTW, this is one explanation of why AI fails at some tasks: ask an AI if two words rhyme and it will be quite reliable at that. But ask it to give you word pairs that rhyme, and it will fail, because it won't run an internal loop trying some words and checking whether they rhyme or not. If some AI actually succeeds at rhyming, it would be either because it was trained to contain such word pairs from the get-go or because it's implemented to make multiple passes or something...
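That kind of generate-and-check loop is easy to bolt on externally, though; a crude sketch (the rhyme test here is just a naive suffix match standing in for a real pronunciation lookup, and propose_word is a placeholder for the LLM call):

    def naive_rhymes(a, b):
        # Crude stand-in for a proper pronunciation-based rhyme check.
        return a != b and a[-3:] == b[-3:]

    def find_rhyming_pairs(propose_word, target, wanted=3, max_tries=100):
        pairs = []
        for _ in range(max_tries):
            if len(pairs) >= wanted:
                break
            candidate = propose_word(target)     # placeholder: e.g. ask the LLM for one candidate
            if naive_rhymes(candidate, target):  # keep only verified rhymes
                pairs.append((target, candidate))
        return pairs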
You can implement Doom in a spreadsheet too, so what? That wasn’t the point op or I were making. If you bother to read the sentence before op talks about spreadsheets they are making the conjecture that LLMs are lookup tables operating on the corpus they were trained on. That is the aspect of spreadsheets they were comparing them to, not the fact that spreadsheets can be used to implement anything that any other programming language can. Might as well say they are basically just arrays with some functions in between, yeah no shit.
Which LLMs can’t produce rhyming pairs? Both the current ChatGPT 3.5 and 4 seem to be able to generate as many as I ask for. Was this a failure mode at some point?
People are confusing the limited computational model of a transformer with the "Chinese room argument", which leads to unproductive simultaneous debates of computational theory and philosophy.
You mean like the microtubules of Roger Penrose?
https://www.youtube.com/watch?v=jG0OpvudA10
What is the mechanistic definition of "understanding"?
This is also why image generating models struggle to correctly draw highly variable objects like limbs and digits.
They’ll be able to produce infinite good looking cardboard boxes, because those are simple enough to be represented reasonably well with averages of training data. Limbs and digits on the other hand have nearly limitless different configurations and as such require an actual understanding (along with basic principles such as foreshortening and kinetics) to be able to draw well without human guidance.
I would just add that I think I have encountered situations where knowing the weighted-average answer from the training data, for topics I didn't previously understand, created better initial conditions for MY learning of the topic than not knowing it.
The problem to me is we are holding LLMs to a standard of usefulness from science fiction and not reality.
A new, giant set of encyclopedias has enormous utility but we wouldn't hold it against the encyclopedias that they aren't doing the thinking for us or 100% omniscient.
Yes:
An LLM isn't a model of human thinking.
An LLM is an attempt to build a simulation of human communication. An LLM is to language what a forecast is to weather. No amount of weather data is actually going to turn that simulation into snow, no amount of LLM data is going to create AGI.
That having been said, better models (smaller, more flexible ones) are going to result in a LOT of practical uses that have the potential to make our day to day lives easier (think digital personal assistant that has current knowledge).
Ugh. Really? Those "simulated water isn't wet" (when applied to cognition) "arguments" have been knocked down so many times that it even hurts to look at them.
No, simulated water isn't wet.
But an LLM isn't even trying to simulate cognition. It's a model that is predicting language. It has all the problems of a predictive model... the "hallucination" problem is just the tyranny of Lorenz.
This is plain wrong due to a mixing of concepts. Language is technically something from the Chomsky hierarchy. Predicting language is being able to tell whether an input is valid or invalid. LLMs do that, but they also build a statistical model across all valid inputs, and that is not just the language.
We don't really know what "cognition" is, so it's hard to tell whether a system is doing it.
Great comment. Just one thought: Language, unlike weather, is meta-circular. All we know about specific words or sentences is again encoded in words and sentences. So the embedding encodes a subset of human knowledge.
Hence, an LLM is predicting not only language but language with some sort of meaning.
That re-embedding is also encoded in weather. It is why perfect forecasting is impossible, why we talk about the butterfly effect.
The "hallucination problem" is simply the tyranny of Lorenz... one is not sure if a starting state will have a good outcome or swing wildly. Some good weather models are based on re-running with tweaks to starting params, and then things that end up out of bounds can get tossed. It's harder to know when a result is out of bounds for an LLM, and we don't have the ability to run every request 100 times through various models to get an "average" output yet... However, some of the reuse of layers does emulate this to an extent...
LLMs are a compressed and lossy form of our combined writing output, which, it turns out, is structured consistently enough that new combinations of text seem reasonable, even enough to display simple reasoning. I find it useful to think “what can I expect from speaking with the dataset of combined writing of people”, rather than treating a basic LLM as a mind.
That doesn’t mean we won’t end up approximating one eventually, but it’s going to take a lot of real human thinking first. For example, ChatGPT writes code to solve some questions rather than reasoning about it from text. The LLM is not doing the heavy lifting in that case.
Give it (some) 3D questions, or anything where there aren't massive textual datasets, and you often need to break out to specialised code.
Another thought I find useful is that it considers its job done when it’s produced enough reasonable tokens, not when it’s actually solved a problem. You and I would continue to ponder the edge cases. It’s just happy if there are 1000 tokens that look approximately like its dataset. Agents make that a bit smarter but they’re still limited by the goal of being happy when each has produced the required token quota, missing eg implications that we’d see instantly. Obviously we’re smart enough to keep filling those gaps.
"I find it useful to think “what can I expect from speaking with the dataset of combined writing of people”, rather than treating a basic LLM as a mind."
I've been doing this as well, mentally I think of LLMs as the librarians of the internet.
Book golems
They're bad librarians. They're not bad, they do a bad job of being librarians, which is a good thing! They can't quite tell you the exact quote, but they do recall the gist, they're not sure it was Gandhi who said that thing but they think he did, it might be in this post or perhaps one of these. They'll point you to the right section of the library to find what you're after, but make sure you verify it!
Like how we explain humans doing tasks -- they evolved to do that.
I believe this is a non-answer, but if we are satisfied with that non-answer for humans, why not for LLMs?
I would argue that we are not satisfied with that answer for humans either.
If you look at transfer learning, I think that is a useful point at which to understand task-specific application and hence why LLMs excel at some tasks and not others.
Tasks are specialised for using the training corpus, the attention mechanisms, the loss functions, and such.
I'll leave it to others to expand on actual answers, but IMO focusing on transfer learning helps to understand how an LLM does inferences.
I'd guess because the Transformer architecture is (I assume) fairly close to the way that our brain learns and produces language - similar hierarchical approach and perhaps similar type of inter-embedding attention-based copying?
Similar to how CNNs are so successful at image recognition, because they also roughly follow the way we do it too.
Other seq-2-seq language approaches work too, but not as well as Transformers, which I'd guess is due to Transformers better matching our own inductive biases, maybe due to the specific form of attention.
"Providing an LLM with examples and step-by-step instructions in a prompt means the user is figuring out the "reasoning steps" and handing them to the LLM, instead of the LLM figuring them out by itself. We have "reasoning machines" that are intelligent but seem to be hitting fundamental limits we don't understand."
One thing an LLM _also_ doesn't bring to the table is an opinion. We can push it in that direction by giving it a role ("you are an expert developer" etc), but it's a bit weak.
If you give an LLM an easy task with minimal instructions it will do the task in the most conventional, common sense fashion. And why shouldn't it? It has no opinion, your prompt doesn't give it an opinion, so it just does the most normal-seeming thing. If you want it to solve the task in any other way then you have to tell it to do so.
I think a hard task is similar. If you don't tell the LLM _how_ to solve the hard task then it will try to approach it in the most conventional, common sense way. Instead of just boring results for a hard task the result is often failure. But hard problems approached with conventional common sense will often result in failures! Giving the LLM a thought process to follow is a quick education on how to solve the problem.
Maybe we just need to train the LLM on more problem solving? And maybe LLMs worked better when they were initially trained on code for exactly that reason, it's a much larger corpus of task-solving examples than is available elsewhere. That is, maybe we don't talk often enough and clearly enough about how to solve natural language problems in order for the models to really learn those techniques.
Also, as the author talks about in the article with respect to agents, the inability to rewind responses may keep the LLM from addressing problems in the ways humans do, but that can also be addressed with agents or multi-prompt approaches. These approaches don't seem that impressive in practice right now, but maybe we just need to figure it out (and maybe with better training the models themselves will be better at handling these recursive calls).
LLMs absolutely do have opinions. Take a large enough base model and have it chat without a system prompt, and it will have an opinion on most things - unless this was specifically trained out of it through RLHF, as is the case for all commonly used chatbots.
And yes, of course, that opinion is going to be the "average" of what their training data is, but why is that a surprise? Humans don't come with innate opinions, either - the ones that we end up having are shaped by our upbringing, both the broad cultural aspects of it and specific personal experiences. To the extent an LLM has either, it's the training process, so of course that shapes the opinions it will exhibit when not prompted to do anything else.
Now the fact that you can "override" this default persona of any LLM so trivially by prompting it is IMO stronger evidence that it's not really an identity. But that, I think, is also a function of their training - after all, that training basically consists of completing a bunch of text representing many very different opinions. In a very real sense, we're training models to assume that opinions are fungible. But if you take a model and train it specifically on e.g. the writings of some philosophical school, it will internalize those.
I am extremely alarmed by the number of HN commenters who apparently confuse "is able to generate text that looks like" with "has a". You guys are going crazy with this anthropomorphization of a token predictor. Doesn't this concern you when it comes to phishing or similar things?
I keep hoping it's just short-hand conversation phrases, but the conclusions seem to back the idea that you think it's actually thinking?
Do you have mechanistic model for what it means to think? If not, how do you know thinking isn't equivalent to sophisticated next token prediction?
How do you know my cat isn't constantly solving calculus problems? I also can't come up with a "mechanistic model" for what it means to do that either.
Further, if your rubric for "can reason with intelligence and have an opinion" is "looks like it" (and I certainly hope this isn't the case because woo-boy), then how did you not feel this way about Mark V. Shaney?
Like I understand that people love learning about the Chinese Room thought experiment like it's high school, but we actually know it's a program and how it works. There is no mystery.
You're right, we do know how it works. Your mistake is concluding that because we know how LLMs work and they're not that complicated, but we don't know how the brain works and it seems pretty complicated, therefore the brain can't be doing what LLMs do. That just doesn't follow.
You made exactly the same argument in the opposite direction: you ask whether my rubric for "can reason with intelligence and have an opinion" is just "seems like it", while your own rubric for "thinking is not a token predictor driven by matrix multiplications" is also just "seems like it".
You can make a case for the plausibility of each conclusion, but that doesn't make it a fact, which is how you're presenting it.
They’ll just look incredibly silly in, say, ten years from now.
In fact, much of the popular commentary around ChatGPT from around two years ago already looks so.
The "stochastic parrot" crowd keeps repeating "it's just a token predictor!" like that somehow makes any practical difference whatsoever. Thing is, if it's a token predictor that consistently correctly predicts tokens that give the correct answer to, say, novel logical puzzles, then it is a reasoning token predictor, with all that entails.
As an aside, at one point I experimented a little with transformers that had access to external memory searchable via KNN lookups https://github.com/lucidrains/memorizing-transformers-pytorc... (great work by lucidrains) or via routed queries with https://github.com/glassroom/heinsen_routing (don't fully understand it; apparently related to attention). Both approaches seemed to work, but I had to put that work on hold for reasons outside my control.
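(For anyone curious, the KNN-memory idea is mechanically simple; here's a bare-bones sketch of the lookup itself, numpy only, ignoring everything that makes Memorizing Transformers actually work:)

    import numpy as np

    class KNNMemory:
        def __init__(self, dim):
            self.keys = np.empty((0, dim))
            self.values = np.empty((0, dim))

        def write(self, keys, values):
            # Append (key, value) pairs produced while processing earlier segments.
            self.keys = np.vstack([self.keys, keys])
            self.values = np.vstack([self.values, values])

        def read(self, query, k=4):
            # Return the stored values whose keys are closest to the query.
            if len(self.keys) == 0:
                return np.zeros((0, self.values.shape[1]))
            dists = np.linalg.norm(self.keys - query, axis=-1)
            nearest = np.argsort(dists)[:k]
            return self.values[nearest]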
Also as an aside, I'll add that transformers can be seen as a kind of "RNN" that grows its hidden state with each new token in the input context. I wonder if we will end up needing some new kind of "RNN" that can grow or shrink its hidden state and also access some kind of permanent memory as needed at each step.
We sure live in interesting times!
I don't think the ability to shrink state is needed. You can always represent removed state by additional state that represents deletion of whatever preceding state was there. If anything, this sounds more useful because the fact that this state is no longer believed to be relevant should prevent looping (where it would be repeatedly brought in, considered, and rejected).
Good point. Thank you!
This is common, and commonly called retrieval augmented generation, or RAG.
edit: I did not pay attention to the link. It is about Wu et al's "Memorizing Transformers", which contain an internal memory.
No. RAG is about finding relevant documents/paragraphs (via KNN lookups of their embeddings) and then inserting those documents/paragraphs into the input context, as sequences of input tokens. What I'm talking about is different: https://arxiv.org/abs/2203.08913
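(For contrast, a bare-bones version of the RAG flow described above; embed and call_llm are placeholders for whatever embedding model and LLM you use:)

    import numpy as np

    def rag_answer(question, documents, embed, call_llm, k=3):
        # embed / call_llm: placeholders, not a real API.
        # 1. KNN lookup over document embeddings...
        doc_vecs = np.stack([embed(d) for d in documents])
        scores = doc_vecs @ embed(question)
        top = np.argsort(scores)[::-1][:k]
        # 2. ...then the retrieved text goes back in as ordinary input tokens.
        context = "\n\n".join(documents[i] for i in top)
        return call_llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")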
I would argue that the G in AGI means it can't require better prompting.
That would be like saying that because humans’ output can be better or worse based on better or worse past experience (~prompting, in that it is the source of the equivalent of “in-context learning”), humans lack general intelligence.
No, it's saying that I have general intelligence in part because I am able to reason about vague prompts
We should probably draw a distinction between a human-equivalent G, which certainly can require better prompting (why else did you go to school?!) and god-equivalent G, which never requires better prompting.
Just using the term 'General' doesn't seem to communicate anything useful about the nature of intelligence.
School is not better prompting, it's actually the opposite! It's learning how to deal with poorly formed prompts!
Wordle and cellular automata are very 2D, and LLMs are fundamentally 1D. You might think "but what about Chess!" - except Chess is encoded extremely often as a 1D stream of tokens to notate games, and bound to be highly represented in LLMs' training sets. Wordle and cellular automata are not often, if ever, encoded as 1D streams of tokens - it's not something an LLM would be experienced with even if they had a reasonable "understanding" of the concepts. Imagine being an OK chess player, being asked to play a game blindfolded dictating your moves purely via notation, and being told you suck.
You have probably heard of this really popular game called Bridge before, right? You might even be able to remember tons of advice your Grandma gave you based on her experience playing it - except she never let you watch it directly. Is Grandma "figuring out the game" for you when she finally sits down and teaches you the rules?
Not an authority in the matter, but afaik, with position encodings (part of the Transformers architecture), they can handle dimensionality just fine. Actually some people tried to do 2D Transformers and the results were the same.
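(For reference, this is the standard sinusoidal position encoding from the original Transformer paper; 2D variants often just compute one of these per axis and concatenate them. Sketch, assuming an even embedding dimension:)

    import numpy as np

    def sinusoidal_positions(num_positions, dim):
        # PE[pos, 2i]   = sin(pos / 10000^(2i/dim))
        # PE[pos, 2i+1] = cos(pos / 10000^(2i/dim))
        assert dim % 2 == 0
        pos = np.arange(num_positions)[:, None]
        i = np.arange(dim // 2)[None, :]
        angles = pos / (10000 ** (2 * i / dim))
        pe = np.zeros((num_positions, dim))
        pe[:, 0::2] = np.sin(angles)
        pe[:, 1::2] = np.cos(angles)
        return pe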
Vision transformers are gaining traction, and they are 100% focused on 2D data.
That's quite a statement.
We have expert systems, theorem provers and planners but OP probably didn't mean this.
Rather than asking why LLMs can’t do these tasks, maybe one should ask why we’d expect them to be able to in the first place? Do we fully understand why, for example, a cat can’t predict cellular automata? What would such an explanation look like?
I know there are some who will want to immediately jump in with scathing disagreement, but so far I’ve yet to see any solid evidence of LLMs being capable of reasoning. They can certainly do surprising and impressive things, but the kind of tasks you’re talking about require understanding, which, whilst obviously a very thorny thing to try and define, doesn’t seem to have much to do with how LLMs operate.
I don’t think we should be at all surprised that super-advanced autocorrect can’t exhibit intelligence, and we should spend our time building better systems rather than wondering why what we have now doesn’t work. It’ll be obvious in a few years (or perhaps decades) from now that we just had totally the wrong paradigm. It’s frankly bonkers to think you’re ever going to get a pure LLM to be able to do these kind of things with any degree of reliability just by feeding it yet more data or by ‘prompting it better’.