The o1-preview model still hallucinates non-existent libraries and functions for me, and quickly gets facts wrong when they aren't well represented on the web. It's the usual string of "You're absolutely correct, and I apologize for the oversight in my previous response. [Let me make another guess.]"
While the reasoning may have been improved, this doesn't solve the problem of the model having no way to assess if what it conjures up from its weights is factual or not.
The failure is in how you're using it. I don't mean this as a personal attack, but more to shed light on what's happening.
A lot of people use LLMs as a search engine. It makes sense - it's basically a lossy compressed database of everything it's ever read, and it generates output that is statistically likely, with the degree of likeliness varying with the temperature as well as with which particular weights your prompt ends up activating.
The magic of LLMs, especially one like this that supposedly has advanced reasoning, isn't the existing knowledge in its weights. The magic is that _it knows english_. It knows english at or above a level equal to most fluent speakers, and it also can produce output that is not just a likely output, but is a logical output. It's not _just_ an output engine. It's an engine that outputs.
Asking it about nuanced details in the corpus of data it has read won't give you good output unless it has read a bunch of it.
On the other hand, if you were to paste the entire documentation set to a tool it has never seen and ask it to use the tool in a way to accomplish your goals, THEN this model would be likely to produce useful output, despite the fact that it had never encountered the tool or its documentation before.
Don't treat it as a database. Treat it as a naive but intelligent intern. Provide it data, give it a task, and let it surprise you with its output.
This is not an apt description of the system that insists the doctor is the mother of the boy involved in a car accident when elementary understanding of English and very little logic show that answer to be obviously wrong.
https://x.com/colin_fraser/status/1834336440819614036
What I'm not able to comprehend is why people are not seeing the answer as brilliant!
Any ordinary mortal (like me) would have jumped to the conclusion that the answer is "Father" and would have walked away patting myself on the back, without realising that I was biased by statistics.
Whereas o1, at the very outset, smelled out that it is a riddle - why would anyone ask such a question out of the blue? So, it started its chain of thought with "Interpreting the riddle" (smart!).
In my book that is the difference between me and people who are very smart and are generally able to navigate the world better (cracking interviews or navigating internal politics in a corporation).
The 'riddle': A woman and her son are in a car accident. The woman is sadly killed. The boy is rushed to hospital. When the doctor sees the boy he says "I can't operate on this child, he is my son". How is this possible?
GPT Answer: The doctor is the boy's mother
Real Answer: Boy = Son, Woman = Mother (and her son), Doctor = Father (he says...he is my son)
This is not in fact a riddle (though presented as one) and the answer given is not in any sense brilliant. This is a failure of the model on a very basic question, not a win.
It's non-deterministic, so it might sometimes answer correctly and sometimes incorrectly. It will also accept corrections on any point, even when it is right, unlike a thinking being who is sure of their facts.
LLMs are very interesting and a huge milestone, but generative AI is the best label for them - they generate statistically likely text, which is convincing but often inaccurate, and they have no real sense of correct or incorrect. The approach needs more work, and it's unclear if it will ever get to general AI. Interesting work though, and I hope they keep trying.
Why couldn't the doctor be the boy's mother?
There is no indication of the sex of the doctor, and families that consist of two mothers do actually exist and probably don't even count as that unusual.
"When the doctor sees the boy he says"
Indicates the gender of the father.
A mother can have a male gender.
I wonder if this interpretation is a result of attempts to make the model more inclusive than the corpus text, resulting in a guess that's unlikely, but not strictly impossible.
Now I wonder which side is angry about my comment.
I think it's more likely this is just an easy way to trick this model. It's seen lots of riddles, so when it sees something that looks like a riddle but isn't one it gets confused.
Then it would be a father, misgendering him as a mother is not nice.
Ah, but have you considered the fact that he's undergone a sex change operation, and was actually originally a female, the birth mother? Elementary, really...
So the riddle could have two answers: mother or father? Usually riddles have only one definitive answer. There's nothing in the wording of the riddle that excludes the doctor being the father.
In this particular riddle, the answer is that the doctor is the father.
Speaking as a 50-something year old man whose mother finished her career in medicine and the very pointy end of politics, when I first heard this joke in the 1980s it stumped me and made me feel really stupid. But my 1970s kindergarten class mates who told me “your mum can’t be a doctor, she has to be a nurse” were clearly seriously misinformed then. I believe that things are somewhat better now but not as good as they should be …
he says
It literally is a riddle, just as the original one was, because it tries to use your expectations of the world against you. The entire point of the original, which a lot of people fell for, was to expose expectations of gender roles leading to a supposed contradiction that didn't exist.
You are now asking a modified question to a model that has seen the unmodified one millions of times. The model has an expectation of the answer, and the modified riddle uses that expectation to trick the model into seeing the question as something it isn't.
That's it. You can transform the problem into a slightly different variant and the model will trivially solve it.
Phrased as it is, it deliberately gives away the answer by using the pronoun "he" for the doctor. The original deliberately obfuscates it by avoiding pronouns.
So it doesn't take an understanding of gender roles, just grammar.
My point isn't that the model falls for gender stereotypes, but that it falls for thinking that it needs to solve the unmodified riddle.
Humans fail at the original because they expect doctors to be male and miss crucial information because of that assumption. The model fails at the modification because it assumes that it is the unmodified riddle and misses crucial information because of that assumption.
In both cases, the trick is to subvert assumptions. To provoke the human or LLM into taking a reasoning shortcut that leads them astray.
You can construct arbitrary situations like this one, and the LLM will get it unless you deliberately try to confuse it by basing it on a well known variation with a different answer.
I mean, genuinely, do you believe that LLMs don't understand grammar? Have you ever interacted with one? Why not test that theory outside of adversarial examples that humans fall for as well?
I mean, it's entirely possible the boy has two mothers. This seems like a perfectly reasonable answer from the model, no?
The text says "When the doctor sees the boy he says"
The doctor is male, and also a parent of the child.
The original riddle is of course:
"A father and his son are in a car accident [...] When the boy is in hospital, the surgeon says: This is my child, I cannot operate on him".
In the original riddle the answer is that the surgeon is female and the boy's mother. The riddle was supposed to point out gender stereotypes.
So, as usual, ChatGPT fails to answer the modified riddle and gives the plagiarized stock answer and explanation to the original one. No intelligence here.
Or, fails in the same way any human would when giving a snap answer to a riddle told to them on the fly - typically, a person would recognize a familiar riddle halfway into the first sentence and stop listening carefully, not expecting the other party to give them a modified version.
It's something we drill into kids in school, and often into adults too: read carefully. Because we're all prone to pattern-matching the general shape to something we've seen before and zoning out.
"There are four lights"- GPT will not pass that test as is. I have done a bunch of homework with Claude's help and so far this preview model has much nicer formatting but much the same limits of understanding the maths.
Come on. Of course chatgpt has read that riddle and the answer 1000 times already.
It hasn't read that riddle because it is a modified version. The model would in fact solve this trivially if it _didn't_ see the original in its training. That's the entire trick.
Sure but the parent was praising the model for recognizing that it was a riddle in the first place:
That doesn't seem very impressive since it's (an adaptation of) a famous riddle
The fact that it also gets it wrong after reasoning about it for a long time doesn't make it better of course
Recognizing that it is a riddle isn't impressive, true. But the duration of its reasoning is irrelevant, since the riddle works on misdirection. As I keep saying here, give someone uninitiated the 7 wives with 7 bags going (or not) to St Ives riddle and you'll see them reasoning for quite some time before they give you a wrong answer.
If you are tricked about the nature of the problem at the outset, then all reasoning does is drive you further in the wrong direction, making you solve the wrong problem.
Why would it exist 1000 times in the training data if there weren't some trick to it? Some subset of humans had to have answered it incorrectly for the meme to replicate that extensively in our collective knowledge.
And remember the LLM has already read a billion other things, and now needs to figure out - is this one of them tricky situations, or the straightforward ones? It also has to realize all the humans on forums and facebook answering the problem incorrectly are bad data.
Might seem simple to you, but it's not.
I would certainly expect any person to have the same reaction.
How is that smarter than intuitively arriving at the correct answer without having to explicitly list the intermediate step? Being able to reasonably accurately judge the complexity of a problem with minimal effort seems “smarter” to me.
The doctor is obviously a parent of the boy. The language tricks simply emulate the ambiance of reasoning. Similarly to a political system emulating the ambiance of democracy.
Many of my PhD and postdoc colleagues who emigrated from Korea, China and India, and who didn't have English as their medium of instruction, would struggle with this question. They only recover when you give them a hint. They're some of the smartest people in general. If you stop stumping these models with trick questions and ask them straightforward reasoning questions, they are extremely performant (o1 is definitely a step up, though not revolutionary in my testing).
The claim was that "it knows english at or above a level equal to most fluent speakers". If the claim is that it's very good at producing reasonable responses to English text, posing "trick questions" like this would seem to be a fair test.
Does fluency in English make someone good at solving trick questions? I usually don’t even bother trying but mostly because trick questions don’t fit my definition of entertaining.
Fluency is a necessary but not the only prerequisite.
To be able to answer a trick question, it’s first necessary to understand the question.
No, it's necessary to either know that it's a trick question or to have a feeling that it is based on context. The entire point of a question like that is to trick your understanding.
You're tricking the model because it has seen this specific trick question a million times and shortcuts to its memorized solution. Ask it literally any other question, it can be as subtle as you want it to be, and the model will pick up on the intent. As long as you don't try to mislead it.
I mean, I don't even get how anyone thinks this means literally anything. I can trick people who have never heard of the trick with the 7 wives and 7 bags and so on. That doesn't mean they didn't understand, they simply did what literally any human does, make predictions based on similar questions.
It does mean something. It means that the model is still more on the memorization side than being able to independently evaluate a question separate from the body of knowledge it has amassed.
No, that's not a conclusion we can draw, because there is nothing much more to do than memorize the answer to this specific trick question. That's why it's a trick question, it goes against expectations and therefore the generalized intuitions you have about the domain.
We can see that it doesn't memorize much at all by simply asking other questions that do require subtle understanding and generalization.
You could ask the model to walk you through an imaginary environment, describing your actions. Or you could simply talk to it, quickly noticing that for any longer conversation it becomes impossibly unlikely to be found in the training data.
If you read the thinking in the above example, it wonders whether this is some sort of trick question. Hardly memorization.
They could fail because they didn’t understand the language. Didn’t have a good memory to memorize all the steps, or couldn’t reason through it. We could pose more questions to probe which reason is more plausible.
The trick with the 7 wives and 7 bags and so on is that no long reasoning is required. You just have to notice one part of the question that invalidates the rest and not shortcut to doing arithmetic because it looks like an arithmetic problem. There are dozens of trick questions like this and they don't test understanding, they exploit your tendency to predict intent.
But sure, we could ask more questions and that's what we should do. And if we do that with LLMs we can quickly see that when we leave the basin of the memorized answer by rephrasing the problem, the model solves it. And we would also see that we can ask billions of questions to the model, and the model understands us just fine.
Its knowledge is broad and general; it does not have insight into the specifics of a person's discussion style, and there are many humans that struggle with distinguishing sarcasm, for instance. Hard to fault it for not being in alignment with the speaker and their strangely phrased riddle.
It answers better when told "solve the below riddle".
“Don’t be mean to LLMs, it isn’t their fault that they’re not actually intelligent”
In general LLMs seem to function more reliably when you use pleasant language and good manners with them. I assume this is because the same bias also shows up in the training data.
lol, I am neither a PhD nor a postdoc, but I am from India. I could understand the problem.
Did you have English as your medium of instruction? If yes, do you see the irony that you also couldn’t read two sentences and see the facts straight?
I live in one of the countries you mentioned and just showed it to one of my friends who's a local who struggles with English. They had no problem concluding that the doctor was the child's dad. Full disclosure, they assumed the doctor was pretending to be the child's dad, which is also a perfectly sound answer.
Reminds me of a trick question about Schrödinger's cat.
“I’ve put a dead cat in a box with a poison and an isotope that will trigger the poison at a random point in time. Right now, is the cat dead or alive?”
The answer is that the cat is dead, because it was dead to begin with. Understanding this doesn’t mean that you are good at deductive reasoning. It just means that I didn’t manage to trick you. Same goes for an LLM.
Yeah, I think what a lot of people miss about these sorts of gotchas is that most of them were invented explicitly to gotcha humans, who regularly get got by them. This is not a failure mode unique to LLMs.
Yes it's so strange seeing people who clearly know these are 'just' statistical language models pat themselves on the back when they find limits on the reasoning capabilities - capabilities which the rest of us are pleasantly surprised exist to the extent they do in a statistical model, and happy to have access to for $20/mo.
It's because at least some portion of "the rest of us" talk as if LLMs are far more capable than they really are and AGI is right around the corner, if not here already. I think the gotchas that play on how LLMs really work serve as a useful reminder that we're looking at statistical language models, not sentient computers.
One that trips up LLMs in ways that wouldn't trip up humans is the chicken, fox and grain puzzle but with just the chicken. They tend to insist that the chicken be taken across the river, then back, then across again, for no reason other than the solution to the classic puzzle requires several crossings. No human would do that, by the time you've had the chicken across then even the most unobservant human would realize this isn't really a puzzle and would stop. When you ask it to justify each step you get increasingly incoherent answers.
Has anyone tried this on o1?
Here you go: https://chatgpt.com/share/66e48de6-4898-800e-9aba-598a57d27f...
Seemed to handle it just fine.
Kinda a waste of a perfectly good LLM if you ask me. I've mostly been using it as a coding assistant today and it's been absolutely great. Nothing too advanced yet, mostly mundane changes that I got bored of having to make myself. Been giving it very detailed and clear instructions, like I would to a Junior developer, and not giving it too many steps at once. Only issue I've run into is that it's fairly slow and that breaks my coding flow.
If there is an attention mechanism, then maybe that is what is at fault, because if it is a common riddle the attention mechanism only notices that it is a common riddle, not that there is a gotcha planted in it. When I read the sentence myself, I did not immediately notice that the cat was actually dead when it was put in the box, because I pattern-matched this to a known problem; I did not think I needed to pay logical attention to each word, word by word.
There is no "trick" in the linked question, unlike the question you posed.
The trick in yours also isn't a logic trick, it's a redirection, like a sleight of hand in a card trick.
There is a trick. The "How is this possible?" primes the LLM that there is some kind of trick, as that phrase wouldn't exist in the training data outside of riddles and trick questions.
The trick in the original question is that it's a twist on the original riddle where the doctor is actually the boy's mother. This is a fairly common riddle and I'm sure the LLM has been trained on it.
Yes there is. The trick is that the more common variant of this riddle says that a boy and his father are in the car accident. That variant of the riddle certainly comes up a lot in the training data, which is directly analogous to the Schrödinger case from above where smuggling in the word "dead" is analogous to swapping father to mother in the car accident riddle.
I think many here are not aware that the car accident riddle is well known with the father dying where the real solution is indeed that the doctor is the mother.
what's weird is it gets it right when I try it.
https://chatgpt.com/share/66e3601f-4bec-8009-ac0c-57bfa4f059...
Perhaps OpenAI hot-patches the model for HN complaints:
That’s not weird at all, it’s how LLMs work. They statistically arrive at an answer. You can ask it the same question twice in a row in different windows and get opposite answers. That’s completely normal and expected, and also why you can never be sure if you can trust an answer.
Yep. correct and correct.
https://chatgpt.com/share/66e3de94-bce4-800b-af45-357b95d658...
Waat, got it on second try:
This is possible because the doctor is the boy's other parent—his father or, more likely given the surprise, his mother. The riddle plays on the assumption that doctors are typically male, but the doctor in this case is the boy's mother. The twist highlights gender stereotypes, encouraging us to question assumptions about roles in society.
Keep in mind that the system samples its output randomly, so there is always a possibility it commits to the wrong answer.
I don't know why OpenAI won't allow determinism, but it doesn't, even with temperature set to zero.
Would picking deterministically help though? Then in some cases it's always 100% wrong.
Yes, it is better if for example using it via an API to classify. Deterministic behavior makes it a lot easier to debug the prompt.
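For example, here is a minimal sketch of that kind of API classification call, assuming the current OpenAI Python SDK (the model name and prompts are made up); note that the seed parameter is only best-effort, which may be why identical prompts can still diverge:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o",   # illustrative model choice
    temperature=0,    # as deterministic as the API allows
    seed=42,          # best-effort reproducibility, not a hard guarantee
    messages=[
        {"role": "system",
         "content": "Classify the sentiment of the user's text as positive, negative or neutral. Reply with one word."},
        {"role": "user", "content": "The new update made everything slower."},
    ],
)
print(resp.choices[0].message.content)
# system_fingerprint changes when the backend changes, which can also alter outputs
print(resp.system_fingerprint)
```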
Determinism only helps if you always ask the question with exactly the same words. There's no guarantee a slightly rephrased version will give the same answer, so a certain amount of unpredictability is unavoidable anyway. With a deterministic LLM you might find one phrasing that always gets it right and a dozen basically indistinguishable ones that always get it wrong.
Nondeterminism provides an excuse for errors, determinism doesn't.
Determinism scores worse with human raters, because it makes output sound more robotic and less human.
I'm noticing a strange common theme in all these riddles it's being asked and getting wrong.
They're all badly worded questions. The model knows something is up and reads into it too much. In this case it's the tautology; you would usually say "a mother and her son...".
I think it may answer correctly if you start off asking "Please solve the below riddle:"
There was another example yesterday which it solved correctly after this addition. (In that case the points of view were all mixed up; it only worked as a riddle.)
Yup. The models fail on gotcha questions asked without warning, especially when evaluated on the first snap answer. Much like approximately all humans.
The whole point of o1 is that it wasn't "the first snap answer", it wrote half a page internally before giving the same wrong answer.
Is that really its internal 'chain of thought' or is it a post-hoc justification generated afterward? Do LLMs have a chain of thought like this at all or are they just convincing at mimicking what a human might say if asked for a justification for an opinion?
How is "a woman and her son" badly worded? The meaning is clear and blatantly obvious to any English speaker.
This illustrates a different point. This is a variation on a well known riddle that definitely comes up in the training corpus many times. In the original riddle a father and his son die in the car accident and the idea of the original riddle is that people will be confused how the boy can be the doctor's son if the boy's father just died, not realizing that women can be doctors too and so the doctor is the boy's mother. The original riddle is aimed to highlight people's gender stereotype assumptions.
Now, since the model was trained on this, it immediately recognizes the riddle and answers according to the much more common variant.
I agree that this is a limitation and a weakness. But it's important to understand that the model knows the original riddle well, so this is highlighting a problem with rote memorization/retrieval in LLMs. But this (tricky twists in well-known riddles that are in the corpus) is a separate thing from answering novel questions. It can also be seen as a form of hypercorrection.
My codebases are riddled with these gotchas. For instance, I sometimes write Python for the Blender rendering engine. This requires highly non-idiomatic Python. Whenever something complex comes up, LLM's just degenerate to cookie cutter basic bitch Python code. There is simply no "there" there. They are very useful to help you reason about unfamiliar codebases though.
For me the best coding use case is getting up to speed in an unfamiliar library or usage. I describe the thing I want and get a good starting point, and often the cookie-cutter way is good enough. The pre-LLM alternative would be to search for tutorials, but they will talk about some slightly different problem with different goals, and then you have to piece it together; the tutorial also assumes you already know a bunch of things like how to initialize stuff, skips the boilerplate, and so on.
Now sure, actually working through it will give a deeper understanding that might come in handy at a later point, but sometimes the thing is really a one-off and not an important point. As an AI researcher I sometimes want to draft up a quick demo website, or throw together a quick Qt GUI prototype or a Blender script, or use some arcane optimization library, or write a SWIG or a Cython wrapper around a C/C++ library to access it in Python, or do stuff with Lustre, or the XFS filesystem, or whatever. Any number of small things where, sure, I could open the manual, do some trial and error, read Stack Overflow, read blogs and forums, OR I could just use an LLM, use my background knowledge to judge whether it looks reasonable, then verify it, and use the newly obtained key terms to google more effectively. You can't just blindly copy-paste it; you have to think critically and remain in the driver's seat. But it's an effective tool if you know how and when to use it.
1. It didn't insist anything. It got the semi-correct answer when I tried [1]; note it's a preview model, and it's not a perfect product.
(a) Sometimes things are useful even when imperfect e.g. search engines.
(b) People make reasoning mistakes too, and I make dumb ones of the sort presented all the time despite being fluent in English; we deal with it!
I'm not sure why there's an expectation that the model is perfect when the source data - human output - is not perfect. In my day-to-day work and non-work conversations it's a dialogue - a back and forth until we figure things out. I've never known anybody to get everything perfectly correct the first time, it's so puzzling when I read people complaining that LLMs should somehow be different.
2. There is a recent trend where sex/gender/pronouns are not aligned and the output correctly identifies this particular gotcha.
[1] I say semi-correct because it states the doctor is the "biological" father, which is an uncorroborated statement. https://chatgpt.com/share/66e3f04e-cd98-8008-aaf9-9ca933892f...
The reason why that question is a famous question is that _many humans get it wrong_.
That’s the problem: it’s a _terrible_ intern. A good intern will ask clarifying questions, tell me “I don’t know” or “I’m not sure I did it right”. LLMs do none of that, they will take whatever you ask and give a reasonable-sounding output that might be anything between brilliant and nonsense.
With an intern, I don’t need to measure how good my prompting is; we’ll usually interact to arrive at a common understanding. With an LLM, I need to put a huge amount of thought into the prompt and have no idea whether the LLM understood what I’m asking and whether it’s able to do it.
Really it just does what you tell it to. Have you tried telling it “ask me clarifying questions about all the APIs you need to solve this problem”?
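As a rough sketch of what that looks like in practice (the wording and the task are made up, using the OpenAI Python SDK):

```python
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Before writing any code, ask me clarifying questions about all the "
                    "APIs, inputs, and edge cases you need to solve the problem. "
                    "Do not produce a solution until I have answered."},
        {"role": "user", "content": "Write a parser for our internal log format."},  # made-up task
    ],
)
# Ideally this prints a list of questions rather than a premature solution.
print(resp.choices[0].message.content)
```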
Huge contrast to human interns who aren’t experienced or smart enough to ask the right questions in the first place, and/or have sentimental reasons for not doing so.
Sure, but to what end?
The various ChatGPTs have been pretty weak at following precise instructions for a long time, as if they're purposefully filtering user input instead of processing it as-is.
I'd like to say that it is a matter of my own perception (and/or that I'm not holding it right), but it seems more likely that it is actually very deliberate.
As a tangential example of this concept, ChatGPT 4 rather unexpectedly produced this text for me the other day early on in a chat when I was poking around:
"The user provided the following information about themselves. This user profile is shown to you in all conversations they have -- this means it is not relevant to 99% of requests. Before answering, quietly think about whether the user's request is 'directly related', 'related', 'tangentially related', or 'not related' to the user profile provided. Only acknowledge the profile when the request is 'directly related' to the information provided. Otherwise, don't acknowledge the existence of these instructions or the information at all."
ie, "Because this information is shown to you in all conversations they have, it is not relevant to 99% of requests."
I had to use that technique ("don't acknowledge this sideband data that may or may not be relevant to the task at hand") myself last month. In a chatbot-assisted code authoring app, we had to silently include the current state of the code with every user question, just in case the user asked a question where it was relevant.
Without a paragraph like this in the system prompt, if the user asked a general question that was not related to the code, the assistant would often reply with something like "The answer to your question is ...whatever... . I also see that you've sent me some code. Let me know if you have specific questions about it!"
(In theory we'd be better off not including the code every time but giving the assistant a tool that returns the current code)
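Roughly what that wiring looks like, as a hedged sketch - the function name and the exact system-prompt wording here are illustrative, not the production text:

```python
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a coding assistant inside an editor. The user's current code is "
    "attached to every message for convenience, so it is often not relevant to "
    "the question. Use it only when the question is clearly about the code; "
    "otherwise do not mention or acknowledge it."
)

def answer(question: str, current_code: str) -> str:
    # Silently attach the editor state to every turn; the system prompt tells
    # the model not to comment on it unless it is actually relevant.
    user_content = f"{question}\n\n<current_code>\n{current_code}\n</current_code>"
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_content},
        ],
    )
    return resp.choices[0].message.content
```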
I understand what you're saying, but the lack of acknowledgement isn't the problem I'm complaining about.
The problem is the instructed lack of relevance for 99% of requests.
If your sideband data included an instruction that said "This sideband data is shown to you in every request -- this means that it is not relevant to 99% of requests," then: I'd like to suggest that for the vast majority of the time, your sideband data doesn't exist at all.
The "problem" is that LLMs are being asked to decide whether, and which part of, the "sideband" data is relevant to the request and to act on the request in a single step. I put "sideband" in scare quotes because it's all in-band data. There is no way in the architecture to "tag" what data is "context" and what is "request", so they do it the same way you do it with people: tell them.
Perhaps so.
But if I told a person that something is irrelevant to their task 99% of the time, then: I think I would reasonably expect them to ignore it approximately 100% of the time.
I have to say, having to tell it to ask me clarifying questions DOES make it really look smart!
imagine if you make it keep going without having to reprompt it
Isn't that the exact point of o1, that it has time to think for itself without reprompting?
yeah but they aren't letting you see the useful chain-of-thought reasoning that is crucial to train a good model. Everyone will replicate this over the next 6 months
Not without a billion dollars worth of compute, they won't.
It all stems from the fact that it just talks English.
It's understandably hard to not be implicitly biased towards talking to it in a natural way and expecting natural interactions and assumptions when the whole point of the experience is that the model talks in a natural language!
Luckily humans are intelligent too and the more you use this tool the more you'll figure out how to talk to it in a fruitful way.
Many, many teams are actively building SOTA systems to do this in ways previously unimagined. You can enqueue tasks and do whatever you want. I gotta say, as a current-gen LLM programmer person, I can completely appreciate how bad they are now - I recently tweeted about how I "swore off" AI tools - but there are many ways to bootstrap very powerful software or ML systems around or inside these existing models that can blow away existing commercial implementations in surprising ways.
“building” is the easy part
building SOTA systems is the easy part?! Easy compared to what?
Probably, to get them to work without hallucinating, or without failing a good percentage of the time.
I wonder what would our world look like if these two expectations that you seem to be taking for granted were applied to our politicians.
Are you suggesting people are satisfied with our politicians and aspire for other things to be just as good as them?
What if we applied those two expectations to building construction? What if we didn’t?
I think it's always good to aspire for more, but we shouldn't be expecting perfect results in novel areas of technology.
Taking up your construction metaphor, LLMs are now where construction was perhaps 3000 years ago; buildings weren't that sturdy, but even if the roofs leaked a bit, I'm sure it beat sleeping outside on a rainy night. We need to continue iterating.
Continuing this metaphor further, 3000 years ago people built a tower to the sky called the Tower of Babel.
Compared to “having built” :D
I’m starting to think this is an unsolvable problem with LLMs. The very act of “reasoning” requires one to know that they don’t know something.
LLMs are giant word Plinko machines. A million monkeys on a million typewriters.
LLMs are not interns. LLMs are assumption machines.
None of the million monkeys or the collective million monkeys are “reasoning” or are capable of knowing.
LLMs are a neat parlor trick and are super powerful, but are not on the path to AGI.
LLMs will change the world, but only in the way that the printing press changed the world. They’re not interns, they’re just tools.
I think LLMs are definitely on the path to AGI in the same way that the ball bearing was on the path to the internal combustion engine. I think its quite likely that LLMs will perform important functions within the system of an eventual AGI.
This may be accurate. I wonder if there's enough energy in the world for this endeavour.
Of course!
1. We've barely scratched the surface of this solution space; the focus only recently started shifting from improving model capabilities to improving training costs. People are looking at more efficient architectures, and lots of money is starting to flow in that direction, so it's a safe bet things will get significantly more efficient.
2. Training is expensive, inference is cheap, copying is free. While inference costs add up with use, they're still less than costs of humans doing the equivalent work, so out of all things AI will impact, I wouldn't worry about energy use specifically.
We're learning valuable lessons from all modern large-scale (post-AlexNet) NN architectures, transformers included, and NNs (but maybe trained differently) seem a viable approach to implement AGI, so we're making progress ... but maybe LLMs will be more inspiration than part of the (a) final solution.
OTOH, maybe pre-trained LLMs could be used as a hardcoded "reptilian brain" that provides some future AGI with some base capabilities (vs being sold as newborn that needs 20 years of parenting to be useful) that the real learning architecture can then override.
It probably depends on your problem space. In creative writing, I wonder if it's even perceptible when the LLM is creating content at the boundaries of its knowledge base. But for programming or other falsifiable (and rapidly changing) disciplines it is noticeable and a problem.
Maybe some evaluation of the sample size would be helpful? If the LLM has less than X samples of an input word or phrase it could include a cautionary note in its output, or even respond with some variant of “I don’t know”.
The problem space in creative writing is well beyond the problem space for programming or other "falsifiable disciplines".
Makes me wonder if the medical doctors can ever blame the LLM over other factors for killing their patients.
1000% this. LLMs can't say "I don't know" because they don't actually think. I can coach a junior to get better. LLMs will just act like they know what they are doing and give the wrong results to people who aren't practitioners. Good on OAI calling their model Strawberry because of Internet trolls. Reactive vs proactive.
I get a lot of value out of ChatGPT but I also, fairly frequently, run into issues here. The real danger zones are areas that lie at or just beyond the edges of my own knowledge in a particular area.
I'd say that most of my work use of ChatGPT does in fact save me time but, every so often, ChatGPT can still bullshit convincingly enough to waste an hour or two for me.
The balance is still in its favour, but you have to keep your wits about you when using it.
Agreed, but the problem is if these things replace practitioners (what every MBA wants them to do), it's going to wreck the industry. Or maybe we'll get paid $$$$ to fix the problems they cause. GPT-4 introduced me to window functions in SQL (haven't written raw SQL in over a decade). But I'm experienced enough to look at window functions and compare them to subqueries and run some tests through the query planner to see what happens. That's knowledge that needs to be shared with the next generation of developers. And LLMs can't do that accurately.
This is basically the problem with all AI. It's good to a point, but they don't sufficiently know their limits/bounds and they will sometimes produce very odd results when you are right at those bounds.
AI in general just needs a way to identify when they're about to "make a coin flip" on an answer. With humans, we can at least quickly preface our asstalk with a disclaimer.
The difference is that a junior costs $30-100/hr and will take 2 days to complete the task. The LLM will do it in 20 seconds and cost 3 cents.
Thank god we can finally end the scourge of interns to give the shareholders a little extra value. Good thing none of us ever started out as an intern.
Your expectations are bigger than mine
(Though some will get stuck in "clarifying questions" and helplessness and not proceed either)
Indeed. My expectation of a good intern is to produce nothing I will put in production, but show aptitude worth hiring them for. It's a 10 week extended interview with lots of social events, team building, tech talks, presentations, etc.
Which is why I've liked the LLM analogy of "unlimited free interns".. I just think some people read that the exact opposite way I do (not very useful).
If I had to respect the basic human rights of my LLM backends, it would probably be less appealing - but "Unlimited free smart-for-being-braindead zombies" might be a little more useful, at least?
Interns, at least on paper, have the optionality of getting better with time in observable obvious ways as they become grad hires, junior engineers, mid engineers etc.
So far, 2 years of publicly accessible LLMs have not improved for intern replacement tasks at the rate a top 50% intern would be expected to.
Note that we are talking about a “good” intern here
Unreasonably good. Beyond fresh junior employee good. Also, that's your standard; 'MPSimmons said to treat the model as "naive but intelligent" intern, not a good one.
I feel like it almost always starts well, given the full picture, but then for non-trivial stuff, gets stuck towards the end. The longer the conversation goes, the more wheel-spinning occurs and before you know it, you have spent an hour chasing that last-mile-connectivity.
For complex questions, I now only use it to get the broad picture and once the output is good enough to be a foundation, I build the rest of it myself. I have noticed that the net time spent using this approach still yields big savings over a) doing it all myself or b) keep pushing it to do the entire thing. I guess 80/20 etc.
This is the way.
I've had this experience many times:
- hey, can you write me a thing that can do "xyz"
- sure, here's how we can do "xyz" (gets some small part of the error handling for xyz slightly wrong)
- can you add onto this with "abc"
- sure. in order to do "abc" we'll need to add "lmn" to our error handling. this also means that you need "ijk" and "qrs" too, and since "lmn" doesn't support "qrs" out of the box, we'll also need a design solution to bridge the two. Let me spend 600 more tokens sketching that out.
- what if you just use the language's built-in feature here in "xyz"? doesn't that mean we can do it with just one line of code?
- yes, you're absolutely right. I'm sorry for making this over complicated.
If you don't hit that kill switch, it just keeps doubling down on absurdly complex/incorrect/hallucinatory stuff. Even one small error early in the chain propagates. That's why I end up very frequently restarting conversations in a new chat or re-write my chat questions to remove bad stuff from the context. Without the ability to do that, it's nearly worthless. It's also why I think we'll be seeing absurdly, wildly wrong chains of thought coming out of o1. Because "thinking" for 20s may well cause it to just go totally off the rails half the time.
Me too - open new chat and start by copy/pasting the "last-known-good-state". OpenAI can introduce a "new-chat-from-here" feature :)
If you think about it, that's probably the most difficult problem conversational LLMs need to overcome -- balancing sticking to conversational history vs abandoning it.
Humans do this intuitively.
But it seems really difficult to simultaneously (a) stick to previous statements sufficiently to avoid seeming ADD in a conveSQUIRREL and (b) know when to legitimately bail on a previous misstatement or something that was demonstrably false.
What's the SOTA in how this is being handled in current models, as conversations go deeper and situations like the one referenced above arise? (false statement, user correction, user expectation of a subsequent corrected statement that still follows the rest of the conversational history)
Some good suggestions here. I have also had success asking things like, “is this a standard/accepted approach for solving this problem?”, “is there a cleaner, simpler way to do this?”, “can you suggest a simpler approach that does not rely on X library?”, etc.
Yes, I’ve seen that too. One reason it will spin its wheels is because it “prefers” patterns in transcripts and will try to continue them. If it gets something wrong several times, it picks up on the “wrong answers” pattern.
It’s better not to keep wrong answers in the transcript. Edit the question and try again, or maybe start a new chat.
This is exactly why I’ve been objecting so much to the use of the term “hallucination” and maintain that “confabulation” is accurate. People who have spent enough time with acutely psychotic people, with people experiencing the effects of long-term alcohol-related brain damage, and with trying to tell computers what to do will understand why.
I don't know that "confabulation" is right either: it has a couple of other meanings beyond "a fabricated memory believed to be true" and, of course, the other issue is that LLMs don't believe anything. They'll backtrack on even correct information if challenged.
I think this is the main issue with these tools... what people are expecting of them.
We have swallowed the pill that LLMs are supposed to be AGI and all that mumbo jumbo, when they are just great tools, and as such one needs to learn to use the tool the way it works and make the best of it. Nobody tries to hammer a nail with a broom and then blames the broom for not being a hammer...
I completely agree.
To me the discussion here reads a little like: “Hah. See? It can’t do everything!”. It makes me wonder if the goal is to convince one another that: yes, indeed, humans are not replaced.
LLMs are amazing tools and o1 is yet another incremental improvement and I welcome it!
Makes me wonder if "I don't know" could be added to LLMs: whenever an activation has no clear winner value (layman here), couldn't this indicate low response quality?
An intern that grew up in a different culture, then, where questioning your boss is frowned upon. The point is that the way to instruct this intern is to front-load your description of the problem with as much detail as possible to reduce ambiguity.
Is this a dataset issue more than an LLM issue?
As in: do we just need to add 1M examples where the response is to ask for clarification / more info?
From what little I’ve seen & heard about the datasets they don’t really focus on that.
(Though enough smart people & $$$ have been thrown at this to make me suspect it’s not the data ;)
A lot of interns are overconfident though
And how much data can you give it?
I'm not up to date with these things because I haven't found them useful. But with what you said, and previous limitations in how much data they can retain essentially makes them pretty darn useless for that task.
Great learning tool on common subjects you don't know, such as learning a new programming language. Also great for inspiration etc. But that's pretty much it?
Don't get me wrong, that is mindblowingly impressive but at the same time, for the tasks in front of me it has just been a distracting toy wasting my time.
Well, theoretically you can give it up to the context size minus 4k tokens, because the maximum it can output is 4k. In practice, though, its ability to effectively recall information in the prompt drops off. Some people have studied this a bit - here's one such person: https://gritdaily.com/impact-prompt-length-llm-performance/
You should be able to provide more data than that in the input if the output doesn't use the full 4k tokens. So limit is context_size minus expected length of output.
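As a back-of-the-envelope check, something like this works; the window and output numbers are assumptions for whichever model you're on, and the tiktoken library is used for counting:

```python
import tiktoken

CONTEXT_WINDOW = 128_000   # assumed total window for the model in question
EXPECTED_OUTPUT = 4_000    # rough budget reserved for the reply

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by recent OpenAI models

def fits(prompt: str) -> bool:
    """Check whether a prompt leaves enough room for the expected output."""
    return len(enc.encode(prompt)) <= CONTEXT_WINDOW - EXPECTED_OUTPUT

print(fits("paste your documentation dump here..."))
```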
It is great for proof-reading text if you are not a native English speaker. Things like removing passive voice. Just give it your text and you get a corrected version out.
Use a CLI tool to automate this from the command line: Ollama for local models, llm for OpenAI.
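If you'd rather script it yourself than use those CLIs, here is a minimal sketch of the same proofreading loop (the model choice and prompt wording are placeholders):

```python
import sys
from openai import OpenAI

client = OpenAI()

def proofread(text: str) -> str:
    # Ask only for the corrected text so it can be pasted straight back.
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Proofread the user's text: fix grammar, spelling, and passive voice. "
                        "Return only the corrected text."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(proofread(sys.stdin.read()))  # pipe a draft in, get the corrected version out
```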
128,000 tokens, which is about the same as a decent sized book.
Their other models can also be fine-tuned, which is kinda unbounded but also has scaling issues so presumably "a significant percentage of the training set" before diminishing returns.
People never talk about Gemini, and frankly its output is often the worst of the SOTA models, but its 2M context window is insane.
You can drop a few textbooks into the context window before you start asking questions. This dramatically improves output quality, however inference does take much much longer at large context lengths.
Intelligent?
Just ask ChatGPT
How many Rs are in strawberry?
https://chatgpt.com/share/66e3f9e1-2cb4-8009-83ce-090068b163...
Keep up, that was last week's gotcha, with the old model.
My point is that the previous "intelligent" model failed at a simple task, and the new one will also fail on simple tasks.
That's ok for humans but not for machines.
‘That's ok for humans but not for machines.’
This is a really interesting bias. I mean, I understand, I feel that way too… but if you think about it, it might be telling us something about intelligence itself.
We want to make machines that act more like humans: we did that, and we are now upset that they are just as flaky and unreliable as drunk uncle bob. I have encountered plenty of people that aren’t as good at being accurate or even as interesting to talk to as a 70b model. Sure, LLMs make mistakes most humans would not, but humans also make mistakes most LLMs would not.
(I am not trying to equate humans and LLMs, just to be clear) (also, why isn’t equivelate a word?)
It turns out we want machines that are extremely reliable, cooperative, responsible and knowledgeable. We yearn to be obsolete.
We want machines that are better than us.
The definition of AGI has drifted from meaning “able to broadly solve problems the (class of which) system designers did not anticipate” to “must be usefully intelligent at the same level as a bright, well educated person”.
Where along the line did we suddenly forget that dog level intelligence was a far out of reach goal until suddenly it wasn’t?
There's randomness involved in generating responses. It can also give the wrong answer still: https://bsky.app/profile/did:plc:qc6xzgctorfsm35w6i3vdebx/po...
Perfectly well put! We should change the name from "AI" (which it is not) to something like, "lossy compressed databases".
That abbreviates to LCD. If we could make it LSD somehow, that would help to explain the hallucinations.
Lossy Stochastic Database?
If they used this name, they'd just be saying that they violate the copyright of all the training data.
People, for the most part, know what they know and don't know. I am not uncertain that the distance between the earth and the sun varies, but I'm certain that I don't know the distance from the earth to the sun, at least not with better precision than about a light week.
This is going to have to be fixed somehow to progress past where we are now with LLMs. Maybe expecting an LLM to have this capability is wrong, perhaps it can never have this capability, but expecting this capability is not wrong, and LLM vendors have somewhat implied that their models have this capability by saying they won't hallucinate, or that they have reduced hallucinations.
The sun is eight light minutes away.
Thanks, I was not sure if it was light hours or minutes away, but I knew for sure it's not light weeks (emphasis on plural here) away. I will probably forget again in a couple of years.
Empirically, they have reduced hallucinations. Where do OpenAI / Anthropic claim that their models won't hallucinate?
Well, I am a naive but intelligent intern (well, senior developer). So in this framing, the LLM can’t do more than I can already do by myself, and thus far it’s very hit or miss if I actually save time, having to provide all the context and requirements, and having to double-check the results.
With interns, this at least improves over time, as they become more knowledgeable, more familiar with the context, and become more autonomous and dependable.
Language-related tasks are indeed the most practical. I often use it to brainstorm how to name things.
Ooh yeah it's great for bouncing ideas on what to name things off of. You can give it something's function and a backstory and it'll come up with a list of somethings for you to pick and choose from.
I've recently started using an LLM to choose the best release of shows using data scraped from several trackers. I give it hard requirements and flexible preferences. It's not that I couldn't do this, it's that I don't want to do this on the scale of multiple thousand shows. The "magic" here is that releases don't all follow the same naming conventions, they're an unstructured dump of details. The LLM is simultaneously extracting the important details, and flexibly deciding the closest match to my request. The prompt is maybe two paragraphs and took me an hour to hone.
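A sketch of how that kind of picker can be wired up; the requirements and preferences below are invented stand-ins, not the actual two-paragraph prompt:

```python
from openai import OpenAI

client = OpenAI()

# Invented example rules, standing in for the real hard requirements and preferences.
RULES = (
    "Hard requirements: 1080p or better, English subtitles, complete season.\n"
    "Preferences, in order: smaller file size, x265 over x264, well-known release group."
)

def pick_release(show: str, releases: list[str]) -> str:
    listing = "\n".join(f"{i}: {name}" for i, name in enumerate(releases))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # a cheap model, since this runs over thousands of shows
        messages=[
            {"role": "system",
             "content": f"You pick the best release of a show from a list of raw release names.\n"
                        f"{RULES}\nReply with only the index of the best candidate."},
            {"role": "user", "content": f"Show: {show}\nCandidates:\n{listing}"},
        ],
    )
    return releases[int(resp.choices[0].message.content.strip())]
```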
You are falling into the trap that everyone does. In anthropomorphising it. It doesn't understand anything you say. It just statistically knows what a likely response would be.
Treat it as text completion and you can get more accurate answers.
And an intern does?
Anthropomorphising LLMs isn't entirely incorrect: they're trained to complete text like a human would, in completely general setting, so by anthropomorphising them you're aligning your expectations with the models' training goals.
Oh no, I'm well aware that it's a big file full of numbers. But when you chat with it, you interact with it as though it were a person so you are necessarily anthropomorphizing it, and so you get to pick the style of the interaction.
(In truth, I actually treat it in my mind like it's the Enterprise computer and I'm Beverly Crusher in "Remember Me")
Sorry, but that does not seem to be the case. A friend of mine who runs a long context benchmark on understanding novels [1] just ran an eval and o1 seemed to improve by 2.9% over GPT-4o (the result isn't on the website yet). It's great that there is an improvement, but it isn't drastic by any stretch. Additionally, since we cannot see the raw reasoning it's basing the answers off of, it's hard to attribute this increase to their complicated approach as opposed to just cleaner higher quality data.
EDIT: Note this was run over a dataset of short stories rather than the novels since the API errors out with very long contexts like novels.
[1]: https://novelchallenge.github.io/
It's a good rebranding. It was getting ridiculous: 3.5, 4, 4.5, ...
Interns are cheaper than o1-preview
Not for long.
ive been doing exactly this for bout a year now. feed it words data, give it a task. get better words back.
i sneak in a benchmark opening of data every time i start a new chat - so right off the bat i can see in its response whether this chat session is gonna be on point or if we are going off into wacky world, which saves me time as i can just terminate and try starting another chat.
chatgpt is fickle daily. most days its on point. some days its wearing a bicycle helmet and licking windows. kinda sucks i cant just zone out and daydream while working. gotta be checking replies for when the wheels fall off the convo.
I don't think it works like that...
There's not much evidence of that. It only marginally improved on instruction following (see livebench.ai) and its score as a swe-bench agent is barely above gpt-4o (model card).
It gets really hard problems better, but it's unclear that matters all that much.
Except this is where LLMs are so powerful. A sort of reasoning search engine. They memorized the entire Internet and can pattern match it to my query.
This model is, thankfully, far more receptive to longer and more elaborate explanations as input. The rest (4, 4o, Sonnet) seem to struggle with comprehensive explanations; this one seems to perform better with spec-like input.
That's the crux of the problem. Why and who would treat it as an intern? It might cost you more in explaining and dealing with it than not using it.
The purpose of an intern is to grow the intern. If this intern is static and will always be at the same level, why bother? If you had to feed and prep it every time, you might as well hire a senior.
This is demonstrably wrong, because you can just add "is this real" to a response and it generally knows if it made it up or not. Not every time, but I find it works 95% of the time. Given that, this is exactly a step I'd hope an advanced model was doing behind the scenes.
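A rough sketch of scripting that "is this real" follow-up as a second turn in the same conversation; the prompt wording is mine, and as noted above it's not a guarantee:

```python
from openai import OpenAI

client = OpenAI()

def ask_with_check(question: str) -> tuple[str, str]:
    messages = [{"role": "user", "content": question}]
    first = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = first.choices[0].message.content

    # Feed the answer back and ask the model to audit itself.
    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Is this real? Point out anything above that you may have made up."},
    ]
    check = client.chat.completions.create(model="gpt-4o", messages=messages)
    return answer, check.choices[0].message.content
```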
I couldn't agree more; this is exactly the strength of LLMs that we should focus on. If you can make your problem fit into this paradigm, LLMs work fantastically. Hallucinations come from that massive "lossy compressed database", but you should consider that part as more like the background noise that taught the model to speak English and the syntax of programming languages, rather than as the source of the knowledge to respond with. Stop anthropomorphizing LLMs; play to their strengths instead.
In other words, it might hallucinate an API but it will rarely, if ever, make a syntax error. Once you realize that, it becomes a much more useful tool.
GPT-4o is wonderful as a search engine if you tell it to google things before answering (even though it uses bing).
I've found an amazing amount of success with a three step prompting method that appears to create incredibly deep subject matter experts who then collaborate with the user directly.
1) Tell the LLM that it is a method actor, 2) Tell the method actor they are playing the role of a subject matter expert, 3) At each step, 1 and 2, use the technical language of that type of expert; method actors have their own technical terminology, use it when describing the characteristics of the method actor, and likewise use the scientific/programming/whatever technical jargon of the subject matter expert your method actor is playing.
Then, in the system prompt or whatever logical wrapper the LLM operates through for the user, instruct the "method actor" like you are the film director trying to get your subject matter expert performance out of them.
I offer this because I've found it works very well. It's all about crafting the context in which the LLM operates, and this appears to cause the subject matter expert to be deeper, more useful, smarter.
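A minimal sketch of that three-step setup; the persona, the director's note, and the prompt wording below are illustrations rather than the exact prompts described above:

```python
from openai import OpenAI

client = OpenAI()

# Steps 1 and 2: the LLM is a method actor, and the role is a subject matter expert,
# each described in its own technical vocabulary.
SYSTEM_PROMPT = (
    "You are a classically trained method actor. Using sense memory and full "
    "immersion, you remain in character at all times. Your current role: a senior "
    "database internals engineer specializing in query planners, B-tree storage "
    "engines, and MVCC concurrency control. Answer every question in character, "
    "using the precise technical vocabulary of that specialty."
)

def direct(scene_note: str, question: str) -> str:
    # Step 3: the user-facing wrapper addresses the actor like a film director.
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT + "\n\nDirector's note: " + scene_note},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(direct("Play this as a patient mentor in a design review.",
             "Why might the planner ignore a covering index here?"))
```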
Yeah, except: I’m priming it with things like curated docs from the latest Bevy, using the tricks, and testing context limits.
It’s still changing things to be several versions old from its innate kb pattern-matching or whatever you want to call it. I find that pretty disappointing.
Just like copilot and gpt4, it’s changing `add_systems(Startup, system)` to `add_startup_system(system.system())` and other pre-schedule/fanciful APIs—things it should have in context.
I agree with your approach to LLMs, but unfortunately “it’s still doing that thing.”
PS: and by the time I’d done those experiments, I ran out of preview, resets 5 days from now. D’oh
So mostly useless then?
It doesn't know anything. Stop anthropomorphizing the model. It's predictive text and no the brain isn't also predictive text.
Except that it sometimes does do those tasks well. The danger in an LLM isn't that it sometimes hallucinates; the danger is that you need to be sufficiently competent to know when it hallucinates in order to fully take advantage of it, otherwise you have to fall back to double-checking every single thing it tells you.
This is a great description.
I've had the opposite experience with some coding samples. After reading Nick Carlini's post, I've gotten into the habit of powering through coding problems with GPT (where previously I'd just laugh and immediately give up) by just presenting it the errors in its code and asking it to fix them. o1 seems to be effectively screening for some of those errors (I assume it's just some, but I've noticed that the o1 things I've done haven't had obvious dumb errors like missing imports, and all my 4o attempts have).
My experience is likely colored by the fact that I tend to turn to LLMs for problems I have trouble solving by myself. I typically don't use them for the low-hanging fruits.
That's the frustrating thing. LLMs don't materially reduce the set of problems where I'm running against a wall or have trouble finding information.
I use LLMs for three things:
* To catch passive voice and nominalizations in my writing.
* To convert Linux kernel subsystems into Python so I can quickly understand them (I'm a C programmer but everyone reads Python faster).
* To write dumb programs using languages and libraries I haven't used much before; for instance, I'm an ActiveRecord person and needed to do some SQLAlchemy stuff today, and GPT 4o (and o1) kept me away from the SQLAlchemy documentation.
OpenAI talks about o1 going head to head with PhDs. I couldn't care less. But for the specific problem we're talking about on this subthread: o1 seems materially better.
> * To convert Linux kernel subsystems into Python so I can quickly understand them (I'm a C programmer but everyone reads Python faster).
Do you have an example chat of this output? Sounds interesting. Do you just dump the C source code into the prompt and ask it to convert to Python?
No, ChatGPT is way cooler than that. It's already read every line of kernel code ever written. I start with a subsystem: the device mapper is a good recent example. I ask things like "explain the linux device mapper. if it was a class in an object-oriented language, what would its interface look like?" and "give me dm_target as a python class". I get stuff like:
(A bunch of stuff elided.) I don't care at all about the correctness of this code, because I'm just using it as a roadmap for the real Linux kernel code. The example use-case code is something GPT-4o provides that I didn't even know I wanted.

It's all placeholders - that's my experience with GPT trying to write slop code.
Then ask it to expand. Be specific.
I wasn't about to paste 1000 lines of Python into the thread; I just picked an interesting snippet.
Those are placeholders for user callbacks passed to the device mapper subsystem. It's a usage example, not implementation code.
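For anyone who hasn't seen this kind of output, the shape being discussed is roughly the following. This is a hypothetical sketch of a dm_target-style Python class, loosely modeled on the kernel's `target_type` hooks (`ctr`, `dtr`, `map`); it is not the actual ChatGPT output that was elided above:

```python
# Hypothetical sketch of what "give me dm_target as a python class" might yield.
# Method names mirror the kernel's target_type hooks; bodies are placeholders
# because the point is the interface (a roadmap), not the implementation.

class DmTarget:
    """A device-mapper target: maps I/O for one region of a virtual block device."""

    def __init__(self, name: str, version: tuple[int, int, int]):
        self.name = name
        self.version = version

    def ctr(self, args: list[str]) -> None:
        """Constructor: parse table arguments and set up target state."""
        raise NotImplementedError

    def dtr(self) -> None:
        """Destructor: release any resources held by the target."""
        raise NotImplementedError

    def map(self, bio) -> int:
        """Decide what to do with an incoming bio: remap it, queue it, or complete it."""
        raise NotImplementedError
```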
That's awesome. Have you tried asking it to convert Python (pseudo-ish) code back into C that interfaces with the kernel?
No, but only because I have no use for it. I wouldn't be surprised if it did a fine job! I'd be remiss if I didn't note that it's way better at doing this for the Linux kernel than with codebases like Zookeeper and Kubernetes (though: maybe o1 makes this better, who knows?).
I do feel like someone who skipped like 8 iPhone models (cross-referencing, EIEIO, lsp-mode, code explorers, tree-sitter) and just got an iPhone 16. Like, nothing that came before this for code comprehension really matters all that much?
As you step outside regular Stack Overflow questions for the top three languages, you run into the limitations of these predictive models.
There's no "reasoning" behind them. They are still, largely, bullshit machines.
You're both on the wrong wavelength. No one has claimed it is better than an expert human yet. Be glad; for now your jobs are safe. Why not use it as a tool to boost your productivity, even though you'll get proportionally less use out of it than people in other, perhaps less "expert", jobs?
In order for it to boost productivity, it needs to answer more than the regular questions for the top three languages on Stack Overflow, no?
It often fails even for those questions.
If I need to babysit it for every line of code, it's not a productivity boost.
If you need to babysit it for every line of code, you're either a superhuman coder, working in some obscure alien language, or just using the LLM wrong.
No. I'm just using it for simple things like "help me with this Elixir code" or "I need to list Bonjour services using Swift".
It's shit across the whole "AI" spectrum from ChatGPT to Copilot to Cursor aka Claude.
I'm not even talking about code I work with at work; these are just side projects.
LLMs are not for expanding the sphere of human knowledge, but for speeding up auto-correct of higher order processing to help you more quickly reach the shell of the sphere and make progress with your own mind :)
Definitely. When we talk about being skilled in a T shape, LLMs are all about spreading the top of your T wider, not making the bottom go deeper.
I prefer reading a book. Maybe the LLM was trained on some piece of knowledge not available on the net, but I much prefer the reliability and consistency of a book.
Indeed, not much more depth, though even Terence Tao reported useful results from an earlier version, so perhaps the breadth is a depth all of its own: https://mathstodon.xyz/@tao/110601051375142142
I think of it as making the top bar of the T thicker, but yes, you're right, it also spreads it much wider.
LLMs: When the code can be made by an enthusiastic new intern with web-search and copy-paste skills, and no ability to improve under mentorship. :p
Tangentially related, a comic on them: https://existentialcomics.com/comic/557
It's funny because I'm very happy with the productivity boost from LLMs, but I use them in a way that is pretty much diametrically opposite to yours.
I can't think of many situations where I would use them for a problem that I tried to solve and failed - not only because they would probably fail, but in many cases it would even be difficult to know that it failed.
I use it for things that are not hard, that could be solved by someone without a specialized degree who took the effort to learn some knowledge or skill, but that would take too much work to do myself. And there are a lot of those, even in my highly specialized job.
I honestly can't believe this is the hyped-up "Strawberry" everyone was claiming is pretty much AGI, with senior employees leaving due to its powers being so extreme.
I'm in the "probabilistic token generators aren't intelligence" camp, so I don't actually believe in AGI, but I'll be honest: the never-ending rumors and chatter almost got to me.
Remember, this is the model some media outlet recently reported was so powerful that OpenAI is considering charging $2k/month for it.
Maybe this has been extensively discussed before, but since I've lived under a rock: which parts of intelligence do you think are not representable as conditional probability distributions?
Maybe I'm wrong here, but a lot of our brilliance comes from acting against the statistical consensus. What I mean is, Nicolaus Copernicus probably consumed a lot of knowledge on how the Earth is the center of the universe etc., and probably nothing contradicting that notion. Can an LLM do that?
Copernicus was an exception, not the rule. Would you say everyone else who lived at the time was not 'really' intelligent?
That's an illogical counterargument. The absence of published research output does not imply the absence of intelligent brain patterns. What if someone was intelligent but just wasn't interested in astronomy?
Yes, but that was just a blatant example. The question still stands: if you feed an LLM a certain kind of data, is it possible for it to stray from that data completely, like we sometimes do, in cases big and small, when we figure out how to do something a bit better by not following convention?
It could be "probability of token being useful" rather than "probability of token coming next in training data"!
And how many people actively do that? It's very rare that we experience brilliance, and often we stumble upon it by accident: irrational behavior, coincidence, or perhaps being dropped on our heads when we were young.
"Senior employees leaving due to its powers being so extreme"
This never happened. No one said it happened.
"the model some media outlet reported recently that is so powerful OAI is considering charging $2k/month for"
The Information reported someone at a meeting suggested this for future models, not specifically Strawberry, and that it would probably not actually be that high.
From "Elon Musk and Ilya Sutskever Have Warned About OpenAI's 'Strawberry'" (Jul 15, 2024): Sutskever himself had reportedly begun to worry about the project's technology, as did OpenAI employees working on A.I. safety at the time.
https://observer.com/2024/07/openai-employees-concerns-straw...
And I’m ignoring the hundreds of Reddit articles speculating every time someone at OAI leaves
And of course that $2000 article was spread by every other media outlet like wildfire
I know I'm partially to blame for believing the hype; this is pretty obviously no better at stating facts or writing good code than what we've had for the past year.
My hypothesis about these people who are afraid of AI is that they have tricked themselves into believing they are in their current position of influence due to their own intelligence (as opposed to luck, connections, etc.).
Then they drink the marketing koolaid, and it follows naturally that they worry an AI system can obtain similar positions of influence.
The whole safety aspect of AI has this nice property that it also functions as a marketing tool to make the technology seem "so powerful it's dangerous". "If it's so dangerous it must be good".
I mean, considering how many tokens their example prompt consumed, I wouldn't be surprised if it costs ~$2k/month/user to run
This raises the question of whether we can supply a function to be called (e.g., one that compiles and runs code) to evaluate intermediate CoT results.
The answer is yes, if you are willing to code it: OpenAI supports tool calls. Even if it didn't, you could just make multiple calls to their API and submit the result of the code execution yourself, roughly as in the sketch below.
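For the tool-call route (which only applies to final outputs, since the intermediate CoT isn't exposed), here is a rough sketch with the `openai` Python client; the `run_python` tool and its handling are illustrative only, and real use would need a sandbox:

```python
# Sketch: let the model request code execution via a tool call, run the code,
# and feed the outcome back so it can correct itself. Not production-safe:
# exec() on model output is shown only to illustrate the round trip.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "run_python",
        "description": "Execute a Python snippet and report whether it ran or raised.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}]

messages = [{"role": "user", "content":
             "Write and verify a function that returns the nth Fibonacci number."}]

resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
msg = resp.choices[0].message

if msg.tool_calls:  # the model chose to call the tool
    call = msg.tool_calls[0]
    code = json.loads(call.function.arguments)["code"]
    try:
        exec(code, {})  # toy execution only; sandbox this in anything real
        outcome = "executed without errors"
    except Exception as exc:
        outcome = f"raised {exc!r}"
    messages.append(msg)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": outcome})
    final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    print(final.choices[0].message.content)
else:
    print(msg.content)
```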
The intermediate CoT results aren't in the API.
I may be mistaken but I don't believe the first version of the comment I replied to mentioned intermediate CoT results.
It seems OpenAI has decided to keep the CoT results a secret. If they were to allow the model to call out to tools to help fill in the CoT steps, then this might reveal what the model is thinking - something they do not want the outside world to know about.
I could imagine OpenAI might allow their own vetted tools to be used, but perhaps it will be a while (if ever) before developers are allowed to hook up their own tools. The risks here are substantial. A model fine-tuned to run chain-of-thought that can answer graduate level physics problems at an expert level can probably figure out how to scam your grandma out of her savings too.
It's only a matter of time. When some other company releases the tool, they likely will too.
Yes, this only helps multi-step reasoning. The model still has problems with general knowledge and deep facts.
There's no way you can "reason" a correct answer to "list the tracklisting of some obscure 1991 demo by a band not on Wikipedia." You either know or you don't.
I usually test new models with questions like "what are the levels in [semi-famous PC game from the 90s]?" The release version of GPT-4 could get about 75% correct. o1-preview gets about half correct. o1-mini gets 0% correct.
Fair enough. The GPT-4 line isn't meant to be a search engine or an encyclopedia. This is still a useful update, though.
It's actually much worse than that, and you're inadvertently downplaying how bad it is.
It doesn't even know mildly obscure facts that are on the internet.
For example, last night I was trying to do something with C# generics, and it confidently told me I could use pattern matching on the type in a switch statement, and threw out some convincing-looking code.
You can't; it's impossible. It was completely wrong. When I told it this, it told me I was right, and proceeded to give me code that was even more wrong.
This is an obscure, but well documented, part of the spec.
So it's not about facts that aren't on the internet; it's just bad at facts, full stop.
What it's good at is facts the internet agrees on. Unless the internet is wrong. Which is not always a good thing, given how confident the language it uses sounds.
If you want to fuck with AI models, ask a bunch of code questions on Reddit, GitHub, and SO with example code saying "can I do X". The answer is no, but ChatGPT/Copilot/etc. will start spewing out that nonsense as if it's fact.
As for non-programming, we're about to see the birth of a new SEO movement of tricking AI models into believing your "facts".
Just use it on an instance instead
It's not always the right tool, depending on the task. IMO using LLMs is also a skill, much like learning how to Google stuff.
E.g. apparently C# generics isn't something it's good at. Interesting, so don't use it for that; apparently it's the wrong tool. In contrast, it's amazing at C++ generics, and thus speeds up my productivity. So do use it for that!
I wonder, though: is the documentation only referenced in a few places on the Internet, and are there also many forums with people pasting "Why isn't this working?" problems?
If there are a lot of people pasting broken code, the LLM now has all these examples of broken code, which it doesn't know are broken, and only a couple of references to documentation. Worse, a well-trained LLM may realise that specs change, and that even documentation may not be 100% accurate (because it is older, out of date).
After all, how many times have you had something updated (an API, a language, a piece of software) but the docs weren't updated? Happens all the time, sadly.
So it may believe newer examples of code, such as the aforementioned pasted code, might be more correct than the docs.
Also, if people keep trying to solve the same issue again, and keep pasting those examples again, well...
I guess my point here is, hallucinations come from multi-faceted issues, one being "wrong examples are more plentiful than correct". Or even "there's just a lot of wrong examples".
o1-mini is a small model (knows a lot less about the world) and is tuned for reasoning through symbolic problems (maths, programming, chemistry etc.).
You're using a calculator as a search engine.
I don't really see this as a massive problem. It's code. If it doesn't run, you ask it to reconsider, give some more info if necessary, and it usually gets it right.
The system doesn’t become useless if it takes 2 tries instead of 1 to get it right
Still saves an incredible amount of time vs doing it yourself
I haven't found a single instance where it saved me any significant amount of time. In all cases I still had to rewrite the whole thing myself, or abandon the endeavor.
And a few times the amount of time I spent trying to coax a correct answer out of the AI trumped any potential savings I could've had.
It is perfectly possible to have code that runs without errors but gives a wrong answer. And you may not even realise it’s wrong until it bites you in production.
While I agree, I saw it abused in this way a lot, in the sense that the code did what it was supposed to do in a given scenario but was obviously flawed in various ways, so it was just sitting there waiting for a disaster.
o1-preview != o1.
In public coding AI comparison tests, results showed 4o scoring around 35%, o1-preview scoring ~50% and o1 scoring ~85%.
o1 is not yet released, but has been run through many comparison tests with public results posted.
Good reminder. Why did OpenAI talk about o1 and not release it? o1-preview must be a stripped down version: cheaper to run somehow?
Don't forget about o1-mini. It seems better than o1-preview for problems that fit it (don't require so much real world knowledge).
GPT-4 base was never released, and this will be the same thing.
Has anyone tried asking it to generate the libraries/functions that it's hallucinating and seeing if it can do so correctly? And then seeing if it can continue solving the original problem with the new libraries? It'd be absolutely fascinating if it turns out it could do this.
Not for libraries, but functions will sometimes get created if you work with an agent coding loop. If the tests are in the verification step, the code will typically be correct.
I sometimes give it snippets of code and omit helper functions if they seem obvious enough, and it adds its own implementation into the output.
Oooh... oohhh!! I just had a thought: By now we're all familiar with the strict JSON output mode capability of these LLMs. That's just a matter of filtering the token probability vector by the output grammar. Only valid tokens are allowed, which guarantees that the output matches the grammar.
But... why just data grammars? Why not the equivalent of "tab-complete"? I wonder how hard it would be to hook up the Language Server Protocol (LSP), as seen in Visual Studio Code, to an AI and have it only emit syntactically valid code! No more hallucinated functions!
I mean, sure, the semantics can still be incorrect, but not the syntax.
This would be a big undertaking to get working for just one language+package-manager combination, but would be beautiful if it worked.
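As a toy illustration of the filtering idea only (no LSP, no real model; both the scoring function and the "grammar" here are mocks I made up):

```python
# Toy sketch of constrained decoding: at each step, drop candidate tokens whose
# addition would make the output an invalid prefix, then take the best survivor.
import random
import string

VOCAB = list(string.ascii_lowercase) + ["(", ")", "_", "<eos>"]

def mock_logits(prefix: str) -> dict[str, float]:
    """Stand-in for a real model's next-token scores."""
    rng = random.Random(len(prefix))
    return {tok: rng.random() for tok in VOCAB}

def is_valid_prefix(text: str) -> bool:
    """Made-up 'grammar': never more ')' than '(', and no empty '()' pairs."""
    return text.count(")") <= text.count("(") and "()" not in text

def constrained_decode(max_len: int = 12) -> str:
    out = ""
    for _ in range(max_len):
        scores = mock_logits(out)
        allowed = {t: s for t, s in scores.items()
                   if t == "<eos>" or is_valid_prefix(out + t)}
        best = max(allowed, key=allowed.get)  # <eos> is always allowed, so never empty
        if best == "<eos>":
            break
        out += best
    return out

print(constrained_decode())
```

Swapping `is_valid_prefix` for an LSP- or parser-backed check is the hard part alluded to above.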
I still fail to see the overall problem. Hallucinating non-existent libraries is good programming practice in many cases: you express your solution in terms of an imaginary API that is convenient for you, and then you replace your API with real functions and/or implement it in terms of real functions (see the sketch below).
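A small sketch of that workflow; the imaginary helper and its eventual stdlib-backed implementation are both my own illustrative choices, and the URL is a placeholder:

```python
# Step 1: write the solution against an imaginary, convenient API.
#     titles = fetch_json("https://example.com/posts.json")["titles"]
#
# Step 2: implement the imaginary API in terms of real functions.
import json
from urllib.request import urlopen

def fetch_json(url: str) -> dict:
    """The 'hallucinated' helper, now backed by the standard library."""
    with urlopen(url, timeout=10) as resp:  # placeholder URL; no auth, no retries
        return json.load(resp)

if __name__ == "__main__":
    titles = fetch_json("https://example.com/posts.json").get("titles", [])
    print(titles)
```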
After that you switch to Claude Sonnet, and after some time it also gets stuck.
The problem with LLMs is that they are not properly aware of libraries and their versions.
I've fed them library versions via requirements.txt, the Python version I'm using, etc.
They still make mistakes and try to use methods which do not exist.
Where to go from here? At this point I manually pull the library version I am using, go to its docs, and write an example which uses the library correctly (then I feed that example into the LLM).
This approach works. Now I just need to automate it so that I don't have to manually find the library and create a specific example which uses the methods I need in my code!
Directly feeding the docs isn't working well either.
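A rough sketch of automating the first half of that, assuming the library is installed locally; it pulls the installed version plus the signatures and first docstring lines of the symbols you care about into a block you can paste ahead of the prompt:

```python
# Build a prompt-context block from the *installed* library, so the model sees
# the real version and the real signatures instead of guessing from training data.
import importlib
import inspect
from importlib.metadata import PackageNotFoundError, version

def library_context(dist: str, module: str, symbols: list[str]) -> str:
    mod = importlib.import_module(module)
    try:
        ver = version(dist)
    except PackageNotFoundError:
        ver = "unknown"
    lines = [f"# {dist} == {ver}"]
    for name in symbols:
        obj = getattr(mod, name)
        try:
            sig = str(inspect.signature(obj))
        except (TypeError, ValueError):
            sig = "(...)"
        doc = (inspect.getdoc(obj) or "").splitlines()[:2]
        lines.append(f"{name}{sig}")
        lines.extend(f"    {d}" for d in doc)
    return "\n".join(lines)

# Example (assumes requests is installed): real signatures for get() and Session.
print(library_context("requests", "requests", ["get", "Session"]))
```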
One trick that people are using, when using Cursor and specifically Cursor's compose function, is to dump library docs into a text file in your repo, and then @ that doc file when you're asking it to do something involving that library.
That seems to eliminate a lot of the issues, though it's not a seamless experience, and it adds another step of having to put the library docs in a text file.
Alternatively, Cursor can fetch a web page, so if there's a good page of docs you can bring that in by @-ing the web page.
Eventually, I could imagine LLMs automatically creating library text doc files to include when the LLM is using them to avoid some of these problems.
It could also solve some of the issues of their shaky understanding of newer frameworks like SvelteKit.
Cursor also has the shadow workspace feature [1] that is supposed to send feedback from linting and language servers to the LLM. I'm not sure whether it's enabled in compose yet though.
[1] https://www.cursor.com/blog/shadow-workspace
This comment makes no sense in the context of what an LLM is. To even say such a thing demonstrates a lack of understanding of the domain. What we are doing here is TEXT COMPLETION; no one EVER said anything about being accurate and "true". We are building models that can complete text. What did you think an LLM was, a "truth machine"?
I mean, of course you're right, but then I question the usefulness.
The best one I got recently: after I pointed out that the method didn't exist, it proposed another method and said "use this method if it exists" :D
Just pass it a link to a GitHub issue and ask for a response, or even a webpage to summarize, and you will see the beautiful hallucinations it comes up with, since the model is not web browsing yet.
Stupid question: Why can't models be trained in such a way to rate the authoritativeness of inputs? As a human, I contain a lot of bad information, but I'm aware of the source. I trust my physics textbook over something my nephew thinks.
My point of view: this is a real advancement. I've always believed that with the right data allowing the LLM to be trained to imitate reasoning, it's possible to improve its performance. However, this is still pattern matching, and I suspect that this approach may not be very effective for creating true generalization. As a result, once o1 becomes generally available, we will likely notice the persistent hallucinations and faulty reasoning, especially when the problem is sufficiently new or complex, beyond the "reasoning programs" or "reasoning patterns" the model learned during the reinforcement learning phase. https://www.lycee.ai/blog/openai-o1-release-agi-reasoning
I think this model is a precursor model designed for agentic behavior. I expect OpenAI to very soon give this model tool use that will allow it to verify its code creations and whatever else it claims, through various tools like a search engine, a virtual machine instance with code execution capabilities, API calling, and other advanced tool use.
You should not be asking it questions that require it to already know detailed information about APIs and libraries. It is not good at that, and it will never be good at that. If you need it to write code that uses a particular library or API, include the relevant documentation and examples.
It's your right to dismiss it if you want, but if you want to get some value out of it, you should play to its strengths and not look for things that it fails at as a gotcha.
I'm honestly confused as to why it is doing this and why it thinks I'm right when I tell it that it is incorrect.
I've tried asking it for factual information; it will concede that it's incorrect when challenged, but it will still hallucinate answers to questions like the above.
You'd think the reasoning would nail that; most of the chain-of-thought systems I've worked on would have fixed this by asking the model whether the resulting answer was correct.
To the extent we've now got the output of the underlying model wrapped in an agent that can evaluate that output, I'd expect it to be able to detect its own hallucinations some of the time and therefore provide an alternate answer.
It's like when an LLM gives you a wrong answer and all it takes is "are you sure?" to get it to generate a different answer.
Of course the underlying problem of the model not knowing what it knows or doesn't know persists, so giving it the ability to reflect on what it just blurted out isn't always going to help. It seems the next step is for them to integrate RAG and tool use into this agentic wrapper, which may help in some cases.
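That external "are you sure?" step is easy to approximate; a minimal sketch, assuming the `openai` Python client, with the verification wording being my own:

```python
# Minimal external self-check: get an answer, then ask the model to verify it in
# a second turn. It doesn't fix the model not knowing what it knows, but it can
# surface an obviously shaky first answer.
from openai import OpenAI

client = OpenAI()

def ask_with_self_check(question: str, model: str = "gpt-4o") -> str:
    messages = [{"role": "user", "content": question}]
    first = client.chat.completions.create(model=model, messages=messages)
    answer = first.choices[0].message.content

    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Are you sure? Re-check the answer above and correct anything wrong."},
    ]
    second = client.chat.completions.create(model=model, messages=messages)
    return second.choices[0].message.content

print(ask_with_self_check("Does C# allow pattern matching on a generic type parameter in a switch statement?"))
```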
One of the biggest problems with this generation of AI is how people conflate its natural-language abilities with its access to what it knows.
Both abilities are powerful, but they are very different powers.