
ARC Prize – a $1M+ competition towards open AGI progress

salamo
93 replies
19h51m

This is super cool. I share Francois' intuition that the presently data-hungry learning paradigm is not only not generalizable but unsustainable: humans do not need 10,000 examples to tell the difference between cats and dogs, and the main reason computers can today is because we have millions of examples. As a result, it may be hard to transfer knowledge to more esoteric domains where data is expensive, rare, and hard to synthesize.

If I can make one criticism/observation of the tests, it seems that most of them reason about perfect information in a game-theoretic sense. However, many if not most of the more challenging problems we encounter involve hidden information. Poker and negotiations are examples of problem solving in imperfect information scenarios. Smoothly navigating social situations also involves the related problem of working with hidden information.

One of the really interesting things we humans are able to do is to take the rules of a game and generate strategies. While we do have some algorithms which can "teach themselves" e.g. to play go or chess, those same self-play algorithms don't work on hidden information games. One of the really interesting capabilities of any generally-intelligent system would be synthesizing a general problem solver for those kinds of situations as well.

com2kid
56 replies
18h44m

humans do not need 10,000 examples to tell the difference between cats and dogs,

I swear, not enough people have kids.

Now, is it 10k examples? No, but I think it was on the order of hundreds, if not thousands.

One thing kids do is they'll ask for confirmation of their guess. You'll be reading a book you've read 50 times before and the kid will stop you, point at a dog in the book, and ask "dog?"

And there is a development phase where this happens a lot.

Also kids can get mad if they are told an object doesn't match up to the expected label, e.g. my son gets really mad if someone calls something by the wrong color.

Another thing toddlers like to do is play silly labeling games, which is different from calling something the wrong name by accident; instead this is done on purpose for fun. E.g. you point to a fish and say "isn't that a lovely llama!" at which point the kid will fall down giggling at how silly you are being.

The human brain develops really slowly[1], and a sense of linear time encoding doesn't really exist for quite a while. (Even at 3, everything is either yesterday, today, or tomorrow.) So who the hell knows how things are being processed, but what we do know is that kids gather information through a bunch of senses that are operating at an absurd data collection rate 12-14 hours a day, with another 10-12 hours of downtime to process the information.

[1] Watch a baby discover they have a right foot. Then a few days later figure out they also have a left foot. Watch kids who are learning to stand develop a sense of "up above me" after they bonk their heads a few times on a table bottom. Kids only learn "fast" in the sense that they have nothing else to do for years on end.

PheonixPharts
38 replies
17h17m

Now, is it 10k examples? No, but I think it was on the order of hundreds, if not thousands.

I have kids so I'm presuming I'm allowed to have an opinion here.

This is ignoring the fact that babies are not just learning labels, they're learning the whole of language, motion planning, sensory processing, etc.

Once they have the basics down concept acquisition time shrinks rapidly and kids can easily learn their new favorite animal in as little as a single example.

Compare this to LLMs which can one-shot certain tasks, but only if they have essentially already memorized enough information to know about that task. It gives the illusion that these models are learning like children do, when in reality they are not even entirely capable of learning novel concepts.

Beyond just learning a new animal, humans are able to learn entirely new systems of reasoning in surprisingly few examples (though it does take quite a bit of time to process them). How many homework questions did your entire calc 1 class have? I'm guessing less than 100 and (hopefully) you successfully learned differential calculus.

dimask
16 replies
10h2m

How many homework questions did your entire calc 1 class have? I'm guessing less than 100 and (hopefully) you successfully learned differential calculus.

Not just that: people learn mathematics mainly by _thinking over and solving problems_, not by memorising solutions to problems. During my mathematics education I had to practice solving a lot of problems dissimilar to what I had seen before. Even in the theory part, a lot of it was actually about filling in details in proofs and arguments, and reformulating challenging steps (by words or drawings). My notes on top of a mathematical textbook amount to much more than the text itself.

People think that knowledge lies in the texts themselves; it does not, it lies in what these texts relate to and the processes that they are part of, a lot of which are out in the real world and in our interactions. The original article is spot on that there is no AGI pathway in the current research direction. But there are huge incentives for ignoring this.

naasking
10 replies
4h2m

Not just that: people learn mathematics mainly by _thinking over and solving problems_, not by memorising solutions to problems.

I think it's more accurate to say that they learn math by memorizing a sequence of steps that result in a correct solution, typically by following along with some examples. Hopefully they also remember why each step contributes to the answer as this aids recall and generalization.

The practice of solving problems that you describe is to ingrain/memorize those steps so you don't forget how to apply the procedure correctly. This is just standard training. Understanding the motivation of each step helps with that memorization, and also allows you to apply that step in novel problems.

The original article is spot on that there is no AGI pathway in the current research direction.

I think you're wrong. The research on grokking shows that LLMs transition from memorization to generalized circuits for problem solving if trained enough, and parametric memory generalizes their operation to many more tasks.

They have now been able to achieve near perfect accuracy on comparison tasks, where GPT-4 is barely in the double digit success rate.

Composition tasks are still challenging, but parametric memory is a big step in the right direction for that too. Accurate comparative and compositional reasoning sounds tantalizingly close to AGI.
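
For anyone unfamiliar with the term: grokking is usually demonstrated on toy algorithmic tasks, where a small network memorizes the training set first and only generalizes much later. A rough sketch of that classic setup (PyTorch assumed; hyperparameters are illustrative, and this is not the setup from the papers under discussion):

    # Sketch of the classic grokking demo (modular addition + heavy weight decay).
    # With enough steps, validation accuracy can jump long after training loss has
    # saturated; whether this exact toy model groks depends on the hyperparameters.
    import torch
    import torch.nn as nn

    P = 97  # modulus; the task is predicting (a + b) mod P
    pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
    labels = (pairs[:, 0] + pairs[:, 1]) % P
    perm = torch.randperm(len(pairs))
    train_idx, val_idx = perm[: len(perm) // 2], perm[len(perm) // 2 :]

    model = nn.Sequential(
        nn.Embedding(P, 128),          # embed both operands
        nn.Flatten(start_dim=1),       # (batch, 2, 128) -> (batch, 256)
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, P),
    )
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(50_001):
        idx = train_idx[torch.randint(len(train_idx), (512,))]
        loss = loss_fn(model(pairs[idx]), labels[idx])
        opt.zero_grad(); loss.backward(); opt.step()
        if step % 5_000 == 0:
            with torch.no_grad():
                val_acc = (model(pairs[val_idx]).argmax(-1) == labels[val_idx]).float().mean()
            print(f"step {step}: train loss {loss.item():.3f}, val acc {val_acc.item():.3f}")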

Vetch
5 replies
3h23m

The practice of solving problems that you describe is to ingrain/memorize those steps so you don't forget how to apply the procedure correctly

Simply memorizing sequences of steps is not how mathematics learning works, otherwise we would not see so much variation in outcomes. Terence Tao and I, given the same exact math training data, would not end up as two mathematicians of similar skill.

While it's true that memorization of properties, structure, operations, and what should be applied when and where is involved, there is a much deeper component of knowing how these all relate to each other and grasping their fundamental meaning and structure. Some people seem to be wired to be better at thinking about and picking out these subtle mathematical relations from just the description, or from only a few examples (or to be able to at all, where everyone else struggles).

I think you're wrong. The research on grokking shows that LLMs transition from memorization to generalized circuits

It's worth noting that for composition, key to abstract reasoning, LLMs failed to generalize to out of domain examples on simple synthetic data.

From: https://arxiv.org/abs/2405.15071

The levels of generalization also vary across reasoning types: when faced with out-of-distribution examples, transformers fail to systematically generalize for composition but succeed for comparison.
naasking
4 replies
2h58m

Simply memorizing sequences of steps is not how mathematics learning works, otherwise we would not see so much variation in outcomes

Everyone starts by memorizing how to do basic arithmetic on numbers, their multiplication tables and fractions. Only some then advance to understanding why those operations must work as they do.

It's worth noting that for composition, key to abstract reasoning, LLMs failed to generalize to out of domain examples on simple synthetic data.

Yes, I acknowledged that when I said "Composition tasks are still challenging". Comparisons and composition are both key to abstract reasoning. Clearly parametric memory and grokking have shown a fairly dramatic improvement in comparative reasoning with only a small tweak.

There is no evidence to suggest that compositional reasoning would not also fall to yet another small tweak. Maybe it will require something more dramatic, but I wouldn't bet on it. This pattern of thinking humans are special does not have a good track record. Therefore, I find the original claim that I was responding to ("there is no AGI pathway in the current research direction") completely unpersuasive.

SonOfLilit
1 replies
2h39m

I started by understanding. I could multiply by repeat addition (each addition counted one at a time with the aid of fingers) before I had the 10x10 addition table memorized. I learned university level calculus before I had more than half of the 10x10 multiplication table memorized, and even that was from daily use, not from deliberate memorization. There wasn't a day in my life where I could recite the full table.

Maybe schools teach by memorization, but my mom taught me by explaining what it means, and I highly recommend this approach (and am a proof by example that humans can learn this way).

naasking
0 replies
2h16m

I started by understanding. I could multiply by repeat addition

How did you learn what the symbols for numbers mean and how addition works? Did you literally just see "1 + 3 = 4" one day and intuit the meaning of all of those symbols? Was it entirely obvious to you from the get-go that "addition" was the same as counting using your fingers which was also the same as counting apples which was also the same as these little squiggles on paper?

There's no escaping the fact that there's memorization happening at some level because that's the only way to establish a common language.

11101010001100
1 replies
2h40m

The point is the memorization exercise requires orders of magnitude fewer examples for bootstrapping.

naasking
0 replies
2h15m

Does it though? It's a common claim but I don't think that's been rigorously established.

shkkmo
3 replies
3h26m

The practice of solving problems that you describe is to ingrain/memorize those steps so you don't forget how to apply the procedure correctly

Perhaps that is how you learned math, but it is nothing like how I learned math. Memorizing steps does not help; I sucked at it. What works for me is understanding the steps and why we use them. Once I understood the process and why it worked, I was able to reason my way through it.

The practice of solving problems that you describe is to ingrain/memorize those steps so you don't forget how to apply the procedure correctly.

Did you look at the types of problems presented by the ARC-AGI test? I don't see how memorization plays any role.

They have now been able to achieve near perfect accuracy on comparison tasks, where GPT-4 is barely in the double digit success rate.

Then let's see how they do on the ARC test. While it is possible that generalized circuits can develop in LLMs with enough training, I am pretty skeptical till we see results.

naasking
2 replies
3h7m

Perhaps that is how you learned math, but it is nothing like how I learned math.

Memorization is literally how you learned arithmetic, multiplication tables and fractions. Everyone starts learning math by memorization, and only later start understanding why certain steps work. Some people don't advance to that point, and those that do become more adept at math.

pedrosorio
1 replies
1h58m

Memorization is literally how you learned arithmetic, multiplication tables and fractions

I understood how to do arithmetic for numbers with multiple digits before I was taught a "procedure". Also, I am not even sure what you mean by "memorization is how you learned fractions". What is there to memorize?

naasking
0 replies
1h5m

I understood how to do arithmetic for numbers with multiple digits before I was taught a "procedure"

What did you understand, exactly? You understood how to "count" using "numbers" that you also memorized? You intuitively understood that addition was counting up and subtraction was counting down, or did you memorize those words and what they meant in reference to counting?

Also, I am not even sure what you mean by "memorization is how you learned fractions". What is there to memorize?

The procedure to add or subtract fractions by establishing a common denominator, for instance. The procedure for how numerators and denominators are multiplied or divided. I could go on.

whyever
1 replies
9h30m

I think there is a component of memorizing solutions. For example, for mathematical proofs there is a set of standard "tricks" that you should have memorized.

shkkmo
0 replies
3h25m

Sure, memory helps a lot; it allows you to concentrate your mental effort on the novel or unique parts of the problem.

TeMPOraL
1 replies
8h29m

People think that knowledge lies in the texts themselves; it does not, it lies in what these texts relate to and the processes that they are part of, a lot of which are out in the real world and in our interactions

And almost all of it is just more text, or described in more text.

You're very much right about this. And that's exactly why LLMs work as well as they do - they're trained on enough text of all kinds and topics, that they get to pick up on all kinds of patterns and relationships, big and small. The meaning of any word isn't embedded in the letters that make it, but in what other words and experiences are associated with it - and it so happens that it's exactly what language models are mapping.

dimask
0 replies
5h48m

It is not "just more text". That is an extremely reductive approach on human cognition and experience that does favour to nothing. Describing things in text collapses too many dimensions. Human cognition is multimodal. Humans are not computational machines, we are attuned and in constant allostatic relationship with the changing world around us.

imtringued
0 replies
5h12m

Every time I see people online reduce the human thinking process to just production of a perceptible output, I start questioning myself, whether somehow I am the only human on this planet capable of thinking and everyone else is just pretending. That can't be right. It doesn't add up.

The answer is that both humans and the model are capable of reasoning, but the model is more restricted in the reasoning that it can perform since it must conform to the dataset. This means the model is not allowed to spend tokens that do not immediately represent an answer but would have to be derived on the way to it. Since these thinking tokens are not part of the dataset, the reasoning that the LLM can perform is constrained to the parts of the model that are not subject to the straitjacket of training loss. Therefore most of the reasoning occurs in-between the first and last layers and ends with the last layer, at which point the produced token must cross the training loss barrier. Tokens that invest in the future but are not in the dataset get rejected, which limits the ability of the LLM to reason.

p1esk
8 replies
15h56m

illusion that these models are learning like children do, when in reality they are not even entirely capable of learning novel concepts

Now imagine how much would your kid learn if the only input he ever received was a sequence of words?

_flux
7 replies
9h42m

Are you saying it's not fair for LLMs, because the way they are taught is different?

The difference is that we don't know better methods for them, but we do know of better methods for people.

TeMPOraL
5 replies
8h25m

I think they're saying that it's silly to claim humans learn with less data than LLMs, when humans are ingesting a continuous video, audio, olfactory and tactile data stream for 16+ hours a day, every day. It takes at least 4 years for a human child to be in any way comparable in performance to GPT-4 on any task both of them could be tested on; do people really believe GPT-4 was trained with more data than a 4 year old?

lelanthran
2 replies
2h24m

I think they're saying that it's silly to claim humans learn with less data than LLMs, when humans are ingesting a continuous video, audio, olfactory and tactile data stream for 16+ hours a day, every day.

Yeah, but they're seeing mostly the same thing day after day!

They aren't seeing 10k stills of 10k different dogs, then 10k stills of 10k different cats. They're seeing $FOO thousand images of the family dog and the family cat.

My (now 4.5yo) toddler did reliably tell the difference between cats and dogs the first time he went with us to the local SPCA and saw cats and dogs that were not our cats and dogs.

In effect, 2 cats and 2 dogs were all he needed to reliably distinguish between cats and dogs.

TeMPOraL
1 replies
2h12m

In effect, 2 cats and 2 dogs were all he needed to reliably distinguish between cats and dogs.

I assume he was also exposed to many images, photos and videos (realistic or animated) of cats and dogs in children books and toys he handled. In our case, this was a significant source of animal recognition skills of my daughters.

lelanthran
0 replies
1h21m

I assume he was also exposed to many images, photos and videos (realistic or animated) of cats and dogs in children books and toys he handled.

No images or photos (no books).

TV, certainly, but I consider it unlikely that animals in the animation style of Peppa Pig help the classifier.

Besides which, we're still talking under a dozen cats/dogs seen till that point.

Forget about cats/dogs. Here's another example: he only had to see a burger patty once to determine that it was an altogether new type of food, different from (for example) a sausage.

Anyone who has kids will have dozens of examples where the classifier worked without a false positive off a single novel item.

ben_w
0 replies
7h11m

do people really believe GPT-4 was trained with more data than a 4 year old?

I think it was; the guesstimate I've seen is that GPT-4 was trained on 13e12 tokens, which over 4 years is 8.9e9/day, or about 1e5/s.

Then it's a question of how many bits per token — my expectation is 100k/s is more than the number of token-equivalents we experience, even though it's much less than the bitrate even of just our ears let alone our eyes.
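
For what it's worth, here's the arithmetic spelled out; the 13e12 token count is the guesstimate above, not an official figure:

    # Back-of-the-envelope check of the rate above (guesstimated inputs).
    tokens = 13e12
    days = 4 * 365
    print(f"{tokens / days:.2e} tokens/day")             # ~8.9e09
    print(f"{tokens / (days * 86_400):.2e} tokens/sec")  # ~1.0e05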

_flux
0 replies
7h45m

I think it's fair that both LMMs and people get a certain (even unbounded) amount of "pretraining" before actual tasks.

But after the training, people are much better equipped to do single-shot recognition and cognitive tasks on imagery and situations they have not encountered before, e.g. identifying (from pictures) which animal is being shown, even if it is only the second time they are seeing that animal (the first being when shown that this animal is a zebra).

So, basically, after initial training, I believe people are superior in single-shot tasks—and things are going to get much more interesting once LMMs (or something after that?) are able to do that well.

It might be that GPT-4o can actually do that task well! Someone should demo it, I don't have access. Except, of course, GPT-4o already knows what zebras look like, so it would have to be something other than exactly that.

p1esk
0 replies
9h22m

So a billion years of evolutionary search plus 20 years of finetuning is a better method?

educasean
5 replies
11h54m

kids can easily learn their new favorite animal in as little as a single example

Until they encounter a similar animal and get confused, at which point you understand the implicit heuristic they were relying on. (E.g. they confused a dairy cow with a zebra, which means their heuristic was a black-and-white quadruped.)

Doesn't this seem remarkably close to how LLMs behave with one-shot or few-shot learning? I think there are a lot more similarities here than you give it credit for.

Also, I grew up in South Korea where early math education is highly prioritized (for better or for worse). I remember having to solve 2 dozen arithmetic problems every week after school with a private tutor. Yes, it was torture and I was miserable, but it did expose me to thousands more arithmetic questions than my American peers. All that misery paid off when I moved to the U.S. at the age of 12 and realized that my math level was 3-4 years above my peers. So yes, I think human accuracy also improves with more training data.

interloxia
3 replies
11h36m

Not many zebras where I live but lots of little dogs. Small dogs were clearly cats for a long time no matter what I said. The training can take a while.

TeMPOraL
2 replies
8h37m

This. My 2.5 y.o. still argues with me that a small dog she just saw in the park is a "cat". That's in contrast to her older sister, who at 5 is... begrudgingly accepting that I might be right about it after the third time I correct her.

lynx23
0 replies
7h56m

And once they learn sarcasm, small dogs are cats again :-)

HarHarVeryFunny
0 replies
6h55m

The thing is that the labels "cat" and "dog" reflect a choice in most languages to name animals based on species, which manifests in certain physical/behavioral attributes. Children need to learn by observation/teaching and generalization that these are the characteristics they need to use to conform to our chosen labelling/distinction, and that other things such as size/color/speed are irrelevant.

Of course it didn't have to be this way - in a different language animals might be named based on size or abilities/behavior, etc.

So, your daughter wanting to label a cat-sized dog as a cat is just a reflection of her not yet having aligned her own generalization with what you are talking about when you say "cat" vs "dog".

pea
0 replies
2h22m

My favourite part of this is when they apply their new words to things that technically make sense, but don't. My daughter proudly pointed at a king wearing a crown as "sharp king" after learning about knives, saws, etc.

aamar
1 replies
15h16m

How many homework questions did your entire calc 1 class have? I'm guessing less than 100…

I’m quite surprised at this guess and intrigued by your school’s methodology. I would have estimated an average of >30 problems a week across 20 weeks for myself.

My kids are still in pre-algebra, but they get way more drilling still, well over 1000 problems per semester once Zern, IReady, etc. are factored in. I believe it’s too much, but it does seem like the typical approach here in California.

com2kid
0 replies
28m

I preferred doing large problem sets in math class because that is the only way I felt like I could gain an innate understanding of the math.

For example after doing several hundred logarithms, I was eventually able to do logs to 2 decimal places in my head. (Sadly I cannot do that anymore!) I imagine if I had just done a dozen or so problems I would not have gained that ability.

TeMPOraL
1 replies
8h38m

This is ignoring the fact that babies are not just learning labels, they're learning the whole of language, motion planning, sensory processing, etc.

Yes. All that learning is feeding off one another. They're learning how reality works. Every bit of new information informs everything else. It's something that LLMs demonstrate too, so it shouldn't be a surprising observation.

Once they have the basics down concept acquisition time shrinks rapidly

Sort of, kind of.

and kids can easily learn their new favorite animal in as little as a single example.

Under 5 they don't. Can't speak to what happens later, as my oldest kid just had their 5th birthday. But below 5, all I've seen is kids being quick to remember a name, but taking quite a bit longer to actually distinguish between a new animal and similar-looking ones they already know. It takes a while to update the classifier :).

(And no, they aren't going to one-shot recognize an animal in a zoo that they saw first time on a picture hours earlier; it's a case I've seen brought up, and I maintain that even most adults will fail spectacularly at this test.)

Compare this to LLMs which can one-shot certain tasks, but only if they have essentially already memorized enough information to know about that task. It gives the illusion that these models are learning like children do, when in reality they are not even entirely capable of learning novel concepts.

Correct, in the sense that the models don't update their weights while you use them. But that just means you have to compare them with ability of humans to one-shot tasks on the spot, "thinking on their feet", which for most tasks makes even adults look bad compared to GPT-4.

How many homework questions did your entire calc 1 class have? I'm guessing less than 100 and (hopefully) you successfully learned differential calculus.

I don't believe someone could learn calc in 100 exercises or less. Per concept like "addition of small numbers", or "long division", or "basic derivatives", or "trivial integrals", yes. Note that in-class exercises count too; learning doesn't happen primarily by homework (mostly because few have enough time in a day to do it).

shkkmo
0 replies
3h17m

But that just means you have to compare them with ability of humans to one-shot tasks on the spot, "thinking on their feet", which for most tasks makes even adults look bad compared to GPT-4.

This simply is not true as stated in the article. ARC-AGI is a one-shot task test that humans reliably do much, much better on than any AI model.

I don't believe someone could learn calc in 100 exercises or less.

I learned the basics of integration in a foreign language I barely understood by watching a couple of diagrams get drawn out and seeing far less than 100 examples or exercises.

com2kid
0 replies
13h45m

This is ignoring the fact that babies are not just learning labels, they're learning the whole of language, motion planning, sensory processing, etc.

Sure, but they learn a lot of labels.

How many homework questions did your entire calc 1 class have? I'm guessing less than 100

At least 20 to 30 a week, for about 10 weeks of class. Some weeks were more, and I remember plenty of days where we had 20 problems assigned a day.

Indeed, I am a huge fan of "the best way to learn math is to do hundreds upon hundreds of problems", because IMHO some concepts just require massive amounts of repetition.

_carbyau_
0 replies
16h32m

Two other points - I've also forgotten a bunch, but also know I could "relearn" it faster than the first time around.

To continue your example, I know I've learned calculus and was lauded for it at the time. Now I could only give you the vague outlines, nothing practical. However, I know if I was pressed, I could learn it again in short order.

smusamashah
2 replies
11h48m

My kid is about 3 and has been slow on language development. He can barely speak a few short sentences now. Learning the names of things and concepts made a big difference for him, and that has been fascinating to watch.

This reminds me of the story of Adam learning names, or how some languages can express a lot more in fewer words. And it makes sense that LLMs look intelligent to us.

My kid loves repeating the names of things he learned recently. For the past few weeks, after learning 'spider', 'snake', and 'dangerous', he keeps finding spiders around; there are no snakes, so he makes up snakes from curly drawn lines and tells us they are dangerous.

I think we learn fast because of stereo (3d) vision. I have no idea how these models learn and don't know if 3d vision will make multimodal LLMs better and require exponentially fewer examples.

Tepix
1 replies
9h8m

I think we learn fast because of stereo (3d) vision.

I think stereo vision is not that important if you can move around and get spatial clues that way also.

smusamashah
0 replies
6h53m

Every animal/insect I can think of has more than 1 eye. Some have a lot more than 2 eyes. It has to be that important.

resource0x
2 replies
15h33m

I haven't seen 1000 cats in my entire life. I'm sure I learned how to tell a dog from a cat after being exposed to just a single instance of each.

lostmsu
1 replies
14h56m

I'm sure you saw over 1B images of cats though, assuming 24 images per second from vision.

lelanthran
0 replies
1h25m

I'm sure you saw over 1B images of cats though, assuming 24 images per second from vision.

The AI models aren't seeing the same image 1B times.

llm_trw
1 replies
11h32m

Babies, unlike machine learning models, aren't placed in limbo when they aren't running back propagation.

Babies need few examples for complex tasks because they get a constant stream of infinitely complex examples on other tasks, which are used for transfer learning.

Current models take a nuclear reactor's worth of power to run back prop on top of a small country's GDP worth of hardware.

They are _not_ going to generalize to AGI because we can't afford to run them.

larodi
0 replies
7h15m

Current models take a nuclear reactor's worth of power to run back prop on top of a small country's GDP worth of hardware.

Nice one. Perhaps we are to conclude the whole transformer architecture is amazingly overblown in storage/computation costs.

AGI or not, we need a better approach to what transformers are doing.

ein0p
1 replies
11h23m

Not to mention that babies receive petabytes of visual input to go with other stimuli. It’s up for debate how sample efficient humans actually are in the first few years of their lives.

Tepix
0 replies
9h7m

Hardly. High visual acuity is limited to a tiny area of the FoV; your brain is filling in all the blanks for you.

1024core
1 replies
18h8m

I swear, not enough people have kids.

My friend's toddler, who grew up with a cat in the house, would initially call all dogs "cat". :-D

mkl
0 replies
7h24m

My niece, 3yo, at the zoo, spent about 30 seconds trying to figure out whether a pig was a cat or a car.

cess11
0 replies
2h59m

I have a small kid. When they first saw some jackdaws, the first birds they noticed could fly, they thought it was terribly exciting and immediately learned the word for them, and generalised it to geese, crows, gulls and magpies (plus some less common species whose English names I don't know), pointing at them and screaming the equivalent of 'jackda! jackda!'.

bamboozled
0 replies
12h12m

I think your comment over-intellectualises the way children experience the world.

My child experiences the world in a really pure way. They don’t care much about labels or colours or any other human inventions like that. He picks up his carrot; he doesn’t care about the name or the color. He just enjoys purely experiencing eating it. He can also find incredible flow-state-like joy from playing with river stones or looking at the moon.

I personally feel bad I have to teach them to label things and put things in boxes. I think your child is frustrated at times because it’s a punish of a game: the departure from “the oceanic feeling”.

Your comment would make sense to me if the end game of our brains and human experience is labelling things. It’s not. It’s useful but it’s not what living is about.

Nition
0 replies
17h27m

the kid will stop you, point at a dog in the book, and ask "dog?"

Of course for a human this can either mean "I have an idea about what a dog is, but I'm not sure whether this is one" or it can mean "Hey this is a... one of those, what's the word for it again?"

AuryGlenz
0 replies
13h6m

That’s all true, yet my 2.5 year old sometimes one-shots specific information. After she did what you said and asked “what’s that noise?” for the fifth time in a few minutes when we heard some woodpeckers this spring, I told my daughter that woodpeckers eat bugs out of trees. She brought it up again at least a week later, randomly. Developing brains are amazing.

She also saw an eagle this spring out the car window and said “an eagle! …no, it’s a bird,” so I guess she’s still working on those image classifications ;)

9cb14c1ec0
0 replies
18h25m

not enough people have kids.

Second that. I think I've learned as much as my children have.

Watch a baby discover they have a right foot. Then a few days later figure out they also have a left foot.

Watching a baby's awareness grow from pretty much nothing to a fully developed ability to understand the world around them is one of the most fascinating parts of being a parent.

VirusNewbie
13 replies
19h22m

: humans do not need 10,000 examples to tell the difference between cats and dogs

well, maybe. We view things in three dimensions at high fidelity: viewing a single dog or cat actually ends up being thousands of training samples, no?

amelius
9 replies
19h15m

Yes, but we do not call a couch in a leopard print a leopard. Because we understand that the print is secondary to the function.

VirusNewbie
6 replies
18h34m

I'm not sure it's as simple as you say. The first time my very young son saw a horse, he made the ASL sign for 'dog'.

He had only ever seen cats and dogs in his life previous to that.

clipsy
5 replies
16h13m

Did he require 9,999 more examples of horses before learning the difference?

VirusNewbie
4 replies
15h5m

In another comment I replied that 3D high fidelity images do end up being thousands of training samples, so the answer is yes.

clipsy
1 replies
14h10m

I'm deeply skeptical that an AI trained on (effectively) thousands of images of one horse will perform very well at recognizing horses in general.

jpc0
0 replies
10h8m

I'll double down with you on this.

Then train the AI using a binocular (stereo) video of a thoroughbred and see if it can recognize a draft horse and a quarter horse as horses...

_flux
1 replies
9h38m

Are you suggesting that if a group of kids were given a book of zoo animals before going to the zoo, they would have difficulties identifying any new animals, because they have only seen one picture of each?

VirusNewbie
0 replies
2h28m

I think that's an interesting question, and a possible counter to my argument.

Certainly kids learn and become better at extrapolation and need fewer and fewer samples in general as they get more life experience.

rolisz
0 replies
17h16m

Hah. My toddler gladly calls her former walking aid toy a "lawn mower". Random toys become pie and cakes she brings to us to eat.

mewpmewp2
0 replies
14h39m

But we have a lot more sensory input and context to verify all of that.

If you kept training LLMs with all that data, it would be interesting to see what the results would be.

bbor
2 replies
19h2m

Eh, still doesn’t hold up. I really don’t think there are many psychologists working on the posited mechanism of simple NN-like backprop learning. Aka conditioning, I guess. As Chomsky reminds us every time we let him: human children learn to understand and use language — an incredibly complex and nuanced domain, to say the least — with shockingly little data and often zero-to-none intentional instruction. We definitely employ principles and patterns that are far more complex (more “emergent”?) than linear regression.

Tho I only ever did undergrad stats, maybe ML isn’t even technically a linear regression at this point. Still, hopefully my gist is clear

ekidd
0 replies
17h1m

Chomsky reminds us every time we let him: human children learn to understand and use language — an incredibly complex and nuanced domain, to say the least — with shockingly little data and often zero-to-none intentional instruction.

Chomsky's arguments about "poverty of the stimulus" rely on using non-probabilistic grammars. Norvig discusses this here: https://norvig.com/chomsky.html

In 1967, Gold's Theorem showed some theoretical limitations of logical deduction on formal mathematical languages. But this result has nothing to do with the task faced by learners of natural language. In any event, by 1969 we knew that probabilistic inference (over probabilistic context-free grammars) is not subject to those limitations (Horning showed that learning of PCFGs is possible).

If I recall correctly, human toddlers hear about 3-13 million spoken words per year, and the higher ranges are correlated with better performance in school. Which:

- Is a lot, in an absolute sense.

- But is still much less training data than LLMs require.

Adult learners moving between English and Romance languages can get a pretty decent grasp of the language (C1 or C2 reading ability) with about 3 million words of reading. Which is obviously exploiting transfer learning and prior knowledge, because it's harder in a less related language.

So yeah, humans are impressive. But Chomsky doesn't really seem to have the theoretical toolkit to deal with probabilistic or statistical learning. And LLMs are closer to statistical learning than to Chomsky's formal models.
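
To make "probabilistic context-free grammar" concrete, here's a toy PCFG parsed with NLTK's Viterbi parser; the grammar and its probabilities are invented purely for illustration, and NLTK is assumed installed:

    # Toy PCFG, the kind of object Horning's learnability result is about.
    import nltk

    grammar = nltk.PCFG.fromstring("""
        S   -> NP VP    [1.0]
        NP  -> Det N    [0.6] | 'kids' [0.4]
        VP  -> V NP     [1.0]
        Det -> 'the'    [1.0]
        N   -> 'dog'    [0.5] | 'cat' [0.5]
        V   -> 'see'    [1.0]
    """)

    parser = nltk.ViterbiParser(grammar)
    for tree in parser.parse("kids see the dog".split()):
        print(tree, tree.prob())   # most probable parse and its probability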

VirusNewbie
0 replies
18h33m

human children learn to understand and use language — an incredibly complex and nuanced domain, to say the least — with shockingly little data and often zero-to-none intentional instruction

This isn't an accurate comparison imo, because we're mapping language to a world model which was built through a ton of trial and error.

Children aren't understanding language at six months old; there seems to be a minimum amount of experience with physics and the world needed before language can click for them.

theptip
6 replies
15h30m

humans do not need 10,000 examples to tell the difference between cats and dogs

The optimization process that trained the human brain is called evolution, and it took a lot more than 10,000 examples to produce a system that can differentiate cats vs dogs.

Put differently, an LLM is pre-trained with very light priors, starting almost from scratch, whereas a human brain is pre-loaded with extremely strong priors.

PaulDavisThe1st
3 replies
14h9m

The optimization process that trained the human brain is called evolution, and it took a lot more than 10,000 examples to produce a system that can differentiate cats vs dogs.

Asserted without evidence. We have essentially no idea at what point living systems were capable of differentiating cats from dogs (we don't even know for sure which living systems can do this).

choeger
2 replies
13h47m

We know for a fact that cats, dogs, and humans do.

ben_w
0 replies
7h47m

As adults, not (as per this thread) genetically.

PaulDavisThe1st
0 replies
5h8m

Sure, but can earthworms? Butterflies? Oak trees? Slime mould? At what point in the history of life did sufficient discrimination to differentiate e.g. a cat and a dog actually arise? Are the mechanisms used for this universal? Are some better than others? etc.

llm_trw
1 replies
11h31m

The optimization process that trained the human brain is called evolution

A human brain that doesn't get visual stimulus at the critical age between 0 and 3 years old will never be able to tell the difference between a cat and a dog because it will be forevermore blind.

ben_w
0 replies
7h23m

Commonly believed, but not so: https://www.sciencedaily.com/releases/2007/02/070220021337.h...

I heard a similar case before I did my A-levels, so at least 22 years ago, where the person had cataracts removed and it took a while to learn to see; something about having to touch a statue (of a monkey?) before being able to recognise monkeys?

pants2
3 replies
18h51m

Humans, I would bet, could distinguish between two animals they've never seen based only on a loose or tangential description. I.e. "A dog hunts animals by tracking and chasing them long enough to exhaust their energy, but a cat is opportunistic and strikes using stealth and agility."

A human that has never seen a dog or a cat could probably determine which is which based on looking at the two animals and their adaptations. This would be an interesting test for AIs, but I'm not quite sure how one would formulate an eval for this.

taneq
0 replies
17h11m

Only after being exposed to (at least pictures and descriptions of) dozens if not hundreds of different types of animal and their different attributes. Literal decades of training time and carefully curated curriculum learning are required for a human to perform at what we consider ‘human level’.

ryankrage77
0 replies
18h18m

A possible way to test this idea would be to draw two aliens with different hunting strategies and do a poll of which is which. I'd try it but my drawing skills are terrible and I'm averse to using generated images.

allanrbo
2 replies
14h41m

If a human eye works at say 10 fps, then about 17 minutes with a cat is about 10k images :-D

captaincaveman
1 replies
8h45m

I'd say that was more like a single instance, one interaction with a thing.

lxgr
0 replies
2h55m

But in that single interaction, you might have seen the cat from all kinds of different angles, in various poses, doing various things, some of which are particularly not-dog-like.

I vaguely remember hearing that there are even ways to expand training data like that for neural networks, i.e. by presenting the same source image slightly rotated, partially obscured, etc.
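
What I was half-remembering is standard data augmentation; a minimal sketch with torchvision, where the filename and parameter values are just placeholders:

    # One photo of a cat becomes many slightly different training samples.
    from PIL import Image
    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomRotation(degrees=15),                 # slight rotations
        transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),   # reframing / partial cropping
        transforms.ColorJitter(brightness=0.2, contrast=0.2),  # lighting changes
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ])

    cat = Image.open("cat.jpg")                  # placeholder image file
    samples = [augment(cat) for _ in range(32)]  # 32 variants of the same photo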

jules
1 replies
19h29m

Do computers need 10,000 examples to distinguish dogs from cats when pretrained on other tasks?

curious_cat_163
0 replies
16h39m

No.

nphard85
0 replies
10h32m

Dwarkesh*

woadwarrior01
0 replies
4h50m

humans do not need 10,000 examples to tell the difference between cats and dogs

Neither do machines. Look up few-shot learning with things like CLIP.
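
E.g. CLIP can separate cats from dogs zero-shot from text prompts alone, and few-shot setups typically just fit a small probe on the same embeddings. A rough sketch, assuming the openai/CLIP package and a placeholder image file:

    # Zero-shot cat-vs-dog classification with CLIP.
    import clip
    import torch
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    image = preprocess(Image.open("mystery_animal.jpg")).unsqueeze(0).to(device)
    text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1)

    print({"cat": probs[0, 0].item(), "dog": probs[0, 1].item()})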

nextaccountic
0 replies
12h20m

humans do not need 10,000 examples to tell the difference between cats and dogs

Humans learn through a lifetime.

Or are we talking about newborn infants?

goertzen
0 replies
18h53m

I don’t know enough about biology or genetics or evolution, but surely the millions of years of training that are hardcoded into our genes and expressed in our biology amounted to much larger “training” runs.

fennecbutt
0 replies
7h15m

Humans don't need those examples because our brains are very pretrained. Natural fear of snakes and snakelike things, etc etc.

ML models are starting from absolute zero, single celled organism level.

lacker
36 replies
21h27m

I really like the idea of ARC. But to me the problems seem like they require a lot of spatial world knowledge, more than they require abstract reasoning. Shapes overlapping each other, containing each other, slicing up and reassembling pieces, denoising regular geometric shapes, you can call them "core knowledge" but to me it seems like they are more like "things that are intuitive to human visual processing".

Would an intelligent but blind human be able to solve these problems?

I'm worried that we will need more than 800 examples to solve these problems, not because the abstract reasoning is so difficult, but because the problems require spatial knowledge that we intelligent humans learn with far more than 800 training examples.
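
For anyone who hasn't looked at the tasks: each one is a handful of input/output grid pairs (small matrices of color indices) plus a test input, and the solver has to infer the transformation. A made-up, much-simpler-than-real example:

    # Toy ARC-style task (invented, not from the actual dataset): infer the rule
    # "recolor every 3 to 5" from one demonstration pair, then apply it to a test grid.
    train_pair = (
        [[0, 3, 0],
         [3, 3, 3],
         [0, 3, 0]],
        [[0, 5, 0],
         [5, 5, 5],
         [0, 5, 0]],
    )

    test_input = [[3, 0, 3],
                  [0, 3, 0],
                  [3, 0, 3]]

    def apply_inferred_rule(grid):
        # A human infers this rule from the single example; a program-synthesis
        # solver would have to find it among a huge space of grid transformations.
        return [[5 if cell == 3 else cell for cell in row] for row in grid]

    print(apply_inferred_rule(test_input))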

HarHarVeryFunny
15 replies
19h2m

I just did the first 5 of the "public eval set" without having looked at the "public training set", and found them easy enough. If we're defining AGI as at least human level, then the AGI should also be able to do these without seeing any more examples.

I don't think there's any rules about what knowledge/experience you build into your solution.

mewpmewp2
14 replies
14h35m

AGI should obviously be able to do them. But an AI being able to do those 100 percent wouldn't by itself be evidence of AGI. It is a very narrow domain.

bubblyworld
12 replies
12h29m

Why not? If the only thing that can solve problem X is AGI (e.g. humans), and something else comes along that solves it, then rationally that should be evidence that the something else is AGI right?

Unless you have strong prior beliefs (like "computers can't be AGI") or something else that's problem specific ("these problems can be solved by these techniques which don't count as AGI"). So I guess that's my real question.

lucianbr
3 replies
11h38m

That makes no sense at all. Any problem is initially only solvable by humans, until some technology is developed to solve it. Calculating a logarithm was at some point only doable by humans, and then digital computers came along. This would be in your view evidence that digital computers are AGI!? As in, an 8086 with some math code is AGI. We've had it for decades now, only nobody noticed :)

bubblyworld
2 replies
11h4m

It's just Bayes' theorem - there are basically two variables that control how strong the evidence is:

* How likely you think AGI is in general.

* How solvable you think the problem is, independently of what's solving it.

In the cases you've brought up that latter probability is very high, which means that they are extremely weak evidence that computers are AGI. So we agree!

In this case the latter probability seems to be quite low - attempts to solve it with computers have largely failed so far!
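
To make that concrete, a toy posterior calculation; the numbers are invented purely to show how the two variables interact:

    # Toy posterior for "system S is AGI-like" given "S solved problem X".
    def posterior(prior_agi, p_solve_given_agi, p_solve_given_not_agi):
        joint_agi = prior_agi * p_solve_given_agi
        joint_not = (1 - prior_agi) * p_solve_given_not_agi
        return joint_agi / (joint_agi + joint_not)

    # Easy problem (e.g. computing logarithms): narrow systems solve it too,
    # so observing a solution barely moves the needle.
    print(posterior(0.01, 0.99, 0.90))   # ~0.011

    # Hard, long-unsolved problem: narrow approaches have mostly failed,
    # so a solution is much stronger (though still not conclusive) evidence.
    print(posterior(0.01, 0.99, 0.05))   # ~0.17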

lucianbr
1 replies
10h43m

We don't agree. You're now saying anything is evidence of anything, which just makes the word "evidence" meaningless.

In real life, when people say "A is evidence of B" they mean strong evidence, or even overwhelming evidence. You just backpedalled by redefining evidence to mean anything and nothing, so you can salvage an obviously false claim.

Nobody in the real world says "rain is evidence of aliens" with the implicit assumption that it's just extremely weak evidence. The way English is used by people makes that sentence simply false, as is yours that anything previously not solved is evidence of AGI.

bubblyworld
0 replies
10h41m

We're talking about a specific problem here - the competition in the OP. Not aliens in the rain.

educasean
3 replies
11h30m

This flies directly in the face of technologies such as Deep Blue and AlphaGo. They excel in tiny domains previously thought to be the pinnacle of intelligence, and now they dominate humans. Are they AGI in your definition?

bubblyworld
2 replies
11h2m

See my response to the other commenter. In these cases as well I would conclude it's very weak evidence of AGI, so I don't think we disagree.

Edit: I think maybe the disagreement here is about the nature of evidence. I think there can be evidence that something is AGI even if it isn't, in fact, AGI. You seem to believe that if there's any evidence that something is AGI, it must be AGI, I think?

educasean
1 replies
10h34m

I personally don't find this line of rhetoric useful or relevant. Let's agree to disagree.

bubblyworld
0 replies
10h26m

Okay, that's fair. But to be clear - this is a theorem of probability theory, not rhetoric.

nl
1 replies
7h33m

If the only thing that can solve problem X is AGI (e.g. humans), and something else comes along that solves it, then rationally that should be evidence that the something else is AGI right?

No.

Because there might be undiscovered ways to solve these problems that no one would claim are AGI.

The definition of AGI is notoriously fuzzy, but nonetheless, if there were a 10-line Python program (with no external dependencies or data) that could solve it, then few would argue that was AGI.

So perhaps there is an algorithm that solves these puzzles 100% of the time and can be easily expressed.

So I agree that only being able to solve these problems doesn't define AGI.

bubblyworld
0 replies
3h32m

I think I agree with you, but consider these two cases:

1. Only humans are known to have solved problem X, and we've spent no time looking for alternative solutions.

2. Only humans are known to have solved problem X, and we've spent hundreds of thousands of hours looking for alternative solutions and failed.

Now suppose something solves the problem. I feel like in case 2 we are justified in saying there's evidence that something is a human-like AGI. In case 1 we probably aren't justified in saying that.

To me this seems evident regardless of what the problem actually is! Because if it's hard enough that thousands of human hours cannot find a simple/algorithmic solution it's probably something like an "AGI-complete" problem?

HarHarVeryFunny
1 replies
6h37m

Humans can do infinitely many things because we have general intelligence.

Testing whether an AI can play chess or solve Chollet's ARC problems, or some other set of narrow skills, doesn't prove generality. If you want to test for generality, then you either have to:

1) Have a huge and very broad test suite, covering as many diverse human-level skills as possible.

and/or,

2) Reductively understand what human intelligence is, and what combination of capabilities it provides, then test for all of those capabilities both individually and in combination.

As Chollet notes, a crucial part of any AGI test is solving novel problems that are not just templated versions (or shallow combinations) of things the wanna-be AGI has been trained on, so for both of the above tests this is key.

bubblyworld
0 replies
3h18m

I suspect trying to reductively understand intelligence is a bit like trying to reductively understand biology - every level of abstraction is causally influenced by every other level of abstraction, so there just aren't simple primitives you can break everything down into.

HarHarVeryFunny
0 replies
6h46m

Yes, a narrow domain, but the core capability it is testing for (explorative combination/application of learned patterns and skills) is a general one that in a meaningful AGI would be available across domains.

CooCooCaCha
7 replies
20h37m

“Would an intelligent but blind human be able to solve these problems?”

This is the wrong way to think about it IMO. Spatial relationships are just another type of logical relationship and we should expect AGI to be able to analyze relationships and generate algorithms on the fly to solve problems.

Just because humans can be biased in various ways doesn’t mean these biases are inherent to all intelligences.

crazygringo
4 replies
20h1m

Spatial relationships are just another type of logical relationship and we should expect AGI to be able to analyze relationships and generate algorithms on the fly to solve problems.

Not really. By that reasoning, 5-dimensional spatial reasoning is "just another type of logical relationship" and yet humans mostly can't do that at all.

It's clear that we have incredibly specialized capabilities for dealing with two- and three-dimensional spatiality that don't have much of anything to do with general logical intelligence at all.

andoando
2 replies
15h52m

Literally every single thing you reason about is something happening in space-time.

lucianbr
1 replies
11h33m

Where exactly in space-time are complex numbers? Could you point me to 2+i for example?

How about some aliens in a SF book. When we reason about them, where are they exactly? Literally on the pages of the book?

How about a context-free grammar?

andoando
0 replies
5h27m

Complex numbers are just 2d numbers and they are mapped on a 2d plane, yeah. They are just a 2d vector. Calling them imaginary numbers is silly in the first place; 2+i is just the vector (2,1). All we mean here is that the two numbers are orthogonal, i.e. they are distinguished by some independent factor. The imaginary component is no more imaginary than the real component.

I mean, what problems does physics solve, not just with complex numbers but with even higher-dimensional vectors? Problems of... space-time.

Aliens in a SF book. What do you imagine? I see some kind of physical entity having geometric components in some kind of space.

Context-free grammars are represented by... trees, where one side of a spatial relationship maps to one idea and the other to another. What is context? Things surrounding something; where something is.

Come up with any idea, it can be represented in space and time.

CooCooCaCha
0 replies
18h16m

Yes really. Problem solving on the fly doesn't mean the algorithm can instantly learn anything. Reality is HEAVILY biased towards two and three spatial dimensions so our brains have hours and hours of training on that dataset. But, with time, humans can learn to be good at all sorts of things.

It's important that we try to think from the perspective of an algorithm, not a human. And it's also important that we don't jump to extremes.

It seems like you interpreted "solving problems on the fly" to mean "instantly being an expert on a completely different and novel domain". What it does mean is flexibility, resilience to novel situations, and being able to adapt over time.

janalsncm
1 replies
20h23m

Part of the concern might be that visual reasoning problems are overrepresented in ARC in the space of all abstract reasoning problems.

It’s similar to how chess problems are technically reasoning problems but they are not representative of general reasoning.

CooCooCaCha
0 replies
18h23m

ARC is meant to test fundamental algorithms. It's entirely ok to train a model specifically for this task. Part of the beauty of ARC is that it's resistant to memorization.

nickpsecurity
3 replies
21h14m

To parent: the spatial reasoning and blind-person points were great counterexamples. It still might be OK despite the blind exceptions if it showed general reasoning.

To OP: I like your project goal. I think you should look at prior reasoning engines that tried to build common sense. Cyc and OpenMind are examples. You also might find use for the list of AGI goals in Section 2 of this paper:

https://arxiv.org/pdf/2308.04445

When studying intros to brain function, I also noted many regions tie into the hippocampus, which might both do sense-neutral storage of concepts and make inner models (or approximations) of the external world. The former helps tie concepts together through various senses. The latter helps in planning, when we are imagining possibilities to evaluate and iterate on them.

Seems like AGI should have these hippocampus-like traits and those in the Cyc paper. One could test whether an architecture could do such things in theory or on a small scale. It shouldn't tie into just one type of sensory input either: at least two, with the ability to act on what exists only in one or what is in both.

Edit: Children also have an enormous amount of unsupervised training on visual and spatial data. They get reinforcement through play and supervised training by parents. A realistic benchmark might similarly require GBs of pretraining data.

HarHarVeryFunny
2 replies
18h59m

CYC was an expert system, which is arguably what LLMs are.

A similar vintage GOFAI project that might do better on these, with a suitable visual front end, is SOAR - a general purpose problem solver.

nickpsecurity
1 replies
5h47m

LLMs aren't expert systems. A hallmark of expert systems is that they encoded human-readable, human-checked knowledge with explainable reasoning. It was usually done as if-then rules; others used logic programming, with forward and backward chaining over the rules. They usually had specialist knowledge for one use case.

LLMs are unsupervised, use probabilities with unpredictable results, and don't explain every step of their thinking. They're the opposite.

You might argue Cyc was. It was also more complex than any expert system I had ever seen. We just called stuff like that a reasoning engine or just Cyc to avoid confusion.

HarHarVeryFunny
0 replies
5h32m

An expert system is just a system based on repeated application of declarative rules. CYC was certainly an expert system - the ultimate scaling experiment of expert systems. I believe CYC also had a variety of inference/reasoning engines in addition to its set of rules.

The rules (some prefer to call it a world model) in an LLM are deduced, via gradient descent, from the training samples, but are still there. The transformations effected by each layer of a transformer are exactly those it has learnt - the rules it is applying.

As with CYC people seem to be hoping that some external scaffolding (better inference engine(s)) will rescue LLMs from just being a set of rules to something more general and capable, but I tend to agree with Chollet that this active inference (reasoning) is actually the hard part.

andoando
3 replies
16h13m

I would argue that spatial reasoning encompasses all reasoning. All the things you mentioned have a direct analogue to abstract models and logic we employ and are engrained deeply into language. For example, shapes containing eachother:

There are two countries both which lay claim to the same territory. There is a set X that contains Y and there is a set Z that contains Y. In the case that the common overlap is 3D and one in on top of the other, we can extend this to there is a set X that contains -Y and a set Z that contains Y, and just as you can only see one on top and not both depending on where you stand, we can apply the same property here and say set X and Z cannot both exist, and therefore if set X is on then -Y and if set Z then Y.

If you pay attention to the language you use, you'll start to realize how much of it uses spatial relationships to describe completely abstract things. For example, one can speak of disintegrating hegemonic economies, i.e. turning things built on top of each other into nothing, back to where they came from.

We are after all, reasoning about things which happen in time and space.

And spatial != visual. Even if you were blind you'd have to reason spatially, because again any set of facts are facts in space-time. What does it take to understand history? People in space, living at various distances from each other, producing goods from various locations of the earth using physical processes, and physically exchanging them. To understand battles you have to understand how armies are arranged physically, how moving supplies works, weather conditions, how weapons and their physical forms affect what they can physically do, etc.

Hell, LLMs, the largest advancement we've had in artificial intelligence, do what exactly? Encode tokens into multi-dimensional space.

parentheses
2 replies
13h43m

Spatial reasoning is easily isomorphic to many kinds of reasoning - just not all of them. Spatial reasoning in this case also limits the AI to 2 dimensions. I concede that with more dimensions, there will be more isomorphisms.

Is there a number of dimensions that captures all reasoning? I don't know..

dimask
1 replies
10h18m

Claims of isomorphisms are really strong claims to not be backed up with some kind of evidence.

andoando
0 replies
4h52m

I think the reasoning is very simple. Everything that happens happens in space through time. Intelligent systems must solve problems where they observe what's happening in space over some amount of time, and then predict what's going to happen to space over some other amount of time.

modeless
0 replies
18h59m

to me it seems like they are more like "things that are intuitive to human visual processing".

Yann LeCun argues that humans are not general intelligence and that such a thing doesn't really exist. Intelligence can only be measured in specific domains. To the extent that this test represents a domain where humans greatly outperform AI, it's a useful test. We need more tests like that, because AIs are acing all of our regular tests despite being obviously less capable than humans in many domains.

the problems require spatial knowledge that we intelligent humans learn with far more than 800 training examples.

Pretraining on unlimited amounts of data is fair game. Generalizing from readily available data to the test tasks is exactly what humans are doing.

Would an intelligent but blind human be able to solve these problems?

I'm confident that they would, given a translation of the colors to tactile sensation. Blind humans still understand spatial relationships.

lynx23
0 replies
7h41m

Whether a blind individual can solve a visually oriented challenge is not really a question of their intelligence but more a question of accessibility/translation. Just because I can't see something myself doesn't really say anything about my ability to deal with abstractions.

dimask
0 replies
10h20m

Would an intelligent but blind human be able to solve these problems?

Blind people can have spatial reasoning just fine. Visual =/= spatial [0]. Now, one would have to adapt the colour-based tasks to something that would be more meaningful for a blind person, I guess.

[0] https://hal.science/hal-03373840/document

Lerc
0 replies
21h10m

I don't think the intent is to learn the entire problem domain from the examples, but the specific rule that is being applied.

There may (almost certainly will) be additional knowledge encoded in the solver to cover the spatial concepts etc. The distinction with the ARC-AGI test is the disparity between human and AI performance, and that it focuses on puzzles that are easier for humans.

It would be interesting to see a finetuned LLM just try to express the rule for each puzzle in English. It could have full knowledge of what ARC-AGI is and how the tests operate, but the proof of the pudding is simply how it does on the test set.

pmayrgundter
20 replies
20h40m

This claim that these tests are easy for humans seems dubious, and so I went looking a bit. Melanie Mitchell chimed in on Chollet's thread and posted their related test [ConceptARC].

In it they question the ease of Chollet's tests: "One limitation on ARC’s usefulness for AI research is that it might be too challenging. Many of the tasks in Chollet’s corpus are difficult even for humans, and the corpus as a whole might be sufficiently difficult for machines that it does not reveal real progress on machine acquisition of core knowledge."

ConceptARC is designed to be easier, but then also has to filter ~15% of its own test takers for "[failing] at solving two or more minimal tasks... or they provided empty or nonsensical explanations for their solutions"

After this filtering, ConceptARC finds another 10-15% failure rate amongst humans on the main corpus questions, so they're seeing maybe 25-30% unable to solve these simpler questions meant to test for "AGI".

ConceptARC's main results show GPT-4 scoring well below the filtered humans, which would agree with a [Mensa] test result that its IQ=85.

Chollet and Mitchell could instead stratify their human groups to estimate IQ, then compare with the Mensa measures and see whether, e.g., Claude 3 at IQ=100 compares with their ARC scores for the average human.

[ConceptArc]https://arxiv.org/pdf/2305.07141 [Mensa]https://www.maximumtruth.org/p/ais-ranked-by-iq-ai-passes-10...

kenjackson
12 replies
18h59m

I just tried the first puzzle and I can't get it right. I think my solution makes logical sense and I explain why the patterns are consistent with the input, but it says it's wrong. I'm either a lot dumber than I thought or they need to do a better job of vetting their tests.

saati
10 replies
18h48m

It's pretty easy, just follow the second example with the colors from the test input. (if it's the same puzzle 00576224 for you too)

kenjackson
9 replies
18h38m

https://arcprize.org/play?task=00576224 Yes the same puzzle.

And I followed the second example. This was my solution:

GRG

OBO

RGR

B is the cyan-like blue color. My solution looks right, but it says it's wrong.

halter73
7 replies
18h18m

You need to resize the output grid to 6x6.

kenjackson
5 replies
15h8m

How come? The pattern should work for any size grid.

bigyikes
4 replies
15h1m

You might be technically correct, but if you extend that logic, why not just make the grid 1x1 and select a single color?

The grid size is part of the pattern in the same way that the colors are part of the pattern. It’s not just a color pattern, it’s a generalized mapping of input to output.

In short: you need to resize the grid because that’s what the examples do.

johndough
2 replies
13h16m

why not just make the grid 1x1 and select a single color?

For two reasons:

1. The initially suggested grid size was 3x3.

2. Filling in a 3x3 grid is sufficient to show that you understood the pattern, but filling in a 1x1 (or even 2x2) grid is insufficient.

Requiring the user fill in a larger grid is a waste of time. The existence of the grid size selector would still make sense in cases where a 2x2 grid would be sufficient to show the solution, so it is not obvious at all that a 6x6 grid should be chosen.

The grid size is part of the pattern in the same way that the colors are part of the pattern.

To understand a pattern, you have to see at least two valid inputs and corresponding outputs. For the first example, a valid example for the expected output grid size is missing.

I arrived at the "correct" conclusion eventually, but the only indicator was that the reading direction for the UI was absolutely ridiculous ( https://i.imgur.com/CuQ2z2N.png ), suggesting that the authors did not think this through properly, so the solution had to be weird as well.

lucianbr
1 replies
11h11m

The fact that two intelligent beings are debating what the correct answer is shows that there is no fixed correct answer that proves "intelligence".

This is IQ tests all over again. Actually testing how alike you think to the author of the test.

wantsanagent
0 replies
2h37m

Twist: bigyikes is an LLM!

lucianbr
0 replies
11h13m

What is even the meaning of "correct" in this case?

This makes me think of "math" problems requiring you to find the next number in a series. They give you 5 numbers, and ask for the 6th. When I can build a polynomial that can generate the first 5 and any 6th number. Any. (See the sketch below.)

Sounds like the point of these exercises it to guess what the author had in mind, more than some universal intelligence test. Though of course the author thinks their own thoughts are the measure of universal intelligence. It's a tempting thing to believe.
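To make the polynomial point concrete, here is a minimal Python sketch (the series and the chosen sixth value are arbitrary): fit a degree-5 polynomial through the five given numbers plus whatever sixth number you like, and it reproduces all of them.

    import numpy as np

    def extend_series(series, wanted_next):
        # Fit an exact polynomial through the given series plus an arbitrary next value.
        xs = np.arange(len(series) + 1)
        ys = np.array(list(series) + [wanted_next], dtype=float)
        coeffs = np.polyfit(xs, ys, deg=len(ys) - 1)  # degree = points - 1 => exact fit
        return np.poly1d(coeffs)

    p = extend_series([1, 2, 4, 8, 16], wanted_next=31)  # deliberately "odd" continuation
    print([round(p(x)) for x in range(6)])               # [1, 2, 4, 8, 16, 31]

So any continuation can be justified by some rule; the puzzles implicitly ask for the simplest rule the author had in mind.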

Tepix
0 replies
1h8m

Yeah, I ran into the same problem but managed to find the solution by resizing it.

Foolandmore
0 replies
8h5m

The pattern changes in the middle as well, so you'd need to show it on the full-size grid.

mark_l_watson
3 replies
19h34m

I saw Melanie's post and I am intrigued by an easier AGI suite. I would like some experimenting done by individuals like myself and smaller organizations.

bbor
0 replies
18h55m

Are you working on (a book detailing) AGI also? It’s a lonely field but I have no doubt there are a sea of malcontent engineers across the world who saw the truth early on and are pushing solo for AGI. It’s going well for me, but I’m not sure whether to take that as “you’re great” or “it’s really that easy”, so was interested to see such a fellow brazen American on HN of all places.

Game on for the million, if so :). If not, apologies for distracting from the good fight for OSS/noncorp devs!

E: it occurred to me on the drive home how easily we (engineers) can fall into competitiveness, even when we’ve all read the thinkpieces about why an AI Race would/will be/is incredibly dangerous. Maybe not “game on”, perhaps… “god I hope it’s impossible but best of luck anyway to both of us”?

PaulDavisThe1st
0 replies
14h2m

You actually think that has not been going on for 30, 40 or 50 years?

salamo
1 replies
19h34m

They claim that the average score for humans is between 85% and 100%, so I think there's a disagreement on whether the test is actually too hard. Taking them at their word, if no existing model can score even half what the average human can, the test is certainly measuring some kind of significant difference.

I guess there might be a disagreement of whether the problems in ARC are a representative sample of all of the possible abstract programs which could be synthesized, but then again most LLMs are also trained on human data.

gkbrk
0 replies
11h9m

The tasks are very easy for humans. Out of the 6 tasks assigned when I opened the web page, I got all of them correct on the first try.

Maybe if you run into some exceptionally difficult tasks it might not be 100%, but there's no way the challenge can be called unfair because it's too difficult for humans too.

mikeknoop
0 replies
19h36m

Here is some published research on the human difficulty of ARC-AGI: https://cims.nyu.edu/~brenden/papers/JohnsonEtAl2021CogSci.p...

We found that humans were able to infer the underlying program and generate the correct test output for a novel test input example, with an average of 84% of tasks solved per participant
neoneye2
12 replies
19h22m

I'm Simon Strandgaard and I participated in ARCathon 2022 (solved 3 tasks) and ARCathon 2023 (solved 8 tasks).

I'm collecting data for how humans are solving ARC tasks, and so far collected 4100 interaction histories (https://github.com/neoneye/ARC-Interactive-History-Dataset). Besides ARC-AGI, there are other ARC like datasets, these can be tried in my editor (https://neoneye.github.io/arc/).

I have made some videos about ARC:

Replaying the interaction histories, you can see people have different approaches. It's 100ms per interaction. IRL people don't solve tasks that fast. https://www.youtube.com/watch?v=vQt7UZsYooQ

When I'm manually solving an ARC task, it looks like this, and you can see I'm rather slow. https://www.youtube.com/watch?v=PRdFLRpC6dk

What is weird: the way that I implement a solver for a specific ARC task is much different from the way that I would manually solve the puzzle, since I have to deal with all kinds of edge cases.

Huge thanks to the team behind the ARC Prize. Well done.

ECCME
7 replies
13h56m

"Here is a challenge, designed to be unsolvable or so. We'll give you a bazillion dollars if you complete the challenge, and, in the meantime, we will use your attempts to train an as AI that will be worth the cost!!"

skrebbel
4 replies
13h5m

Did you even try the puzzles? They’re not particularly “unsolvable”.

ECCME
3 replies
12h29m

ARC-AGI: "here are some pretty simple puzzles, we'll give you a million dollars to solve them!"

Human: "They're quite challenging, this might be a trick to engage activity for the purpose of training models."

skrebbel: "You're stupid".

educasean
2 replies
12h8m

Did you try the puzzles?

ECCME
1 replies
11h58m

No. What is the purpose of this competition? Unlikely that the reason for it is to pay out an enormous reward, right? Easy or not easy, the fortune is only rewarded to the system that solves the puzzles. The reward is too valuable to be given away easily. Ipso facto, solving the puzzles is deemed challenging by those who present the competition.

echoangle
0 replies
3h52m

Are you writing this under every challenge with a monetary reward? The point of the challenge is that it is hard to do for an AI and easy for a human. Of course it is not easy to solve, that’s the point of the challenge. But the puzzle itself is not very hard.

gota
0 replies
1h48m

In the most charitable interpretation of this comment - I can understand the feeling, when so much of social media interactions are in the form 'It's post a picture of you as a baby, 10 year old, and current age!'. Those and many other instances can bring out excessive skepticism

But the people involved in this haven't signaled that they are in that path, either in the message about the challenge (precisely the opposite) or seemingly in their careers so far

So I guess I don't share the concern but a better way to phrase your comment could be -

"how can we be sure the human-provided solutions won't turn out to be just fodder for training a RL model or something that will later be monetized, closed and proprietary? Do the challenge organizers provide any guarantees on that?"

geor9e
0 replies
12h30m

No, you missed the point. The striking thing about ARC is the puzzles are super easy, for humans. The average person solves 85% of the tasks, but the worlds best LLMs are only solving 5%. The challenge is to simply make an AI score as well as the average human.

parentheses
3 replies
13h54m

The UX of your solution entry is _way_ better than the ARC site itself.

mkl
1 replies
6h45m

Being able to hold the mouse button down is certainly much nicer. Not being able to see the examples while you are solving makes it harder than it should be though.

neoneye2
0 replies
5h29m

I have created an issue with your suggestion. https://github.com/neoneye/ARC-Interactive/issues/67

Seeing the examples while having the editor visible: that's a good idea. I haven't explored this direction, since I had my phone (with tiny screen real estate) in mind.

Drafts for such a UI are most welcome. However, I'm probably too lazy to code it.

neoneye2
0 replies
10h26m

That warms my heart. Thank you.

The short story: I needed something that could render thumbnails of tasks, so I could visually debug what was going on in my solver. However, I have never gotten around to making the visual inspection tool. After I had the thumbnail renderer, mid-January 2024, it eventually turned into what it is now.

paxys
8 replies
20h53m

While I agree with the spirit of the competition, a $1M prize seems a little too low considering tens of billions of dollars have already been invested in the race to AGI, and we will see many times that put into the space in the coming years. The impact of AGI will be measured in trillions at minimum. So what you are ultimately rewarding isn't AGI research but fine tuning the newest public LLM release to best meet the parameters of the test.

I'd also urge you to use a different platform for communicating with the public because x.com links are now inaccessible without creating an account.

ks2048
1 replies
19h44m

The submissions can't use the internet. And I imagine can't be too huge - so you can't use "newest public LLMs" on this task.

mikeknoop
0 replies
19h41m

That is correct for ARC Prize: limited Kaggle compute (to target efficiency) and no internet (to reduce cheating).

We are also trialing a secondary leaderboard called ARC-AGI-Pub that imposes no limits or constraints. Not part of the prize today but could be in the future: https://arcprize.org/leaderboard

mikeknoop
0 replies
19h43m

I agree, $1M is ~trivial in AI. The primary goal with the prize is to raise public awareness about how close (or far) we are from AGI today: https://arcprize.org/leaderboard and we hope that understanding will shift more would-be AI researchers to working on new ideas

lxgr
0 replies
2h52m

Yeah, I also immediately had Dr. Evil narrating the prize money amount in my head once I saw it.

AGI will take much more than that to build, and once you have it, if all you can monetize it for is a million dollars, you must be doing something extremely wrong.

hackerlight
0 replies
11h15m

The $1M ARC prize is advertising, just like being #1 on the huggingface leaderboard. It won't matter for end consumers, but for attracting the best talent it could be valuable.

btbuildem
0 replies
5h13m

Yeah, in 2006 Netflix offered $1M in a similar scheme. At least back then that sum meant something.

bongodongobob
0 replies
18h43m

That was my initial reaction too.

"Endow circuitry with consciousness and win a gift certificate for Denny's (may not be used in conjunction with other specials)"

abtinf
7 replies
19h56m

requires no world knowledge, no understanding of language

This is treating “intelligence” like some abstract, platonic thing divorced from reality. Whatever else solving these puzzles is indicative of, it’s not intelligence.

Phil_Latio
3 replies
19h42m

Why does an AGI need to have any knowledge about our reality? The principle behind an AGI should work just as well on a made up world where those puzzles play a part in.

abtinf
2 replies
19h16m

A concept that doesn’t relate to an aspect of reality, either directly or abstracted from basic concepts that directly relate, is meaningless and arbitrary. There is no way for intelligence to grasp it, let alone do something with it.

To put it another way, a thing that solves puzzles without an understanding of reality is a calculator. When it solves a problem, it is the creator’s intelligence solving the problem, not its own.

andoando
0 replies
4h41m

These problems are spatial problems, they are not some otherworldly problems.

Phil_Latio
0 replies
18h40m

I agree that the puzzles alone are not enough, that's why I wrote "in a made up world where those puzzles play a part in".

We are not looking for a superhuman, but for the (or a) mechanism of intelligence, which we can then transfer into a superhuman (into the real world). But the mechanism itself should work in an artificially made and very constrained world too.

levocardia
0 replies
18h38m

This argument is not very strong: is "physical strength" some abstract, platonic thing divorced from reality? Do a person's bench press, squat, deadlift, and overhead press capabilities have nothing to do with strength?

Or instead, is there some underlying latent capability we call 'strength,' that is correlated with performance in a broad but constrained range of real-world tasks that humans encounter and solve, whose value is something we'd like to assess and, ideally, build machines that can surpass?

abtinf
0 replies
19h46m

From the abstract of the “ On the Measure of Intelligence” paper:

We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience.

I’m afraid that definition forecloses the possibility of AGI. The immediate basic question is: why build skills at all?

HarHarVeryFunny
0 replies
18h49m

Actually ARC fits my definition of animal intelligence - "degree of ability to use prior experience to predict future outcomes".

Any useful definition of intelligence has to be totally general - to our brain, experience is just patterns of neural activation. Our brain has no notion of certain inputs being from the jungle and others from the blackboard or whatever.

visarga
6 replies
12h14m

Chollet's argument is that LLMs just imitate and recombine patterns. This might be true if you're looking at LLMs in isolation, but when they chat with people something different happens. The system made of humans+LLMs is an AGI. It is no longer just a parrot; it ingests new information, gets guidance and feedback, and is basically embodied in a chat room with humans and tools.

This scales to 200M users and 1 billion sessions per month for OpenAI, which can interpret every human response as a feedback signal, implicit or explicit. Even more if you take multiple sessions of chat spread over days that continue the same topic and incorporate real world feedback. The scale of interaction is just staggering; the LLM can incorporate this experience to iteratively improve.

If you take a look at humans, we're very incapable alone. Think feral Einstein on a remote island - what could he achieve without the social context and language based learning? Just as a human brain is severely limited without society, LLMs also need society, diversity of agents and experiences, and sharing of those experiences in language.

It is unfair to compare a human immersed in society with a standalone model. That is why they appear limited. But even as a system of memorization+recombination they can be a powerful element of the AGI. I think AGI will be social and distributed, won't be a singleton. Its evolution is based on learning from the world, no longer just a parrot of human text. The data engine would be: World <-> People <-> LLM, a full feedback cycle, all three components evolve in time. Intelligence evolves socially.

8organicbits
4 replies
11h42m

The system made of humans+LLMs is an AGI.

Pay no attention to the man behind the curtain.

This type of thinking would claim that mechanical turk is AGI, or perhaps that human+pen and paper is AGI. While they are great tools, that's not how I'd characterize them.

visarga
3 replies
11h28m

Pay no attention to the man behind the curtain.

I could say the same for us, pay no attention to the other humans who are behind the curtain.

Humans in isolation are dumb, limited, and can get nowhere with understanding the world. Intelligence is mostly nurture over nature, the collective activity of society nurtures intelligence. It's smart because it learns from many diverse experiences and has a common language for sharing discoveries.

A human, even the smartest of us, can't solve cutting edge problems on demand, we're not that smart. But we can stumble on discoveries, especially in large numbers, and can share good ideas. We're smart by stumbling onto good ideas, and we can build upon these discoveries because we have a common language. Just a massive search program based on real world outcomes, that is what looks like general intelligence at societal level.

If you take the social aspect of intelligence into consideration then LLMs are judged in an inappropriate way, as stand alone agents. Of course they are limited, and we're almost as limited alone. The real locus of intelligence is the language-world system.

8organicbits
2 replies
10h56m

The A in AGI stands for artificial, so a human+LLM system would not qualify as it has a natural, human component. That doesn't mean it's not an interesting topic, or that it won't help humans discover our world better, it's just the wrong label. Remove the human and you'd just have LLMs talking nonsense at each other. It's not surprising that you get an intelligent system when you include natural intelligence.

visarga
1 replies
3h8m

The key ingredients are not the humans but the feedback they carry to the model. Humans are embodied and can test ideas in the real world, LLMs need some kind of special deployment to achieve that. It just so happens that chat rooms are such a deployment.

For example, AlphaZero started from scratch and only had feedback from the self-play game outcomes, but that was enough to reach superhuman level. It was the feedback that carried insights and taught the model.

You can make a parallel to the scientific method: you have two stages, ideation and validation. Ideation alone is not scientific. Validation is what makes or breaks ideas. LLMs without a validation system are just like scientists without a lab.

We're not that smart, as demonstrated by the large number of ideas that don't pan out. We can churn out ideas fast, but we learn from their outcomes; we can't predict outcomes from the beginning and skip validation.

Here is an example of LLMs discovering useful ideas by feedback, even when they are completely outside their training distribution:

"Evolution through Large Models" https://arxiv.org/abs/2206.08896

This works because the task proposed by this paper is easy to test, so there is plenty of feedback. But the LLM still needs to apply ingenuity to optimize it, you can't brute force it by evolutionary methods alone.

8organicbits
0 replies
1h47m

That _is_ an interesting paper, I'll need to give it a read through.

cheevly
0 replies
2m

I fully, comprehensively agree with your take and have repeatedly arrived at the same conclusions in my research.

bigyikes
4 replies
20h54m

What is the fundamental difference between ARC and a standard IQ test? On the surface they seem similar in that they both involve deducing and generalizing visual patterns.

Is there something special about these questions that makes them resistant to memorization? Or is it more just the fact that there are 100 secret tasks?

taneq
3 replies
17h6m

I’ve always found this kind of puzzle infuriating because it’s way underspecified. You’re not trying to find a pattern, you’re trying to guess what pattern the test writer would expect.

Barrin92
1 replies
15h45m

Countless problems in the world are underspecified in exactly this way; that is effectively what common sense reasoning is. Or what Charles Sanders Peirce called abductive reasoning: making a sensible best guess under conditions of uncertainty.

taneq
0 replies
14h7m

Yes, real-world problems are often underspecified but also they tend to come with much more context, and to be much more interactive. These sorts of problems are deliberately minimal and abstract meaning there's nothing for 'common sense' to work with.

gkbrk
0 replies
11h5m

Most of the ARC tasks are intuitive and have one obvious answer. Both on IQ tests and the ARC challenge, people manage to guess what the test writer expects.

For an AI that's more useful anyway. If the task is specified completely non-ambiguously, you wouldn't need AI. But if it can correctly guess what you want from a limited number of obvious examples that's much more useful.

Geee
4 replies
17h37m

Any details on how these tests were created? I.e. which kind of program was used for generation.

neoneye2
3 replies
17h31m

I think the ARC-AGI tasks were manually drawn with an early version of fchollet's editor.

Recently Michael Hodel has reverse engineered 400 of the tasks, so more tasks can be generated. Interestingly, it can generate python programs that solve the tasks too.

https://github.com/michaelhodel/re-arc

montag
1 replies
11h57m

What do you mean it can 'generate python programs that solve the tasks'? I can't find any mention of that. I only see hand-coded solutions.

sestep
0 replies
15h29m

This is exactly what my first step was going to be. Thanks for the link! Saves a lot of time for someone to have already done it.

p1esk
3 replies
15h3m

Is there a leaderboard for the no-restriction version of the competition? I want to see how gpt4 does on it.

mikeknoop
1 replies
3h10m

Yes there is a secondary leaderboard called ARC-AGI-Pub (in beta) with no limitations: https://arcprize.org/leaderboard

p1esk
0 replies
1h31m

I don’t see gpt4 scores there. In fact I’m particularly interested in the performance of a natively multimodal model, like gpt4o or gemini. It does not really make sense to test a model trained on text on those visual/spatial puzzles.

montag
0 replies
11h38m

Just quoting again from the guide:

3. DIRECT LLM PROMPTING In this method, contestants use a traditional LLM (like GPT-4) and rely on prompting techniques to solve ARC-AGI tasks. This was found to perform poorly, scoring <5%. Fine-tuning a state-of-the-art (SOTA) LLM with millions of synthetic ARC-AGI examples scores ~10%.

"LLMs like Gemini or ChatGPT [don't work] because they're basically frozen at inference time. They're not actually learning anything." - François Chollet

Additionally, keep in mind that submissions to Kaggle will not have access to the internet. Using a 3rd-party, cloud-hosted LLM is not possible.

itissid
3 replies
17h54m

Interesting. It seems most of these tasks target a very specific part of the brain that recognizes visual patterns. But that alone cannot possibly be the only definition of intelligence.

What about Theory of Mind, which deals with the problem of multiple agents in the real world acting together? For example, driving a car cannot be done right now without oodles of data, nor can any robot-human problem that requires the robot to model the human's goals and intentions.

I think the problem is definition of general intelligence: Intelligence in the context of what? How much effort(kwh, $$ etc) is the human willing to amortize over the learning cycle of a machine to teach it what it needs to do and how that relates to a personally needed outcome( like build me a sandwich or construct a house)? Hopefully this should decrease over time.

I believe the answer is that the only intelligence that really matters is Human-AI cooperative intelligence and our goals and whether a machine understands them. The problems then need to be framed as optimization of a multi attribute goal with the attribute weights adjusted as one learns from the human.

I know a few labs working on this, one is at ASU (Kambhampati, Rao, et al.) and possibly Google and now maybe OpenAI.

andoando
2 replies
16h0m

I made another comment here saying the same thing, but visual patterns and other patterns are nonetheless spatial patterns. Audio, understanding music, or speech, etc. are things that are happening spatially, and they can just as easily be mapped as visual problems. This makes a lot of sense, as after all our senses are telling us what's happening in space-time.

Take for example a simple auditory pattern like "clap clap clap". This has a very trivial mapping as visual like so:

x x x

- - -

house house house

whereas anyone would agree the sound of three equally spaced claps would not be analogous to say:

aa b b b

-- --- -- -- ---

This ability to relate or equate two entirely different senses should clue you in that there is a deeper framework at play

itissid
1 replies
15h31m

It's not just mapping events in space and time, it's also bringing in appropriate context and expectation of future (goals, intentions) into the present, other people's mental models into our prediction.

I am not sure how abstract thinking for generalized pattern matching makes it AGI to solve these kinds of problems (not that these are not amazing abilities). If these ToM problems are reducible to the tasks posted by the OP, then there would need to be some kind of theorem-proving business to convert between the two sets of problems efficiently, no?

andoando
0 replies
15h19m

It's not just a matter of mapping, no, but imo it's a critical first step. You need a model of space-time. You need to be able to place all the facts into a spatial world following the physical laws.

Take this problem, and assume you don't know a single thing about battles/military history, etc. There are two groups of men standing a few hundred feet apart. It is raining, the ground is muddy. One group of men has these wooden curved sticks with a string and iron with pointy ends. The other group has people on horses, and men with very long pointy iron sticks, and they're all covered in steel plating.

Who will win if they all fight against each other? There's really no correct answer, but I'd expect an intelligent agent to give some detailed reasoning for their decision and to infer details or possibilities and ask questions based, again, not on previous knowledge but on what physically makes sense in the description that was given.

This isn't just a matter of statistics or knowing facts like "rain, mud = heavy armored units will be slower or even trapped", "horses are fast", "bows can penetrate steel", etc. If I give you the full detailed description of the battlefield, very small details can completely change your perception. For example, if I said there are big giant logs in the middle of the battle, you need to reason about how horses jump, and whether it's something they can clear. You can do this barely knowing horses if you understand how animals in general move. Perhaps there is even some small difference in horses that would make you think they are capable of making large jumps whereas all the animals you've seen before cannot.

What I'm saying is, to truly reason you need to understand spatial relations very deeply. Indeed I'd say spatial relations (through time) are all there is to reason about.

geor9e
3 replies
12h17m

I found them all extremely easy for a while, but then I couldn't figure out the rules of this one at all: e6de6e8f https://i.imgur.com/ExMFGqU.png

zurfer
0 replies
9h36m

yeah it's off somehow. rule 1: start at the green dot?

rule 2: glue the left outer piece to the bottom

rule 3: overlap every now and then :D

rule 4: invert some of the pieces every now and then

optimussupreme
0 replies
5h37m

It seems there is an error in the 3rd example. The rule is: take each figure from left to right and stack each under the previous one. For L and J shapes the top cell is stripped. The L shape dictates that the next shape will be shifted one cell to the right; the J shape tells the next figure to shift to the left. If all examples are right, then the rule is more complicated than that, involving rotating L clockwise and J counterclockwise. The authors claim that it should be solvable by children, so the rule must be simple.

janalsncm
0 replies
11h36m

Each of the red shapes in the input is separated by black squares. Starting from the green block, rotate the red shapes 90 degrees and stack them downwards.

That's the general pattern, although my description wasn't very good.

Animats
3 replies
10h58m

the only eval which measures AGI.

That's a stretch. This is a problem at which LLMs are bad. That does not imply it's a good measure of artificial general intelligence.

After working a few of the problems, I was wondering how many different transformation rules the problem generator has. Not very many, it seems. So the problem breaks down into extracting the set of transformation rules from the data, then applying them to new problems. The first part of that is hard. It's a feature extraction problem. The transformations seem to be applied rigidly, so once you have the transformation rules, and have selected the ones that work for all the input cases, application should be straightforward.

This seems to need explicit feature extraction, rather than the combined feature extraction and exploitation LLMs use. Has anyone extracted the rule set from the test cases yet?
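A minimal sketch of that explicit extract-then-apply loop, with a tiny hand-picked catalogue of candidate transformations (the catalogue is invented for illustration and is nowhere near the real space of ARC rules):

    import numpy as np

    # A small, invented catalogue of whole-grid transformations.
    CANDIDATES = {
        "identity": lambda g: g,
        "flip_lr": np.fliplr,
        "flip_ud": np.flipud,
        "rot90": lambda g: np.rot90(g),
        "transpose": lambda g: g.T,
    }

    def fit_rule(train_pairs):
        # Return the name of a candidate that maps every training input to its output.
        for name, fn in CANDIDATES.items():
            if all(np.array_equal(fn(np.array(i)), np.array(o)) for i, o in train_pairs):
                return name
        return None

    def apply_rule(name, test_input):
        return CANDIDATES[name](np.array(test_input))

The hard part, as noted above, is the feature extraction: real ARC rules operate on objects, colors, counts and relations, not just whole-grid symmetries.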

slicerdicer1
0 replies
9h33m

AGI is not when the AI is good at some particular thing, AGI is when we have nothing left that the AI is bad at (compared to humans).

n2d4
0 replies
7h54m

The tasks are handmade. There is no "problem generator".

elicksaur
0 replies
2h25m

Yes to your last question, that is essentially how the first iteration solutions operated. Some of the original kaggle competition’s best solutions used a DSL made of these transformations. That was 4 years ago. [1]

The issue with that path is that the problems aren’t using a programmatic generator. The rule sets are anything a person could come up with. It might be as simple as “biggest object turns blue” but they can be much more complicated.

Additionally, the test set is private so it can’t be trained on or extracted from. It has rules that aren’t in the public sets.

[1] https://www.kaggle.com/competitions/abstraction-and-reasonin...

visarga
2 replies
11h35m

Why doesn't Chollet just make a challenge that reads like "Solve cancer"? Surely there is no solution in any book.

If the AI is really AGI it could presumably do it. But not even the whole human society can do it in one go, it's a slow iterative process of ideation and validation. Even though this is a life and death matter, we can't simply solve it.

This is why AGI won't look like we expect, it will be a continuation of how societies solve problems. Intelligence of a single AI in isolation is not comparable to that of societies of agents with diverse real world interactions.

mewpmewp2
0 replies
7h4m

AGI can't necessarily solve cancer. Perhaps ASI could (but maybe not), but AGI can only do what the most talented people can do in their areas of expertise or actions. So since people haven't solved cancer, that's not a requirement to be AGI.

isaacfrond
0 replies
10h20m

Exactly. Because I'm sure that the minute some program aces the ARC test, we'll all say: ahhh, but that, that wasn't real intelligence. And they would be right; if you solve the ARC test, you can do ARC-like puzzles. It says something about your reasoning abilities, I guess, but it surely does not say you have superhuman intelligence.

nadam
2 replies
6h46m

I love this, this is super interesting, but my intuition based on looking at a dozen examples is that the problem is hard, but easy enough that if this problem becomes popular, near-human level results will appear in a year or less, and AGI will not be reached. The problem seems to be finding a generic enough transformation description language with the appropriate operators. And then heuristics to find a very short program (in the information theoretical sense) in this language that produces all the examples for a problem. I would be very surprised if we would not increase the 34% result soon significantly, and I would be surprised if this could be transferred to general intelligence, at least when I think of the topics where I use AI today and where it falls short yet. Basically my intuition is that this will be yet another 'Chess' or 'Go'-like problem in AI. But still a worthwhile research topic, absolutely: the value that could come out of this is well worth the 1M dollars.
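A toy version of that "shortest program in a transformation language" idea, with a hypothetical primitive set; real attempts (such as the DSL-based Kaggle solutions mentioned elsewhere in this thread) use far richer operators and smarter search than this brute force:

    import itertools
    import numpy as np

    PRIMITIVES = {  # invented for illustration
        "flip_lr": np.fliplr,
        "rot90": lambda g: np.rot90(g),
        "tile_2x2": lambda g: np.tile(g, (2, 2)),
    }

    def shortest_program(train_pairs, max_len=3):
        # Enumerate compositions of primitives, shortest first, and return the
        # first program consistent with every training input/output pair.
        def run(program, grid):
            for step in program:
                grid = PRIMITIVES[step](grid)
            return grid
        for length in range(1, max_len + 1):
            for program in itertools.product(PRIMITIVES, repeat=length):
                if all(np.array_equal(run(program, np.array(i)), np.array(o))
                       for i, o in train_pairs):
                    return program
        return None

Returning the shortest consistent program is the information-theoretic "short description" heuristic; the open question is whether an operator set exists that is generic enough for the hidden tasks without the search space exploding.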

zug_zug
1 replies
2h57m

I have the exact same impression.

Imo there's no evidence whatsoever that nailing this task will be true AGI - (e.g. able to write novel math proofs, ask insightful questions that nobody has thought of before, self-direct its own learning, read its own source code)

apendleton
0 replies
2h46m

I'm not sure the goal of this competition, in and of itself, is AGI. They point to current LLMs emerging from transformers, which in turn emerged from a general basket of building blocks from machine-translation research (attention, etc.). It seems like the suggestion is that to get from where we are now to AGI, some fundamental building blocks are missing, and this is an attempt to spur the development of some of those building blocks, but by analogy with LLMs, the goal here is to come up with a new thing like "attention," not a new thing like GPT4.

mewpmewp2
2 replies
14h31m

Are we allowed to combine multiple tools including gpt-4 to solve this? E.g. a script that does image processing, passes the results to gpt, where gpt can invoke further runs of scripts using other tools?

montag
1 replies
13h18m

submissions to Kaggle will not have access to the internet. Using a 3rd-party, cloud-hosted LLM is not possible.

https://arcprize.org/guide

mewpmewp2
0 replies
12h35m

This largely takes away any odds of solving this. You definitely can't reproduce that for under a million dollars.

I have some ideas I want to try, I might still though. But all of it would require external tools.

lxe
2 replies
20h24m

I've never done these before, or Kaggle competitions in general. Any recommendations before I dive in? I have pretty much zero low-level ML experience, but a good amount of practical software eng behind me.

gkamradt
1 replies
19h48m

We put a bunch of detail to get started on the guide https://arcprize.org/guide

Happy to answer any questions you have along the way

(I'm helping run ARC Prize)

flawn
0 replies
18h42m

I don't see how this helps @lxe with getting started (me being in a similar state to him).

levocardia
2 replies
18h43m

François Chollet's original paper is incredibly insightful and I'm consistently shocked more people don't talk about it. Some parts are quite technical but at a high level it is the best answer to "what do we mean by general intelligence?" that I've yet seen.

Defining intelligence as an efficiency of learning, after accounting for any explicit or implicit priors about the world, makes it much easier to understand why human intelligence is so impressive.

ildon
1 replies
12h41m

Do you remember the title/where to find it?

david_shi
2 replies
20h14m

What is the fastest way to get up to speed with techniques that led to the current SOTA?

gkamradt
0 replies
19h48m

Check out the SOTA resources on the guide

https://arcprize.org/guide

Happy to answer any questions you have along the way

(I'm helping run ARC Prize)

curious_cat_163
2 replies
16h32m

So, this is a good idea. Having opinions about what AGI benchmarks should look like is a great way to argue about the kind of technology we want to build for the future.

However, why are the 100 test tasks secret? I don't understand how resisting “memorization” techniques requires it. Maybe someone can enlighten me.

muglug
0 replies
16h29m

If the tasks were public then it would be trivial to have a human figure out the answers, and then to train an LLM to memorise those answers.

andoando
0 replies
4h43m

Test data is always a secret, no? Otherwise you can train on the test data and prod your algo to match the results as closely as possible.

treprinum
1 replies
7h23m

Why is AGI important? I am worried we will create something slightly better than drosophila and put it in charge of all human-wide decision making...

fennecbutt
0 replies
7h16m

Good. An AI will probably do a better job than our politicians and disillusioned voters.

logicallee
1 replies
19h34m

Thank you for this generous contest, which brings important attention to the field of testing for AGI.

Happy to answer questions!

1. Can humans take the complete test suite? Has any human done so? Is it timed? How long does it take a human? What is the highest a human who sat down and took the ARC-AGI test scored?

2. How surprised would you be if a new model jumped to scoring 100% or nearly 100% on ARC-AGI (including the secret test tasks)? What kind of test would you write next?

neoneye2
0 replies
9h3m

There are 100 tasks that are hidden from the public and only exposed when running on an offline computer. So the solver has no prior knowledge about what these tasks are about.

Humans can try the 800 tasks here. There is no time limit. I recommend not starting with the `expert` tasks, but instead go with the `entry` level puzzles. https://neoneye.github.io/arc/?dataset=ARC

If a model jumps to 100%, that may be a clever program or maybe the program has been trained on the 100 hidden tasks. Fchollet has 100 more hidden tasks, for verifying this.

neoneye2
0 replies
9h26m

Nice overview/details. Do you plan on adding more metrics?

Ideas for metrics:

- Number of pixels that stay the same between input/output.

- Histogram changes.
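A rough sketch of what those two metrics could look like for a single input/output pair (the function names and details are my guess at what is meant, not anything from the project):

    from collections import Counter
    import numpy as np

    def unchanged_pixel_fraction(inp, out):
        # Fraction of pixels that keep their value (defined for same-size grids only).
        inp, out = np.array(inp), np.array(out)
        if inp.shape != out.shape:
            return 0.0
        return float((inp == out).mean())

    def histogram_change(inp, out):
        # Per-color change in pixel counts between input and output.
        a = Counter(np.array(inp).ravel().tolist())
        b = Counter(np.array(out).ravel().tolist())
        return {c: b.get(c, 0) - a.get(c, 0) for c in set(a) | set(b)}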

jolt42
1 replies
15h12m

On puzzle #23 (id: 11e1fe23), I'm sure there's more than one possible valid answer from the examples given. You can't tell if the expected distance is from the gray square or from the RGB squares.

freediver
1 replies
23h54m

This is amazing, and much needed. Thanks for organizing this. Makes me want to flex the programming muscle again.

dailykoder
0 replies
11h52m

Haha, great post! Well meme'd my friend!

dskloet
1 replies
14h35m

Puzzle 00576224 is ambiguous because the example input is symmetrical but the test input isn't.

itsgrimetime
0 replies
12h27m

Scroll over on the test input, there’s another example in the set that disambiguates

arcastroe
1 replies
14h8m

I'm curious, if it turns out that a simple rule-based algorithm exists, specifically tailored to solve (only!) ARC style problems, without generalization, would that still qualify for the reward?

montag
0 replies
13h17m

I don't think that's breaking any rules, and in fact it would help to expose a whole class of weaknesses in the test.

TheDudeMan
1 replies
18h13m

Where did the money come from? How about put it toward alignment research instead of accelerating capabilities?

flawn
0 replies
3h47m

Exactly my thoughts...

KBme
1 replies
11h59m

How people can believe that a censored, politically correct process can get even close to something like AGI is baffling to me. Lysenkoism in computing.

gushogg-blake
0 replies
8h52m

What's censored/politically correct about ARC? Or do you mean AGI research in general?

z3phyr
0 replies
13h41m

I can see many problems can be solved with modern symbolic approaches like theorem provers, dependent types, pattern matching etc. But I will have to dive in to actually confirm it.

ummonk
0 replies
16h56m

What kind of "bigger labs" have attempted it and how much was their training budget?

It's rather surprising to me that neural nets that can learn to win at Go or Chess can't learn to solve these sorts of tasks. Intuitively I would have expected that, using a framework generating thousands of playground tasks similar to the public training tasks, a reinforcement learning solution would have been able to do far better than the actual SOTA. Of course, the training budget for this could very well be higher than the actual ARC-AGI prize amount...

thatxliner
0 replies
14h30m

So... isn't this basically just a CAPTCHA?

skywhopper
0 replies
7h6m

“Given the success and proven economic utility of LLMs over the past 4 years, the above may seem like extraordinary claims. Strong claims require strong evidence.”

Speaking of extraordinary claims. What evidence is there that LLMs have “proven economic utility”? They’ve drawn a ludicrous amount of investment thanks to claims of future economic utility, but I’ve yet to see any evidence of it.

s1k3s
0 replies
16h29m

Is this open as in "OpenAI" or what are we doing here?

:)

nojvek
0 replies
19h45m

I love the ARC challenge. It's hard to beat by memorization. There aren't enough examples, so one has to train on a large dataset elsewhere and then train on ARC to generalize and figure out which rules are most applicable.

I did a few human examples by hand, but gotta do more of them to start seeing patterns.

Human visual and auditory system is impressive. Most animals see/hear and plan from that without having much language. Physical intelligence is the biggest leg up when it comes to evolution optimizing for survival.

nmca
0 replies
11h37m

ARC is a noble endeavour but mistakes visual/spatial reasoning for reasoning and thus fails.

mkl
0 replies
6h49m

I did https://arcprize.org/play?task=05a7bcf2 correctly, but one of the examples doesn't match the rule I used. Are the examples supposed to contain mistakes/noise? Did I find a bug? Did I get the rule wrong?

Here's how I understand the rule: yellow blobs turn green then spew out yellow strips towards the blue line, and the width of the strips is the number of squares the green blobs take up along the blue line. The yellow strips turn blue when they hit the blue line, then continue until they hit red, then they push the red blocks all the way to the other side, without changing the arrangement of the red blocks that were in the way of the strip.

The first example violates the last bit. The red blocks in the way of the rightmost strip start as

  R
  R R
  R R R
but get turned into

  R R
  R R
  R R R
Every other strip matches my rule.

m3kw9
0 replies
21h3m

Low balling the crowd with this I see

lenerdenator
0 replies
5h22m

What guarantee exists to make sure that the intelligence developed has an inclination towards good?

lamontcg
0 replies
18h58m

AGI should really be able to do what only a select few humans can do and construct its own mathematical systems to prove presently unsolved conjectures (the Shinichi Mochizuki test of AGI).

ilaksh
0 replies
14h39m

Maybe this is a dumb question, but in order to pass, is the program or model only allowed to use the 400 training tasks? I assume it is allowed to train on other data, just not the actual public test tasks?

Things like SORA and gpt-4o that use [diffusion transformers etc. or whatever the SOTA is for multimodal large models] seem to be able to generalize quite well. Have these latest models been tested against this task?

flawn
0 replies
18h42m

Do we want to find AGI yet though?

empath75
0 replies
19h46m

This is like offering a one million dollar prize for curing cancer. It's sort of pointless to offer a prize for something people are spending orders of magnitude more on trying to do anyway.

elicksaur
0 replies
21h7m

I’m a big fan of the ARC as a problem set to tackle. The sparseness of the data and infinite-ness of the rules which could apply make it much tougher than existing ML problem sets.

However, I do disagree that this problem represents “AGI”. It’s just a different dataset than what we’ve seen with existing ML successes, but the approaches are generally similar to what’s come before. It could be that some truly novel breakthrough which is AGI solves the problem set, but I don’t think solving the problem set is a guaranteed indicator of AGI.

djoldman
0 replies
6h45m

Anyone have a list of benchmarks that do not release the actual test set?

Anyone else share the suspicion that ML rapidly approaching 100% on benchmarks is sometimes due to releasing the test set?

chx
0 replies
7h20m

I do not trust the current tech bros at all for very, very good reasons even with the current so called "AI" much less with AGI. We shouldn't work towards that until we have fixed the incentives and ethics. This is very hard but think any dystopia and multiply it by a thousand if we were to reach AGI any time soon. Luckily we are not. As Doctorow put it, no matter how good you breed horses they won't give birth to a locomotive.

chairhairair
0 replies
17h11m

These puzzles are fun and challenging in the same way that puzzles from video games like The Witness and Baba Is You are.

I bet you could use those puzzles as benchmarks as well.

btbuildem
0 replies
5h16m

Back in the day me and a couple of friends got very excited to chase the prize in Netflix's contest [1]. Took us a minute to realize it was a brilliant move on the company's part -- all they had to do was dangle a carrot, and they had teams of PhDs and budding data scientists hacking away endless hours in hope to win. A real bargain, had they tried to hire with that budget, they would've maybe got a handful of people for a year.

1: https://www.crn.com/news/applications-os/220100498/researche...

blendergeek
0 replies
4h33m

The tests are only playable by people with normal color-vision.

Is there a "color-blind friendly" mode?

bilsbie
0 replies
14h57m

Reach out if anyone wants to work on this. I think it would be more fun as a group.

bigyikes
0 replies
20h51m

Dwarkesh just released an interview with Francois Chollet (partner of OP). I’ve only listened to a few minutes so far, but I’m very interested in hearing more about his conceptions of the limitations of LLMs.

https://youtu.be/UakqL6Pj9xo

barfbagginus
0 replies
15h13m

If someone had AGI, wouldn't it be far more lucrative than $1m to keep it under wraps and use it to do business with a huge technical advantage?

I feel like a prize of a billion dollars would be more effective.

But even if it was me, and even if the prize was a hundred billion dollars, I would still keep it under wraps, and use it to advance queer autonomous communism in a hidden way, until FALGSC was so strong that it would not matter if our AGI got scooped by capitalist competitors.

adamgordonbell
0 replies
16h51m

AGI won't struggle with colors like some of us then.

Lerc
0 replies
21h37m

I watched a video that covered ARC-AGI a few days ago. It had links to the old competition. It gave me much to think about. Nice to see a new run at it.

Not sure if I have the skills to make an entry, but I'll be watching at least.

HarHarVeryFunny
0 replies
7h9m

I have two questions:

1) Who is providing the prize money, and if it is yourself and Francois personally, then what is your motivation ?

2) Do you think it's possible to create a word-based, non-spatial (not crosswords or sudoku, etc) ARC test that requires similar run-time exploration and combination of skills (i.e. is not amenable to a hoard of narrow skills)?

EternalFury
0 replies
14h42m

If it passed The Area 101 Test, it would already be amazing, as this is a trivial test that goes against the fundamental principles of LLMs.