Is my toddler a stochastic parrot?

LASR
63 replies
20h31m

Often the "stochastic parrot" line is used as a reduction on what an LLM truly is.

I firmly believe that LLMs are stochastic parrots and also that humans are too. To the point where I actually think even consciousness itself is a next-token predictor.

Where the industry is headed: multi-modal models. I think this is really the remaining frontier of LLM <> human parity.

I also have a 15-month-old son. It's totally obvious to me that he's definitely learning by repetition. But his sources of training data are much higher bandwidth than whatever we're training our LLMs on.

It's been a couple of years since GPT-3. It's time to abandon this notion of "stochastic parrot" as a derogatory term. Anyone stuck in this mindset really is going to be hindered from making significant progress in developing utility from AI.

Probiotic6081
22 replies
20h25m

I firmly believe that LLMs are stochastic parrots and also that humans are too. To the point where I actually think even consciousness itself is a next-token predictor.

Almost every time I'm on Hacker News I end up baffled by software engineers feeling entitled to have an unfounded opinion on scientific disciplines outside of their own field of expertise. I've literally never encountered that level of hubris from anyone else. It's always the software people!

Consciousness is far from being fully understood but having a body and sensorimotor interactions with the environment are already established as fundamental preconditions for cognition and in turn consciousness.

Margaret Wilson's paper from 2002 is a good read: https://link.springer.com/content/pdf/10.3758/BF03196322.pdf

peace

mirekrusin
4 replies
20h21m

Are you saying that, e.g., paralyzed people don't have consciousness?

zhynn
1 replies
20h8m

There is a spectrum between conscious and unconscious. You could say that under general anesthesia you are a 0/10 on the conscious scale, asleep is 1 or 2, just woken from sleep is maybe 3.... and up to a well rested, well fed, healthy sober adult human near the top of the scale. These are common sense notions of objective consciousness and they are testable and noncontroversial outside of a philosophy argument.

Does this make sense as a rebuttal to your reductio argument?

mirekrusin
0 replies
3h26m

People dream under general anesthesia, which is a form of consciousness.

I don't think e.g. Stephen Hawking was on the "lower end of the spectrum of consciousness" either.

bena
1 replies
20h15m

First of all, paralyzed people do have bodies. And sensorimotor functions.

Second of all, it wouldn't matter if individually they did or did not. The species does and now our species has developed consciousness. It's part of the package.

If you wanted a counterexample, you should look to plant life. There is some discussion on whether or not plant systems have a form of consciousness. But, then again, plants have bodies at the very least.

mirekrusin
0 replies
3h23m

What about people with amputations or missing body parts due to genetics - are they less conscious?

Chabsff
3 replies
20h21m

The befuddlement goes even further for me. LLMs are, effectively, black-box systems that interface with the world via a stochastic parrot interface.

You'd think that software engineers would be a group that easily understands how making radical assumptions about implementation details when looking at nothing but an interface is generally misguided.

I'm not saying that there isn't a strong case to be made against LLMs being intelligent. It's pointing at the stochastic parrot as evidence enough in and of itself that confuses me.

lainga
1 replies
20h18m

Ah, but HN is a platform for not just any software engineer, but the entrepreneurial type.

NoGravitas
0 replies
11m

Which is to say, the type that will believe anything if it's currently easy to hype to VCs.

throw0101a
0 replies
20h13m

You'd think that […]

As a stochastic parrot I'm unable to do that.

pzo
2 replies
17h37m

having a body and sensorimotor interactions with the environment are already established as fundamental preconditions for cognition and in turn consciousness.

Is someone who is blind not conscious then? How about someone who is paralysed? Or deaf? Or someone with low IQ or mental illness?

Stephen Hawking was paralysed for most of his life but was a smart guy. If he had been not only paralysed but also blind and deaf, he would still have been a smart guy.

I don't think AGI needs to have a body, sensorimotor interaction, or even vision to be conscious. We need those for training - if you were blind, paralysed, and deaf from the beginning, it would be hard for you to learn anything or interact in any way.

Machines have a sixth sense that humans don't have - a kind of telepathy where they can exchange tokens/thoughts with other machines or humans much faster than we humans can type or speak.

Teever
1 replies
12h8m

I've been thinking about this for a while now but I've been approaching it from the opposite direction.

If we were attempting to put someone into some sort of Matrix-like reality simulator, but we lacked the technology to provide a perfect simulation, what level of simulation would be 'good enough' that a human would consider it reality and be able to develop into something we could relate to?

If you gave someone the Helen Keller level of experience, but with reduced tactile sensation, how much could you reduce that touch sensation before they wouldn't be like us?

GenericPoster
0 replies
10h50m

If we were attempting to put someone into some sort of Matrix-like reality simulator, but we lacked the technology to provide a perfect simulation, what level of simulation would be 'good enough' that a human would consider it reality and be able to develop into something we could relate to?

Have you tried VR before? You really don't need perfect simulation to be fooled. Good enough is already here, albeit for a short amount of time.

nix0n
2 replies
20h8m

software engineers feeling entitled to have an unfounded opinion on scientific disciplines outside of their own field of expertise

There's an xkcd about this behavior [0]. The title is actually "Physicists", but I have also seen it on HN (especially with psychology).

[0] https://xkcd.com/793/

dekhn
0 replies
19h4m

This is known as the "Why don't you just assume a spherical cow?" problem.

User23
0 replies
20h1m

Well with psychology it’s more fair. Thanks to the replication crisis we can say with a straight face that psychologists aren’t even experts on psychology. As usual the demonstrable expertise in the field lies with the pragmatic types. For psychology that means salesmen, pickup artists, advertisers, conmen, propagandists, high school queen bees, and so on.

danielmarkbruce
1 replies
20h18m

Yeah, it's only software people. No one else has unfounded opinions.

But... a parrot has a body. And sure, you'll say "they don't literally mean parrot"... but it's a vague term when you unpack it, and people saying "we are stochastic parrots" are also making a pretty vague comment (they clearly don't mean it literally). Anyone who has a small child and understands LLMs is shocked by how similar they seem to be when it comes to producing output.

dekhn
0 replies
17h23m

The question is: does polly really want a cracker?

oh_sigh
0 replies
20h22m

I suspect you don't know what OP's field of expertise is. I also doubt OP would disagree with the statement that the only conscious things we know of have a body and sensorimotor interactions with the environment.

hackinthebochs
0 replies
20h4m

Embodiment is an intellectual dead end in explaining consciousness/sentience. Sure, it's relevant to understanding human cognition as we are embodied entities, but it's not very relevant to consciousness as such. The fact that some pattern of signals on my perceptual apparatus is caused by an apple in the real world does not mean that I have knowledge or understanding of an apple by virtue of this causal relation. That my sensory signals are caused by apples is an accident of this world, one we are completely blind to. If all apples in the world were swapped with fapples (fake apples), where all sensory experiences that have up to now been caused by apples are now caused by fapples, we would be none the wiser. The wide content of our perceptual experiences is irrelevant to literally everything we know and how we interact with the world. Our knowledge of the world is limited to our sensory experiences and our deductions, inferences, etc. derived from our experiences. Our situatedness in the world is only relevant insofar as it entails the space of possible sensory experiences.

Our sensory experience is the medium by which we learn about the external world. We learn of apples not because of the redness of the sensory experience, but because the pattern of red/not-red experience entails the shape of apples. Conscious experience provides the medium, modulations of which provide the information about features of the external world. It is analogous to how modulations of electromagnetic waves provides information about some distant information source. Understanding consciousness is an orthogonal matter to one's situatedness in the world, just like understanding electromagnetic waves is orthogonal to understanding the information source being modulated into them.

fragmede
0 replies
20h18m

To be fair, it's any of the exalted professions whose blessed members extend their expertise elsewhere. Doctors, lawyers, software engineers: they (we) start with the notion that "I'm a smart person, so from first principles I can conquer the world" - never mind that there's an existing body of work, with its own practitioners to build off of.

dekhn
0 replies
20h14m

Some of us who believe that humans are at least partly statistical parrots have PhDs in relevant fields- for example, my PhD is in Biophysics, I've studied cognitive neuroscience and ML for decades, and while I think embodiment may very well be a necessary condition to reproduce the subjective experience of consciousness, I don't think "having a body and sensorimotor interactions are established as fundamental preconditions for cognition and in turn consciousness". Frankly I think that's an impractical question to answer.

Instead, I work with the following idea: it seems not unlikely that we will, in the next decade or so, create non-embodied machine learning models which simply can't be told apart from a human (through a video chat-like interface). If you can do that, who really cares about whether it's conscious or not?

I don't really think philosophy of the mind is that important here; instead, we should treat this as an engineering problem where we assume brains are subjectively conscious, but that's not a metric we are aiming for.

PaulDavisThe1st
0 replies
17h15m

From the first line of Wilson's paper:

There is a movement afoot in cognitive science to grant the body a central role in shaping the mind.

It is far from true to say "having a body and sensorimotor interactions with the environment are already established as fundamental preconditions for cognition and in turn consciousness"

It is a popular idea in some groups of people that study these questions. But there are many other similar groups of people studying these questions who do not agree with it, certainly not stated as strongly as you have put it here.

Also, a reminder that HN readership and its commentariat, while dominated by SWE, is not limited to them.

otabdeveloper4
19 replies
20h21m

LLMs don't create new information, they only compress existing complexity in their training and inference data sets.

Humans definitely create new information. (Well, at least some humans do.)

wongarsu
6 replies
20h9m

Do we?

Humans can observe new information, but that's obviously not that unique. We can reason about existing information, creating new hypotheses, but that is arguably a compression of existing information. When we act on them and observe the effects they become information, but that's not us creating the information (and LLMs can both act on the environment and have the effects fed back to their input to observe, so it's not really unique).

There is this whole field of art, but art is constantly going through a mental crisis whether anyone is creating anything new, or if it's all just derivations of what has come before. Same with dreams, which appear "novel" but might just be an artefact of our brain's process that compresses the experiences of the day.

chx
2 replies
19h57m

let's say you wanted to count the number of coins on a table

you organize them into piles of ten

this created new information

BlueTemplar
1 replies
18h49m

More like you converted information you had about energy powering your muscles into that one - resulting in less total information for you in the end.

chx
0 replies
16h24m

that's not how information theory works

zhynn
1 replies
20h5m

Life creates novelty. Humans are a member of that category, but all life is producing novelty.

More information: https://www.nature.com/articles/s41586-023-06600-9

Zambyte
0 replies
12h6m

Is the inverse true? Does novelty create life? Can something be novel devoid of life?

otabdeveloper4
0 replies
4h36m

I'm not arguing semantics; there's a large amount of math and theory here.

LLMs are basically lossy compressors; they decrease information entropy.

Humans increase information entropy whenever we do anything.

(Some) information entropy is potentially very valuable, so in effect LLMs destroy value by design, while humans can potentially create value. From an economic point of view humans can never be replaced by an LLM.
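
To make that concrete, here's a minimal sketch (quantize() is just a stand-in for any lossy compressor, nothing to do with an actual LLM pipeline): collapsing distinct values into fewer bins can only lower the empirical Shannon entropy.

    import math
    from collections import Counter

    def shannon_entropy(symbols):
        # Empirical Shannon entropy, in bits per symbol.
        counts = Counter(symbols)
        n = len(symbols)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def quantize(values, step=10):
        # Toy "lossy compressor": round each value to the nearest multiple of step.
        return [step * round(v / step) for v in values]

    data = [3, 17, 22, 24, 39, 41, 58, 63, 77, 81, 96, 99]
    print(shannon_entropy(data))            # ~3.58 bits: every value is distinct
    print(shannon_entropy(quantize(data)))  # lower: values collapsed into fewer bins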

chrbr
4 replies
19h29m

I agree. A thought experiment I had recently:

Let's say we could somehow train an LLM on all written and spoken language from the western Roman civilization (Republic + Western Empire, up until 476 AD/CE, just so I don't muddy the experiment with near-modern timelines). Would it, without novel information from humans, ever be able to spit out a correct predecessor of modern science like atomic theory? What about steam power, would that be feasible since Romans were toying with it? How far back do we have to go on the tech tree for such an LLM to be able to "discover" something novel or generate useful new information?

My thought is that the LLM would forever be "stuck" in the knowledge of the era it was trained in. Something in the complexity of human brains working together is what drives new information. We can continue training new LLMs with new information, and LLMs might be able to find new patterns in data that humans can't see and can augment our work, but the LLM's capability for novelty is stuck on a complexity treadmill, rooted in its training data.

I don't view this ability of humans as some magic consciousness, just a system so complex to us right now that we can't fully understand or re-create it. If we're stochastic parrots, we seem to be ones that are magnitudes more powerful and unpredictable than current LLMs, and maybe even constructed in a way that our current technology path can't hope to replicate.

PeterisP
2 replies
17h32m

For your thought experiment, I'd assert that the key missing part is experimentation in the real world, as that is what acquires new information, not the complexity of human brains working together.

If you took millions of genius-level immortal humans with all the same Roman data but had them sit in a blank, empty room with their hands tied and simply discuss philosophy for eternity, I'm certain that they would never be able to spit out a correct predecessor of modern science like atomic theory. Perhaps they could spit out billions of theories including the atomic theory as well, but they would have no data to presume that the atomic theory is more relevant than any other. Extensive information processing can squeeze out every last ounce of knowledge from some data, but anything that isn't in that data can't be acquired by merely thinking about it. On the other hand, if you gave some "LLM++" the ability to toy around with reality and attempt all kinds of experiments to test various hypotheses, then I wouldn't assume that it would be forever stuck in the knowledge of the era it was trained in.

chrbr
1 replies
15h52m

Yeah, I like that improvement/clarification. Good assertion. Now I wonder if it changes my stance: is the path modern LLMs are on ever going to replicate the environment for acquiring new information that humans currently operate in?

CamperBob2
0 replies
15h19m

As usual in these "Yeah, but can AI dothis?" threads, the answer is yes, it is already happening:https://www.space.com/mars-oxygen-ai-robot-chemist-splitting...

Izkata
0 replies
14h44m

Let's say we could somehow train an LLM on all written and spoken language from the western Roman civilization (Republic + Western Empire, up until 476 AD/CE, just so I don't muddy the experiment with near-modern timelines). Would it, without novel information from humans, ever be able to spit out a correct predecessor of modern science like atomic theory?

Funny example - depending on how close the predecessor has to be, the answer is maybe: https://en.wikipedia.org/wiki/Atomism

danielmarkbruce
2 replies
20h16m

How are you defining "create new information" ?

otabdeveloper4
1 replies
4h43m

In the information theoretic sense. (Increasing information entropy.)

danielmarkbruce
0 replies
1h27m

What do you define as the system here? The LLM itself? The world of all text? What about for the human?

Zambyte
2 replies
20h9m

Lossy compression + interpolated decompression = new information

otabdeveloper4
1 replies
4h42m

Lossy compression decreases information entropy by definition.

Zambyte
0 replies
4h35m

Interpolated decompression increases information entropy by definition.

Gringham
0 replies
20h18m

Do they though? Or do humans just combine things they have learned about the world?

function_seven
8 replies
20h23m

I'm in the same boat. It feels wrong to contemplate that our consciousness might not be a magical independent agent with supernatural powers, but is rather an emergent property of complex-but-deterministic actions and reactions.

Like it somehow diminishes us. Reduces us to cogs and levers and such.

I can't imagine how it could be otherwise, though. I'm still baffled by the existence of qualia, phenomenology, etc. Awareness. But bafflement on that front isn't a good reason to reject the possibility that the only thing that separates me from a computer is the level of complexity. Or the structure of the computation. Sometimes things are just weird.

jocaal
3 replies
20h14m

emergent property of complex-but-deterministic actions and reactions

I think you mean non-deterministic. The last century of physics was dominated by work showing how deterministic systems emerged from non-deterministic foundations. It seems that probability and statistics were the branches of maths behind everything. Who would have thought.

function_seven
1 replies
20h6m

Thanks. I did actually mean to use "deterministic", but only as it sits in opposition to "free will". Is there a better word for what I meant?

Of course there is randomness as well. So, yeah, I should clarify: We don't impose upon the world any kind of "uncaused cause", even if it feels like we do. Everything we think and do is a direct result of some other action. Sometimes that action can trace its lineage to a random particle decay. (Maybe—ultimately—all of them can?) Maybe we even have a source of True Randomness inherent in our minds. But even so, that doesn't lend any support to the common notion of our minds and consciousnesses as being somehow separate from the physical world or the chains of information that run through everything.

jocaal
0 replies
19h57m

I get what you are saying. I was just thinking about the stochastic parrot analogy for consciousness, but I see your comment is more about there not being a special sauce to consciousness. But hey, the fact that such behaviour can emerge from simple processes is still pretty damn cool.

dale_glass
0 replies
20h1m

I think "deterministic" is the more correct option.

Yes, of course deep down there's unimaginable numbers of atoms randomly bumping around. But so far everything suggests that biological organisms do their damnedest to abstract that away and operate on a deterministic basis.

Think of how you keep changing -- gaining and losing cells, shifting chemical balances -- and yet on the whole it takes a lot of damage, even brain damage, to produce measurable results.

somewhereoutth
2 replies
20h0m

Indeed our consciousness is an emergent property from a complex system, however the complexity of that system - the human brain - is almost unfathomably beyond the complexity of anything we might make in silicon now or in the foreseeable future.

For example, it is possible to write down a (natural/whole) number that completely describes the state of an LLM - its connections and weights etc - for example by simply taking the memory space and converting into a number as a very long sequence of 1s and 0s. The number will be very big, but still indescribably smaller than a number that could perfectly describe the state of a human brain - not least as that number is likely to lie on the real line, or beyond, even if it was calculable. See Cantor for a discussion on the various infinities and their respective cardinalities.
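
(The first half of that is easy to make literal; a tiny sketch, with a placeholder filename standing in for a real checkpoint:)

    # Any serialized set of weights is a finite byte string, and any byte string maps to a natural number.
    with open("model.bin", "rb") as f:          # placeholder path for a saved checkpoint
        state_as_integer = int.from_bytes(f.read(), byteorder="big")
    print(state_as_integer.bit_length())        # roughly the checkpoint size, in bits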

BlueTemplar
1 replies
18h59m

Under the assumption that human brains are limited by our current understanding of physics, their state is finite - infinities are forbidden by definition.

(See the relationship between information theory and Heisenberg's uncertainty principle that results in the law of paraconservation of information: http://www.av8n.com/physics/thermo/entropy-more.html#sec-pha...)

somewhereoutth
0 replies
5h35m

Not true. The idea that QM discretises our universe is incorrect.

hiAndrewQuinn
0 replies
20h11m

I don't think we're ever going to arrive at a satisfactory answer for where qualia comes from, for much the same reason it's impossible to test the quantum suicide hypothesis without actually putting yourself through it enough times to convince yourself statistically.

The only "real" evidence of qualia you have is your own running stream of it; you can try to carefully pour yourself, like a jug of water, into the body of another, and if you go carefully enough you may even succeed to carry that qualia-stream with you. But it's also possible that you pour too quickly, or that you get bumped in the middle of the pouring, and poof - the qualia is just gone. You're left with a p-zombie.

Or maybe not. Maybe it comes right back as soon as the transfer is done, like the final piece of a torrented movie. The important part is you won't know unless you try - and past success never guarantees future success. Maybe you just got lucky on the first pour.

mo_42
3 replies
20h1m

I firmly believe that LLMs are stochastic parrots and also that humans are too. To the point where I actually think even consciousness itself is a next-token predictor.

I agree with the first sentence but not with the second one. Consciousness most probably does not arise from just a next-token predictor. At least not from an architecture similar to current LLMs.

Both humans and LLMs basically learn to predict what happens next. However, LLMs only predict when we ask them. In contrast, humans predict something all the time. Even when we don't have any sensory input, our brain plays scenarios. Maybe consciousness arises because the result of our thinking is fed back as input. In that sense, we simulate a world that includes us acting and communicating in that world.

Also noteworthy: the human brain handles a variety of sensory information, and its output is not only language. LLMs are restricted to only language. But to me it seems like it's enough for consciousness if we can give it the self-referential property.

throwaway4aday
1 replies
19h53m

In order to predict what happens next we need to create a model of the world. We exist as part of the world so we need to model ourselves within it. We also have to model our mind for it to be complete, including the model of the world it contains. Oops, I just created an infinite loop.

mo_42
0 replies
19h42m

Not necessarily infinite. It stops when there's reasonable accuracy. Similar to how we would implement this in software.

xigency
0 replies
16h45m

However, LLMs only predict when we ask them. In contrast, humans predict something all the time. Even when we don't have any sensory input, our brain plays scenarios.

Pretty easy to do this exercise with an LLM. At least, easier than building an LLM in the first place. Leave it running, let it talk to itself, revisit partial memories, and explore noise. Really a duct-tape problem here more than anything.
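
Something like this, as a sketch - generate() is just a stand-in name for whatever model client you have, not a real API:

    import random

    def self_talk_loop(generate, steps=1000):
        # generate(prompt) -> str is assumed to wrap whatever LLM you have running.
        memory = ["(no sensory input yet; free-associate)"]
        for _ in range(steps):
            seed = random.choice(memory)                      # revisit a partial memory
            noise = f"(random cue: {random.random():.3f})"    # explore a little noise
            prompt = f"Earlier you thought: {seed}\n{noise}\nContinue thinking out loud:"
            memory.append(generate(prompt))
        return memory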

voitvod
1 replies
17h19m

I would have agreed until these recent podcasts that Chomsky did.

Everyone is basically talking out their ass when it comes to language and linguistics. That becomes incredibly obvious listening to Chomsky on ChatGPT.

I was even stupid enough to think Chomsky wasn't a fan of ChatGPT because it somehow invalidated some of his language theories. Lo and behold, no, Chomsky actually knows what he is talking about when it comes to linguistics.

blast
0 replies
10h18m

What recent podcasts? Can you link to some?

meindnoch
0 replies
20h3m

That's a pretty bold statement, coming from someone with the subjective experience of consciousness.

gardenhedge
0 replies
20h12m

Did you teach your child to crawl? To laugh? To get excited?

esjeon
0 replies
19h32m

Anyone stuck in this mindset really is going to be hindered from making significant progress in developing utility from AI.

I think this specific line shouts out that this is a typical tribalism comment. Once people identify themselves as part of something, they start to translate the value of that something into their own worth. It's a cheap trick that even young kids play, but can an LLM do this? No.

Some might say multi-modal this, train on that, but it already takes tens of thousands of units of the most advanced hardware and gigawatts of energy to push numbers around to reach where it is. TBH, I don't see it going anywhere, considering ROI on research will decrease as we dig deeper into the same paradigm.

What I want to say is that today's LLM is certainly not the last stop of AI technology, but a lot of advocates tend to consider it the final form of intelligence. It's certainly a case of extrapolation, and I don't think an LLM can do that.

davedx
0 replies
20h16m

Those who don’t understand a concept are doomed to reduce it to concepts they do understand.

I’m currently reading I Am A Strange Loop, a pretty extensive dive into the nature of consciousness. I’m reserving final judgment on how much I agree with the author, but I find it laughable to claim consciousness itself is on the same level as an LLM.

IanCal
0 replies
20h14m

I disagree that they're stochastic parrots; I find Othello-GPT very convincing evidence that these models can create world models and respond accordingly.

throw0101a
50 replies
20h15m

But he could understand so much more than he could say. If you asked him to point to the vacuum cleaner, he would.

Perhaps worth noting that it is possible to teach infants (often starting at around 9 months) sign language so that they can more easily signal their desires.

Some recommended priority words would probably be:

* hungry/more

* enough/all done (for when they're full)

* drink (perhaps both milk/formula and water† gestures)

See:

* https://babysignlanguage.com/chart/

* https://www.thebump.com/a/how-to-teach-baby-sign-language

These are not (AFAICT) 'special' symbols for babies, but the regular ASL gestures for the word in question. If you're not a native English speaker, you'd look up the gestures in your specific region/language's sign language:

* https://en.wikipedia.org/wiki/List_of_sign_languages

* https://en.wikipedia.org/wiki/Sign_language

† Another handy trick I've run across: have different coloured containers for milk and water, and consistently put the same contents in each one. That way the infant learns to grab a particular colour depending on what they're feeling like.

yojo
12 replies
20h5m

FWIW, I tried this with both my sons. They both started using the gestures the same day they started actually talking :-/

I have friends who had much more success with it, but the value will largely depend on your child’s relative developmental strengths. A friend’s son with autism got literally years’ benefit out of the gestures before verbal speech caught up.

throw0101a
4 replies
19h58m

FWIW, I tried this with both my sons. They both started using the gestures the same day they started actually talking :-/

Could still be useful: instead of shouting across the playground to ask whether they have to go potty, you can simply make the gesture with minimal embarrassment. :)

vel0city
1 replies
19h18m

I also usually had success with signs when the child was otherwise too emotional to verbalize their desire. They're really upset and crying hard so it is hard to talk especially when talking clearly is already a challenge, but signing "milk" or "eat" or "hurt" or "more" can come through easily.

jtr1
0 replies
15h32m

Wow I had not considered this at all. We used a bit of sign language before my toddler started talking, but have more recently run into these situations where big feelings are crowding out speech and it’d be useful to get anything through. I’ll give this a shot tomorrow

toast0
0 replies
19h38m

Yeah, a handful of signs is useful for adults in many situations where voice comms don't work. And, at least in my circles, there's a small shared vocabulary of signs that there's a good chance will work. Potty, ouch, sleep, eat, maybe a couple more.

petsfed
0 replies
18h24m

Tread carefully: the sign for poop looks close enough to a crude gesture (cruder than just shouting "poop" at a playground, as it turns out) that an ignorant bystander might take it significantly wrongly.

ASalazarMX
1 replies
19h50m

There's probably variation among babies. One of my nephews would examine his feet if you asked him where his shoes were, even before walking. He got so proficient with signs that it delayed talking; he preferred signaling and grunting :/

LoganDark
0 replies
16h39m

He got so proficient with signs that it delayed talking; he preferred signaling and grunting :/

Please don't blame this on the signs! This doesn't mean that he would have learned to speak earlier if not for the signs. I'd be glad that he could communicate proficiently at all.

thealfreds
0 replies
19h50m

Same with my nephew. He also has autism, and the first thing the speech therapist did when he was 3 was teach him simple sign language. It became such a great catalyst for communication. He's nowhere near his age (now 6) developmentally, but within ~6 weeks he went from completely non-verbal to actually vocalizing the simple words he learned the sign language for.

kuchenbecker
0 replies
19h59m

My kids both picked it up, but my younger was similar. Being able to sign "please" and "all done" helps anyway because "eeess" and "a ya" are what she actually says.

fsckboy
0 replies
6h58m

They both started using the gestures the same day they started actually talking

Were you always talking when you signed to them? Maybe they thought it went together.

cozzyd
0 replies
10h36m

The gestures also help disambiguate some words. Sometimes it's hard to tell the difference between "Mama", "More" and "Milk" the way my toddler pronounces them, but her gestures make it clear...

4death4
0 replies
13h53m

I had the opposite experience. My daughter had multiple signs down by 7 months.

dools
10 replies
13h36m

We taught both our kids to sign.

My favourite moment was in March, my daughter was about to turn 2 and wasn't speaking yet.

I asked her if she would like to hear some music.

She made the sign for dog.

I searched youtube for some songs about dogs and she shook her head.

She made the sign for tree.

I was like "dog, tree", she nodded. Hmmm...

I was searching for "dog tree music" when one of the pictures that came up was a christmas tree.

She pointed to that excitedly!

I was like "dog christmas tree music" ... it took me a second to realise that she wanted to listen to the Charlie Brown Christmas soundtrack that I had had playing off YouTube at Christmas 3 months previously!

I put that on again and we danced around to it.

I thought that was totally wild! It was the first time I remember her communicating a really sophisticated preference other than just wanting to eat/drink/help etc.

hammock
6 replies
12h54m

This must have been what it was like when the settlers were trying to communicate with native Americans at first

stavros
1 replies
9h2m

Which group is the toddler here?

defrost
0 replies
8h57m

One group had an advanced democratic government of the people which the other group struggled to learn from: https://www.sciencenews.org/article/democracy-indigenous-ame...

stavros
0 replies
9h2m

Unfortunately, the settlers never really got what the native Americans were trying to say.

lotsofcows
0 replies
8h22m

The settlers were greeted by English speaking native Americans who had been working the shipping trade.

kqr
0 replies
8h10m

The crazy thing about this happening with toddlers (at least in my experience) is that you're not really sure how complex their desires are until they manage to communicate them.

Settlers interacting with natives knew full well how complex their desires were – they lived side-by-side, traded, socialised, and learned from each other.[1] Any suggestion of a primitive native is self-comforting propaganda from the industrial complex that comes after the settlers.

[1]: Indians, Settlers, and Slaves in a Frontier Exchange Economy; Usner; Omohundro Institute; 2014.

LNSY
0 replies
11h46m

And then the genocide started

Jasp3r
2 replies
6h59m

When signing dog, was she referring to reindeer?

glandium
0 replies
6h37m

Charlie Brown -> Snoopy, would be my guess.

dools
0 replies
5h15m

The album cover for the Charlie Brown Christmas soundtrack has Snoopy sitting on top of a Christmas tree. I was playing the album from YouTube through the stereo and the album cover was showing on the TV to which the computer was connected.

brainbag
9 replies
19h35m

I had heard about this before my son was born. We didn't try to teach him anything, anytime we remembered (which was sporadic) we just used the gestures when talking to him. I was amazed at how quickly he picked up on it, and he was able to communicate his needs to us months before he was able to verbalize.

It took very minimal effort on our part, and was very rewarding for him; certainly a lot better than him crying with the hope that we could guess what he wanted. Definitely recommended for any new parents.

The best moment was when he was sitting on the floor, and looked up at his mom and made the "together" sign, it was heart melting.

esafak
6 replies
19h4m

In other words, you can invent your own sign language because your child won't need to use it with other people.

AlecSchueler
3 replies
18h12m

Why not use a common sign language and give them a head start if they ever do want to use it outside the family?

skeaker
2 replies
17h49m

They might not have the dexterity required for some of the more complex signs, I would guess. If you devise your own gestures they can be much simpler.

schwartzworld
0 replies
15h47m

The signs chosen to teach to babies tend to be pretty simple. Things like "more" or "milk" are very easy.

doubleg72
0 replies
15h23m

No, the basics like hungry and please/thank you are fairly simple. The daycare my son goes to teaches all the kids sign language starting at like 6 months.

jimmygrapes
1 replies
15h35m

plus you can use it as a battle language for your clan

rcbdev
0 replies
9h39m

Exactly! Using a pre-made sign language is missing the point entirely...

soks86
0 replies
14h20m

I'm not crying, you're crying!

jjeaff
0 replies
9h18m

I love seeing how language develops in my kids and how they start to invent ways to communicate. Our first, she would say "hold you" when she wanted to be picked up, which she learned from us saying "do you want me to hold you?" My 2 year old now says "huggy" when he wants to be picked up.

petsfed
2 replies
18h20m

One of the funniest interactions I had with my eldest daughter was the day we baked cookies together, when she was not yet 2. She was verbalizing a lot, but also signing "milk" and "more" quite a bit. And when she bit into her very first chocolate chip cookie of her entire life, she immediately signed "more" and said as much through the mouthful of cookie.

jtr1
0 replies
15h29m

Once mine learned the sign for “cookie” it became the only word in her vocabulary for a month

RationalDino
0 replies
17h21m

You remind me of the following.

At about the same age, I bought my son a mango lassi. He looked suspiciously at it, but took a sip. With a look of shocked delight he tilted it back, back, back, and emptied the cup!

Then he put it down, looked at me, and said, "Want more!"

I'm looking forward to kids out of the house. But there are some moments that I treasure.

hiisukun
2 replies
17h24m

This is good advice but with a caveat: some of the muscle control required for particular signs is not able to be learned by children until they're a bit older.

For example, voluntary supination/pronation of the forearm is generally not something a 9-month-old can do. If you try and teach them a common sign for "enough/finished" (fist closed, thumb pointed out, then rotation of the forearm back and forth), or "done" and "more" in the parent link, they probably won't be able to do it properly. They can copy something close to that (thumb out and wobbling their hand around? good enough!) so you have to go with the flow.

There are quite a few signs like that actually, so try and think about how many muscles move together, and how controlled or complex that is. Simple stuff is good -- and doable.

dools
1 replies
13h33m

Yeah my daughter used to stick out her index finger and wave it back and forth for finished.

One of my favourite memories of that was when we went to see the Vivid light show in Sydney and there was a contortionist on the street so we stopped to watch. I looked into the stroller and said "What do you think?" and she made the sign for "finished". So we moved on.

kqr
0 replies
8h5m

Yeah my daughter used to stick out her index finger

This is interesting. My son also did "index finger up" in response to "thumbs up" for the longest time. Why is the thumb so hard to manipulate? Late addition to the evolutionary sequence?

rkuska
0 replies
6h29m

We taught ours the following:

* more

* all done

* milk (as in breast feeding)

* drink

I think it makes a big difference, both for us (parents, to have a clue what they want) and the kid (being understood, when they can’t speak).

pamelafox
0 replies
18h35m

My toddler learnt "more" and now uses it to get me to repeatedly sing the same song OVER AND OVER again. They haven't used the word yet, though they do speak other words.

I wish I'd learnt sign language before having kids so I just already knew how to do it, it's so cool. Props to the Ms. Rachel videos for including so many signs.

mcpackieh
0 replies
17h57m

regular ASL gestures for the word in question. If you're not a native English speaker, you'd look up the gestures in your specific region/language's sign language:

It probably doesn't matter either way for babies, but fyi ASL isn't a sign version of English; it is its own language. In fact American Sign Language is more closely related to French Sign Language than to British Sign Language. The Australian and New Zealand Sign Languages are largely derived from British Sign Language, so there isn't really a correlation between English speaking regions and ASL. Canadians mostly use American Sign Language and French Canadian Sign Language.

marcod
0 replies
17h28m

Worked incredibly well for our first born, but 2nd child just wanted to talk like their sibling.

Almost 20 years later, I still know all the signs for baby food ;)

corethree
0 replies
15h55m

Perhaps worth noting that it is possible to teach infants (often starting at around 9 months) sign language so that they can more easily signal their desires.

You can teach ChatGPT too. It's like a toddler. A very articulate toddler:

https://chat.openai.com/share/40c94561-2505-4938-8331-7d10ae...

It makes mistakes as any human baby would. And as a parent you can correct it.

All this means is that learning an arbitrary sign language isn't a differentiator.

chthonicdaemon
0 replies
11h21m

Interestingly there isn't any correspondence between spoken language and sign language in the linguistic sense. Correspondence between the dominant sign language and the dominant spoken language is mostly due to geographical colocation. So while you are right to say "your specific region's sign language", there are several distinct sign languages in places that all have English as their primary spoken language.

bradfitz
0 replies
17h22m

We did this with our boys. The oldest picked up a sign we weren't even trying to teach: whenever I changed his poopy diaper I'd say "phoo phoo phoo!" jokingly and fan my nose. One day he was playing on the other side of the room and fanned his nose. He'd pooped and was telling us. Super cool.

Tade0
0 replies
10h56m

I made an attempt with this, but my toddler never picked up anything because:

-She learned some gestures of mine instead, which I didn't realize I was doing.

-Defaulted to speech as soon as possible because it was just easier.

Izkata
0 replies
18h33m

My mom taught us some words somewhere around 5-8 years old, so we could signal things to each other instead of interrupting conversations. The three in particular I remember are "hungry", "bored", and "thank you" (so she could remind us to say it without the other person realizing).

EvanAnderson
0 replies
19h9m

Every kid is different. YMMV. We did some ASL gestures/words with our daughter and it worked very well. I'd encourage everyone to at least give it a try. She took to it and was "talking" to us (mainly "hungry" and "milk", but we got "enough" sometimes too) pretty quickly.

I can't remember exact ages and timeframes-- that time of my life is "blurry". I wish I could remember all the gestures we used. (The only ones I can remember now are "milk", "apple", and "thank you".) As she became verbal she quickly transitioned away from them.

Mattasher
42 replies
20h25m

Humans have a long history of comparing ourselves, and the universe, to our latest technological advancement. We used to be glorified clocks (as was the universe), then we were automatons, then computers, then NPCs, and now AIs (in particular LLMs).

Which BTW I don't think is a completely absurd comparison, see https://mattasher.substack.com/p/ais-killer-app

ImHereToVote
21 replies
20h12m

Except LLMs are built on neural networks, which are based on how neurons work. The first tech that actually copies aspects of us.

TaylorAlexander
18 replies
20h10m

sigh

Neural networks are not based on how neurons work. They do not copy aspects of us. They call them neural networks because they are sort of conceptually like networks of neurons in the brain but they’re so different as to make false the statement that they are based on neurons.

robwwilliams
5 replies
19h47m

If you study retinal synaptic circuitry you will not sigh so heavily and you will in fact see striking homologies with hardware neural networks, including feedback between layers and discretized (action potential) outputs via the optic nerve.

I recommend reading Synaptic Organization of the Brain or, if you are brave, getting into the primary literature on retinal processing of visual input.

__loam
1 replies
16h6m

I will continue to sigh. The visual cortex is relatively simple and linear. You're not saying something that's as impressive as you think it is.

l33t7332273
0 replies
13h59m

I think the point of the example is that that is an important part of our brains that is relatively simple and linear and we’ve been able to mimic it.

TaylorAlexander
1 replies
17h13m

Actually it’s funny my best friend is a neuroscientist and studies the retina and in particular the way different types of retinal cells respond to stimulus. I have watched her give presentations on her work and I do see that there are some similarities.

But it is nonetheless the case that “neural networks” are not called that because they are based on the way neurons work.

ImHereToVote
0 replies
7h33m
smokel
0 replies
19h18m

The book "The Synaptic Organization of the Brain" appears to be from 2003. Is it still relevant, or is there perhaps a more recent book worth checking out?

martindbp
4 replies
19h36m

Sigh... Everyone knows artificial neurons are not like biological neurons. The network is the important part, which really is analogous to the brain, while what came before (SVMs and random forests) are nothing like it.

mecsred
1 replies
18h55m

Sigh... Every man knows the mechanisms of the mind are yet unlike the cogs and pinions of clockwork. It remains the machinery, the relation of spring and escapement, that is most relevant. Hitherto in human history, I think, such structure has not been described.

TeMPOraL
0 replies
8h18m

If you build a neural network out of cogs and pinions, sure.

Comparing the brain to the most complex machines in history wasn't a mistake, any more than refining the laws of physics was. Successive approximations.

And we're no longer at the point where we're just comparing the brain to the most complex machines. We have information theory now. We figured out computation, in a form independent of the physical medium used. So we're really trying to determine the computational model behind the brain, and one of the ways to do it is to implement some computational models in whatever is most convenient (usually software running on silicon), and see if it's similar. Slowly but surely, we're mapping and matching computational aspects of the brain. LLMs are just one recent case where we got a spectacularly good match.

TaylorAlexander
1 replies
17h12m

Everyone knows artificial neurons are not like biological neurons.

Not, apparently, the person I was replying to!

ImHereToVote
0 replies
9h30m

I'm him, and I didn't say that. ANNs didn't arise in a vacuum and they aren't called neural networks for the fun of it.

https://www.ibm.com/topics/neural-networks#:~:text=Their%20n....

renewiltord
2 replies
19h13m

Doesn't really matter to modern CS, but Rosenblatt's original perceptron paper is a good read on this. ANNs were specifically inspired by Natural NNs and there were many attempts to build ANNs using models of how the human brain works, specifically down to the neuron.
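
For anyone who hasn't read the paper: the core of the idea, stripped of the hardware, fits in a few lines (a toy sketch in modern terms, not Rosenblatt's notation):

    # Rosenblatt-style perceptron: weighted sum, hard threshold, error-driven updates.
    def train_perceptron(samples, epochs=20, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x0, x1), target in samples:
                pred = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
                err = target - pred          # 0 if correct, +/-1 if wrong
                w[0] += lr * err * x0        # nudge the weights toward the target
                w[1] += lr * err * x1
                b += lr * err
        return w, b

    # Toy data: the AND function, which is linearly separable, so the rule converges.
    print(train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]))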

dekhn
1 replies
19h6m

I'm sure you know, but one of the best ways to get neuro folks worked up is to say anything about neural networks being anything like neurons in brains.

(IMHO, Rosenblatt is an underappreciated genius; he had a working shallow computer-vision hardware system long before people even appreciated what an accomplishment that was. The hardware was fascinating - literally self-turning potentiometer knobs to update weights.)

renewiltord
0 replies
16h9m

If I'm being honest, I do know they get annoyed by that stuff but I've never really understood why. It's a somewhat common pattern in Mathematics as an avenue for hypotheses to take an existing phenomenon, model some subset of its capabilities, use that to define a new class of behaviour, follow that through to conclusions, then use that to go back to seeing if those conclusions apply to the original phenomenon.

A theoretical such thing might be for us to look at, say, human arms and say "Well, this gripping thing is a cool piece of functionality. Let's build an artificial device that does this. But we don't have muscle contraction tech, so we'll put actuators in the gripping portion. All right, we've built an arm. It seems like if we place it in this position it minimizes mechanical wear when not in action and makes it unlikely for initial movement to create undesired results. I wonder if human arms+hands have the same behaviour. Ah, looks like not, but that would have been interesting if it were the case"

Essentially that's just the process of extracting substructure and then seeing if there is a homomorphism (smooshy type abuse here) between two structures as a way to detect yet hidden structure. Category theory is almost all this. I suppose the reason they find it annoying is that there are many mappings that are non-homomorphic and so these are the false cognates of concepts.

Still, I think the whole "An ANN is not a brain" thing is overdone. Of course not. A mechanical arm is not an arm, but they both have response curves, and one can consider a SLAM approach for the former and compare with the proprioceptive view of the latter. It just needs some squinting.

Anyway, considering your familiarity with R and his work, I think I'm not speaking to the uninitiated, but I thought it worth writing anyway.

og_kalu
1 replies
15h54m

They are though. They quite literally are. Saying otherwise is like saying planes weren't based on how birds work when the Wright brothers spent a lot of time in the 1800s studying birds.

Both humans and GPT are neural networks. Who cares that GPT doesn't have feathers or flap its wings? That's not the question to care about. We are interested in whether GPT flies. You can sigh to kingdom come and nothing will change that.

We've developed numerous different learning algorithms that are biologically plausible, but they all kinda work like backpropagation but worse, so we stuck with backpropagation. We've made more complicated neurons that better resemble biological neurons, but it is faster and works better if you just add extra simple neurons, so we do that instead. Spiking neural networks have connection patterns more similar to what you see in the brain, but they learn slower and are tougher to work with than regular layered neural networks, so we use layered neural networks instead.

The Wright brothers probably experimented with gluing feathers onto their gliders, but eventually decided it wasn’t worth the effort. Because that's not what is important.

ImHereToVote
0 replies
9h28m

There are drones with feathers now, however. The spring in feather flaps helps conserve energy, but only in flapping wings, obviously.

Terr_
0 replies
20h1m

*brandishes crutches*

"Behold! The Mechanical Leg! The first technology that actually copies aspects of our very selves! Think of what wonders of self-discovery it shall reveal!" :p

P.S.: "My god, it is stronger in compression rather than shear-stresses, how eerily similar to real legs! We're on to something here!"

ImHereToVote
0 replies
9h3m

Science history should be mandatory for undergrads. I didn't think what I said is controversial. This is established history. Sorry if it scares you.

tsukurimashou
1 replies
8h5m

I wish this myth would die

ImHereToVote
0 replies
5h28m

This is basic scientific history. You are simply uneducated, or scared of the implications.

https://cs.stanford.edu/people/eroberts/courses/soco/project....

Key word: "neurophysiologist"

MichaelZuo
12 replies
20h14m

Each successive comparison is likely getting closer and closer to the truth.

beezlebroxxxxxx
9 replies
19h50m

Or each successive comparison is just compounding and reiterating the same underlying assumption (and potentially the same mistake) whether it's true or not.

bigDinosaur
8 replies
18h31m

The jump to 'information processing machines' seems far more correct than anything that came before, I'm curious how you would argue against that? Yes, there are more modern and other interesting theories (e.g. predictive coding) but they seem much closer to cognitive psychology than say, the human brain working like a clock.

jtr1
6 replies
15h26m

I think the argument is that you need to ask how you are measuring when you say it seems more correct than anything that came before. You may just be describing the experience of swimming in the dominant paradigm

naasking
3 replies
12h20m

Have you managed any kind of conversation with a clock before? Because you can actually have an intelligent conversation with an LLM. I think that's a pretty compelling case that it's not just swimming in the dominant paradigm.

Peritract
2 replies
7h45m

People thought they were having intelligent conversations with Eliza; people even have satisfying emotional conversations with teddy bears.

It's not a good measurement.

naasking
1 replies
3h44m

People thought they were having intelligent conversations with Eliza

Sure, but who has had a good conversation with a clock?

people even have satisfying emotional conversations with teddy bears.

No they don't. Kids know very well that they're pretending to have conversations. The people who actually hear teddy bears speak also know that that's not normal, and we all know that this is a cognitive distortion.

Peritract
0 replies
3h6m

we all know that this is a cognitive distortion.

This is also true of those who form emotional attachments with current AI. The people developing romantic relationships with Replika etc. aren't engaging in healthy behaviour.

schoen
1 replies
14h2m

One attempt could be "it allows us to make better predictions about the mind".

This article mentions excitement about neural networks overgeneralizing verb inflections, which human language learners also do. If neural networks lead to the discovery of new examples of human cognitive or perceptual errors or illusions, or to the discovery of new effective methods for learning, teaching, or psychotherapy, that could count as evidence that they're a good model of our actual minds.

dotforest
0 replies
9h55m

If the article is talking about the neural network in McClelland and Rumelhart’s Parallel Distributed Processing, there’s actually a paper by Steven Pinker and some other linguists drilling into it and finding that it doesn’t model children’s language acquisition nearly as closely or as well as M&R think it does.

cscurmudgeon
0 replies
13h43m
beepbooptheory
1 replies
19h11m

Very curious to know what the telos of "truth" is here for you? A comparison is a comparison, it can get no more "true." If you want to say: the terms of the comparisons seem to verge towards identity, then you aren't really talking about the same thing anymore. Further, you would need to assert that our conceptions of ourselves have remained static throughout the whole ordeal (pretty tough to defend), and you would also need to put forward a pretty crude idea of technological determinism (extremely tough to defend).

Its way more productive and way less woo-woo to understand that humans have a certain tendency towards comparison, and we tend to create things that reflect our current values and conceptions of ourselves. And that "technological progress" is not a straight line, but a labyrinthine route that traces societal conceptions and priorities.

The desire for the llm to be like us is probably more realistically our desire to be like the llm!

PaulDavisThe1st
0 replies
17h35m

An apple is like an orange. Both are round fruits, containing visible seeds, and relatively sweet. If you're hungry, they are both good choices.

But then again, an apple is nothing like an orange, particularly if you want to make an apple pie.

The purpose of a comparison is important in helping to define its scope.

cmrdporcupine
4 replies
18h39m

Step A: build a machine which reflects a reduced and simplified model of how some part of a human works

Step B: turn it on its head "the human brain is nothing more than... <insert machine here.>"

It's a bit tautological.

The worry is that there's a Step C: Humans actually start to behave as simple as said machine.

PaulDavisThe1st
3 replies
17h34m

What machines have we built that reflect a reduced and simplified model of how some part of a human works (other than as minor and generally invisible research projects)?

dekhn
1 replies
16h11m

any chemical or large industrial plant built in the last 30 years

PaulDavisThe1st
0 replies
15h3m

I think most industrial chemists would likely disagree. But I guess YMMV.

zmgsabst
0 replies
14h5m

Electronic computers are a simplified model of the brain:

They emulate the faculty, rather than the biology.

Tallain
0 replies
19h36m

Not just technological advancements; we have a history of comparing ourselves to that which surrounds us, is relatively ubiquitous, and is easily comprehended by others when used as a metaphor. Today it's this steady march of technological advancement, but read any older work of philosophy and you will see our selves (particularly our minds) compared to monarchs, cities, aqueducts. [1]

I point this out because I think the idea of comparing ourselves to recent tech is more about using the technology as a metaphor for self, and it's worth incorporating the other ways we have done so historically for context.

[1]:https://online.ucpress.edu/SLA/article/2/4/542/83344/The-Bra...

ChuckMcM
0 replies
20h16m

I always enjoyed the stories of 'clock work' people (robots).

xigency
30 replies
20h6m

Regarding dismissals of LLM’s on ‘technical’ grounds:

Consciousness is first a word and second a concept. And it's a word that ChatGPT or Llama can use in an English sentence better than billions of humans worldwide. The software folks have made even more progress than sociologists, psychologists, and neuroscientists: they have been able to create an artificial language cortex before we comprehensively understand our biological mind.

If you wait until conscious sentient AI is here to make your opinions known about the implications and correct policy decisions, you will already be too late to have an input. ChatGPT can already tell you a lot about itself (showing awareness) and will gladly walk you through its “thinking” if you ask politely. Given that it contains a huge amount of data about Homo sapiens and its ability to emulate intelligent conversation, you could even call it Sapient.

Having any kind of semantic argument over this is futile because a character AI that is hypnotized to think it is conscious, self-aware and sentient in its emulation of feelings and emotion would destroy most people in a semantics debate.

The field of philosophy is already rife with ideas from hundreds of years ago that an artificial intelligence can use against people in debates about free will, self-determination and the nature of existence. This isn't the battle to pick.

slibhb
18 replies
13h30m

With a simple computer, speaker, and accelerometer, you can build a device that says "ouch" when you drop it. Does it feel pain?

My point is that there are good reasons to believe that a hypothetical LLM that can pass a Turing test is a philosophical zombie, i.e. it can mimic human behavior but doesn't have an internal life, feelings, emotions, and isn't conscious. Whether that distinction is important is another question. LLMs provide evidence that consciousness may not be necessary to create sophisticated AIs that can pass or exceed human performance.

naasking
16 replies
12h14m

it can mimic human behavior but doesn't have an internal life, feelings, emotions, and isn't conscious

I'm curious how you know this. Certainly an LLM doesn't have human internal life, but to claim it has no internal life exceeds our state of knowledge on these topics. We simply lack a mechanistic model of qualia from which we can draw such conclusions.

smoldesu
12 replies
12h2m

An LLM is math. It outputs text. Those things aren't alive, and both the software and hardware used to facilitate it are artificial.

Once you get that out of the way, sure, I guess it could be "alive" per the same loose definition that any electrical system can exist in an actuated state. It's software, though. I don't think it's profound or overly confident to say that we very clearly know these systems are inanimate and non-living.

persnickety
5 replies
9h1m

This argument got long in the tooth, but suffice to say that you can analyze a human in exactly the same way, breaking one down to the basic physics and chemistry (and what does "artificial" mean, anyway?).

Then there's the concept of panpsychism.

intended
4 replies
8h31m

I would very strongly urge people to actually work and build tools with LLMs.

It's readily apparent after a point that the previous poster is correct, and that these are simply tools.

They are impressive tools yes. But life they are not. At best, they are mimics.

persnickety
1 replies
8h16m

I can't help but notice that you backed up your assertions with nothing objective at all.

intended
0 replies
7h32m

Effort commensurate with the depth of the thread and the effort of others.

I have longer comments in my history, you can read those if you must.

Besides, why wouldn't someone build things? This is HN.

naasking
1 replies
3h55m

But life they are not. At best, they are mimics

A mimicry of fire is not fire because fire is a reaction between matter. A mimicry of intelligence may well be intelligence, because intelligence is just information processing.

We're not talking here about "life", but intelligence and "internal life", aka feelings, thoughts, etc.

Edit: as for the argument that "LLMs are simply tools", it has been quite easy for our ancestors to also dehumanize other people and treat them as tools. That was a core justification of slavery. This is simply not a convincing argument that LLMs lack an internal life. People fool themselves all of the time.

intended
0 replies
2h30m

I haven't read it, but Blindsight did a very good job of explaining LLMs: https://en.wikipedia.org/wiki/Blindsight_(Watts_novel)

I am saying that LLMs DON'T process information, not the way people are implying.

The best I can suggest is to please try building with an LLM. It's very hard to explain the nuance, and I am not the only one struggling with it.

naasking
1 replies
3h50m

Yes, an LLM is math. Particle physics is also math, and that makes up all of existence. Furthermore, I'm not sure why "artificiality" has any bearing on the question of whether something is alive, whatever that means.

I'm also not even sure why you're talking about "alive" or "inanimate". From context it's clear we're talking about "internal life", meaning experience, feelings, thoughts, etc., not some ability to procreate, or even necessarily some continuity of existence.

smoldesu
0 replies
1h47m

it's clear we're talking about "internal life", meaning experience, feelings, thoughts, etc.

So do you have any evidence ofthatexisting, then? Because so far, it sounds like speculation.

LLMs do not feel or think or experience anything, in the same way that particle physics is not inherently sentient. LLM architectures are not designed to resemble life, or even understand human language for the most part. It's understandable that someone would look at written text and say "it's a sign of life!", but Markov Chains and LISP bots were no closer to achieving "internal life" or what have you.

Next, you'll play the "but how are us humans alive?" card. Humans not only qualify as animate life, but our brains are not artificial purpose-built tools. We are an imperfect computer, loaded with irreconcilable differences and improper conduct. We feel biological needs and immediate responses to stimuli. Our preconception of 'love' and 'hate' is not built on the words they're associated with, but the emotions they make us feel. Our lived experience is not limited to a preserved context, but rather our whole lives stretching across years of existence. Our thoughts are not dictated by what we write and say, and the way we present ourselves is not limited by the way we are finetuned.

Human life is self-evident and precious. AI life is seemingly non-existent, and based on dubious non-technical claims.

lordnacho
1 replies
9h17m

What about a hypothetical brain simulator? That would also be a software/hardware mash, with known mathematics. Would that be alive?

smoldesu
0 replies
1h11m

Now that's a better question! My verdict is no, but I could see where people might disagree.

This is a much harder hypothetical, though. Assuming you get everything right, you'd have a pretty powerful simulation at-hand. I think what you're describing is the difference between an artificial lung and a real-life cloned sheep embryo, though. An iron lung is only alive in the sense that it simulates a complex biological process with technology. It's damn impressive and very capable, but living respiration looks and behaves differently. Just simulating the desired input and output won't give you de-facto biology.

bennyelv
1 replies
9h50m

A few counterpoints to offer...

An LLM is math

Isn't everything math? Math is what we use to describe and model the nature of our reality.

Those things aren't alive

We don't have a universal definition of alive, or at least I'm not aware of one that isn't purely philosophical or artistic. We therefore can't actually determine what is and isn't alive with much confidence.

both the software and hardware used to facilitate it is artificial

Define artificial? We're made of exactly the same stuff. Complex chemical elements that were created from more basic ones in collapsing stars, arranged in a particular configuration with electricity flowing through it all and providing some kind of coordination. We're definitely a bit more "squishy", I'll give you that.

smoldesu
0 replies
1h20m

A few responses in kind:

Isn't everything math? Math is what we use to describe and model the nature of our reality.

Most things are measurable, yea. LLMs in particular rely on math to understand language. Outside of a probabilistic context, LLMs do not understand words or sentences. That's... not really how you or I think. It's not how we understand animals or even plants to behave. Probabilism in nature gets you killed; deviation and unique fitness is what keeps the Darwinian wheel rolling.

We therefore can't actually determine what is and isn't alive with much confidence.

Our speculation does a good enough job, I'd say. Sand and rocks aren't alive. Carbon isn't alive unless it is dictated by biological autonomy. Silicon chips aren't "alive" outside the sense that copper wires and your wall outlet have a lived experience. Software has a "lifecycle", but that's more used to describe human patterns of development instead of software behavior.

When you roll all this up into one scenario, I see a pretty airtight argument that AI is not animate. If someone does prove that AI is a living organism, the implications would honestly be more dire outside of AI. It would overturn all modern understanding of software, which is why I figure it's not likely to be the case.

slibhb
2 replies
2h45m

From that perspective, we can't say that a rock lacks an internal life. That seems silly to me. Perhaps we can't know, but we can approach certainty.

naasking
1 replies
2h21m

From that perspective, we can't say that a rock lacks an internal life

That's what panpsychists believe. Regardless, this is a silly comparison. Show me a rock that can carry an intelligent conversation and then you might have a point.

slibhb
0 replies
1h30m

Using a rock as an example makes sense given the post I replied to. The argument that we lack a "mechanistic model of qualia" has nothing to do with behavior.

That aside, we should be close to certain that LLMs don't have internal lives. That computer programs can mimic human behavior in some contexts is a tremendous discovery but it's no reason to assume an internal life. Conversely, many animals can't mimic human behavior but clearly have internal lives.

The capacity for an internal life evolved over a billion odd years. It has something to do with our bodies and if we want to understand it, we should focus on biology and neuroscience. A machine with an internal life would need hardware designed or evolved for that purpose and modern computers lack that.

IshKebab
0 replies
8h26m

Does it feel pain?

In a sense, yes! But to understand that you will first have to precisely define "pain". Good luck.

dsign
3 replies
19h9m

Exactly. To that I’m going to add that the blood of our civilization is culture (sciences, technology, arts, traditions). The moment there is something better at it than humans, it’s our “horse-moment”.

jtr1
2 replies
15h13m

Simplifying greatly, but 1. LLMs “create” these cultural artifacts by recombining their inputs (text, images, etc), all of which are cultural artifacts created by humans 2. Humans also create cultural artifacts by recombining other cultural artifacts. The difference is that they combine another input which we can’t really prove AI has: qualia, the individual experience of being in the world, the synthesis of touch and sound and feeling and perspective.

I’m not saying computers can’t have something like it, but it would be so fundamentally different as to be completely alien and unrelatable to humans, and thus (IMO) non-contiguous with human culture.

dsign
0 replies
10h20m

The difference is that they combine another input which we can’t really prove AI has: qualia, the individual experience of being in the world, the synthesis of touch and sound and feeling and perspective.

I wish we could keep this as an immutable truth, but give some sick girls and boys in Silicon Valley a few more years and they will make true creatures. No, I think that we should be honest with ourselves and stop searching for what makes us special (other than our history, and being first, that is). It's okay to be self-interested and say "we want to remain in control, AIs should not be, no matter how much (or if) better they are than us."

Terretta
0 replies
14h48m

Oh! TY for this thought.

drew-y
3 replies
17h42m

Totally agree with the sentiment. I find the constant debates on "is AI conscious" or "can AI understand" exhausting. You can't have a sound argument when neither party even agrees on a concrete definition of consciousness or understanding.

Regarding this line:

ChatGPT can already tell you a lot about itself (showing awareness) and will gladly walk you through its “thinking” if you ask politely.

Is it actually walking you through its thinking? Or is it walking you through an imagined line of thinking?

Regardless, your main point still stands. That a program doesn't think the same way a human does, doesn't mean it isn't "thinking".

xigency
2 replies
17h12m

Is it actually walking you through its thinking? Or is it walking you through an imagined line of thinking?

You can prompt an LLM to provide reasoning first and an answer second, and it becomes one and the same.

Worth keeping in mind that all of these points are orthogonal to the quality of reasoning, the bias, or the intentions of the system builders. And when building something that emulates humans convincingly, you can expect it to emulate both the good and the bad qualities naturally.
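
For illustration, here is a minimal sketch of the "reasoning first, answer second" pattern. The prompt wording and the commented-out client call are hypothetical placeholders, not any particular vendor's API:

  # Chain-of-thought style prompt: ask for the reasoning before the final answer.
  prompt = (
      "Question: A train leaves at 3:40pm and the trip takes 2 hours 35 minutes. "
      "When does it arrive?\n"
      "First, write out your step-by-step reasoning.\n"
      "Then give the final answer on a line starting with 'Answer:'."
  )
  # response = llm_client.complete(prompt)  # hypothetical call to whatever LLM client is in use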

intended
0 replies
8h23m

why does performance improve after chain of thought prompting?

Because a human is measuring it unfairly.

The output without CoT is valid. It is syntactically valid. The observer is unhappy with the semantic validity, because the observer has seen syntactic validity and assumed that semantic validity is a given.

Like it would if the model was alive.

This is observer error, not model error.

Davidzheng
0 replies
15h39m

In split-brain experiments (where the corpus callosum is cut), sometimes the half which is nonverbal is prompted and the action is taken. Yet when the experimenter asks for the explanation, the verbal half supplies an (incorrect) explanation. How much of human reasoning when prompted occurs before the prompt? It's a question you have to ask as well.

syndacks
0 replies
16h44m

Sarah Connor was right!

john-radio
0 replies
13h42m

The field of philosophy is already rife with ideas from hundreds of years ago that an artificial intelligence can use against people in debates about free will, self-determination and the nature of existence. This isn't the battle to pick.

Uh, wouldn't that apply equally to any other topic that has been argued extensively before, and to any tenable position on those topics? Like, I can make ChatGPT argue against your LLM sentience apologist-bot just as easily.

dotforest
0 replies
9h41m

“And it’s a word that ChatGPT or Llama can use in an English sentence better than billions of humans worldwide.”

I think the whole point the article is making, though, is that overvaluing LLMs and their supposed intelligence because they excel at this one axis of cognition (if you’re spicy) or mashed-up language ability (if you’re a skeptic) doesn’t make sense when you consider children, who are not remotely capable of what LLMs can do, but are clearly cognizant and sentient. The whole point is that those people—whether they can write an essay or not, whether they can use the word “consciousness” or not—are still fundamentally alive because we share a grounded, lived, multi-sensory, social reality.

And ChatGPT does not, not in the same way. Anything that it expresses now is only a mimicry of that. And if it eventually does have its own experiences through embodied AI, I’d be interested to see what it produces from its own “life” so to speak, but LLMs do not have that.

xianshou
18 replies
20h15m

Pour one out for the decline of human exceptionalism! Once you get material-reductionist enough and accept yourself as a pure function of genetics, environment, past experience, and chance, this conclusion becomes inevitable. I also expect it to be the standard within a decade, with AI-human parity of capabilities as the key driver.

dekhn
12 replies
20h13m

We'll see! I came to this conclusion a long time ago, but at the same time I do subjectively experience consciousness, which in itself is something of a mystery in the material-reductionist philosophy.

idle_zealot
7 replies
20h1m

I hear this a lot, but is it really a mystery/incompatible with materialism? Is there a reason consciousness couldn't be what it feels like to be a certain type of computation? I don't see why we would need something immaterial or some undiscovered material component to explain it.

kaibee
3 replies
19h56m

Well, the undiscovered part is why it should feel like anything at all. And this is definitely relevant because consciousness clearly exists enough that we exert physical force about it, so it's gotta be somewhere in physics. But where?

idle_zealot
1 replies
19h34m

Why does it need to be in physics? What would that look like, a Qualia Boson? It could be an emergent property like life itself. Physics doesn't "explain" life, it explains the physical mechanisms behind the chemical interactions that ultimately produce life by virtue of self-replicating patterns spreading and mutating. There is no Life in physics, and yet we see it all around us because it emerges as a complex result of fundamental rules. My expectation is that experience is emergent from computation that controls an agent and models the world around that agent, and that experience isn't a binary. A tree is more aware than a rock, an ant more than a tree, a squirrel more than an ant, and so on until you reach humans, but there may not be any real change in kind as the conditions for experience become more developed.

I guess my real point is that I don't view experience or consciousness as something special or exceptional. My guess is that over time we come to understand how the brain works better and better but never find anything that definitively explains consciousness because it's totally subjective and immeasurable. We eventually produce computers complex enough to behave totally convincingly human, and they will claim to be conscious, maybe even be granted personhood, but we'll never actually know for sure if they experience the world, just like we don't know that other humans do.

PH95VuimJjqBqy
0 replies
13h44m

If life is an emergent property of physics then it's replicable which means it's not spiritual.

robwwilliams
0 replies
19h32m

Toward the end of Hofstadter’s Gödel, Escher, Bach there is a simple and plausible recursive attentional explanation, but without any serious neuroscience. The thalamo-cortico-thalamic system is a massively recursive CNS system. Ray Guillery’s book The Brain As A Tool is a good introduction to the relevant neuroscience, with a hat tip to consciousness, written long after GEB.

beezlebroxxxxxx
1 replies
19h34m

Why is consciousness computation? What does it even mean to say something feels like being "a type of computation"?

The concept of consciousness is wildly more expansive and diverse than computation, rather than the other way around. A strict materialist account or "explanation" of consciousness seems to just end up a category error.

I take it as no surprise that a website devoted to computers and software often insists that this is the only way to look at it, but there are entire philosophical movements that have developed fascinating accounts of consciousness that are far from strict materialism, nor are they "spiritual" or religious, which is a common rejoinder by materialists; an example is the implications of Ludwig Wittgenstein's work from Philosophical Investigations and his analysis of language. And even in neuroscience there is far from complete agreement on the topic at all.

ux-app
0 replies
13h25m

Why is consciousness computation?

if there is no duality, then what else can it be?

there are entire philosophical movements that have developed fascinating accounts of consciousness that are far from strict materialism

philosophers can bloviate all day. meanwhile the computer nerds built a machine that can engage in creative conversation, pass physician and bar exams, create art, music and code.

I'm not saying there's nothing to learn from philosophy, but gee you have to admit that philosophers come up a little short wrt practical applications of their "Philosophical Investigations"

dekhn
0 replies
19h48m

I mean, if you're a compatibilist, there's no mystery. In that model, we live in a causally deterministic universe but still have free will. I would say instead "the subjective experience of consciousness is an emergent property of complex systems with certain properties". I guess that's consistent with "the experience of consciousness is the feeling of a certain type of computation".

Personally these sorts of things don't really matter to me- I don't really care if other people are conscious, and I don't think I could prove it either way- I just assume other people are conscious, and that we can make computers that are conscious.

And that's exactly what I'm pushing for: ML that passes every Turing-style test that we can come up with. Because, as they say "if you can't tell, does it really matter?"

throwaway4aday
2 replies
19h59m

It makes perfect sense if you can disprove your own existence

Vecr
1 replies
10h9m

You can't disprove your existence, I'm pretty sure that does not make any sense. "I think therefore I am." No need for free will of any type there, and a currently existing computer could probably do the same thing.

NoGravitas
0 replies
18m

You can prove that "thinking is happening", but when you go looking for the "I", you'll never find it — only more thoughts. Gautama, Hume.

corethree
0 replies
13h0m

We can't even define what that experience means. From all the information we have the experience is most likely just a physical manifestation of what the parent poster describes.

jancsika
2 replies
19h52m

I'm not convinced that this material-reductionist view wouldn't just be functionally equivalent to the way a majority of citizens live their lives currently.

Now: a chance encounter with someone of a different faith leads a citizen to respect the religious freedom of others in the realm of self-determination.

Future: a young hacker's formative experience leads to the idea that citizens should have the basic right to change out their device's recommendation engine with a random number generator at will.

Those future humans will still think of themselves as exceptional because the AI tools will have developed right alongside the current human-exceptionalist ideology.

Kinda like those old conservative couples I see in the South where the man is ostensibly the story teller and head of household. But if you listen long enough you notice his wife is whispering nearly every detail of importance to help him maintain coherence.

PH95VuimJjqBqy
1 replies
13h46m

Kinda like those old conservative couples I see in the South where the man is ostensibly the story teller and head of household. But if you listen long enough you notice his wife is whispering nearly every detail of importance to help him maintain coherence.

You've never actually seen this happen because it doesn't happen. This is not how real people interact, it's how the caricatures in your head interact.

gotoeleven
0 replies
11h31m

No Jansicka's Tales of Things that Totally Happened in the South is a #1 New York Times best seller.

hawski
0 replies
2h27m

I do not share your view, but partly understand it. I would worry that the "decline of human exceptionalism" will be used by corporations to widen their stance even more. After all corporations are also people (wink).

IshKebab
0 replies
8h22m

The conclusion of the article is that the toddler is not a stochastic parrot. But only because it has a life in the real world. I think she's trying to say that the toddler is more than a stochastic parrot because of real life experiences.

But then I have no idea how she goes on to conclude:

Human obsolescence is not here and never can be.

There's no fundamental reason why you can't put ChatGPT in the real world and give it a real life. The only things that probably will never be replaced by machines are things where our physical technology is likely never going to match biology - i.e. sex.

karmakurtisaani
11 replies
20h14m

Nice article, great presentation.

However, it's a bit annoying that the focus of the AI anxiety is how AI is replacing us and the resolution is that we embrace our humanity. Fair enough, but at least to me the main focus of my AI anxiety is that it will take my job - honestly, I don't really care about it doing my shitty art.

ryandrake
6 replies
20h0m

More specifically, I think we're worried about AI taking our incomes, not our jobs. I would love it if an AI could do my entire job for me, and I just sat there collecting the income while the AI did all the "job" part, but we know from history (robotics) that this is not what happens. The owners of the robots (soon, AI) keep all the income and the job goes away.

An enlightened Humanity could solve this by separating the income from the job, but we live in a Malthusian Darwinian world where growth is paramount, "enough" does not exist, and we all have to justify and earn our living.

ketzo
2 replies
19h23m

I mean, I definitely hear (and feel) a lot of worry about AI taking away work that we find meaningful and interesting to do, outside of the pure money question.

I really like programming to fix things. Even if I weren’t paid for it, even if I were to win the lottery, I would want to write software that solved problems for people. It is a nice way to spend my days, and I love feeling useful when it works.

I would be very bummed - perhaps existentially so - if there were no practical reason ever to write software again.

And I know the same is true for many artists, writers, lawyers, and so on.

sushisource
0 replies
16h58m

The practical reason _is_ that it's fun and you like it. That could be enough.

I'm not too concerned about that being a reality in our lifetimes though.

cmpalmer52
0 replies
2h14m

There’s no practical reason to draw, paint, play music, or write as a hobby.

magneticnorth
1 replies
18h36m

At some point someone said to me, "How badly did we fuck up as a society that robots taking all our jobs is a bad thing."

And I think about that a lot.

bjelkeman-again
0 replies
9h5m

Isn’t the problem that the person loosing the job and income isn’t the same person that owns the robot? If everyone owned a robot and could go to the beach instead it would be nice, except that some would work at the same time and outcompete those that didn’t?

IcyWindows
0 replies
11h36m

You make it sound like we all aren't those owners keeping the income?

Right now, we all own "robots" who spellcheck our words (editor), research (librarian), play music (musician), send messages (courier), etc.

All these jobs are "lost", but at the same time, we wouldn't have had money to pay this many employees to live in our pocket.

djmips
1 replies
18h49m

Your job is your art.

karmakurtisaani
0 replies
2h31m

I live in Europe, so my job is just my job.

nsfmc
0 replies
20h5m

here's another piece in the issue that addresses your concern: https://www.newyorker.com/magazine/2023/11/20/a-coder-consid...

intended
0 replies
8h19m

I don't think that was the point of the article.

I think it was clear about how language is not thought.

That leads to the intrinsic realization that our physical existence, our observance of reality is what is critical.

Also, AI taking jobs at scale is unlikely; the only place it's going to do anything is low level spam and content generation.

For anything which has to be factual, it's going to need humans.

Retr0id
10 replies
20h2m

This is a really beautiful article, and while there are certainly fundamental differences between how a toddler thinks and learns, and how an LLM "thinks", I don't think we should get too comfortable with those differences.

Every time I say to myself "AI is no big deal because it can't do X", some time later someone comes along and makes an AI that does X.

alexwebb2
4 replies
19h6m

There's a well-documented concept called "God of the gaps" where any phenomenon humans don't understand at the time is attributed to a divine entity.

Over time, as the gaps in human knowledge get filled, the god "shrinks" - it becomes less expansive, less powerful, less directly involved in human affairs. The definition changes.

It's fascinating to watch the same thing happen with human exceptionalism – so many cries of "but AI can't do <thing that's rapidly approaching>". It's "human of the gaps", and those gaps are rapidly closing.

allemagne
1 replies
17h11m

"God is dead" is beyond passé in the 2020s, but in the 19th century nobody really needed a "god of the gaps." If a Friedrich Nietzsche equivalent was active today, and let's just say was convinced that AGI was possible, I kind of wonder what generally accepted grand narrative he'd declare dead beyond just human exceptionalism. Philosophy itself?

NoGravitas
0 replies
14m

Philosophy is already a self-destroying enterprise (it lets you think out to the limits of thought). Eugene Thacker's "The Horror of Philosophy" series is probably the best thorough explanation of this.

Barrin92
1 replies
15h28m

I honestly don't know where this is supposed to be happening. I have observed the opposite for decades. When Garry Kasparov was beaten by Deep Blue, people proclaimed the end of chess and went on AI hyperboles. Same with Watson winning Jeopardy, Alexa, etc.

Every time computers outperform humans in some domain this is conflated with deep general progress, panicked projections of replacement and job losses, the end times and so on. People are way faster to engage in reductionism of human capacity and Terminator-esque fantasies than the opposite. Despite the fact that it never happens.

It's even reflected in our popular culture. I can barely recall a single work of science fiction in the last 30 years that would qualify as portraying human exceptionalism. AI doomerism, overestimation and fear of machines is the norm.

alexwebb2
0 replies
14h50m

“but AI can’t create art”

“but AI can’t write poetry”

“but AI can’t do real work”

Panic is probably not warranted. Too much incentive stacked against the truly apocalyptic scenarios. But yeah, a lot of jobs are probably going to shrink.

PaulDavisThe1st
4 replies
17h29m

Then never say to yourself "AI is no big deal because it can't do X".

Say instead (for example) "It is important that I understand the differences between both the capabilities and internal mechanisms of AI and people, even if, over some period of time, the capabilities may appear to converge".

Retr0id
3 replies
16h16m

I'm not terribly interested in truisms, I'm more interested in figuring out what AIs actually cannot fundamentally do, if anything (present or future).

PaulDavisThe1st
2 replies
15h4m

What are the things that humans actually cannot fundamentally do, if anything?

Retr0id
1 replies
13h3m

That's a very large set of things. Calculating 1000000 digits of the decimal expansion of pi within an hour would be just one example.

cozzyd
0 replies
10h14m

To be super cheeky:

  # fetch and unpack Bellard's tpi pi calculator, then compute 1M digits on 8 threads
  wget https://bellard.org/pi/pi2700e9/tpi-0.9-linux.tar.gz
  tar xvf tpi-0.9-linux.tar.gz
  time ./tpi -T 8 -o pi.txt 1M
  # `time` output:
  real 0m0.176s
  user 0m0.500s
  sys 0m0.049s

carlossouza
7 replies
20h33m

Great essay; impressive content.

The fact that it's very unlikely for any of the current models to create something that even remotely resembles this article tells me we are very far away from AGI.

atleastoptimal
3 replies
20h32m

don't underestimate exponentials

mempko
0 replies
14h57m

Exactly. Global Warming and species decline/extinction (we are in the 6th mass extinction) all appear to be exponential. The question is, will we have AGI before we destroy ourselves?

dekhn
0 replies
20h3m

or the power of sigmoids to replace exponentials when the exponential peters out.

breuleux
0 replies
20h8m

Let's not see exponentials everywhere either. Just because things seem to be progressing very fast doesn't mean exponentials are involved; more often than not they are logistic curves.

pcthrowaway
2 replies
20h27m

Did you miss the disclaimer at the bottom that both the visual artwork and the prose were produced by a combination of generative AI tools and creative prompting? The entire seamless watercolor-style piece was just a long outpainting.

iwanttocomment
0 replies
20h22m

I read the article, read your comment, went back to review the article, and there was no such disclaimer. If this is /s, whoosh.

carlossouza
0 replies
20h23m

hahaha that's why I love HN :)

calf
6 replies
19h31m

I think it's reductivism to assume that neural networks cannot emergently support/implement a non-stochastic computational model capable of explicit logical reasoning.

We already have an instance of emergent logic. Animals engage in logical reasoning. Corollary, humans and toddlers are not merely super-autocompletes or stochastic parrots.

It has nothing to do with "sensory embodiment" and/or "personal agency" arguments like in the article. Nor the clever solipsism and reductivism of "my mind is just random statistics of neural firing". It's about finding out what the models of computation actually are.

PaulDavisThe1st
5 replies
17h21m

Imagine that for some reason and by some unknown agency, somebody in 1068 could build something that was functionally equivalent to one of the contemporary LLMs of today (it would likely be mechanical and thus slower, but let's just ignore speed because that's mostly a substrate artifact).

OK, time to train up our LLM.

Oops. Printing not yet invented. Books are scarce, and frequently not copied. We have almost no input data to give to our analog LLM.

Will it be of any use whatsoever? I think we can say with a high degree of confidence that it will not.

However, contrast a somewhat typical human from the same era. Will they suffer from any fundamental limitation compared to someone alive today? Will there be skills they cannot acquire? Languages they cannot learn? It's not quite so easy to be completely confident about the answer to this, but I think there's a strong case for "no".

The data dependence of LLMs compared to humans (or ravens) makes a compelling argument that they are not operating in the same way (or, if they are, they have somehow reduced the data dependency in an incredibly dramatic way, such that it may still be reasonable to differentiate the mechanisms).

intelkishan
3 replies
10h59m

A common counterargument is that organisms already have information stored in their DNA, which in turn has a noticeable impact on their behaviour. Plus the fact that they receive a large amount of sensory data which is often discounted in such debates.

cozzyd
1 replies
10h29m

Great, instead of using a corpus, just use videos from toddlers walking around with a go pro as training data for your AI model and see how far that gets you.

calf
0 replies
8h27m

Well I think toddlers get both offline training from millions of years of evolution (the idea being that evolution itself is nature's very own training algorithm, a computational process that creates genetic information), and online training in the form of nurture and environmental interaction -- and of course AI professors are saying the way to go is to research how to make online training possible for models.

But the crux is that both kinds of "training" - almost a loaded term by now - are open areas of scientific research.

PaulDavisThe1st
0 replies
2h36m

OK, so LLMs (a) have no pre-wiring and (b) have no sensory connection, but they can still spit out remarkably good language, and thus they must be a good model for humans and animals?

calf
0 replies
8h43m

I don't find that a compelling argument :)

If one computational model accepts only assembly code, and another model accepts only C++, it tells us nothing about what is going on inside their black boxes. They could be doing computationally fundamentally equivalent things, or totally different things. We don't know till we look.

My point was really about neural networks in general. The argument is that neural networks include both humans and synthetic ones. The myopic focus on LLMs (because of mainstream media coverage) as they exist now is what limits such arguments.

readams
4 replies
20h22m

I suspect people will argue about whether AIs are truly conscious or just stochastic parrots even as all human-dominated tasks are replaced by AIs.

The AIs will be creating masterwork art and literature personalized for each person better than anything Shakespeare ever wrote or Michelangelo ever sculpted, but we'll console ourselves that at least we're _really_ conscious and they're just stochastic parrots.

lancesells
2 replies
19h34m

The AIs will be creating masterwork art and literature personalized for each person better than anything Shakespeare ever wrote or Michelangelo ever sculpted

I don't think it will. Art is truth and AI is a golem replicating humanity.

esafak
1 replies
18h54m

It depends on what you value in art. Some value only the artifact per se; others the story around it. Unfortunately, I don't see too many complaints that image generators have no human artist behind them.

lancesells
0 replies
17h3m

I've yet to see too much art from image generators. For sure content, but not art.

101008
0 replies
20h19m

Masterwork is not independent of its time, context, etc. It's hard to say AI will be creating masterwork, especially when focused for each person.

ganzuul
3 replies
17h46m

You are an evolutionary being, part of a much greater process than the process in your head. Your mind extends beyond your body and even if your mind was frozen in time like an LLM it would still be alive because it remains in a living environment.

Like so the Earth is alive and the universe would not be complete without you. Rest easy, little bird. One day you will spread the wings of your soul and the wind will do the rest.

Vecr
2 replies
10h2m

Like so the Earth is alive and the universe would not be complete without you.

That's a definitional argument, there is very little actual meaning there. The argument can't discriminate between a baby and a nuclear explosion destroying Manhattan.

ganzuul
1 replies
7h56m

The part about your mind being frozen in time is a lot to take in. It is reflected in concept by spreading your wings and letting the wind alone do the work of flight.

This defines a transcendence of mortal concerns and an ascension to eternity, so it doesn't need to treat a baby and a bomb in a city differently. When we attempt to find peace with AI, this is an appropriate universe of discourse.

Vecr
0 replies
7h0m

I don't have a problem with the "frozen in time" thing, as I have a reasonably good understanding of context windows, the feed-forward nature of the next token predictor, and information erasing computation. I agree that you would still be alive, but I'm not sure being in a "living environment" has any bearing on that.

cperciva
3 replies
20h22m

When my toddler was first learning to talk, we had some pictures of felines hanging on the walls; some were cats and others were kittens.

She quickly generalized; henceforth, both of them were "catten".

djmips
1 replies
18h47m

My toddler understood that places to buy things were called stores and understood eating - so restaurants were deemed 'eating stores'. And we just went with that for a long time, and now that they are grown we still call them eating stores for fun sometimes. :)

liminalsunset
0 replies
14h38m

Interestingly, in Chinese, the actual word for a "restaurant" is often quite literally translated, "meal/food store", or 饭店, and a 店 is just a store.

teaearlgraycold
0 replies
20h1m

Good toddler

_nalply
2 replies
9h50m

I am a father and this story touches me a lot. I have two boys not toddlers anymore. Both go to school. The older one told me laughingly: "That's funny, there's no Father Nature, but Mother Nature!". The younger one learnt yesterday not to touch a cake even if I didn't lock it away. I said: "Tomorrow this cake is for you and for your brother. If you eat it today, you won't have cake tomorrow."

This said, there's an important difference between LLMs and humans: Humans have instincts. The most important ones seem to be the set of instincts concerning an immaterial will for life and a readiness to overcome deadly obstacles. In other words: LLMs don't have the primitive experience of what is life.

This might change in the future. A startup might put neural networks into soft robots and then into a survival situation. Robots that don't emerge functioning are wiped, repaired and put back. In other words, they establish an evolutionary situation. After careful enough iterations of curation they have things that "understand" life, or at least understand it better than current LLM instantiations.

EDIT: typo

misja111
0 replies
9h8m

I see instincts as something like a pre-installed ROM. It shouldn't be too hard to add something like that to an LLM. In fact, I think this is being done already, for instance by hardwiring LLMs not to have racist or sexual conversations.

To make LLMs more similar to humans, we'd need to hardwire them with a concept of 'self', and, importantly, the hardwired idea that their self was unique, special, and should survive. But I think the result would be terrible. Imagine an LLM that would be begging not to be switched off, or worse, trying to subtly manipulate its creators.

NoGravitas
0 replies
22m

Humans and other animals have not only instincts, but also the experience of interaction with the material world. As a result, human language isn't free-floating. We often observe the relationships between things in the world both inside and outside of language, and there's a tie between language as humans use it and the world. That's not true for LLMs, because they only have language.

For a better explanation and responses to objections, consider the National Library of Thailand thought experiment: https://medium.com/@emilymenonbender/thought-experiment-in-t...

sickcodebruh
1 replies
19h56m

One of my favorite experiences from my daughter’s earliest years was the realization that she was able to think about, describe, and deliberately do things much earlier than I realized. More plainly: once I recognized she was doing something deliberately, I often went back and realized she had been doing that same thing for days or weeks prior. We encountered this a lot with words but also physical abilities, like figuring out how to make her BabyBjorn bouncer move. We had a policy of talking to her like she understood on the off-chance that she could and just couldn’t communicate it. She just turned 5 and continues to surprise us with the complexity of her inner world.

marktangotango
0 replies
18h57m

We did this, and I'd add that repeating what they say back to them so they get that feedback is important too. It's startling to see the difference between our kids and their classmates, whose parents don't talk to them (I know this from observing at countless birthday parties, school events, and sports events). Talking to kids is like watering flowers: they bloom into beautiful beings.

og_kalu
1 replies
19h17m

Paraphrasing and summarizing parts of this article, https://hedgehogreview.com/issues/markets-and-the-good/artic...

Some ~72 years ago in 1951, Claude Shannon released his "Prediction and Entropy of Printed English", an extremely fascinating read now.

The paper begins with a game. Claude pulls a book down from the shelf, concealing the title in the process. After selecting a passage at random, he challenges his wife, Mary, to guess its contents letter by letter. The space between words will count as a twenty-seventh symbol in the set. If Mary fails to guess a letter correctly, Claude promises to supply the right one so that the game can continue.

In some cases, a corrected mistake allows her to fill in the remainder of the word; elsewhere a few letters unlock a phrase. All in all, she guesses 89 of 129 possible letters correctly—69 percent accuracy.

Discovery 1: It illustrated, in the first place, that a proficient speaker of a language possesses an “enormous” but implicit knowledge of the statistics of that language. Shannon would have us see that we make similar calculations regularly in everyday life—such as when we “fill in missing or incorrect letters in proof-reading” or “complete an unfinished phrase in conversation.” As we speak, read, and write, we are regularly engaged in prediction games.

Discovery 2: Perhaps the most striking of all, Claude argues that a complete text and the subsequent “reduced text” consisting of letters and dashes “actually…contain the same information” under certain conditions. How?? (Surely, the first line contains more information!) The answer depends on the peculiar notion about information that Shannon had hatched in his 1948 paper “A Mathematical Theory of Communication” (hereafter “MTC”), the founding charter of information theory.

He argues that transfer of a message's components, rather than its "meaning", should be the focus for the engineer. You ought to be agnostic about a message’s “meaning” (or “semantic aspects”). The message could be nonsense, and the engineer’s problem—to transfer its components faithfully—would be the same.

a highly predictable message contains less information than an unpredictable one. More information is at stake in (“villapleach, vollapluck”) than in (“Twinkle, twinkle”).

Does "Flinkle, fli- - - -" really contain less information than "Flinkle, flinkle" ?

Shannon concludes then that the complete text and the "reduced text" are equivalent in information content under certain conditions because predictable letters become redundant in information transfer.

Fueled by this, Claude then proposes an illuminating thought experiment: Imagine that Mary has a truly identical twin (call her “Martha”). If we supply Martha with the “reduced text,” she should be able to recreate the entirety of Chandler’s passage, since she possesses the same statistical knowledge of English as Mary. Martha would make Mary’s guesses in reverse.

Of course, Shannon admitted, there are no “mathematically identical twins” to be found, but, and here's the reveal, “we do have mathematically identical computing machines.”

Those machines could be given a model for making informed predictions about letters, words, maybe larger phrases and messages. In one fell swoop, Shannon had demonstrated that language use has a statistical side, that languages are, in turn, predictable, and that computers too can play the prediction game.
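
To make the "mathematically identical computing machines" point concrete, here is a minimal toy sketch, not Shannon's actual procedure: a tiny character n-gram model plays the guessing game, a reducer replaces every letter the model would have guessed correctly with a dash, and an identical copy of the model runs the guesses in reverse to restore the original text (all names here are illustrative):

  from collections import defaultdict

  class NGramPredictor:
      """Toy order-n character model: predicts the most likely next character."""
      def __init__(self, n=3):
          self.n = n
          self.counts = defaultdict(lambda: defaultdict(int))

      def train(self, text):
          for i in range(len(text) - self.n):
              ctx, nxt = text[i:i + self.n], text[i + self.n]
              self.counts[ctx][nxt] += 1

      def guess(self, ctx):
          options = self.counts.get(ctx)
          return max(options, key=options.get) if options else None

  def reduce_text(text, model):
      """Replace every character the model would have guessed right with '-'."""
      out = list(text[:model.n])  # keep a few seed characters verbatim
      for i in range(model.n, len(text)):
          out.append('-' if model.guess(text[i - model.n:i]) == text[i] else text[i])
      return ''.join(out)

  def expand_text(reduced, model):
      """The 'identical twin': re-run the same guesses to restore the text."""
      out = list(reduced[:model.n])
      for i in range(model.n, len(reduced)):
          ctx = ''.join(out[i - model.n:i])
          out.append(model.guess(ctx) if reduced[i] == '-' else reduced[i])
      return ''.join(out)

  corpus = "twinkle twinkle little star how i wonder what you are " * 50
  model = NGramPredictor(n=3)
  model.train(corpus)

  original = "twinkle twinkle little star"
  reduced = reduce_text(original, model)
  print(reduced)                                   # mostly dashes: the text is highly predictable
  print(expand_text(reduced, model) == original)   # True: the reduced text carried the same information

The highly predictable "twinkle twinkle" collapses to mostly dashes, while an unpredictable string like "villapleach, vollapluck" would survive almost intact, which is the sense in which the predictable message carries less information.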

alexk307
0 replies
15h32m

Fascinating, thanks for explaining that

mempko
1 replies
14h51m

LLMs are like human minds the same way airplanes are like birds. Both birds and airplanes can fly, but very differently. Birds can land on a dime, airplanes can't. Airplanes can carry a lot more weight and go faster.

Similarly, LLMs can do things humans can't. No human has as much knowledge about the world as a typical LLM. However, an LLM can't generalize well outside its training set (recent DeepMind research). An LLM doesn't have memory outside its input. An LLM isn't Turing complete (it runs a fixed number of steps and always terminates).

We are building airplanes, not birds. That much is obvious.

omoide
0 replies
8h13m

We are building airplanes, not birds. That much is obvious.

It does to me as well, but reading the comments here, it doesn't seem that obvious to many people.

dekhn
1 replies
19h55m

I think the position of Gebru, et al, can best be expressed as a form of "human exceptionalism". As a teenager, my english teacher shared this writing by Faulkner, which he delivered as his Nobel Prize speech. I complained because the writing completely ignores the laws of thermodynamics about the end of the universe.

"""I believe that when the last ding-dong of doom has clanged and faded from the last worthless rock hanging tideless in the last red and dying evening, that even then there will still be one more sound: that of man's puny, inexhaustible, voice still talking! ...not simply because man alone among creatures has an inexhaustible voice, but because man has a soul, a spirit capable of compassion, sacrifice and endurance."""

PeterisP
0 replies
18h7m

I don't think that the position of Gebru et al (or even more specifically the stochastic parrot paper) can be dismissed as solely relying on "human exceptionalism". While some of that sentiment arguably is there, the paper does make very valid points about the limitations of what can be learned solely from surface forms without any grounding in reality.

This is partially reflected by later observations from training LLMs. We see that the performance of LLMs increases even on purely language tasks when adding extra modalities such as computer code or images, which, in a sense, bring the model closer to different aspects of reality. We also observe that adding tiny quantities of "experimental interaction" through RLHF can bring features that additional humongous amounts of pure surface-form training data can't. It certainly seems plausible that making a further qualitative leap would require some data from actual causal interaction with the real world (i.e. not replay of data based on "someone else's" actions, but feedback from whatever action the model currently finds most "interesting", i.e. where the outcome is uncertain to the model but with potential for surprise), where relatively tiny amounts of such data can enable learning that large amounts of pure observation can't - just as with the hypothetical octopus in the stochastic parrot paper's thought experiment.

csours
1 replies
18h9m

So what are GPTs missing?

I've got:

- Falsifiability - am I saying something false? Some products have an output checker, but it's a bolt on at this time for unacceptable output, which is not the same thing as falsifying.

- Agency - much ink already spilled

- Context Window size (too small, or need to summarize parts of the context to fit in context window, and know where to go back to for full context)

- Directed Self-Learning (I'm sure there are things like this that people may point out, but some kernel of self-learning isn't there yet)

chewxy
0 replies
12h36m

The GPTs are good at interpolative and extrapolative generalization, owing to the manifold of the training data. They can't do abstractive, inductive, abductive generalizations. And those are the more common forms of generalizations that humans can do.

birdman3131
1 replies
19h15m

Interesting article. However, it becomes very close to unreadable just after the cliffhanger. Given that they use the word 'vertigo' around the point where it all straightens back up, I assume it is an explicit choice, but it feels like a very bad choice for anybody with any sort of reading disorder: very wavy lines of text along with side-by-side paragraphs with little to differentiate them from each other.

bibanez
0 replies
18h10m

When there's a lot of text that represents distress/anxiety, I just skip it. Call me a bad reader, but I get the gist of it, and I think that's more important than the exact details of what it's saying.

barbazoo
1 replies
20h25m

People are not gonna like that they have to scroll so much here :)

ale42
0 replies
18h58m

Usually I don't, but this one I enjoyed... also because it's just pure scrolling, no strange animations that change while you scroll.

atleastoptimal
1 replies
20h35m

I wonder when the rate of improvement of SOTA benchmarks will exceed the rate of improvement of irl early childhood cognitive development

082349872349872
0 replies
20h20m

Jared Diamond mentions several attributes which make a species suitable for domestication. Humans fit all but one: we are almost too altricial to be suitable for economic exploitation.

Imagine the contribution to GNP if we in practice, like Bokanovsky* (by Ford!) in fiction, could greatly reduce the 10-30 year lead time currently required to produce new employees...

Edit: * cf. https://www.depauw.edu/sfs/notes/notes47/notes47.html

andersrs
1 replies
17h31m

With AI devaluing human creativity and thought I wonder what is left to be proud of? One's body? It seems like humans will be valued more based on their looks than anything else. Apps like Tinder, Instagram, OnlyFans have already been pushing in this direction but AI will take it further and devalue the one valuable trait that the less attractive people could compete on. AI will also compete with porn but the tech for replacing physical human bodies is nowhere near as advanced. The winners will be attractive women and maybe plumbers. The losers will be everyone else but especially men.

phist_mcgee
0 replies
17h28m

Hairdressers and telephone sanitisers will eventually get the last laugh.

westurner
0 replies
19h9m

Language acquisition > See also:https://en.wikipedia.org/wiki/Language_acquisition

Phonological development:https://en.wikipedia.org/wiki/Phonological_development

Imitation > Child development:https://en.wikipedia.org/wiki/Imitation#Child_development

https://news.ycombinator.com/item?id=33800104:

"The Everyday Parenting Toolkit: The Kazdin Method for Easy, Step-by-Step, Lasting Change for You and Your Child"https://www.google.com/search?kgmid=/g/11h7dr5mm6&hl=en-US&q...

"Everyday Parenting: The ABCs of Child Rearing" (Kazdin, Yale,)https://www.coursera.org/learn/everyday-parenting

Re: Effective praise and Validating parenting[and parroting]
uoaei
0 replies
17h33m

No because your toddler has vision, audio, and touch sensors attached in order to correlate language use with 3D physical phenomena. Additionally, it has the ability to connect causal explanations of those phenomena with experiments it can perform to test or reject those explanations. And a brain composed of neurons that is exceptionally good at associative learning. LLMs have none of these features.

telepathy
0 replies
15h42m

No, your toddler has qualia

sublinear
0 replies
15h5m

no

raytopia
0 replies
19h56m

Really sweet story.

oh_sigh
0 replies
20h25m

Stochastic parrot was coined by Bender, Gebru, et al., but it was probably "borrowed" from Regina Rini's "statistical parrot", coined 6 months before stochastic parrots hit the scene.

https://dailynous.com/2020/07/30/philosophers-gpt-3/#rini

mattigames
0 replies
10h2m

Monkeys with anxiety find a way to make rocks and lightning do what only they could do before. The monkeys are severely shocked by this development; they fear how soon they will be able to make it do everything only they can do as of now. As usual with every new source of anxiety, these monkeys like to project mysticism and philosophical ramifications onto it.

m3kw9
0 replies
17h13m

It's like a stochastic parrot, but it's so much more than an LLM, if that's what he's asking.

m3kw9
0 replies
17h1m

Artificial intelligence disrupting human intelligence, who would have thought?

johnea
0 replies
18h51m

Wow, it really seems a trend that people are unable to understand/contemplate reality outside of an internet meme.

Believe it or not, reality exists outside of anything you ever read in a whitepaper...

jeswin
0 replies
11h57m

A stochastic parrot, as described in a seminal paper [1] ....

A bit of an exaggeration. The paper reads like a long blog post to me.

[1]:https://s10251.pcdn.co/pdf/2021-bender-parrots.pdf

jddj
0 replies
20h19m

Very nice.

I don't personally follow her all the way to some of those conclusions but the delivery was awesome

gumballindie
0 replies
19h9m

Probably, if they suffer from developmental issues. People are anything but, no matter how much deities of the gullible want to make you think they are.

gniv
0 replies
9h45m

A toddler has a life, and learns language to describe it. An L.L.M. learns language but has no life of its own to describe.

That's an interesting insight that seems true for now. Are there attempts to create LLMs that "have a life"?

fallingfrog
0 replies
15h49m

This is not, not, not good. How am I going to convince my 8 year old that it's worthwhile to learn to draw if an AI can do it for her? Or how to code or play a musical instrument? These skills take decades to perfect. And that means to maintain them they need to be useful and necessary for society.

Look: when a factory is outsourced to a different country, the people who worked there have no way to pass on their skills. In one generation the community knowledge is gone and if you move the factory back, you're starting from scratch. Technology can be lost. It's an emergent thing that comes from clusters of people doing the same thing and learning from each other.

If we stay on the path we are on: only hobbyists will be able to do these things, human effort will be unnecessary, and our minds will degrade and cease to be any use. Despite decades of Hollywood warning us of terminators and HAL 9000, the AIs don't need to attack us or try to kill us off to defeat us. They can destroy us just by trying to help.

dsQTbR7Y5mRHnZv
0 replies
18h51m
dekhn
0 replies
19h52m

If you wonder about toddlers, wait til you have teenagers.

In fact I've observed (after working with a number of world-class ML researchers) that having children is the one thing that convinces ML people that learning is both much easier and much harder than what they do in computers.

burrish
0 replies
3h56m

ok ?

armchairhacker
0 replies
20h13m

Toddlers can learn; LLM "learning" is very limited (fixed-size context and expensive fine-tuning).

BD103
0 replies
20h13m

Ignoring the topic of the article, the artwork and design was fantastically done! Props to whoever designed it :)