What does the cerebellum do?

jbandela1
63 replies
19h22m

The cerebellum has a repeated, almost crystal-like neural structure.

As a software engineer who did neurosurgery residency, my intuition/guess is that the cerebellum is kind of like the FPGA of the brain.

The cerebrum is great for doing very complicated novel tasks, but it takes time and energy. The cerebellum, on the other hand, is specialized in encoding common tasks so it can do them quickly and efficiently. A lot of our motor learning is in fact wiring the cerebellum correctly.

This can actually lead to an interesting amnesia, where a person can learn a skill (cerebellum) but not remember learning the skill (cerebrum). So you could end up with a person who would think that he had never seen a basketball hoop or basketball before but could be doing layups, dunks, and 3 pointers with ease.

jjtheblunt
23 replies
19h13m

What a great comment, seriously.

It just made me start thinking, and then I realized perhaps another analogy is a just-in-time compiler: code or skills used often enough get compiled by your body into native neurological code and stored appropriately.

satvikpendem
14 replies
18h28m

It is always funny to see brain metaphors morph to resemble our current stage of technological development, as the years go by. First it was anima, or hydraulic analogies of spirits and fluid moving through the body. Then it was clocks, the mechanistic processes of the brain. And so on and so on until today we metaphorize the brain to be like computer hardware. In vogue as well is comparing it to neural networks, due to the influence of machine learning and AI today. I wonder what metaphors we will come up with next.

http://mechanism.ucsd.edu/teaching/w12/philneuro/metaphorsan...

chaxor
3 replies
14h27m

The next metaphor would be quantum computers.

There are a few who have started suggesting that quantum mechanics plays a large role in cognition, but very few take them seriously (obviously it has an effect, but likely much can be understood more classically, etc.).

The fact that few are moving toward that style of thinking seems to give a bit more credibility to NNs being closer to the correct model. If spiking NNs take off more, we'll probably see more arguments around that, and if Blue Brain's full in-silico modeling takes off we may see the succinct description given by those studies used to describe ideas. However, to first approximation, NNs and spiking NNs aren't really a bad way to reason about large descriptions of brain dynamics, in many circumstances.

Retric
2 replies
13h38m

There’s zero evidence that there’s anything more quantum mechanical about the brain than a brick. I.e.: physical and chemical interactions that emerge from quantum behavior, but can be modeled just fine without QM.

Instead people seem to just equate two different complex things they don’t understand with each other.

chaxor
1 replies
11h20m

Your comment has the feel of a rebuttal, but I hope it's clear that the original comment effectively takes this same stance.

Retric
0 replies
3h37m

I didn’t disagree with what you said, but I think some people may have misinterpreted it.

AngaraliTurk
2 replies
8h45m

In all honesty, I believe the reverse is true. Our technology seems modeled after humans and the environment we inhabit. Airplanes being glorified birds, wheels being glorified feet, computers being glorified brains or neural networks...well.

hoseja
1 replies
5h8m

It's hard to imagine two objects in the vehicle-ground-interface conceptual space much farther away from one another than feet and wheels.

AngaraliTurk
0 replies
4h31m

Try to spend more time imagining and entertaining that thought. They both have the same function but execute it differently.

patcon
1 replies
12h34m

Heh, not sure if you're aware, but our brain seems to have special treatment or logic for contextualizing "high technology", as indicated by one well-documented failure mode: the "influencing machine", a feature of schizophrenia involving a delusion that contemporary high technology (magnets, pneumatics, gears, mind-control drugs, satellites, probably AI now, etc.) is being used by mysterious attackers to control the sufferer's body and mind: https://en.m.wikipedia.org/wiki/On_the_Origin_of_the_%22Infl...

Though not mentioned in the post, schizophrenia (oddly enough) is also tied to cerebellar dysfunction: https://neuro.psychiatryonline.org/doi/10.1176/jnp.12.2.193#...

meindnoch
0 replies
8h43m

But air looms are real: https://www.theairloom.org/

Dylan16807
1 replies
16h42m

Though it's not like we're flitting from one bad analogy to another. Hydraulics are a great metaphor for understanding how computers work, for example.

shagie
0 replies
14h54m

Some 50 year old cartoons about said topic... https://github.com/larsbrinkhoff/crunchly

passion__desire
0 replies
17h56m

I always liken reality to the fractal boundary of the Mandelbrot set, and our attempts to understand it through language and metaphors to approximating and fitting that boundary. Consider the successive colored stripes in the following video as the successively updated, more accurate metaphors.

https://youtu.be/u_P83LcI8Oc?si=ObkNyUfCCSUCb0Vt

interstice
0 replies
16h43m

I wonder if each iteration gets closer as we go

function_seven
0 replies
13h24m

I always thought neural networks were an example of the analogy working the other direction. Instead of modeling our brain on the technology of the time, we chose to model the next technology on how we think our brains work?

NKosmatos
7 replies
18h53m

Both are spot-on examples of what the cerebellum does. If I may, a third example/analogy that comes to mind is cache memory or L2ARC drives, at least that’s how I have it stored in my mind (pun intended) :-)

rzzzt
6 replies
18h38m

"The brain is like a computer that"-style analogies are rarely fitting, or so vague as to being almost useless. My fridge is an L2 cache for food I want to eat soon.

yjftsjthsd-h
1 replies
17h18m

My fridge is an L2 cache for food I want to eat soon.

I would think fridge is RAM, L2 is table, L1 is plate. (I am deliberately ignoring the pun potential for Cold Storage.)

But other than bickering about the exact mapping, I don't see the problem with that analogy?

doubled112
0 replies
13h8m

Cold storage is my freezer. There's often a delay between when I need it and when it is ready.

xattt
0 replies
18h23m

My fridge is an L2 cache for food I want to eat soon.

A less-volatile form of L2 cache.

Food pocketed in my cheeks is the CPU registers?

kazinator
0 replies
18h16m

Some fridges are like magnetic tapes, with files from 1978.

chaxor
0 replies
14h36m

What's wrong with the fridge analogy?

It's an analogy for a reason. It bothers me when people combat analogies so incredibly hard. Of course, it is not really fitting or the same thing - it's an analogy.

burke
0 replies
18h29m

It would be a useful analogy for someone intricately familiar with computers but who was only sort of vaguely familiar with the concept of eating, has thought about houses only on occasion, and knows about refrigerators only insofar as they’re a food-related thing inside a house.

throw1234651234
12 replies
18h34m

I am way more interested in how you gave up $800,000+ a year to do software engineering.

p1necone
3 replies
17h46m

$800,000 is a lot of money, even after taxes you'd only have to work a few years at that salary before you could live reasonably comfortably for the rest of your life without working at all.

Seems perfectly reasonable to switch to a lower-stress career at some point.

Spooky23
2 replies
14h45m

The personalities attracted to the role aren’t really amenable to thinking that way.

aziaziazi
0 replies
8h44m

People do change, especially when exposed to a stressful environment for a long time. Ask a post-burnout fellow. Fortunately, most re-evaluate their life before burning out.

astrange
0 replies
9h29m

There's also lifestyle inflation, and of course retiring early rarely impresses your spouse.

naveen99
3 replies
15h45m

Neurosurgery residency is very, very, very intense. Unfortunately not everyone finishes. When I was in medical school, I remember some general surgery residents quitting after falling asleep in the middle of an operation; another neurosurgery resident I rotated with was pretty miserable, and I found out later he quit. I would have liked to be a neurosurgeon, but simply didn’t have the physical stamina.

I ended up becoming a radiologist. I never heard of a radiology resident quitting, although I have seen a few residents get kicked out for mental issues or gross incompetence.

bsder
2 replies
15h35m

gross incompetence

How on Earth do you get to "resident" and have "gross incompetence"?

There are sooo many gates before getting to be a "resident" that this completely baffles me.

devilbunny
0 replies
13h30m

It happens. Often incompetence is specific to one specialty - neurosurgery is competitive, so you can assume that anyone who gets it has at least adequate grades/test scores. But that doesn't mean that they're clinically worth a damn.

I'm an anesthesiologist. There are people who wash out because they just don't have the temperament for it. They're not dumb, they're not even bad doctors, they just aren't mentally equipped to sit back and relax while running a code.

ChainOfFools
0 replies
15h21m

Undergrad/premed: you live with family, have 100% 24/7 familial support for your education, and all of your essential basic living needs are taken care of at home.

Residency: you move to a different location away from family, are no longer living in a dormitory environment with the expectations associated with being a student, and are now a real adult making your way in the world. Suddenly you have to make the whole package work on your own, without laundry/cooking/mental health/financial support.

Now you can no longer put 100% of yourself into your studies, but instead can only manage the 60 or 70% that most people can muster when they have to actually maintain their physical existence while also meeting their professional expectations.

lostlogin
2 replies
18h5m

Obviously not OP and not in this position, but I have worked with people who left surgical training positions. Their reasons were health and the realisation that they would miss every family milestone, never get a real break, and have every part of their life revolve around their job, with the money and god-like power not compensating for that.

Obviously that’s one side of the equation, I don’t have any surgeon friends I know well enough to give the opposing view.

lacrimacida
0 replies
17h56m

Some can do it. Are they really different from the rest? Perhaps a higher tolerance to stress, or even thriving in it?

dogmatism
0 replies
17h56m

OP said "did" implying finished. It's a six year residency minimum, though the first year is general surgery. It's not often people do the whole damn thing, then decide to bail. Usually it's after 2 or 3 years

Though some people are less burdened by golden handcuffs and sunk-cost fallacy

Plus, I've never met a happy (or sane) neurosurgeon

xmprt
0 replies
11h52m

Assuming they only got as far as their residency (and didn't end up as an attending physician), it's possible that they didn't see themselves spending a full 7 years as a resident doctor (making under $100k/year working 80+ hour weeks) only to spend the rest of their lives doing more of the same, except with a much higher salary. If they already graduated their residency then the reasoning is the same, except it'd be a much harder decision because of the sunk cost.

echelon
11 replies
18h48m

What signals cause rewiring of the cerebellum?

Is there any way to induce that state exogenously?

Some thoughts include dopamine / pain receptor reinforced learning. Maybe there's a faster way?

armada651
8 replies
18h27m

What signals cause rewiring of the cerebellum?

Any signal, that's the point. The cerebellum learns the patterns of signals involved in motor control.

This is why you train your skills by doing the correct movement over and over again. Once the cerebellum has adjusted to the correct motor signal patterns the correct movement will become effortless.

echelon
7 replies
17h0m

This is why you train your skills by doing the correct movement over and over again.

Yes, but can you make this go faster?

mjan22640
1 replies
10h25m

The pattern of muscle activation timing for the correct movement form needs to be figured out by exploring the space.

echelon
0 replies
3h58m

We don't typically efficiently explore the space. This is why coaches exist.

The feedback loops are often long. Getting a review on a performance, etc.

If a device were set up to trigger pain within some milliseconds of an incorrect activation, surely we could speed this up?

Spooky23
1 replies
14h44m

Train harder. Develop habits.

echelon
0 replies
14h34m

I think we're in the "punch card" phase of biology. I'd be willing to bet (timeline uncertain) that there will be shortcuts to this process.

For now, opportunity cost rules the day. I'm 120% maxed out.

Balgair
1 replies
13h39m

You'd have to apply an adverse stimulus within a ~5ms threshold to actions that were 'wrong'. It would depend on the exact task you're trying to do, though. That would then cause other areas to potentiate that specific movement/firing as incorrect.

It's an active area of research in sports and the DoD, as you'd theoretically be able to train marksmen and athletes at a much faster and better rate. However, even really, really fast computers aren't quite fast enough to apply the adverse stimulus to 'wrong' movements/firing.

Also, your computer had better be really accurate and never mess up, or that person is going to have a hell of a time retraining their brain. Also, their brain may decide that the clouds/temperature/itchy grass/breakfast are the reason for the adverse stimulus, as this is all happening in a subconscious time frame. So, good luck there.

echelon
0 replies
4h1m

This is great info, thanks!

Jensson
0 replies
16h46m

Pain or satisfaction, probably.

When a motion "feels good", or is painful, that probably means you're learning. So chase those.

willy_k
0 replies
15h24m

Psilocybin and other psychedelics may be that exogenous agent: they increase brain-derived neurotrophic factor (BDNF) [0], which plausibly could cause “rewiring of the cerebellum” [1][2], and may even do this at sub-perceptible (micro)doses [3].

[0] https://pubmed.ncbi.nlm.nih.gov/37280397/

[1] https://www.nature.com/articles/s41386-022-01389-z

[2] https://www.frontiersin.org/articles/10.3389/fpsyt.2021.7246...

[3] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8033605/

Podgajski
0 replies
17h44m

I’m going to say it’s all about the glutamate.

https://www.nature.com/articles/s41467-023-38475-9

eurekin
2 replies
18h46m

A lot of our motor learning is in fact wiring the cerebellum correctly.

That got me interested: since the wiring is so long (from limbs to cerebellum), what kinds of motor learning?

Do we know if the cerebellum needs more energy than the rest of the brain?

Sharlin
0 replies
1h12m

What do you mean? The cerebellum is closer to the spinal cord than the rest of the brain is. And there's no learning happening anywhere but the brain; vertebrates don't have a distributed central nervous system like octopuses do. The only thing vertebrate limbs can do on their own is certain hardcoded reflex actions.

LoganDark
0 replies
16h45m

That got me interested: since the wiring is so long (from limbs to cerebellum), what kinds of motor learning?

I'd imagine it's things like training a dominant hand. The skills required for precise motor control, to produce the right movements for e.g. handwriting. Since the wiring is so long, and feedback is delayed, you need to be able to precalculate these movements.

Also imagine how e.g. an intent to move somewhere actually gets implemented. You don't always have to think about each individual step of walking, or pay explicit attention to things like your sense of balance. You probably don't even have to choose that you're going to walk, or think about how to get up. When you want to go somewhere, you just do it, and somehow it's all calculated for you and happens.

When you try to move a specific limb, how do you know which muscles correspond to that limb? In fact, how many of those muscles can you even individually address? You can learn to individually address them, but I bet you don't come with that ability by default.

Then of course there's the question of what even causes your limbs to move once you will them to move.

agumonkey
2 replies
16h14m

I thought the cerebellum predated the rest?

Out of curiosity, since I have the kind of weird symptoms not far from what you describe as weird amnesia, do you know of any books/resources to understand advanced brain neurology like this?

Thanks in advance

Balgair
1 replies
13h50m
agumonkey
0 replies
6h14m

Thanks a lot

apwell23
1 replies
18h17m

So you could end up with a person who would think that he had never seen a basketball hoop or basketball before but could be doing layups, dunks, and 3 pointers with ease.

I know a lot of people with the opposite problem.

Wherecombinator
0 replies
6h23m

Thanks for that. Don’t often laugh at a comment

samstave
0 replies
15h19m

mk-ultra project monarch

jvanderbot
0 replies
18h32m

Yeah, the discussion of classical conditioning led me to the same sort of conclusion. The fact that the cerebellum has been growing faster in human-like primates as a percentage of our already larger brains, well, I can't help but think that all our social reactions, drives, and complex needs are essentially some kind of co-option of this FPGA for optimization purposes. Like the cerebrum does training and the cerebellum et al. do evaluation.

hyperthesis
0 replies
15h28m

Its striated structure matches sequential timed operations.

codetiger
0 replies
13h49m

This comment is the best example of why I come to read comments in HN.

LoganDark
0 replies
16h56m

Do you think this could be responsible for some part of "muscle memory"? Sometimes when I switch to other identities (DID), they can forget steps. That presumably happens because those steps are automatic for me, so I don't have to think about them, but when others try to do the same thing (not think about them), the automatic thing doesn't happen, and they end up missing the step entirely. They have to remind themselves to think consciously even about things that are normally automatic for me, because they don't have the same muscle memory.

I also wonder if neurodivergency affects this region. I'm autistic, so my brain is detail-oriented. Sometimes it feels like I can perceive "neural circuits" that are implemented by the so-called FPGA. When I have a compulsive behavior or trigger, I can sometimes observe the entire execution flow, not just the result. I think that's neat.

AndrewKemendo
0 replies
18h19m

This is great and I agree with the reprogrammable part.

However, think of it more like a switch or a router between the PNS/CNS (minus vision) and the “higher brain” longer-term planning systems.

akavi
16 replies
13h31m

This is fascinating to me. The list of things the author suggests the cerebellum handles is a tailor-made list of things I'm oddly bad at:

1. I'm very uncoordinated, with a noticeable intentional tremor

2. I'm particularly bad at sequencing dependencies for projects/errands/household tasks. I have to write down even fairly simple sequences of subtasks or get lost in yak-shaving loops

3. When flustered, I make very distinctive disfluencies in speech around conjugations (swapping "but"/"and"/"although") and sequencing (i.e., placing objects before subjects/verbs) in sentences, as well as swapping relationships (referring to someone's parent as their child or vice versa, swapping "you" and "I/me", etc.)

4. I tend to have a "ground up" approach to writing (building clauses first and then moving them around to construct sentences), which doesn't resemble the approach of other people I've shoulder-surfed.

All of these are fairly mild in terms of life impact, to be clear (or perhaps I'm just able to satisfactorily compensate for them in various ways) but I wonder if they all share some underlying minor cerebellar dysfunction.

weinzierl
11 replies
10h32m

I can't help thinking that this is where humanity is heading. Given that the different parts of the brain have to compete for resources, and given what the cerebellum does, it makes sense that a less developed one can be an advantage: it frees up resources for parts of the brain that are more important in our times.

NhanH
4 replies
8h25m

Human society provides nowhere near enough pressure for evolution to have an effect (for good reasons), and our timeline as a species is nowhere near long enough so far either. So I don’t think we are heading anywhere in that regard.

pi-e-sigma
1 replies
2h22m

How would you explain the increasing average height of US youth over the last 30 years? It can't be food availability, because 30 years ago people weren't hungry either.

larkost
0 replies
40m

Absolutely food, or at most a hormonal change based on food. Possibly with a side-order of lacking a few of the childhood diseases that have been controlled with inoculations. 30 years is WAY too small a timeframe for human evolution. That works on the order of 50-100 generations, not a single generation.

sorokod
0 replies
6h48m

I agree but on the flip side it is interesting to consider what were the pressures and the timeline to bring us to where we are now.

Great ape brains are distinguished from monkey brains by their larger frontal and cerebellar lobes. The Neanderthals had bigger brains than us but smaller cerebella. And, most strikingly, modern humans have much bigger cerebella than “anatomically modern” Cro-Magnon humans of only 50,000 years ago (but relatively smaller cerebral hemispheres!)

hyperthesis
0 replies
4h49m

Selective reproduction is evolutionary pressure; survival is mere prerequisite.

kiba
2 replies
2h47m

Movement is very important to our health. People who don't move often just die younger than those who do.

I don't know how it would impact our evolution long term.

timeagain
1 replies
2h21m

Doesn’t matter if you live to 55 or 95 if you have some kids when you’re 25.

yjftsjthsd-h
0 replies
2h6m

I don't think that's correct; raising your kids - and potentially involvement with grandkids! - impacts long term results.

helboi4
1 replies
8h8m

I'm not sure about that. Being unable to anticipate context sounds terrible for pretty much any task. Having to think through every step of anything is terrible. Not being able to form sentences fluently and instead having to arrange them like puzzles is far too time consuming. The article literally is explaining how a large cerebellum is crucial for humans' high intelligence. Reallocating resources to other parts of the brain would make us stupider.

altruios
0 replies
1h41m

Having no context does allow for a fresh perspective...

Having to think things through slowly, step by step, may reveal errors others glossed over...

The cerebellum is important. But maybe, since we really know less about the brain than we think we do, differently wired does not equate to 'stupider'. It takes all sorts to make a world.

I agree with the language part, tedious... but maybe in certain situations it might be useful.

close04
0 replies
8h0m

I imagine it's like performing a task in software ("big" brain, cerebrum) or hardware (cerebellum). One of them is faster and more efficient but very specialized. If it breaks you're left with the task being performed slower and less efficiently on the brain that can execute arbitrary code.

But I can't imagine any change in this split in responsibilities will happen on human relevant timescales. The cerebellum is probably not evolving very fast anyway, while the cerebrum might evolve comparatively faster but it has no pressure to do it. And "faster" still means tens of thousands of years.

emayljames
1 replies
7h54m

These seem highly related to neurodivergence, such as ASD and ADHD.

anon84873628
0 replies
2h24m

Indeed, the article reminded me of the link between executive dysfunction (ADHD) and other problems like sensory processing disorders and postural sway.

Turns out studies have confirmed the overlap in these conditions and also linked it with reduced grey matter volume in the cerebellum:

https://psychcentral.com/adhd/postural-sway-adhd#postural-sw...

plufz
0 replies
9h35m

Often with the brain, multiple regions are needed for a single function. For example, planning actions (including movement) has much to do with the prefrontal cortex, and the basal ganglia are heavily involved in making smooth motions and inhibiting tremors, etc. Problems with the dopamine system also affect things like planning (ADHD) and tremors (Parkinson’s).

codeflo
0 replies
3h47m

I thought that the list of cognitive impairments sounds like a laundry list of ASD symptoms, and indeed, at least some researchers seem to believe that there's a connection between Autism and cerebellum dysfunction:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3677555/

cubefox
9 replies
18h42m

So classical conditioning in humans requires special cells in the cerebellum (Purkinje cells), which can even do single-cell learning, something artificial neurons can't do, as only the weights (artificial synapses) are updated. So how is classical conditioning actually implemented in artificial neural networks? I assume there is some minimum network which makes it work.

im3w1l
2 replies
16h59m

Well classical conditioning kind of only makes sense in the context of an agent that is receiving inputs and taking actions on them. Many neural networks don't solve problems of that type, and so have no need for classical conditioning.

But when you do have such a problem, conditioning is not very complicated. The normal algorithms and neural structures are designed to learn stuff like "when a given input happens, a certain action must be taken", and that's all you really need for conditioning. How does it actually do it? Well, I guess with gradient descent it would work something like this: every time there is a puff of air the network will be like "damn, I should have blinked to avoid this" and so it makes its current internal state a little more likely to lead to blinking. Gradually, as it happens more times, it will learn a strong association with the ringing bell or whatever.

A small RNN could learn this.

cubefox
1 replies
9h25m

Yeah. It's just not quite clear what the minimal example for such a network would be. I assume you have N inputs and one output. The output is always active when input 1 is active; otherwise the output is inactive. So the other inputs are ignored. However, when one of those other inputs, x, tends to be temporally correlated with input 1, after a while x will generate an output upon activation even if input 1 isn't active. If x becomes decorrelated with input 1, x will again get ignored. Not sure what the simplest network architecture looks like that implements this behavior.
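
For what it's worth, a single unit trained with a Rescorla-Wagner style delta rule already shows that behavior: an extra input that co-occurs with input 1 acquires the response, and loses it again once the correlation stops. A minimal Python sketch (my own toy construction, not anything from the article; the hard-wired "unconditioned pathway" and all the names are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    n_cs = 2            # number of potential conditioned stimuli (bell, light, ...)
    w = np.zeros(n_cs)  # learned associative weights, all start at zero
    lr = 0.1            # learning rate

    def respond(us, cs):
        # hard-wired unconditioned pathway OR the learned prediction of the US
        return max(us, cs @ w)

    def learn(us, cs):
        # delta rule (Rescorla-Wagner style): nudge weights so the CS predicts the US
        w[:] += lr * (us - cs @ w) * cs

    # Phase 1: pairing -- the bell (cs[0]) co-occurs with the air puff (us = 1)
    for _ in range(100):
        paired = rng.random() < 0.5
        us, cs = (1.0, np.array([1.0, 0.0])) if paired else (0.0, np.zeros(n_cs))
        learn(us, cs)
    print("bell alone after pairing:", respond(0.0, np.array([1.0, 0.0])))      # ~1.0

    # Phase 2: extinction -- the bell now occurs without the puff
    for _ in range(100):
        cs = np.array([1.0, 0.0]) if rng.random() < 0.5 else np.zeros(n_cs)
        learn(0.0, cs)
    print("bell alone after extinction:", respond(0.0, np.array([1.0, 0.0])))   # ~0.0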

im3w1l
0 replies
1h7m

The output is always active when input 1 is active.

Neural networks don't have instinctive behavior like that.

eurekin
2 replies
18h32m

I always assumed an ANN is simply a universal, learnable function approximator. That is, there is no direct equivalent of classical conditioning, only (data in, expected output) pairs.

cubefox
1 replies
18h14m

There must be a minimal ANN architecture which implements classical conditioning. This architecture could be quite limited in what it can learn compared to ANNs in general. Similar to how feed-forward networks are limited compared to RNNs.

david-gpu
0 replies
16h31m

You can train single layer neural nets. Not very useful, but they do exist.

There are certain ANN architectures that relied on, essentially, classical conditioning based on Hebbian learning rules and variants thereof. Kohonen self-organizing maps are an example of that.

Not that such historical systems are popular today, though.

ChainOfFools
1 replies
15h11m

special cells in the Cerebellum (Purkinje cells), which can even do single-cell learning.

As a neuroscience novice, I've always assumed that something about the gross model of the neuron, as far as I understand it, cannot be correct or is incomplete. Because I never understood why single cells aren't already performing single cell learning, given that there are always far more dendrites than axons.

Since this characteristic turns each neuron into a lossy compression function, there has to be some process by which certain dendrites are considered 'more important' carriers of information than others, in order to make a tie-breaking decision about what to include in the compressed signal, and what to throw out, as the cell decides whether or not to transmit an impulse (including whether or not to override prior inhibitory signals) back up the axon.

cubefox
0 replies
9h51m

Well, not all incoming signals get the same weight for the outgoing signal, as the dendrites are e.g. more or less close to the part of the cell where the spikes are generated. But this computation is just analogous to the connection weights together with the activation function in artificial neural networks. That's not what enables classical conditioning in single cerebellum cells.

h0l0cube
0 replies
17h25m

At a rough guess, each Purkinje cell is an MLP unto itself, and as the article states, this implies some orders of magnitude more computation for a brain simulation. I also heard something like 'a neuron is an MLP unto itself' on the Brain Inspired podcast. It's likely we've vastly underestimated the processing power of the brain.

bsdpufferfish
9 replies
13h16m

Why must mental functions be localized to physical components?

doug_durham
4 replies
13h6m

Are you suggesting a metaphysical structure? I don't understand what you are getting at.

bsdpufferfish
3 replies
13h3m

Even if you are a full materialist it's fallacious to assume there is one "part" that does something, like it's a factory assembly. Instead it might be a function of the composition of brain parts.

It's like asking which part of a bat makes it fly. The wings? Well kind of, but you need more than that. I guess you can fly without feet... it's just not a well formed question.

red75prime
1 replies
10h6m

If you ask the question a bit differently, then it's not a mystery at all: why do brain parts whose neural structure is conducive to fine and agile motor control perform motor control?

bsdpufferfish
0 replies
9h3m

Why is it in a part? Which part of the violin makes it sound in tune?

dragonwriter
0 replies
8h54m

Even if you are a full materialist it's fallacious to assume there is one "part" that does something, like it's a factory assembly. Instead it might be a function of the composition of brain parts.

It's true that a particular function may not be localizable more specifically than the brain (or even the whole body), because defining it as a distinct function may not reflect the organization of components within the body. But that's still performed by a defined physical system, and there are still sub-functions necessary to perform that function that are localizable to narrower components.

It's like asking which part of a bat makes it fly.

It's like that in that we absolutely can describe specific parts of the bat and what each contributes to flight.

fallingfrog
0 replies
4h42m

What other sort are there? Imaginary components? Metaphorical components?

eurekin
0 replies
7h37m

My guess: because it's useful, by the following analogy.

We use text embeddings to represent concepts in written words. They lack nuance when the same word has different meanings in different contexts. LLMs use text embeddings and enrich them with the attention mechanism.

For words that in reality represent a single concept, a text embedding works perfectly on its own.

For those concepts that are context dependent, we use the attention mechanism to gently guide the text embedding closer to the intended meaning, as identified by the surrounding words. That's the role of the Value vector of the K, Q, V triplet in the attention mechanism, to be precise.
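
To make that K/Q/V description concrete, here's a minimal single-head scaled dot-product attention sketch in numpy (shapes, names, and the example tokens are my own assumptions; real transformer layers add multiple heads, masking, learned output projections, and a residual connection back into the embedding):

    import numpy as np

    def attention(X, Wq, Wk, Wv):
        # X: (seq_len, d_model) token embeddings; W*: (d_model, d_head) projections
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])          # how much each token attends to each other token
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
        return weights @ V                               # context-mixed values that nudge each embedding

    rng = np.random.default_rng(0)
    d_model, d_head, seq_len = 8, 4, 3
    X = rng.normal(size=(seq_len, d_model))              # e.g. embeddings for "river", "bank", "flooded"
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
    print(attention(X, Wq, Wk, Wv).shape)                # (3, 4): one context-adjusted vector per token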

So this is a simplistic approach, corresponding to a "first approximation", which could be good enough for some cases. We don't know exactly which yet, but we'll know once enough evidence is given to the contrary.

It's not a good model, but a very good approach in order to do research in a stepwise manner. With time, it'll get more and more nuanced, one approximation at a time.

dragonwriter
0 replies
8h58m

Because physical components are all that exists.

chrisfosterelli
0 replies
13h2m

They probably aren't, at least at the level we often teach. Much of our knowledge about the brain comes from observing people who have pieces missing and seeing how their behaviour differs from a normal adult or putting people in an fMRI scanner and saying "wow, that area used a lot of oxygen compared to baseline". This, and a scientist's nature to classify things, led to a lot of overoptimistic categorization of brain function to specific regions. As neuroscience has matured the field has grown to recognize a more nuanced view that most computation in the brain is more distributed than we first assumed, and different areas are often involved in overlapping functions. It can also change over time or after extreme brain trauma. But it's not correct to say it's fully distributed either. The honest answer is we still have an extremely poor understanding of how the brain works.

padolsey
7 replies
18h43m

What a marvellous article. One thing I’ve never quite appreciated in neuroscience is how useful physical movement is as a debugging layer. During a task, in observing gaits, tremors, speed, accuracy, etc., you’re able to gain a deeper understanding of how cognition works for non-movement tasks. I guess cognition is, after all, still just movement, but through a conceptual plane instead of a physical one.

retrac
4 replies
18h23m

It's been argued that most, maybe all, of human cognition is based on a sort of folk-physics mental model with objects. We throw ideas into the ring to be chewed on and distilled and maybe brought into practice or thrown out as useless. The linguistic metaphors we use, at least, to talk about ideas and abstractions, are never more than one, maybe two, steps away from a hand moving or rearranging something.

Terr_
2 replies
9h22m

folk-physics mental model with objects

Kind of like how object permanence [0] must be learned by babies, slowly worked into their internal model, even though "things don't usually vanish when they go behind other things" seems like reliably low-hanging fruit for any process (whether evolutionary or meddling demigod) to wire up as instinctual physics knowledge.

[0] https://en.wikipedia.org/wiki/Object_permanence

trealira
0 replies
1h8m

Animals like newborn deer fawns are born knowing how to walk, follow their mothers around, and run away from danger, although their legs are weak at first. So this makes me wonder if having to learn object permanence is just one more example of human babies being underdeveloped compared to those of other species of animals.

circlefavshape
0 replies
22m

To me it makes very little sense for object permanence to be learned rather than innate - have a look at the "contradicting evidence" in the article you linked

AndrewKemendo
0 replies
18h17m

In fact the Glasgow coma scale (1), which gives you levels of human consciousness, is precisely that.

https://www.ncbi.nlm.nih.gov/books/NBK513298/

stcredzero
0 replies
1h55m

One thing I’ve never quite appreciated in neuroscience is how useful physical movement is as a debugging layer.

My sister, who is a choreographer, had some interesting views on how movement could be used as therapy. (Specifically, crawling, as a base-level movement.) I thought that was woo-woo, but later I read that there was some medical support for this.

mistersquid
0 replies
17h20m

The first video linked in the OP is astonishing to me, a layperson with no medical training.

It's giving me lots to think about with regard to motor impairment. (Basically, I'm reviewing my prejudices and nodding, due to better understanding the plight of afflicted individuals.) [0]

[0] https://www.youtube.com/watch?v=FFki8FtaByw

nabla9
5 replies
17h46m

There are people without cerebellum. It affects thought and emotion.

https://www.npr.org/sections/health-shots/2015/03/16/3927897...

mynameishere
4 replies
16h38m

More famously, Joey Ramone didn't have one:

https://www.youtube.com/watch?v=rjWZJQyykeM

e40
1 replies
4h6m

Can you provide a reference for this? I searched and found nothing.

lewispollard
0 replies
3h7m

I think it was a joke - The Ramones have a song with lyrics about a missing cerebellum, but it's a song about a fictional character with a lobotomy.

lqet
0 replies
3h31m

But even more strangely, he did have this:

He was born with a parasitic twin growing out of his back, which was incompletely formed and surgically removed

BurningFrog
0 replies
11h23m

One of the greatest rhymes in rock history!

   Now I guess I'll have to tell 'em
   That I got no cerebellum

lacrimacida
4 replies
17h53m

Is cerebellum responsible for muscle memory?

Balgair
2 replies
13h51m

Not really, but it can be involved for some tasks. 'Muscle memory' is a bit of a complex thing. It's not so much the firing of the neurons as the timing of that firing. Your reaction time is at the ~5ms level. Much longer than the muscles need to move in concert to, say, hit a 3-pointer. Controlling all of that can take place all the way from the brain down to the ganglia of the spinal cord. Drinking a cup of tea while reading will mostly take place before the brain gets a chance to intervene, for example, while riding a bike will involve more of the brain. I want to stress, it's a complex and not well studied area of active research.

Wherecombinator
1 replies
6h13m

It is interesting though. Do you know if there is a name for this particular area of neuroscience?

Balgair
0 replies
3h5m

There is not, it would just be general neuroscience. I'm unaware of specific labs either. Google would be your best friend in terms of trying to find specific researchers and in reaching out to them.

fallingfrog
0 replies
4h19m

I’ve many times had the experience of trying to debug someone’s computer problem, and trying to describe how to fix something, I couldn’t think of what to do in words. So I said, “my hands know where the answer is” and once I had the mouse I clicked around and did the task fairly quickly. I wonder if that was the cerebellum solving the problem for me?

bbor
4 replies
17h44m

Amazing article, and for a layperson who’s been reading a lot of neuroscience, the perfect level of complexity. I love an article that makes you (internally) shout “why haven’t I wondered about that before?” over and over, so thanks for that. A compliment as to clarity of purpose, I suppose.

Materially;

  That’s bad news for anyone hoping to simulate a brain digitally. It means there’s a lot more relevant stuff to simulate (like the learning that goes on within cells) than the connectionist paradigm of treating each biological neuron like a neural-net “neuron” would imply, and thus the computational requirements of simulating a brain are higher — maybe vastly higher — than connectionists hope.

I get where she’s coming from, and she’s not wrong, but it seems like an unnecessary detour to dunk on another AI “camp” in the field for drama points and satisfaction - the Marcus Maneuver, if you will. Connectionism isn’t a cult or an institution, it’s a paradigm that emphasizes the utility of big nets of interconnected smaller pieces. Any self-avowed connectionist (are there any left? Honest question) could just retreat to “ok, well, it’s networks of brain cells plus smaller intracellular networks” and keep their paradigm. And all we can say to that is “ugh, I guess”, as far as I can tell!

einsum
1 replies
6h15m

Precisely, connectionism in modern AI boils down to the idea that learning should be expressed in terms of DAGs that are composed of simpler units. It’s quite likely that the units that are currently used are too abstract, but this doesn’t necessarily mean the paradigm itself is flawed.

jampekka
0 replies
1h59m

I don't think connectionism is restricted to acyclic graphs. Or even graphs in general. But you're right that the connectionism as an approach is more abstract than just simulating neuron behavior.

jampekka
0 replies
2h3m

At least one self-avowed professional connectionist here. I was coming to make a similar critique.

Connectionism isn't and never was about trying to simulate the biological neural networks and other anatomy. As you said, it's about emphasizing the network and network emergent phenomena over isolated pieces (e.g. "grandmother neurons" or strict localization of brain function). At the information processing level the contrast is to classicism/symbolicism that tries to explain cognition as atomic and modular operations on symbols.

eurekin
0 replies
8h42m

I was also stumped by this exact quote. The whole article was in the best spirit, until this.

It's a model we have; it will get updated in order to be more useful. Every engineering field builds a model of reality. Who are the connectionists? Is this some kind of "those people" label for whatever causes fear in a typical layperson?

strontian
3 replies
15h40m

Anyone have cogsci or neurosci reading recs?

Balgair
1 replies
13h55m

https://www.amazon.com/Principles-Neural-Science-Fifth-Kande...

As Jackson is to E&M, Kandel is to neuro. It's the text.

seaslug
0 replies
5m

Kandel is superb but it's written for grad students and advanced undergrads with a solid biology foundation. A typical undergrad neurosci textbook would be an easier start for a non-biology person.

angra_mainyu
0 replies
11h46m

Ask and ye shall receive:

- cognitivescience.substack.com

- seantrott.substack.com

- understandingai.substack.com

- neuralnews.substack.com

- brain2mind.substack.com

- biomedworks.substack.com

aetherson
3 replies
18h47m

My overall read on this article is that its claims are probably overconfident. Like, it seems interesting, but like she's seeing a few results and making big claims about how the cerebellum plays into overall cognition, and my general sense is that lots of humility is usually warranted here: that simple and decisive statements usually turn out to be riddled with provisos and unexplained behavior.

HarHarVeryFunny
1 replies
18h27m

Did you also read the comment by Steve Byrens with his own theory on what the Cerebellum does, and the author's reply of "seems right" ?!

danw1979
0 replies
11h47m

I missed this, but after reading Steve’s comment, I don’t see much in his “little time machine” theory of function that conflicts with the original article’s ideas, except on the classical conditioning point.

TaupeRanger
0 replies
16h46m

You are correct. We don't really know what's going on. Every claim can be met with an equally emphatic opposite claim with equally compelling evidence by someone cherry picking the "correct" studies and listening to the "correct" people. What we call neuroscience is still in a pre-Newtonian era.

taneq
2 replies
18h55m

It seems weird to have a whole separate organ for “make motor and cognitive skills work somewhat better.”

Does it? I’d think this was an absolute gimme.

lostlogin
1 replies
18h1m

Do we have other examples of this?

There is a second kidney, lung, eye etc, but a hot spare isn’t quite the same as a completely different structure.

Jensson
0 replies
17h50m

Your stomach and intestines, they aren't all necessary but they help. And you can see their structure varying significantly in different animals.

Or a tail, an entire extra limb just to keep balance a bit better.

Or ears, you can hear without them but they help capture sound a bit better.

woadwarrior01
1 replies
18h16m

In total, the cerebellum contains 80% of all neurons!

Apples and oranges, but that's so reminiscent of MLPs in Transformers. A similarly large fraction of the weights in transformers come from the MLPs in each layer.
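
A rough back-of-the-envelope check of that fraction, under the usual assumptions (hidden size d, 4x MLP expansion, biases/embeddings/layer norms ignored; real architectures vary):

    # per-layer parameter count for a standard transformer block (assumptions as above)
    def layer_params(d_model, mlp_expansion=4):
        attention = 4 * d_model * d_model              # Wq, Wk, Wv, Wo projections
        mlp = 2 * d_model * (mlp_expansion * d_model)  # up-projection + down-projection
        return attention, mlp

    attn, mlp = layer_params(4096)
    print(f"attention: {attn/1e6:.0f}M  mlp: {mlp/1e6:.0f}M  mlp share: {mlp/(attn+mlp):.0%}")
    # -> mlp share: 67% of each layer's weights under these assumptions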

eurekin
0 replies
8h48m

Well, the MLPs are the actual neural network (the approximator), whereas the attention (the rest) is more of a text-to-relevant-embedding extractor.

solax
1 replies
8h49m

That's an interesting perspective. While I agree that the pace of advancements in neuroscience is slower compared to AI, I think it's important to note that understanding the brain is a fundamentally different problem than building intelligent machines. The human brain is an incredibly complex system with billions of interconnected neurons, and we still have a long way to go in terms of fully understanding how it works.

AI, on the other hand, is designed to solve specific problems efficiently, and it can be engineered to mimic certain aspects of human cognition without necessarily needing to understand the underlying mechanisms.

While it's possible that AI could eventually help us better understand the brain, I believe that advancements in neuroscience will continue to be crucial for unlocking the full potential of AI. Understanding how the brain processes information, learns, and makes decisions could lead to the development of more sophisticated and human-like AI systems.

gmanner123
0 replies
2h47m

Hi ChatGPT

rvba
1 replies
11h11m

So can it be compared to simd (single instruction multiple data)? An accelerator for walking? Speculative execution?

eurekin
0 replies
8h40m

I'm betting on a higher-frequency, lower-latency, high-throughput dedicated part. The slow stuff thinks for longer and sends a "goal" to this part, which translates that into a series of higher-frequency (compared to the slow part) activations, which are fast enough to produce fluid movement. Probably on the order of a few milliseconds.

mihaaly
1 replies
4h28m

I like the beginning of the article quite a lot for giving an overview of the cerebellum and teaching that it is the home of unconscious learning, but to me it goes into weak speculation quite quickly. First, 'Purkinje cells learn individually, but no other cells were found to do that' leads to 'neuron connectivity is not enough to simulate brain activity', even while knowing that higher-level mental activity exists without a cerebellum. Then it suggests the cerebellum might be the home of measurement just because a Purkinje cell can time a reaction, and, going by the headlines (I lost interest in attentive reading), speculates that it is the place for anticipation and sensing. It has the feel of wanting to expand the cerebellum onto as much as fantasy stretches. The whole cerebellum topic sounds fascinating without completely 'rethinking intelligence'.

stcredzero
0 replies
1h44m

I find this particularly interesting:

The brain is not like a neural network where the only thing that is “learned” or “updated” is the weights between neurons. At least some learning evidently happens within individual neurons.

That’s bad news for anyone hoping to simulate a brain digitally. It means there’s a lot more relevant stuff to simulate (like the learning that goes on within cells) than the connectionist paradigm of treating each biological neuron like a neural-net “neuron” would imply, and thus the computational requirements of simulating a brain are higher — maybe vastly higher — than connectionists hope.

I had heard this as well close to ten years ago on some NPR radio show: That researchers had reasons to suspect that a whole lot more processing happens within the synapses themselves.

halhod
1 replies
5h15m

HN may enjoy this old Christmas special I wrote on Ethan, a teenager born without a cerebellum https://www.economist.com/christmas-specials/2018/12/18/the-...

tomjakubowski
0 replies
20m

Piercing the Economist's veil of anonymity eh?

denton-scratch
1 replies
8h12m

While humans don’t have these kinds of sensory systems

(He's talking about sensing the 3D environment using electric fields)

I wonder whether binaural hearing is such a sensory system. You can blindfold someone, then lead them into a space. They can tell whether they're outdoors, or in a small, bare room, or a concert hall, or a room with furniture and drapes. Perhaps they can tell whether they're near or far from a wall, and in which direction.

bregma
0 replies
6h35m

A sighted person who is not blindfolded can do the same thing, relying on reconstructing their 3D environment from the way electromagnetic radiation affects the rhodopsin in their retinas rather than the movement of hairs in their cochlea due to air pressure changes over time, and integrating the differences between a spatially separate pair of detectors.

cyco130
1 replies
7h39m

If you're curious about the pronunciation of Purkinje like me: In Czech, [purkɪɲɛ] (spelled Purkyně); in English, per-kin-jee [/pɝkɪnd͡ʒiː/].

https://www.youtube.com/watch?v=23MFfOsTDIs

hoseja
0 replies
5h13m

Perkingee, when you read one orthography with the rules of another orthography. Curiously, also happens to "Czech" or "Czechia" itself which I've been shocked to learn some pronounce chechia instead of checkia, explaining the baffling confusion with Chechnya.

GlenTheMachine
1 replies
17h54m

"The cerebellum may also inspire artificial-intelligence approaches somewhat, especially approaches to robotics or other control, in that it may be be beneficial to include a fast feedforward-only predictive modeling step to control real-time actions..."

This is pretty widespread in controls, actually. The dominant control technique for legged robotics is model-predictive control ("MPC") which explicitly uses such a predictive model to determine the best inputs to the actuators.
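
For anyone unfamiliar, here is a minimal sketch of that receding-horizon loop on a toy 1-D double integrator (random-shooting optimization with made-up costs and dynamics; real legged-robot MPC typically solves a structured QP over contact forces at a high rate, so treat this only as an illustration of "predict forward, pick the best inputs, apply the first one, repeat"):

    import numpy as np

    DT, HORIZON, N_SAMPLES = 0.05, 20, 256
    rng = np.random.default_rng(0)

    def dynamics(state, u):
        pos, vel = state
        return np.array([pos + vel * DT, vel + u * DT])   # the predictive model

    def rollout_cost(state, u_seq, target):
        cost = 0.0
        for u in u_seq:
            state = dynamics(state, u)
            cost += (state[0] - target) ** 2 + 0.01 * u ** 2
        return cost

    def mpc_step(state, target):
        candidates = rng.uniform(-1.0, 1.0, size=(N_SAMPLES, HORIZON))  # sampled input sequences
        costs = [rollout_cost(state, u_seq, target) for u_seq in candidates]
        return candidates[int(np.argmin(costs))][0]       # apply only the first input, then re-plan

    state, target = np.array([0.0, 0.0]), 1.0
    for _ in range(100):
        state = dynamics(state, mpc_step(state, target))
    print("final position:", round(state[0], 3))          # ends up near the target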

einsum
0 replies
6h33m

Predictive models are also behind many of the SOTA results in modern reinforcement learning, although they are often used to generate fictive data from which a policy is learnt.

orena
0 replies
17h35m

Come on, no one knows... after 10 years in computational neuroscience + experimental neuroscience. The slope of neuroscience advances is very, very low (some will say negative). The slope of AI advances is much, much higher. --> we will get an AI to understand the brain and explain it to us; it will not come from a lab.

Just my point of view

hyperthesis
0 replies
4h24m

Timing is also needed in speech, which is quite fast compared to conscious response; so the problem with conjunctions might just be that the cerebellum anticipates them.

Note that time perception is distorted so that we don't notice how slow conscious response is.

So the feeling of "competence" is when your cerebellum is anticipating correctly.

h0l0cube
0 replies
17h18m

This article made me wonder if dyspraxia was related to impaired or inhibited cerebellum function. A cursory search yields at least one article that supports the idea:

Results revealed that children with DCD had reduced grey matter volume in several regions, namely: the brainstem, right/left crus I, right crus II, left VI, right VIIb, and right VIIIa lobules

SillyUsername
0 replies
8h56m

TL;DR: it's classical conditioning, i.e. the base firmware before installing the higher intelligence and education.

Anotheroneagain
0 replies
6h3m

"ah yes, the thinking happens in the cerebellum.”

Why do we NOT think so? It must be capable of thinking alone, as only mammals have the neocortex. It would only be logical to expect the more universal cognitive abilities to happen in the cerebellum, and only those that are specific to mammals alone to happen in the neocortex.

48864w6ui
0 replies
51m

Possibly of interest: cerebellum involved in asd https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8998980/