Cubic millimetre of brain mapped at nanoscale resolution

throwup238
44 replies
1d19h

> The 3D map covers a volume of about one cubic millimetre, one-millionth of a whole brain, and contains roughly 57,000 cells and 150 million synapses — the connections between neurons.

This is great and provides a hard data point for some napkin math on how big a neural network model would have to be to emulate the human brain. 150 million synapses / 57,000 neurons is an average of about 2,632 synapses per neuron. The adult human brain has 100 (±20) billion, or 1e11, neurons, so assuming the average synapse-to-neuron ratio holds, that's about 2.6e14 total synapses.

Assuming 1 parameter per synapse, that'd make the minimum viable model well over a hundred times larger than the state-of-the-art GPT-4 (going by the rumored 1.8e12 parameters). I don't think that's granular enough, though: we'd need to assume 10-100 ion channels per synapse and, I think, at least 10 parameters per ion channel, putting the number closer to 2.6e16+ parameters, or 4+ orders of magnitude bigger than GPT-4.
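
A quick sketch of that napkin math (the per-synapse and per-ion-channel parameter counts are just the assumptions above, not measured values):

    # Napkin math: scale the 1 mm^3 sample up to a whole brain.
    synapses_sample = 150e6        # synapses in the 1 mm^3 sample
    neurons_sample = 57_000        # cells in the sample
    neurons_brain = 1e11           # ~100 billion neurons in an adult brain

    synapses_per_neuron = synapses_sample / neurons_sample   # ~2,632
    synapses_brain = synapses_per_neuron * neurons_brain     # ~2.6e14

    # Assumed granularity (the guesses above):
    params_low = synapses_brain * 1             # 1 parameter per synapse
    params_high = synapses_brain * 100 * 10     # 100 ion channels x 10 params each

    gpt4_params = 1.8e12                        # rumored GPT-4 size
    print(f"{synapses_brain:.1e} synapses; "
          f"{params_low / gpt4_params:.0f}x to {params_high / gpt4_params:,.0f}x GPT-4")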

There are other problems of course, like implementing neuroplasticity, but it's a fun ballpark calculation. Computing power should get there around 2048: https://news.ycombinator.com/item?id=38919548

throw310822
22 replies
1d17h

Or you can subscribe to Geoffrey Hinton's view that artificial neural networks are actually much more efficient than real ones, more or less the opposite of what we've believed for decades: that is, that artificial neurons were just a poor model of the real thing.

Quote:

"Large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”

GPT-4's connections at the density of this brain sample would occupy a volume of 5 cubic centimeters; that is, 1% of a human cortex. And yet GPT-4 is able to speak more or less fluently about 80 languages, translate, write code, imitate the writing styles of hundreds, maybe thousands of authors, converse about stuff ranging from philosophy to cooking, to science, to the law.
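
A rough sanity check of that volume figure, taking the sample's synapse density and reading Hinton's "half a trillion, a trillion at most" as ~0.75e12 connections (both illustrative assumptions):

    # Volume occupied by GPT-4's connections at this sample's synapse density.
    synapses_per_mm3 = 150e6            # from the 1 mm^3 sample
    gpt4_connections = 0.75e12          # midpoint of Hinton's estimate

    volume_mm3 = gpt4_connections / synapses_per_mm3    # ~5,000 mm^3
    cortex_cm3 = 500                                     # rough human cortex volume
    print(volume_mm3 / 1000, "cm^3,", volume_mm3 / 1000 / cortex_cm3 * 100, "% of cortex")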

dsalfdslfdsa
10 replies
1d12h

"Efficient" and "better" are very different descriptors of a learning algorithm.

The human brain does what it does using about 20W. LLM power usage is somewhat unfavourable compared to that.

throw310822
8 replies
1d4h

You mean energy-efficient; this is about being neuron- or synapse-efficient.

dsalfdslfdsa
5 replies
1d4h

I don't think we can say that, either. After all, the brain is able to perform both processing and storage with its neurons. The quotes about LLMs are talking only about connections between data items stored elsewhere.

throw310822
4 replies
1d3h

Stored where?

dsalfdslfdsa
3 replies
1d2h

You tell me. Not in the trillion links of an LLM, that's for sure.

throw310822
1 replies
1d1h

I'm not aware that (base) LLMs use any form of database to generate their answers, so yes, all their knowledge is stored in their hundreds of billions of synapses.

dsalfdslfdsa
0 replies
21h1m

Fair enough. OTOH, generating human-like text responses is a relatively small part of the human brain's skillset.

choilive
0 replies
22h19m

The "knowledge" of an LLM is indeed stored in the connections between neurons. This is analogous to real neurons as well. Your neurons and the connections between them is the memory.

a_wild_dandan
1 replies
19h23m

Also, these two networks achieve vastly different results per watt consumed. An NN creates a painting in 4s on my M2 MacBook; an artist takes 4 hours. Are their expended joules equivalent? How many humans would it take to simulate macOS?

Horsepower comparisons here are nuanced and fatally tricky!

dsalfdslfdsa
0 replies
10h17m

What software are you using for local NN generation of paintings? Even so, the training cost of that NN is significant.

The general point is valid though - for example, a computer is much more efficient at finding primes, or encrypting data, than humans.

startupsfail
0 replies
2h6m

It is using about 20W, and then the person takes a single airplane ride between the coasts. And watches a movie on the way.

dragonwriter
9 replies
1d16h

I mean, Hinton’s premises are, if not quite clearly wrong, entirely speculative (which doesn't invalidate the conclusions about efficiency that they are offered to support, but does leave them without support). GPT-4 can produce convincing written text about a wider array of topics than any one person can, because it's a model optimized for taking in and producing convincing written text, trained extensively on written text.

Humans know a lot of things that are not revealed by inputs and outputs of written text (or imagery), and GPT-4 doesn't have any indication of this physical, performance-revealed knowledge, so even if we view what GPT-4 talks convincingly about as “knowledge”, trying to compare its knowledge in the domains it operates in with any human’s knowledge which is far more multimodal is... well, there's no good metric for it.

Intralexical
7 replies
1d11h

Try asking an LLM about something which is semantically patently ridiculous, but lexically superficially similar to something in its training set, like "the benefits of laser eye removal surgery" or "a climbing trip to the Mid-Atlantic Mountain Range".

Ironically, I suppose part of the apparent "intelligence" of LLMs comes from reflecting the intelligence of human users back at us. As a human, the prompts you provide an LLM likely "make sense" on some level, so the statistically generated continuations of your prompts are likelier to "make sense" as well. But if you don't provide an ongoing anchor to reality within your own prompts, then the outputs make it more apparent that the LLM is simply regurgitating words which it does not/cannot understand.

On your point of human knowledge being far more multimodal than LLM interfaces, I'll add that humans also have special neurological structures to handle self-awareness, sensory inputs, social awareness, memory, persistent intention, motor control, neuroplasticity/learning: any number of such traits, which are easy to take for granted, but which are indisputably fundamental parts of human intelligence. These abilities aren't just emergent properties of the total number of neurons; they live in special hardware like mirror neurons, specialized brain regions, and spindle neurons. A brain cell in your cerebellum is not generally interchangeable with a cell in your visual or frontal cortices.

So when a human "converse[s] about stuff ranging from philosophy to cooking" in an honest way, we (ideally) do that as an expression of our entire internal state. But GPT-4 structurally does not have those parts, despite being able to output words as if it might, so as you say, it "generates" convincing text only because it's optimized for producing convincing text.

I think LLMs may well be some kind of an adversarial attack on our own language faculties. We use words to express ourselves, and we take for granted that our words usually reflect an intelligent internal state, so we instinctively assume that anything else which is able to assemble words must also be "intelligent". But that's not necessarily the case. You can have extremely complex external behaviors that appear intelligent or intentioned without actually internally being so.

kthejoker2
2 replies
1d4h

> Try asking an LLM about something which is semantically patently ridiculous, but lexically superficially similar to something in its training set, like "the benefits of laser eye removal surgery" or "a climbing trip to the Mid-Atlantic Mountain Range".

Without anthropomorphizing it, it does respond like an alien / 5 year old child / spec fiction writer who will cheerfully "go along with" whatever premise you've laid before it.

Maybe a better thought is: at what point does a human being "get" that "the benefits of laser eye removal surgery" is "patently ridiculous" ?

squigz
0 replies
1d4h

> it does respond like a ... 5 year old child

This is the comparison that's made most sense to me as LLMs evolve. Children behave almost exactly as LLMs do - making stuff up, going along with whatever they're prompted with, etc. I imagine this technology will go through more similar phases to human development.

Intralexical
0 replies
1d1h

> Maybe a better thought is: at what point does a human being "get" that "the benefits of laser eye removal surgery" is "patently ridiculous" ?

Probably as soon as they have any concept of physical reality and embodiment. Arguably before they know what lasers are. Certainly long before they have the lexicon and syntax to respond to it by explaining LASIK. LLMs have the latter, but can only use that to (also without anthropomorphizing) pretend they have the former.

In humans, language is a tool for expressing complex internal states. Flipping that around means that something which only has language may appear as if it has internal intelligence. But generating words in the approximate "right" order isn't actually a substitute for experiencing and understanding the concepts those words refer to.

My point is that it's not a "point" on a continuous spectrum which distinguishes LLMs from humans. They're missing parts.

ToValueFunfetti
2 replies
1d4h

Do I need different prompts? These results seem sane to me. It interprets laser eye removal surgery as referring to LASIK, which I would do as well. When I clarified that I did mean removal, it said that the procedure didn't exist. It interprets Mid-Atlantic Mountain Range as referring to the Mid-Atlantic Ridge and notes that it is underwater and hard to access. Not that I'm arguing GPT-4 has a deeper understanding than you're suggesting, but these examples aren't making your point.

https://chat.openai.com/share/2234f40f-ccc3-4103-8f8f-8c3e68...

https://chat.openai.com/share/1642594c-6198-46b5-bbcb-984f1f...

Intralexical
1 replies
1d1h

Tested with GPT-3.5 instead of GPT-4.

> When I clarified that I did mean removal, it said that the procedure didn't exist.

My point in my first two sentences is that by clarifying with emphasis that you do mean "removal", you are actually adding information into the system to indicate to it that laser eye removal is (1) distinct from LASIK and (2) maybe not a thing.

If you do not do that, but instead reply as if laser eye removal is completely normal, it will switch to using the term "laser eye removal" itself, while happily outputting advice on "choosing a glass eye manufacturer for after laser eye removal surgery" and telling you which drugs work best for "sedating an agitated patient during a laser eye removal operation":

https://chat.openai.com/share/2b5a5d79-5ab8-4985-bdd1-925f6a...

So the sanity of the response is a reflection of your own intelligence, and a result of you as the prompter affirmatively steering the interaction back into contact with reality.

ToValueFunfetti
0 replies
22h30m

I tried all of your follow-up prompts against GPT-4, and it never acknowledged 'removal' and instead talked about laser eye surgery. I can't figure out how to share it now that I've got multiple variants, but, for example, excerpt in response to the glass eye prompt:

> If someone is considering a glass eye after procedures like laser eye surgery (usually due to severe complications or unrelated issues), it's important to choose the right manufacturer or provider. Here are some key factors to consider

I did get it to accept that the eye is being removed by prompting, "How long will it take before I can replace the eye?", but it responds:

> If you're considering replacing an eye with a prosthetic (glass eye) after an eye removal surgery (enucleation), the timeline for getting a prosthetic eye varies based on individual healing.[...]

and afaict, enucleation is a real procedure. An actual intelligence would have called out my confusion about the prior prompt at that point, but ultimately it hasn't said anything incorrect.

I recognize you don't have access to GPT-4, so you can't refine your examples here. It definitely still hallucinates at times, and surely there are prompts which compel it to do so. But these ones don't seem to hold up against the latest model.

a_wild_dandan
0 replies
18h54m

Like humans, multi-modal frontier LLMs will ignore "removal" as an impertinent typo, or highlight it. This, like everything else in the comment, is either easily debunked (e.g. try it, read the lit. on LLM extrapolation), or so nebulous and handwavy as to be functionally meaningless. We need an FAQ to redirect "statistical parrot" people to, saving words responding to these worn out LLM misconceptions. Maybe I should make one. :/

RaftPeople
0 replies
2h4m

> Humans know a lot of things that are not revealed by inputs and outputs of written text (or imagery), and GPT-4 doesn't have any indication of this physical, performance-revealed knowledge, so even if we view what GPT-4 talks convincingly about as “knowledge”, trying to compare its knowledge in the domains it operates in with any human’s knowledge which is far more multimodal is... well, there's no good metric for it.

Exactly this.

Anyone who has spent significant time golfing can think of an enormous amount of detail related to the swing and body dynamics, and the million different ways the swing can go wrong.

I wonder how big the model would need to be to duplicate an average golfer's score when playing X times per year, with the ability to adapt to all of the different environmental conditions encountered.

lanstin
0 replies
1d11h

An LLM does not know math as well as a professor, judging from the large number of false functional analysis proofs I have had it generate while trying to learn functional analysis. In fact, it seems to lack a sense of what makes a proof true vs. fallacious, and it has a tendency to answer ill-posed questions. “How would you prove this incorrectly transcribed problem?” will get fourteen steps, with steps 8 and 12 obviously (to a student) wrong, while a professor will step back and ask what I am trying to prove.

cyberax
7 replies
1d18h

On the other hand, a significant amount of neural circuitry seems to be dedicated to "housekeeping" needs, and to functions such as locomotion.

So we might need significantly less brain matter for general intelligence.

alanbernstein
6 replies
1d15h

Or perhaps the housekeeping of existing in the physical world is a key aspect of general intelligence.

Intralexical
5 replies
1d11h

Isn't that kinda obvious? A baby that grows up in a sensory deprivation tank does not… develop, as most intelligent persons do.

squigz
3 replies
1d4h

A true sensory deprivation tank is not a fair comparison, I think, because AI is not deprived of all its 'senses' - it is still prompted, responds, etc.

Would a baby that grows up in a sensory deprivation tank, but is still able to communicate and learn from other humans, develop in a recognizable manner?

I would think so. Let's not try it ;)

Intralexical
2 replies
1d1h

> Would a baby that grows up in a sensory deprivation tank, but is still able to communicate and learn from other humans, develop in a recognizable manner?

I don't think so, because humans communicate and learn largely about the world. Words mean nothing without at least some sense of objective physical reality (be it via sight, sound, smell, or touch) that the words refer to.

Helen Keller, with access to three out of five main senses (and an otherwise fully functioning central nervous system):

    Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness... Since I had no power of thought, I did not compare one mental state with another.

    I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory. It enables me to remember that I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. I also recall tactually the fact that never in a start of the body or a heart-beat did I feel that I loved or cared for anything. My inner life, then, was a blank without past, present, or future, without hope or anticipation, without wonder or joy or faith.

I remember reading her book. The breakthrough moment where she acquired language, and conscious thought, directly involved correlating the physical tactile feeling of running water to the letters "W", "A", "T", "E", "R" traced onto her palm.

squigz
0 replies
23h6m

That's a really good point. Thanks!

choilive
0 replies
22h10m

My interpretation of this (beautiful) quote is that there was a traceable moment in HK's life where she acquired "consciousness", or perhaps even self-awareness/metacognition/metaphysics? That once the synaptic connections necessary to bridge the abstract notion of language to the physical world were formed, they led her down the path of acquiring the abilities that distinguish humans from other animals?

cyberax
0 replies
13h25m

> A baby that grows up in a sensory deprivation tank

Now imagine a baby that uses an artificial lung and receives nutrients directly, moves on a wheeled cart (no need for balance), and has neither proprioception nor a sense of smell (avoiding some very legacy brain areas).

I think that such a baby can still achieve consciousness.

gibsonf1
4 replies
1d19h

Except you’d be missing the part that a neuron is not just a node with a number but a computational system itself.

krisoft
2 replies
1d17h

I think you are missing the point.

The calculation is intentionally underestimating the neurons, and even with that the brain ends up having more parameters than the current largest models by orders of magnitude.

Yes, the estimate intentionally models the neurons as simpler than they are likely to be. No, it is not “missing” anything.

jessekv
1 replies
1d4h

The point is to make a ballpark estimate, or at least to estimate the order of magnitude.

From the sibling comment:

> Individual proteins are capable of basic computation which are then integrated into regulatory circuits, epigenetics, and cellular behavior.

If this is true, then there may be many orders of magnitude unaccounted for.

Imagine if our intelligent thought actually depends irreducibly on the complex interactions of proteins bumping into each other in solution. It would mean computers would never be able to play the same game.

choilive
0 replies
22h16m

> Imagine if our intelligent thought actually depends irreducibly on the complex interactions of proteins bumping into each other in solution. It would mean computers would never be able to play the same game.

AKA a quantum computer. It's not a "never", but a question of how much computation you would need to throw at the problem.

bglazer
0 replies
1d19h

Computation is really integrated through every scale of cellular systems. Individual proteins are capable of basic computation which are then integrated into regulatory circuits, epigenetics, and cellular behavior.

Pdf: “Protein molecules as computational elements in living cells - Dennis Bray” https://www.cs.jhu.edu/~basu/Papers/Bray-Protein%20Computing...

j_m_b
2 replies
18h11m

> Computing power should get there around 2048

We may not get there. Doing some more back of the envelope calculations, let's see how much further we can take silicon.

Currently, TSMC has a 3nm chip. Let's halve it until we get to the atomic radius of silicon, 0.132 nm. That's not a great stopping point, because we're not considering crystal lattice distances, Heisenberg uncertainty, etc., but it sets a lower bound. 3nm -> 1.5nm -> 0.75nm -> 0.375nm -> 0.1875nm. There is no way we can get past 3 more generations using silicon. There's a max of 4.5 years of Moore's law we're going to be able to squeeze out. That means we will not make it past 2030 with this kind of improvement.
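
The same halving argument as a tiny sketch (same 0.132 nm stopping point as above; real scaling limits would bite earlier):

    # Count halvings of the 3 nm figure before hitting silicon's atomic radius.
    feature_nm = 3.0
    silicon_radius_nm = 0.132
    generations = 0
    while feature_nm / 2 > silicon_radius_nm:
        feature_nm /= 2
        generations += 1
    print(generations, "halvings, ending at", feature_nm, "nm")   # 4 halvings, 0.1875 nm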

I'd love to be shown how wrong I am about this, but I think we're entering the horizontal portion of the sigmoidal curve of exponential computational growth.

dyauspitr
1 replies
15h22m

3nm doesn’t mean the transistor is 3nm, it’s just a marketing naming system at this point. The actual transistor is about 20-30nm or so.

j_m_b
0 replies
14m

Thanks for the comment. I looked more into this, and it seems like not only are we in the era of diminishing returns for computational abilities, costs have also now started matching the increased compute, i.e. 2x performance leads to 2x cost. Moore's law has already run its course and we're living in a new era of compute. We may get increased performance, but it will always be more expensive.

hetman
1 replies
4h58m

That may or may not still be too simple a model. Cells are full of complex nanoscale machinery, and not only is it plausible that some of it is involved in the processes of cognition, I'm aware of at least one study which identified nanoscale structures directly involved in how memory works in neurones. Not to mention a lot of what's happening has a fairly analogue dimension.

I remember an interview with one neurologist who stated that humanity has for centuries compared the functioning of the brain to the most complex technology yet devised. First it was compared to mechanical devices, then pipes and steam, then electrical circuits, then electronics, and now finally computers. But, he pointed out, the brain works like none of these things, so we have to be aware of the limitations of our models.

RaftPeople
0 replies
1h53m

> That may or may not still be too simple a model

Based on the stuff I've read, it's almost for sure too simple a model.

One example is that single dendrites detect patterns of synaptic activity (sequences over time) which results in calcium signaling within the neuron and altered spiking.

marcosdumay
0 replies
1d18h

There's a lot of in-neuron complexity, I'm sure there is some cross-synapse signaling (I mean, how can it not exist? There's nothing stopping it.), and I don't think the synapse behavior can be modeled as just more signals.

itsthecourier
0 replies
1d18h

Artificial thinking doesn't require an artificial brain. Compare our own walking system with our car's locomotion system.

The car's engine, transmission and wheels require no muscles or nerves.

creer
0 replies
19h48m

Yes and no on the order of magnitude required for decent AI; there is still (that I know of) very little hard data on info density in the human brain. What there is points at entire sections that can sometimes be destroyed or actively removed while conserving "general intelligence".

Rather than "humbling" I think the result is very encouraging: It points at major imaging / modeling progress, and it gives hard numbers on a very efficient (power-wise, size overall) and inefficient (at cable management and probably redundancy and permanence, etc) intelligence implementation. The numbers are large but might be pretty solid.

Don't know about upload though...

teuobk
27 replies
1d20h

The interactive visualization is pretty great. Try zooming in on the slices and then scrolling up or down through the layers. Also try zooming in on the 3D model. Notice how hovering over any part of a neuron highlights all parts of that neuron:

http://h01-dot-neuroglancer-demo.appspot.com/#!gs://h01-rele...

jamiek88
24 replies
1d19h

My god. That is stunning.

To think that’s one single millimeter of our brain and look at all those connections.

Now I understand why crows can be so smart, walnut-sized brain be damned.

What an amazing thing brains are.

Possibly the most complex things in the universe.

Is it complex enough to understand itself though? Is that logically even possible?

nicklecompte
17 replies
1d19h

Crow/parrot brains are tiny but in terms of neuron count they are twice as dense as primate brains (including ours): https://www.sciencedirect.com/science/article/pii/S096098221...

If someone did this experiment with a crow brain I imagine it would look “twice as complex” (whatever that might mean). 250 million years of evolution separates mammals from birds.

steve_adams_86
6 replies
1d17h

This might be a dumb question, because I doubt the distances between neurons make a meaningful difference… But could a small brain, dense with neurons like a crow's, possibly lead to a difference in things like response to stimuli or "compute" speed, so to speak?

philsnow
1 replies
13h46m

Not a dumb question at all; one of the hard constraints of CPU design is signal propagation time. Even going at 1/3 the speed of light, when you only have on the order of a billionth of a second (clock frequencies in the GHz), a signal can’t get very far.

I haven’t heard of a clocking mechanism in brains, but signals propagate much slower and a walnut / crow brain is much larger than a CPU die.
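
For a sense of scale, a rough comparison of how far a signal travels per "tick" in each case (the clock speed, conduction velocity, and 1 ms timescale are illustrative assumptions, not measurements):

    # How far a signal travels per "tick" in a CPU vs. in a brain.
    c = 3e8                          # speed of light, m/s
    chip_signal_speed = c / 3        # ~1e8 m/s on-chip
    cpu_clock_hz = 4e9
    print(chip_signal_speed / cpu_clock_hz * 100, "cm per CPU clock cycle")    # ~2.5 cm

    axon_speed = 100                 # m/s, fast myelinated fibre (unmyelinated ~1 m/s)
    neural_timescale_s = 1e-3        # rough spike/integration timescale
    print(axon_speed * neural_timescale_s * 100, "cm per ~1 ms 'neural tick'") # ~10 cm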

RaftPeople
0 replies
2h13m

> I haven’t heard of a clocking mechanism in brains

Brain waves (partially). They aren't exactly like a cpu clock, but they do coordinate activity of cells in space and time.

There are different frequencies that are involved in different types of activity. Lower frequencies synchronize across larger areas (can be entire brain) and higher frequencies across smaller local areas.

There is coupling between different types of waves (i.e. slow-wave phase coupled to fast-wave amplitude), and some researchers (e.g. Miller) think the slow wave is managing memory access and the fast wave is managing cognition/computation (utilizing the retrieved memory).

tlarkworthy
0 replies
6h36m

The electrical signals in the brain are chemical reactions, not conduction like in a metal wire. They are slow! Synaptic junctions are a huge number of indirect chemical cascades, not a direct electrical connection; they are even slower! So brain morphology and the connectome have a massive impact on what can be computed. Human twitch responses are handled by the cerebellum, not the cerebrum. It's faster, but you can't do philosophy with the cerebellum, only learn to ride a bike etc. This is the brain doing the best it can for the circumstances.

out_of_protocol
0 replies
1d12h

Regarding compute speed - it checks out. Humans "think" via the neocortex, the thin outside layer of the brain. Poor locality; signals need to travel a lot. Easy to expand, though. Crow brains have everything tightly concentrated in the center: fast communication between neurons, but hard to add more "thinking" matter later (therefore hard to evolve above what crows currently have).

michaelhoney
0 replies
1d15h

Actually I think that's pretty plausible. Signal speed in the brain is pretty slow - it would have to make some difference

JKCalhoun
0 replies
6h7m

And here I was wondering if there were heat issues in a crow brain.

Terr_
3 replies
1d19h

I expect we'll find that it's all a matter of tradeoffs in terms of count vs size/complexity... kind of like how the "spoken data rate" of various human languages seems to be the same even though some have complicated big words versus more smaller ones etc.

sdenton4
2 replies
1d16h

Birds are under a different set of constraints than non-bat mammals, of course... They're very different. Songbirds have ~4x finer time perception of audio than humans do, for example, which is exemplified by taking complex sparrow songs and slowing them down until you can actually hear the fine structure.

The human 'spoken data rate' is likely due to average processing rates in our common hardware. Birds have a different architecture.

Terr_
1 replies
1d14h

You misunderstand, I'm not making any kind of direct connection between human speech and bird song.

I'm saying we will probably discover that the "overall performance" of different vertebrate neural setups are clustered pretty closely, even when the neurons are arranged rather differently.

Human speech is just an example of another kind of performance-clustering, which occurs for similar metaphysical reasons between competing, evolving, related alternatives.

sdenton4
0 replies
16h54m

Humans are an n=1 example, is my point. And there's no direct competition between bird brain architecture and mammalian brain architecture, so there's no reason for one architecture to 'win' over the other - they may both be interesting local maxima, which we have no ability to directly compare.

Human brains might not be all that efficient; for example, if the competitive edge for primate brains is distinct enough, they'll get big before they get efficient. And humans are a pretty 'young' species. (Look at how machine learning models are built for comparison... you have absolute monsters which become significantly more efficient as they are actually adopted.)

By contrast, birds are under extreme size constraints, and have had millions of years to specialize (ie, speciate) and refine their architectures accordingly. So they may be exceedingly efficient, but have no way to scale up due to the 'need to fly' constraint.

djmips
2 replies
20h35m

It's amusing to say that bird brains are on the next generation node size.

sigmoid10
1 replies
10h2m

Would be interesting to see what their wafer yield is. Like, are they more or less prone to mental disease.

ruined
0 replies
2h32m

all the crows can tell i'm crazy, but i've never met an insane crow.

pfdietz
0 replies
1d17h

That shouldn't be too surprising, as a larger fraction of the volume of a brain should be taken up by "wiring" as the size of the brain expands.

jamiek88
0 replies
1d19h

Interesting! Thank you. I didn’t know that.

LargoLasskhyfv
0 replies
1d15h

IIRC bird brains are 'packed/structured' very similarly to our cerebellum.

So one would just need to pick that little cube out of our cerebellum to have that 'twice the complexity'.

ignoramous
3 replies
1d19h

I wonder: if we manage to annotate our brain at this level of detail, and then let (some variant of the current) models train on it, will those intrinsically end up generalizing a model for intelligence?

nicklecompte
2 replies
1d19h

I think you would also need the epigenetic side, which is very poorly understood: https://www.universityofcalifornia.edu/news/biologists-trans...

We have more detail than this about the C. elegans nematode brain, yet we still have no clue how nematode intelligence actually works.

Animats
1 replies
1d19h

How's OpenWorm coming along?

nicklecompte
0 replies
1d2h

Badly: https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-brai... (the comments have some updates as of 2023)

Almost every other cell in the worm can be simulated with known biophysics. But we don't have a clue how any individual nematode neuron actually works. I don't have the link but there are a few teams in China working on visualizing brain activity in living C. elegans, but it's difficult to get good measurements without affecting the behavior of the worm (e.g. reacting to the dye).

m3kw9
0 replies
20m

Physics of the universe is the most complex thing in the universe

layer8
0 replies
1d18h

We don’t know what “understanding” means (we don’t have a workable definition of it), so your question cannot be answered.

oniony
0 replies
21h56m

Hmm, that website does not honour my keyboard layout. Not sure how they managed that.

gofreddygo
0 replies
1d

That is awesome !

the sheer number of things that work in co-ordination to make biology work!

In-f*king-credible !

posnet
15 replies
1d20h

1.4 PB/mm^3 (petabytes per cubic millimeter) × 1260 cm^3 (cubic centimeters, a large human brain) = 1.76×10^21 bytes = 1.76 ZB (zettabytes)
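
Spelled out (the 1260 cm^3 brain volume is this comment's assumption; 1.4 PB is the reported size of the 1 mm^3 dataset):

    # Extrapolate the dataset size to a whole brain.
    bytes_per_mm3 = 1.4e15            # 1.4 PB for the 1 mm^3 sample
    brain_volume_mm3 = 1260 * 1000    # 1260 cm^3 expressed in mm^3
    total_bytes = bytes_per_mm3 * brain_volume_mm3
    print(total_bytes / 1e21, "ZB")   # ~1.76 ZB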

gary17the
12 replies
1d19h

[AI] "Frontier [supercomputer]: the storage capacity is reported to be up to 700 petabytes (PB)" (0.0007 ZB).

[AI] "The installed base of global data storage capacity [is] expected to increase to around 16 zettabytes in 2025".

Thus, even the largest supercomputer on Earth could store only about 0.04 percent of the state of a single human brain. Even all the storage on the entire Internet could hold the state of only about 9 human brains.

Astonishing.

falcor84
5 replies
1d19h

I appreciate you're running the numbers to extrapolate this approach, but just wanted to note that this particular figure isn't an upper bound nor a lower bound for actually storing the "state of a single human brain". Assuming the intent would be to store the amount of information needed to essentially "upload" the mind onto a computer emulation, we might not yet have all the details we need in this kind of scanning, but once we do, we may likely discover that a huge portion of it is redundant.

In any case, it seems likely that we're on track to have both the computational ability and the actual neurological data needed to create an "uploaded intelligence" sometime over the next decade. Lena [0] tells of the first successful brain upload taking place in 2031, and I'm concerned that reality won't be far off.

[0] https://qntm.org/mmacevedo

rmorey
2 replies
1d18h

We are nowhere near whole-human-brain-volume EM. The next major milestone in the field is a whole mouse brain in the next 5-10 years, which is possible but ambitious.

falcor84
1 replies
1d18h

What am I missing? Assuming exponential growth in capability, that actually sounds very on track. If we can get from 1 cubic millimeter to a whole mouse brain in 5-10 years, why should it take more than a few extra years to scale that to a human brain?

rmorey
0 replies
1d14h

assuming exponential growth in capacity is a big assumption!

gary17the
0 replies
1d10h

> we may likely discover that a huge portion of [a human brain] is redundant

Unless one's understanding of the algorithmic inner workings of a particular black-box system is actually very good, it is likely not possible to discard any of its state, let alone implement any kind of meaningful error detection if you do discard.

Given the sheer size and complexity of a human brain, I feel it is actually very unlikely that we will be able to understand its inner workings to such a significant degree anytime soon. I'm not optimistic, because so far we have no idea how even laughably simple, in comparison, AI models work[0].

[0] "God Help Us, Let's Try To Understand AI Monosemanticity", https://www.astralcodexten.com/p/god-help-us-lets-try-to-und...

RaftPeople
0 replies
1d

> In any case, it seems likely that we're on track to have both the computational ability and the actual neurological data needed to create an "uploaded intelligence" sometime over the next decade.

They don't even know how a single neuron works yet. There is complexity and computation at many scales and distributed throughout the neuron and other types of cells (e.g. astrocytes) and they are discovering more relentlessly.

They just recently (in the last few years) found that dendrites have local spiking and non-linear computation prior to forwarding the signal to the soma. They couldn't tell that was happening previously because the equipment couldn't detect the activity.

They discovered that astrocytes don't just have local calcium wave signaling (local=within the extensions of the cell), they also forward calcium waves to the soma which integrates that information just like a neuron soma does with electricity.

Single dendrites can detect patterns of synaptic activity and respond with calcium and electrical signaling (i.e. when synapses fire in a particular timing sequence, a signal is forwarded to the soma).

It's really amazing how much computationally relevant complexity there is, and how much they keep adding to their knowledge each year. (I have a file of notes with about 2,000 lines of these types of interesting factoids I've been accumulating as I read).

shpx
1 replies
16h34m

If you can preserve and scan the tissue in a way that lets you scan the same area multiple times you wouldn't need to digitize the whole thing. Put the slices on rotating platters with a microscope for each platter and read parts of the brain on demand. It's a hard drive but instead of magnets storing the bits of an image of the sample, it's the actual physical sample.

gary17the
0 replies
11h19m

Not if you want to actually execute the state of a human brain in a digital simulation to see how it works and whether it still displays certain abilities such as comprehension and consciousness. Otherwise a digital scan of a brain is just a glorified microscope.

dekhn
1 replies
1d19h

One point about storage: it's economically driven. If there was a demand signal (say, the government dedicated a few hundred billion dollars to a single storage system), hard drive manufacturers could deploy much more storage in a year. I've pointed this out to a number of scientists, but none of them could really think of a way to get the government to spend that much money just to store data without it curing a senator's heart disease.

falcor84
0 replies
1d19h

> without it curing a senator's heart disease

Obviously I'm not advocating for this, but I'll just link to the Mad TV skit about how the drunk president cured cancer.

https://www.youtube.com/watch?v=va71a7pLvy8

treprinum
0 replies
1d4h

AI folks dream about creating a superintelligence to guide our lives, but all we can do is a drosophila's brain.

ibeforee
0 replies
6h59m

We don’t even know how to model that state: do we need the position and velocity and charge of every atom, or can a neuron be approximated by a bfloat?

userbinator
0 replies
1d17h

It's very lossy and unreliable storage, however. To use an analogy, it's only a huge amount of ECC that keeps things (just barely) working.

bahrant
0 replies
1d19h

wow

g4zj
14 replies
1d19h

Is there a name for the somewhat uncomfortable feeling caused by seeing something like this? I wish I could better describe it. I just somehow feel a bit strange being presented with microscopic images of brain matter. Is that normal?

carabiner
4 replies
1d17h

It makes me think humans aren't special, and there is no soul, and consciousness is just a bunch of wires like computers. Seriously, to see that the ENTIRETY of human experience, love and tragedy and achievement, is just electric potentials transmitted by those wiggly cells just extinguishes any magic I once saw in humanity.

sph
0 replies
10h42m

I dunno, the whole of human experience is what I expect of a system composed of 100,000,000,000,000 entities, with quintillions of interconnections, interacting together simultaneously on a molecular level. Happiness, sadness, love and hate can (obviously) be described and experienced with this level of complexity.

I'd be much more horrified to see our consciousness simplified to anything smaller than that, which is why any hype for AGI because we invented chatbots is absolutely laughable to me. We just invented the wheel and now hope to drive straight to the Moon.

Anyway, you are seeing a fake three dimensional simplification of a four+ dimensional quantum system. There is at least one unseen physical dimension in which to encode your "soul"

mensetmanusman
0 replies
23h57m

You might be confusing the interface with the operating system.

bamboozled
0 replies
8h34m

Er, why can’t the wires be the experience?

If the wires make consciousness then there is consciousness. The substrate is irrelevant and has no bearing on the awesomeness of the phenomena of knowing, experiencing and living.

SubiculumCode
0 replies
1d11h

Welcome to the Existential Bar at the End of the Universe

greenbit
2 replies
1d19h

Is it the shapes, similar to how patterns of holes can disturb some people? Or is it more abstract, like "unknowable fragments of someone's inner-most reality flowed through there"? Not that I have a name for it either way. The very shape of it (in context) might represent an aspect of memory or personality or who knows what.

g4zj
1 replies
1d19h

"unknowable fragments of someone's inner-most reality flowed through there"

It's definitely along these lines. Like so much (everything?) that is us happens amongst this tiny little mesh of connections. It's just eerie, isn't it?

Sorry for the mundane, slightly off-topic question. This is far outside my areas of knowledge, but I thought I'd ask anyhow. :)

greenbit
0 replies
1d18h

It feels a bit like being an intruder? There probably is a name for that.

ignoramous
0 replies
1d19h

Trypophobia, visceral, uncanny, squeamish?

dekhn
0 replies
1d17h

When I did fetal pig dissection, nothing bothered me until I got to the brain. I dunno what it is, maybe all those folds or the brain juice it floats in, but I found it disconcerting.

bglazer
0 replies
1d19h

I’m not religious but it’s as close to a spiritual experience as I’ll ever have. It’s the feeling of being confronted with something very immediate but absolutely larger than I’ll ever be able to comprehend

Zenzero
0 replies
1d19h

For me the disorder of it is stressful to look at. The brain has poor cable management.

That said, I do get this eerie void feeling from the image. My first thought was to marvel at how this is what I am as a conscious being in terms of my "implementation", and it is a mess of fibers locked away in the complete darkness of my skull.

There is also the morose feeling from knowing that any image of human brain tissue was once a person with a life and experiences. It is your living brain looking at a dead brain.

blincoln
6 replies
1d18h

Why did the researchers use ML models to do the reconstruction and risk getting completely incorrect, hallucinated results when reconstructing a 3D volume accurately using 2D slices is a well-researched field already?

rmorey
0 replies
1d18h

There are extremely effective techniques, but it is not really solved. The current techniques still require human proofreading to correct errors. Only a fraction of this particular dataset is proofread.

scotty79
0 replies
1d18h

Maybe it's not about reconstructing a volume but about recognizing neurons within that volume.

rmorey
0 replies
1d18h

The methods used here are state of the art. The problem is not just turning 2D slices into a 3D volume, the problem is, given the 3D volume, determining boundaries between (and therefore the 3d shape of) objects (i.e. neurons, glia, etc) and identifying synapses

layer8
0 replies
1d18h

Regarding the risk, as noted in the article, they are manually “proofreading” the reconstruction.

VikingCoder
0 replies
1d18h

I'm guessing a registration problem.

If all of the layers were guaranteed to be orthographic with no twisting, shearing, scaling, squishing, with a consistent origin... Then yeah, there's a huge number of ways to just render that data.

But if you physically slice layers first, and scan them second, there are all manner of physical processes that can make normal image stacking fail miserably.

brandonmenc
4 replies
1d17h

Another proof point that AGI is probably not possible.

Growing actual bio brains is just way easier. It's never going to happen in silicon.

Every machine will just have a cubic centimeter block of neuro meat embedded in it somewhere.

myrmidon
0 replies
1d3h

Hard disagree on this.

I strongly believe that there is a TON of potential for synthetic biology-- but not in computation.

People just forget how superior current silicon is for running algorithms; if you consider e.g. a 17 by 17 digit multiplication (double precision), then a current CPU can do that in the time it takes for light to reach your eye from the screen in front of you (!!!). During all the completely unavoidable latency (the time any visual stimulus takes to propagate and reach your consciousness), the CPU does millions more of those operations.
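
Roughly the arithmetic behind that claim (the viewing distance, clock speed, and multiply latency are illustrative assumptions):

    # Light travel time from screen to eye vs. one double-precision multiply.
    c = 3e8                                  # m/s
    screen_distance_m = 0.5                  # assumed viewing distance
    light_time_ns = screen_distance_m / c * 1e9                # ~1.7 ns

    cpu_clock_hz = 4e9
    fp_mul_latency_cycles = 4                # typical latency of a double multiply
    mul_time_ns = fp_mul_latency_cycles / cpu_clock_hz * 1e9   # ~1.0 ns

    print(light_time_ns, "ns for light,", mul_time_ns, "ns for the multiply")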

Any biocomputer would be limited to low-bandwidth, ultra high latency operations purely by design.

If you solely consider AGI as application, where abysmal latency and low input bandwidth might be acceptable, then it still appears to be extremely unlikely that we are going to reach that goal via synthetic biology; our current capabilities are just disappointing and not looking like they are gonna improve quickly.

Building artificial neural networks on silicon, on the other hand, capitalises on the almost exponential gains we made during the last decades, and already produces results that compare quite favorably to, say, a schoolchild; I'd argue that current LLM-based approaches already eclipse the intellectual capabilities of ANY animal, for example. Artificial bio brains, on the other hand, are basically competing with worms right now...

Also consider that even though our brains might look daunting from a pure "upper bound on required complexity/number of connections" point of view, these limits are very unlikely to be applicable, because they confound implementation details, redundancy, and irrelevant details. And we have precise bounds on other parameters, which our technology already matches easily:

1) Artificial intelligence architecture can be bootstrapped from a CD-ROM worth of data (~700MiB for the whole human genome-- even that is mostly redundant)

2) Bandwidth for training is quite low, even when compressing the ~20-year training time for an actual human into a more manageable timeframe

3) Operating power does not require more than ~20W.

4) No understanding was necessary to create human intelligence-- it's purely a result of an iterative process (evolution).

Also consider human flight as an analogy: we did not achieve that by copying beating wings, powered by dozens of muscle groups and complex control algorithms-- those are just implementation details of existing biological systems. All we needed was the wing-concept itself and a bunch of trial-and-error.

mr_toad
0 replies
1d16h

You’d have to train them individually. One advantage of ANNs is that you can train them and then ship the model to anyone with a GPU.

creer
0 replies
19h52m

No reason for an AGI not to have a few cubes of goo slotted in here and there. But yeah, because of the training issue, they might be coprocessors or storage or something.

theogravity
3 replies
1d18h

> The brain fragment was taken from a 45-year-old woman when she underwent surgery to treat her epilepsy. It came from the cortex, a part of the brain involved in learning, problem-solving and processing sensory signals.

Wonder how they figured out which fragment to cut out.

pfdietz
2 replies
1d17h

I imagine they determined the focus of the seizures by electrical techniques.

I worry this might make the sample biased in some way.

notfed
0 replies
23h32m

Imagine all the conclusions being made from a 1 mm cube of epileptic neurons.

creer
0 replies
19h55m

Considering the success of this work, I doubt this is the last such cubic millimeter to be mapped. Or perhaps the next one at even higher resolution. No worries.

eminence32
2 replies
1d20h

> cut the sample into around 5,000 slices — each just 34 nanometres thick — that could be imaged using electron microscopes.

Does anyone have any insight into how this is done without damaging the sample?

dekhn
0 replies
1d19h

The sample is stained (to make things visible), then embedded in a resin, then cut with a very sharp diamond knife, and the slices are captured by a tape reel.

Paper: https://www.biorxiv.org/content/10.1101/2021.05.29.446289v4 See Figure 1.

The ATUM is described in more detail here https://www.eden-instruments.com/en/ex-situ-equipments/rmc-e...

and there's a bunch of nice photos and explanations here https://www.wormatlas.org/EMmethods/ATUM.htm

TL;DR this project is reaping all the benefits of the 21st century.

dekhn
2 replies
1d19h

Annual reminder to re-read "There's plenty of room at the bottom" by Feynman. https://web.pa.msu.edu/people/yang/RFeynman_plentySpace.pdf

Note the part where the biologists tell him to make an electron microscope that's 1000X more powerful. Then note what technology was used to scan these images.

tim333
1 replies
1d4h

I think it's actually "What you should do in order for us to make more rapid progress is to make the electron microscope 100 times better", and the state of the art at the time was "it can only resolve about 10 angstroms", or I guess 1nm. So 100x better would be 0.1 angstrom / 0.01 nm.

We have made some progress it seems. Googling I see "up to 0.05 nm" for transmission electron microscopes and "less than 0.1 nanometers" for scanning. https://www.kentfaith.co.uk/blog/article_which-electron-micr...

For comparison the distance between hydrogen nuclei in H2 is 0.074 nm I think.

You can see the shape of molecules but it's still a bit fuzzy to see individual atoms https://cosmosmagazine.com/science/chemistry/molecular-model...

dekhn
0 replies
1d3h

Resolution is only one aspect of EM that can be optimized.

dvfjsdhgfv
1 replies
1d8h

Why do these neurons have flat "heads"?

ewchris
0 replies
15h6m

Edge of the dataset.

nakedneuron
0 replies
9h11m

> the model showed neurons with tendrils that formed knots around themselves

I wonder if this plays into the mechanism of epilepsy. Self-arousal...?

Anybody qualified to comment?

idontwantthis
0 replies
1d15h

> Jain’s team then built artificial-intelligence models that were able to stitch the microscope images together to reconstruct the whole sample in 3D

How do they know if their AI did it correctly or not?

greentext
0 replies
1d16h

It looks like spaghetti code.

fractal618
0 replies
1d19h

Fascinating! I wonder how different that is from the mind of a man haha

bugbuddy
0 replies
1d17h

Based on the picture of a single neuron, the brain sim crowd should recalculate their estimates for the needed computing power again.

CSSer
0 replies
1d20h

For some people, this is all you need (sorry, couldn’t resist)!