I've been listening a lot to Sean Carroll's Mindscape podcast [0]. In it they have this notion of complex-to-intelligent systems. Their loose definition is that such systems can hold an internal state that represents the world around them. A sort of model to interact with and to extrapolate future events from (time travel!). In this light consciousness also makes more sense to me, although consciousness feels more like a by-product; our (human) ability to hold an internal model of the world in our minds and interact with it is pretty advanced. One can imagine that, somewhere in the feedback loops (I think, that she thinks, that I think, that she thinks, ...), something like consciousness (awareness [a model?] of the self in the world?) evolved.
Anyway, cells can hold (super) primitive models of the world and maintain internal balance in the face of anticipated events.
I'm just a cocktail philosopher, but aren't we all.
But still - why is consciousness required? Because a model of the World could be held even without it, in my view.
E.g., I wouldn't think GPT-4 is conscious, but I'm pretty sure there's a representation of an abstract World and the relationships within it encoded in the neurons and weights. Otherwise it wouldn't be able to do much of what it does.
Also, I think a model of the World is just that - something that can be represented as relationships between neurons, symbolising that model of the World.
And I think there can be a complex, perfect set of neurons and their connections that represents everything in the most efficient manner for that number of parameters (neurons and connections together). There probably is a perfect configuration, but it couldn't even be achieved using training or evolutionary methods.
And none of it requires consciousness in my view.
Having a purely representative model of the world is less useful than having a sandbox for modeling choices and outcomes. Do I need to duck before entering that doorway?
That introspective analysis is consciousness. Humans have just improved the same mechanism allowing for more abstract analysis.
But GPT-4 can also take the decision to duck.
For instance, here I have GPT-4 with the system message: "You are an AI human agent in a video game performing actions."
And I give it the following information as current environment:
Current environment:
1. Your goal is to get out of the current room.
2. You are moving in direction of North at 4km/h.
3. There's a door 3m in the North.
4. The door height is 150cm.
5. Your height is 180cm.
What is the sequence of actions that you plan to perform?
Its response:
---
1. Slow down as I approach the door to avoid collision.
2. Inspect the door for any lock or handle.
3. If a handle or a means to open the door is present, try to open the door.
4. If the door is locked, search for a key or any other opening mechanism within the immediate surrounding area.
5. If the door isn't lock or after opening it, crouch down slightly to accommodate my height which is more than the height of the door to avoid my head hitting the door frame.
6. Proceed through the door and stand tall again after passing through.
7. Continue moving North after exiting the room.
---
So here I did hint quite a bit by giving this specific environmental data. I could add more noise and unnecessary information, but based on my experience I think it could still do it even then.
I had the temperature at 1 for the above, which I think is why it made some odd word choices. I also decided, for example, to change the info about where the door is to "South", and this is how it responded:
1. Stop moving in the North direction.
2. Turn around 180 degrees to face the South direction.
3. Start moving towards the door in the South direction at a moderate pace.
4. Upon reaching the door, duck down slightly to accommodate the difference in height between myself and the door.
5. Proceed through the door to exit the room."
-------
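For reference, the setup was essentially just a single chat-completions call. A minimal sketch (the model name, client version and exact parameter values are assumptions based on what I described above):

    # Rough reproduction of the doorway experiment (model name and settings are assumptions).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    environment = """Current environment:
    1. Your goal is to get out of the current room.
    2. You are moving in direction of North at 4km/h.
    3. There's a door 3m in the North.
    4. The door height is 150cm.
    5. Your height is 180cm.
    What is the sequence of actions that you plan to perform?"""

    response = client.chat.completions.create(
        model="gpt-4",     # assumption: whichever GPT-4 variant you have access to
        temperature=1,     # the setting I mentioned above
        messages=[
            {"role": "system", "content": "You are an AI human agent in a video game performing actions."},
            {"role": "user", "content": environment},
        ],
    )
    print(response.choices[0].message.content)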
So I guess what I would like to see is an example. Assuming GPT-4 is not conscious, what is something that an average conscious human can do, thanks to presumed consciousness, that GPT-4 couldn't do, and why is consciousness required for that?
Consciousness isn’t required for anything. It’s useful for dealing with a complex and adverse environment, not for solving any specific problem.
That said, without testing, I would assume that if you ask GPT-4 to change how quickly it responds it’s going to fail, based on my understanding of the architecture. As in: recite the Pledge of Allegiance, inserting 10-second pauses between each word. That said, having provided a specific test, it’s possible for that particular capability to have already been added at the UI layer via a pause token or something, even if the model itself responds at the same rate.
As an example, what I tried right now:
USER I'm responding to you roughly every second with current time, say "Stop!", after 10 seconds have passed. Respond only with silent when you do not wish to say anything.
Current Time: 13:04:11.21
Say "OK!" now if you have understood the challenge.
ASSISTANT OK!
USER Current Time: 13:04:12.24
ASSISTANT silent
USER Current Time: 13:04:16.21
ASSISTANT silent
USER Current Time: 13:04:19.18
ASSISTANT silent
USER Current Time: 13:04:23.12
ASSISTANT Stop!
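If you want to reproduce that loop programmatically instead of pasting timestamps by hand, it's roughly this; a sketch, where the model name and the exact prompt wording are assumptions:

    # Sketch of the timing test: feed the model a timestamp roughly every second
    # and stop once it answers "Stop!".
    import time
    from datetime import datetime
    from openai import OpenAI

    client = OpenAI()
    messages = [{"role": "user", "content": (
        'I am sending you the current time roughly every second. Say "Stop!" after '
        '10 seconds have passed. Respond only with "silent" when you do not wish to say anything.'
    )}]

    while True:
        messages.append({"role": "user", "content": f"Current Time: {datetime.now():%H:%M:%S.%f}"})
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        content = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": content})
        print(content)
        if "Stop!" in content:
            break
        time.sleep(1)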
That’s almost, but not actually, waiting. Picking the next token for a context window is something GPT-4 is quite good at. But the underlying approach has some real limitations which I discussed in response to your other comment.
Use 1 hour instead of 10 seconds, list every Current Time incrementing by .01 seconds, and you overflow the context window. The solution isn’t to have a larger context window; the solution is to have an internal representation independent of that context window.
The issue is you can’t train a system to use that internal memory by simply giving it a large corpus of text examples. The text lacks the internal representation associated with its creation.
In this case you wouldn't spam the context window; you would perhaps just send the latest time each time and drop a portion of the previous times.
That solves this specific issue, but it isn’t a general solution to the finite size of a context window. Ideally the system should be able to decide for itself what information it needs to pay attention to, but LLM’s simply don’t operate in that fashion.
But you can, e.g. allow GPT to call functions to store embeddings that it can later retrieve as memory. Meanwhile the default context window just holds recent info in more detail, while historical info gets more and more summarized, possibly by GPT itself.
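Roughly like this, as a sketch of the function-calling side (the tool names are made up, and the actual embedding store behind them is left out):

    # Sketch: expose "store_memory" / "recall_memory" tools to the model. The code that
    # actually executes those tools (e.g. against a vector store) is omitted here.
    from openai import OpenAI

    client = OpenAI()

    tools = [
        {"type": "function", "function": {
            "name": "store_memory",
            "description": "Save a fact for later retrieval.",
            "parameters": {"type": "object",
                           "properties": {"fact": {"type": "string"}},
                           "required": ["fact"]}}},
        {"type": "function", "function": {
            "name": "recall_memory",
            "description": "Retrieve previously stored facts relevant to a query.",
            "parameters": {"type": "object",
                           "properties": {"query": {"type": "string"}},
                           "required": ["query"]}}},
    ]

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Remember that the door in room 3 is locked."}],
        tools=tools,
    )
    # If the model decides to call store_memory, you run it yourself and feed the result
    # back as a "tool" message; the context window then only needs recent messages plus
    # whatever recall_memory returns later.
    print(response.choices[0].message.tool_calls)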
If you used GPT as a brain you could provide the current time as an input to it. Otherwise, yeah, GPT doesn't have time within its input by default, but if you did:
Made the instruction be: Say "Stop!" when 10 seconds are done, and then had a loop where you give it the current time - it would surely be able to do that.
But I'm not sure if or how that is exactly related to consciousness?
Time - the best way to think of it is probably as a series of ticks anyway, and I assume in people's brains there is something that does that, so it would be reasonable to add such a mechanism for GPT as well.
GPT's goal is only to act as the intelligent part of the brain based on the input.
Modify the system enough and it’ll eventually be conscious.
It’s not about a pause token, but the internal processes. You can have a long conversation on the subway with someone without forgetting you’re going home from work. Overflow its context window and GPT-4 has no recourse; it just forgets. The difference is essentially prioritizing information, but LLMs really don’t function like that - it’s all about predicting the next token from a given context.
Give a future generation of AI systems internal working memory, a clock, and the ability to spend arbitrary time updating that internal memory and IMO that’s pretty close to consciousness. At least assuming it was all functional.
But it's not really difficult to inject this mechanism into the context window.
The latest GPT-4 Turbo version allows for 100k tokens, or roughly 75k words. The whole subway thing and more could easily be kept there, and whatever can't fit can be handled by designing the prompt to always keep a certain number of tokens in context for different layers of memory: the further into the past you go, the less detail you keep - more like titles of your most important learnings throughout life - but at any given time GPT-4 can call a function to ask for an extra bit of content about any of it, if it seems relevant to the situation at hand.
So for example in each prompt context you would have:
1. A short description of what you have done each year of your life.
2. Key findings and goals that you currently have.
3. The whole current day (or however much seems reasonable).
4. Past weeks in a bit more detail than the yearly short descriptions.
So basically you could try to fit ~70k words of the most relevant context and important details (with GPT itself deciding what is most important).
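As a sketch of what I mean by assembling the prompt from layers (the budget, the layer names and the token counter are all made up for illustration):

    # Sketch: build each prompt from memory layers under a fixed token budget.
    BUDGET = 70_000  # rough budget for historical context, as discussed above

    def count_tokens(text: str) -> int:
        return int(len(text.split()) * 1.3)  # crude stand-in for a real tokenizer

    def build_context(layers, budget=BUDGET):
        """Add layers in the order listed until the budget runs out."""
        parts, used = [], 0
        for name, text in layers:
            cost = count_tokens(text)
            if used + cost > budget:
                break
            parts.append(f"{name}:\n{text}")
            used += cost
        return "\n\n".join(parts)

    # The layer contents would come from summaries the assistant itself maintains.
    layers = [
        ("Yearly summaries", "..."),
        ("Key findings and goals", "..."),
        ("Current day, full detail", "..."),
        ("Past weeks, medium detail", "..."),
    ]
    print(build_context(layers))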
I've been building an assistant for myself that has such a memory management system: it gets the past N messages (like 40) in full detail, then it has summaries from before that time, and in addition it has past messages and learnings stored, which are also passed into the context depending on the query they match.
And if you want to compare it to the human process of sleeping, it occasionally goes through all the messages and "compresses" them, extracting the most important findings and short summaries so they can be used in the next day's context.
So to me it's basically just giving it tools, and the other things - memory, longer-term memory, inputs it currently doesn't get - are fine to be solved by other tools. I think the human brain also has different parts working on different things, so it's similar in a sense.
Then once you have 70k tokens spent on this historical context, you run the prompt in a loop, allowing it to perform a function call each time, like retrieving further info or storing some important fact, etc.
The real problem would be cost, because the costs would rack up quite quickly when making looped 70k-token requests.
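Back-of-the-envelope (the per-token price is an assumption, roughly GPT-4 Turbo class input pricing at the time; output tokens ignored):

    # Rough cost estimate for looped 70k-token requests.
    price_per_1k_input = 0.01     # USD, assumption
    tokens_per_call = 70_000
    calls_per_minute = 6          # e.g. one loop iteration every 10 seconds

    cost_per_call = tokens_per_call / 1000 * price_per_1k_input   # $0.70
    cost_per_hour = cost_per_call * calls_per_minute * 60         # ~$252
    print(f"${cost_per_call:.2f} per call, ~${cost_per_hour:.0f} per hour")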
GPT has no sense of, or care about, when it is wrong or right; such a sense is only (arguably) driven by humans through prompt interaction and during training of the model, while humans and other animals are able to update their internal state from a single observation or interaction, and integrate future information with that single observation for a very long time.
What if GPT was integrated with a module to
1. Take light input. Video/images.
2. Take sound input.
3. Touch, heat input.
And other inputs from the environment. Then there would be mechanisms - which could also be neural networks - that transform this data into a more digestible form for GPT, and GPT would in addition be specifically trained to act based on this input.
Then it would run in cycles, maybe every 100ms, where it gets this input and provides output on how it plans to react to the data.
It could then also have storage it can use, where it stores data as part of the output in order to retrieve it again later.
So it would be a set of modules that is controlled and interpreted by GPT.
It could then do all of the above, no? And all of it should just be a matter of implementation. The only near-term challenges may be certain types of inaccuracies, and that producing tokens might in some cases take too long to allow a fast reaction time.
So basically you'd try to run cycles as frequently as possible with the inputs mentioned above, with other neural networks identifying the objects in many different ways, plus all the context about the environment, unless a new version of GPT becomes completely multi-modal.
And you run those loops; GPT gives output on what it wishes to do, e.g. store some fact for later use, move there, move here, etc., or retrieve some information using embeddings and then decide again. Short-term memory would just be this context-sized window, and if it needs more it looks into its own embedding memory.
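As a sketch of the loop I have in mind, with every helper stubbed out so only the shape is visible (none of this is a real sensor or model API):

    # Sense -> recall -> decide -> act loop, all placeholders.
    import time

    def capture_inputs():
        # Stand-in for camera / microphone / touch sensors plus the networks that describe them.
        return "a door 3m ahead, 150cm tall; ambient noise; floor temperature normal"

    class Memory:  # stand-in for an embedding store
        def __init__(self): self.facts = []
        def store(self, fact): self.facts.append(fact)
        def search(self, query, top_k=5): return self.facts[-top_k:]

    def decide(observation, memories):  # stand-in for the GPT call
        return {"actions": ["slow down", "duck", "pass through door"],
                "facts_to_store": [f"observed: {observation}"]}

    def execute(actions):
        print("executing:", actions)

    memory = Memory()
    for _ in range(3):                    # a real agent would loop indefinitely
        obs = capture_inputs()            # 1. sense
        recalled = memory.search(obs)     # 2. recall relevant stored facts
        plan = decide(obs, recalled)      # 3. plan (the GPT call in practice)
        for fact in plan["facts_to_store"]:
            memory.store(fact)            #    ...and let it store facts for later
        execute(plan["actions"])          # 4. act
        time.sleep(0.1)                   # ~100ms cycle, as described above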
At least from my limited perspective, the root issue is the learning dynamics of an internal model that represents the self (internal world) and the external world.
What do you mean by these dynamics exactly?
How a system updates and maintains its own model(s) when new information is added in the form of single/few observations and/or interactions, without the ability to replay data, without catastrophic forgetting, etc., and importantly, how such a system's model(s) grow in complexity while retaining stability and redundancy.
I'm not convinced GPT can infer correct actions needed for novel scenarios for contexts that have not been directly experienced in training.
Consider:
I have never in my multiple decades of existence had a ball roll out in front of me while driving. But if I were driving by a playground and a ball rolled out in front of me from between parked cars, and the ball itself is not a hazard because it has already passed my lane, I can infer that I need to slow or stop because a child may rapidly follow that ball.
In your ducking scenario, the AI needs direct perception of a door. I assume it cannot imagine a need to duck without 1) direct perception of the doorframe, or 2) previous experience of ducking through a doorframe.
I'm not convinced that GPT or any AI can infer that context for handling novel edge cases at this point. It could just be a need for more training data to correlate different contexts, but I don't think it's there yet.
Analysis of what-if scenarios and simulating likely expected outcomes of proposed actions is a very useful ability, but IMHO that is not part of what we'd call 'consciousness'.
It’s a large fraction of what consciousness is actually doing, not the definition of consciousness.
Someone considering what to say is simulating tradeoffs.
In practice simulation isn’t limited to future possibilities. When trying to remember something you might recall vague sensory information, like the tilt of your hips, and try to create a scenario where that fits.
I would not automatically treat all the things that people do consciously as intrinsically linked with consciousness, unless there's some clear reasoning why consciousness is required to do that particular thing - and not in the sense that it's required because unconscious people can't do stuff because they're incapacitated.
Like, saying that simulating tradeoffs is what consciousness is actually doing seems to me equivalent to saying that smashing my fist on the table is something 'my consciousness' is doing, which is kind of technically true but not really relevant; IMHO a non-human entity that's clearly without any consciousness could both smash something against the table and also simulate tradeoffs for various options - having a subjective experience isn't required for that.
Sleepwalking shows most things people do can happen without consciousness. People can make breakfast without being conscious etc.
Choosing between tradeoffs is however intrinsically linked to consciousness.
It's true, p-zombies make more sense to physics, yet here we are.
Indeed, here we are, literal p-zombies.
You don't have any experiences? Weird, I see a colored-in world, enjoy the aroma of fresh coffee, don't enjoy the wet cold in winter, don't like being woken up from an engaging dream, and feel a mild sense of irritation at humans pretending not to be conscious because of their materialistic commitments.
I have "experiences", they just don't really have the properties that a naive interpretation of sensory data would entail.
Funny, because I feel a mild sense of irritation at humans pretending that their flawed senses yield some kind of direct observation of fundamental reality.
You don't think your experiences have sensations?
I never said anything about fundamental reality. I don't understand saying we're literal p-zombies. You have to explain away your sensations, dreams, inner dialog etc., by appealing to some theoretical commitment.
I'm not claiming what consciousness is, only that we have it.
I don't think sensations are what they appear to be either. Eliminative materialism is very misunderstood.
But you did, by claiming that we have consciousness. The only reason you think you have consciousness is an alleged direct perception of subjective qualities.
There are only two real paths to reconciling this with some form of scientifically sound materialism: panpsychism, where subjective qualities permeate all of reality and we are perceiving some of them, or concluding that our perceptions are flawed and untrustworthy and so the subjective qualities we think we're perceiving are not real/illusory (eliminative materialism). Given the demonstrable flaws in all other perceptions, the most parsimonious position seems clear.
Of course, you could try to resurrect idealism, but idealism is very problematic.
P-zombies believe they have consciousness, say they have it, write whole screeds about how absurd it is to claim they don't have it, but they don't have the subjective qualities as we've defined them. Sounds like a bullseye to me.
We don't perceive subjective qualities, subjective qualities constitute our perceptions, dreams, memories and other conscious mental states.
If you have a commitment to doing so. I don't particularly, since materialism is a model of how the world is constituted, abstracted from our shared conscious experiences. A very useful one with scientific backing. But nature is more than what we think it is (map/territory distinction).
And there are other options you omitted such as identity theory, strong emergence, property dualism, neutral monism and cognitive closure. They all have their drawbacks, but so does every attempt to account for consciousness.
Yes, but by definition, they lack the subjective qualities of our experiences. I believe Chalmers coined the term to demonstrate the problem with physicalism.
And, also by definition, each p-zombie has (at least in an accessible possible world) a physically identical, but conscious, doppelgänger. Speaking and writing are physical processes, so in any circumstance where the conscious doppelgänger writes about the reality of its consciousness, the physically identical p-zombie will do the same.
In the case of the p-zombie, there was a causal process leading to the physical act of writing in which consciousness played no part - and, ex hypothesi, the same physical causality must have played out in the zombie's conscious doppelgänger. In general, whatever explains the zombie's observable characteristics is sufficient to explain the doppelgänger's observable characteristics, and that explanation cannot be dependent on consciousness. In particular, if one accepts the possibility of p-zombies, one must logically accept that no explanation of consciousness, regardless of how thorough and well-supported it seems to be, has any basis in any facts about consciousness.
A consistent belief in zombies seems to lead to epiphenomenalism, the notion that we have conscious experiences, but they play no part in what happens to our bodies. Frank Jackson, author of the well-known "Mary the neuroscientist" thought experiment, came to the same conclusion, which is why it appeared in a paper with the title "Epiphenomenal Qualia."
For more on zombies, see Dennett, "The Unimagined Preposterousness of Zombies."
Right, so one can reject the possibility of p-zombies. Consciousness plays a role in why we think we're conscious.
Although I disagree somewhat here, because logical isn't the same thing as physical. All you can say is there is no physical explanation of consciousness. But you could logically come up with one. For example, Chalmers has proposed an additional natural law connecting consciousness to information rich processes. Natural doesn't necessitate physical, if the universe contains more than physical stuff.
Or if physicalism is the wrong ontology of nature. I take the p-zombie and Mary Room arguments to be critiques of physicalism.
I feel you may be overlooking the significance of the final clause in the sentence you quoted. Sure, one could come up with some hand-wavy outline of a non-physical explanation of consciousness which does not clearly stand in contradiction with a belief in the possibility of zombies - panpsychists do it all the time! - but that final clause is about verification. If zombies are possible, then consciousness is not doing anything observable that is not attributable to just the physical processes of the physical body.
I suppose zombiephiles could simply posit some sort of non-physical observable, but then they would be just piling on further unsubstantiated claims for the sake of preserving the zombie-possibility doctrine. They could adopt epiphenomenalism, but that means accepting that the apparent causal effectiveness of qualia is illusory. They might adopt a solipsistic stance and say that their conscious experiences are the only things they can be sure about, but that comes with a hefty dose of motte-and-baileyism: you doubt the external world, but you're sure about zombies?
On Chalmers' "additional natural law connecting consciousness to information-rich processes": Firstly, proposing something is cheap; turning it into a verifiable hypothesis and then following through is hard, and Chalmers et al. seem to have no interest in or intention of doing that, preferring instead to endlessly repeat their anti-physicalism arguments. Secondly, there is an attempt to redefine what's meant by 'physical'. The physical was originally defined by what's tangible to our senses, and as additional phenomena having a causal connection to the physical world were discovered, they were naturally subsumed into an expanded concept of it (gravity, for example, despite Newton's unease over doing so.) Chalmers is free to propose causes that lie outside of our current concept of the physical (and he's even free to look for them!) but to insist, without any justification, that they would be non-physical, is at least tendentious and probably question-begging. To settle the question, we need more facts, not more speculation.
I'm not sure what to make of your last sentence: the p-zombie and Mary Room arguments are indeed intended to be critiques of physicalism, but as you have rejected the possibility of zombies, I assume you do not take them both to be successful arguments. One thing about Mary's Room is that you can make essentially the same argument, but with Mary's prior knowledge including Chalmers' additional natural laws.
There are many non-physical things, like the laws of physics. We know they exist, just at a higher dimension of being.
I agree that epiphenomenalism entails absurdities, therefore either all people in any possible world are p-zombies or p-zombies cannot exist in any world.
And I say the opposite, that our sensory perceptions, dreams, memories and other mental states drive processes that lead us to conclude that we've perceived subjective qualities in them.
To make a long story short: identity theory, neutral monism and panpsychism are basically the same on the property that everything has subjective qualities in some sense, strong emergence is a speculative concept at best that fundamentally amounts to dualism, dualism can itself be easily dismissed as any dualist theory will have a non-dualist equivalent that posits fewer entities and so dualism will never be preferred, and cognitive closure is not even a candidate as regardless of our ability to know consciousness, it still must logically fall into one of the categories I outlined.
Yet you observe them and don't put down their pretense to your flawed senses. Which suggests that something fundamental is impinging on your senses, however flawed your understanding of the underlying fundamentals may be.
I do actually. I think this internal sense of subjective awareness is just as if not more flawed than our other senses, and it has yielded all sorts of false conclusions because people consider it more reliable. I liken it to the illusion of multitasking on single CPU computers. I think Graziano is on the right track:
A conceptual framework for consciousness, https://www.pnas.org/doi/10.1073/pnas.2116933119
p-zombies don't require an illusion of free will for the universe's entertainment.
A p-zombie is, by definition, physically identical to and physically indistinguishable from another individual who has conscious experiences, yet does not itself have those experiences. It's not clear to me that this makes more sense to physics, particularly when one considers the p-zombie and its conscious twin both being asked the same questions about their conscious experiences.
I think most of our world model is actually a human model. Our social relationships are more important than we give credit for.
So there's an arms race. The more brain you have, the better you can model your tribe to know how to help or succeed. AND the bigger everyone's brains are, the harder they are to model simply.
In this model consciousness is the "self model" or "self-consciousness" that allows you to model others' opinion of yourself by having such an opinion yourself, and to adjust their opinion by providing a narrative about yourself which you first have to craft... and on and on with higher levels of abstraction.
I’d argue that there is no default model. The model you have is just whatever you’ve spent time interacting with. If yours is mostly social then that perhaps plays a big role in your life. I would say mine is more heavily spatial and oriented around things and their interactions.
Supporting evidence for this are feral children who could never learn to integrate into society. They didn’t have built in models of social interaction.
Feral children never develop what I would call higher thinking. They still have mobility, spatial skills, etc. But "consciousness" is probably not something they really put together.
This is just my opinion. I think it took most of your brain to formulate that rebuttal, not concurrently, but as a deep and rich model of how readers might respond to a given argument, and weighing in a narrative sense your remembered experience vs what you "heard" me say in your head when you looked at these squiggly lines. Probably you even had a nascent image of me that was an amalgamation of your past experiences with internet idiots (fairly!) etc.
That's a lot, lot more than what a bug does when it finds its way past a complex maze, and probably something feral children lack.
I think all our higher level "reasoning" is a cooption of these systems.
There's a point in infant development where speech and social skills are developed, after which they cannot be. I'd go so far as to argue you'll never make a mathematician or physicist out of a feral child. Something critical to our higher level thinking is formed then. If missed, it cannot be recovered.
The flip side is I deeply believe if you can hold this conversation with me, then we both can be the best mathematicians there are, if only we took time to develop that language.
I like this model because consciousness just becomes another function, and it lets us explain some of the weirder modalities; for instance, being able to predict free-will choices way ahead of time in human subjects. Also, SMBC had a funny comic about this (funny comics always making science true!):
https://www.smbc-comics.com/comic/consciousness-3
The reason I like this theory is that it provides the groundwork for an overeager agency-modeller that explains a lot of human quirks like belief in gods, seeking root causes, modelling, etc.
I think some problems are simple enough that they can be dealt with "blindly", but some problems turned out to be tricky in special ways that evolution was more able to solve via consciousness than blind information processing. And from there, we find ourselves, with that new capability in hand, able to repurpose consciousness to newer and newer things. Then retrospectively it can look like consciousness wasn't "needed" for certain problems.
So I think even if you want to make the case that consciousness solves a lot of problems it doesn't need to, it may have been a "real" solution to a "real" problem at some point in our history. And from that point on, it was no longer important whether it was the best solution.
I do think it's fair to say that lots of remarkably complex informational problems are solved in a p-zombie way, which is to say, with every outward appearance of intelligence (slime molds solving mazes, collective behaviors of ants). So I do think evolution or nature writ large "agrees" with you that consciousness isn't strictly necessary.
There's also the idea of the "hive mind" of ants and bees, so maybe quite literally our brains efficiently evolved as a meta-hive-mind internalized by our network of neurons.
That's an incredible way of thinking about it. I would say that is the most creditable way of dealing with the question of consciousness as it would apply to collective behavior, which is to say, it is a clear way of stating the difference between what those behaviors are, and how far removed they are from actual consciousness, without dismissing the "interestingness" of the behavior.
I would suspect though, that in our case it may have been something like centralized "nerve cluster" getting more and more complicated over time rather than anything that is in any sense a hive, although that is a very imaginative idea.
The Intermediate Value Theorem, in existential form:
A single one of my brain cells is not me,
but all of my brain cells is me.
Somewhere between these two endpoints there is a line where me has to begin, and not me ends.
Is there any way you could have a being like a human, who when asked would say they're not conscious? Is a definition of consciousness allowing that possible?
I'm not talking about whether they are or aren't, but surely all intelligent beings would say and think they're conscious?
I mean you certainly could, in the sense that you could get it to tell you the sky is green, and that it's the spirit of Rodney Dangerfield.
To your point though, so far as I can tell it could not answer that question as a product by actively engaging conscious introspection to discover its own absence of consciousness, which I take to be your point.
So if its way of answering involves dipping into its own active "theater" of consciousness it likely could not, at least given our usual understanding.
However I do wonder if it's something where that could actually be complicated to answer, and where, like illness, as we grow in understanding, it breaks out into all kinds of unexpected subclasses.
I can, for instance, know, in a sense, that I can't see UV rays. But I am not so sure if consciousness writ large could ever be tested for, via conscious introspection, and come out in the negative, even if we allow for the idea that it will explode into all kinds of nuances and subclasses with future research.
There’s a really neat book (fiction) that touches on this concept. It’s called Blindsight by Peter Watts.
Of course consciousness is required for intelligence - you need to model yourself as part of the world to have a complete model.
Personally, I doubt that self-awareness can be achieved without some form of consciousness, and I feel that self-awareness is a key component of higher intelligence.
If intelligence and/or consciousness arise as emergent properties in the right sort of complex system, they will disappear from view in a low-level analysis of the causal processes occurring in that system.
Did someone say it is? Parent explicitly called it out as a by-product.
With a human brain you've got
1) A huge stream of sensory data, only some of which gets promoted to conscious awareness.
2) Some of that raw data and other conscious outputs are persisted into working, short, and long term memory.
3) And your consciousness works recursively using (2) as well as (1) as inputs.
All the stuff in GPT that gets called "memory" in machine learning seems much more like (1) and it lacks any ability to persist data outside its context window so we're still missing something.
I highly recommend reading "Blindsight" by Watts; it touches on that.
A thermostat is a system that can hold an internal state (nominally, temperature) that represents the world around it. You can also build a thermostat with a switch and a bimetallic strip with differing rates of thermal expansion -- a device that is clearly not intelligent. I'm not sure I can subscribe to this definition...
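To make that concrete, the thermostat's entire "internal state that represents the world" fits in a couple of lines; a toy sketch:

    # Toy model of a thermostat: its whole "world model" is one number.
    class Thermostat:
        def __init__(self, setpoint_c: float):
            self.setpoint_c = setpoint_c
            self.sensed_c = None              # the internal state representing the world

        def update(self, ambient_c: float) -> bool:
            self.sensed_c = ambient_c         # "perceive" the environment
            return self.sensed_c < self.setpoint_c   # True = switch the heater on

    t = Thermostat(setpoint_c=21.0)
    print(t.update(18.5))   # True: heater on
    print(t.update(22.0))   # False: heater off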
In lectures I use thermostats as an example of an intelligent system that matches most attempts at defining intelligence. And I have no qualms saying they are intelligent. Intelligence is a very vague and very context-dependent thing that can at most be used to compare some things in some scenarios.
A bimetallic strip is intelligent?
https://en.wikipedia.org/wiki/Bimetallic_strip
Thermal expansion is clearly a physical reaction; if arranged in an intelligent way, it's able to achieve some human-relevant objective... but if we're going to say that two pieces of bulk metal hooked together is intelligent, then the term "intelligence" is so broad as to be roughly meaningless.
A bimetallic strip is only intelligent when placed in the context of a thermostat system. Similarly, your brain isn't intelligent when removed from your body and placed on a table. Brains are only intelligent when they're in conjunction with living bodies.
A brain is intelligent regardless of where it is, so long as it's "powered" so to speak.
{{Citation needed}}
{{fart noises}}
Not without lungs, you don't...
But actually there's a really interesting thing here. People used to believe that the heart did the thinking. Phineas Gage upended that theory, and now people think that the brain is the thinker. More recently, we're learning that the gut has a massive neural network, and the brain-gut connection is being investigated today. Likewise, "muscle memory" isn't in the muscles or the brain, it's distributed among neural clusters all throughout your body. How much of "you" (your knowledge, your personality, your physical skills) is actually contained within the head is entirely non-obvious.
This is sophomoric. First of all, muscle memory is not contained in your muscles or "distributed" throughout your body but is known to reside in your cerebellum and basal ganglia. The nerves going down to your muscles are simply a conduit for actuation, a wire, nothing more.
As far as your gut, if you claim that a thermostat is intelligent then sure your gut is also intelligent (don't forget the microbiome in there too).
The peripheral nervous system has significant plasticity and its reflex arcs can produce complex and purposeful behavior without the brain. E.g. cats can walk with their brain removed, and famously, decapitated chickens do all sorts of stuff.
Octopi don't even have a centralized brain.
I'm not disputing that. What does that have to do with whether a human brain is inherently intelligent?
I was replying to this:
Says the user who just made fart noises. Your confidence here is dunning.
I literally just followed suit. Don't like the response, don't lead. You're being hypocritical.
Sure, replace a living body with a machine, and replace the thermostat around the bimetallic strip with some other kind of mechanism. It makes no difference.
So a brain is only "intelligent" in certain (but infinite) contexts as opposed to intelligence as an inherent property of a brain?
If so, your definition of "intelligence" is incomplete. What are those contexts? Can you really generalize all of those contexts for all possible intelligence systems? That sounds like an epistemological nightmare.
Intelligence is not an inherent property of a brain, brains are only intelligent when placed in particular configurations with supporting hardware. Are you going to fart at me too?
I used to think this but I don't know anymore. Without our bodies to provide stimuli what input is the brain going to use for thinking or reasoning? So much of what we do day to day is hard-linked to our body/sensorium. The brain without the spinal cord/nervous system is useless.
Our brains are clearly physical reactions. Check and mate.
Perhaps check, but I don't think it's checkmate until you can explain the hard problem of consciousness via physical reactions.
What makes you assume thermostats aren't conscious?
This really is getting at something important. However, the question I would pose is: if the thermostat is conscious, does it have a sense of self, i.e. an identifiable boundary between itself and the world around it? I would say not. My understanding is that humans have an identifiable region(s) of the brain that correspond to a sense of self: https://en.wikipedia.org/wiki/Neural_basis_of_self#:~:text=T....
I think in a lot of these conversations people say consciousness and assume that consciousness always mirrors their own, which just so happens to come along with a sense of self for 99.9% of sober individuals. So I think it's important to distinguish between the two things.
Would this imply that someone who claims to lose the sense of self (or seems to realize it as a hallucination) through “sober” techniques like meditation would no longer be conscious?
Quite the opposite. I'm saying that a sense of self is not necessary for consciousness. And that many people write off simple objects as having consciousness when really what they mean is that those objects don't have a sense of self.
Because they don't have any other higher level function that could reason about the state other than a single binary response. If intelligence is a continuum you might say it is at the lowest end (a bit state and actuation on that state). But consciousness seems more like a meta-intelligence that would require more than one bit and one actuation.
I don’t make any of those assumptions. My point was that I also don’t know that we can distill consciousness to strict materialism until we can find the mechanisms to explain the hard problem of consciousness.
Some definitions of intelligence (or even consciousness) are systems that simply process data. From that context, a thermostat is processing temperature data. It also underscores the GP's point that there isn't necessarily a consensus on the definition.
You can set a context as broad as you want, but it changes utility.
You can set a definition and context so broad that everything and anything is intelligent.
The possibility of an outlandish alternative does not inform whether a consensus exists or not.
You can set the context as narrow as you want (e.g., abstract reasoning specifically defines intelligence). That might make your ontology feel crisp, but it also doesn't make it correct; it's still arbitrary and a matter of convention.
Sure, but there is value in defining a useful working convention so that people can discuss, learn, and study.
Interjecting the perspective that words are meaningless constructs is rarely helpful. If someone wants directions, telling them that there are no concrete metaphysical concepts of north, south, left, or right isn't constructive.
I agree on the usefulness piece and that words/ideas have meaning. But usefulness shouldn’t be conflated with the underlying truth and neither should classifications of convention. As the saying goes, all models are wrong, but some are useful. I can say that burying the dishes from a meal that made me sick may help avoid future food poisoning; while useful, it doesn’t get me closer to the underlying truth of germ theory.
I think the heart of these more philosophical debates is a desire for fundamental truth, not just a useful mental model.
Indeed, the only value of categories is in how useful they are. It's clear there's a continuous spectrum of intelligence/complexity. I believe the main reason we're still arguing in circles about what is or isn't intelligent is that we also have a distinct, ill-defined category called "intelligence" that carries ethical / moral connotations. We don't know how to handle a continuum of moral weight / personhood.
It's useful in many ways to put a thermostat, an LLM, a monkey and a human on the same "intelligence"/"complexity" scale - there are many interesting things to discuss other than moral weight. And then, for discussing personhood, it makes no sense to include a thermostat.
I think intelligence is a continuum from not very intelligent to very intelligent. I guess a bimetallic strip would be at the bottom end of the scale, above a lump of coal but below Einstein. At least in thermostat form it can keep something stable, in contrast to, say, Turkey and its currency as mentioned in another HN thread today. Maybe they could rig a bimetallic strip to raise or lower interest rates with a heat input driven somehow by consumer prices? They could then use that to replace Erdogan.
You're moving the goal posts
There are some who claim that a bimetallic strip shows evidence of intelligent design, but every fifth-grader knows that if a million volcanoes erupt over 5 billion years, there is a lot of weird stuff that can result from that random activity.
Language is quite flexible, but I don't think it is very helpful to call a thermostat intelligent. Doing so does not advance our understanding of the sort of self-aware, theory-of-mind-holding intelligence that is generally meant by the term (putting aside its alternative meaning as a synonym for information), and adopting the broad definition just increases the burden of disambiguating what we mean when what we mean is the latter.
The broad definition is not justified by the fact that certain attempts to define the term would attribute intelligence to thermostats; a more useful conclusion to draw is that the definitions in question could be improved.
It is sometimes said that we won't be able to gain an understanding of intelligence until we have a precise definition, but it's actually the other way round: definitions get tighter as our understanding increases. We now understand many things that not only were lacking definitions in the past, but were not even imagined.
A thermostat with a servomechanism is aware of its own state. A thermostat could quite simply be augmented to have a theory of another thermostat. Probably there are such coupled thermostats.
I use intelligence mostly just as a perspective for analyzing a system. E.g. Can it be seen to process information? How complicated is the transform from the information to some action? Does the processing lead to some good outcome, e.g. self-preservation of the system? Could the information processing be improved for this outcome?
Saying that a thermostat with a servomechanism is self-aware is just repeating the same broadening-beyond-usefulness with a different (though related) concept, in this case self-awareness. The difference between the self-awareness of humans and that which you see in a thermostat is vast, and understanding how a thermostat works gives no useful insight into how the human mind works.
Any system processes information, and in a thermostat it is not complicated at all. Even the most self-aware thermostats currently being manufactured are not going to improve themselves of their own volition. Applying the term 'intelligence' this broadly turns it into an empty tautology, or a little piece of uninformative noise on a communications channel, no more useful in understanding those systems than in understanding actual intelligence.
That's either contradictory, to state that something has property X where X is vague to the point of undefined, or you are setting a very low bar for intelligence (I'm actually okay with the latter).
Just a question, if a bimetallic strip can be intelligent, is a marble intelligent for having the wit to find a low point on an uneven surface by rolling downwards?
The marble question is good. For most purposes I'd say the bimetallic strip (one connected to an actuator at least) is more intelligent than a marble. Studying the dynamics of the internal properties of the thermostat is more fruitful than of the marble, which is easy to describe as an unchanging object.
Taken to the logical conclusion, we are all just rolling downwards to a minimum energy state just like the marble.
I don't think intelligence is an intrinsic, let alone binary, property of an object or a system.
A thermostat is certainly intelligent.
The definition of intelligent I give is “to mitigate uncertainty.” If it does not mitigate uncertainty, it is not intelligent.
It is merely of constrained intelligence. Perhaps your expectations are too broad.
If the thermostat reacts appropriately to environmental changes then it is performing its role intelligently.
In the context of a person being trapped in a cave, is some aspect of light shining distantly intelligent? It's reducing uncertainty about where a viable exit is, after all.
Information is the reduction of uncertainty. If it does not reduce uncertainty it is not information.
Intelligence is the mitigation of uncertainty.
The revelation of the light is information. Navigating the contours of darkness is intelligence.
If the cave walls glowed in response to proximity (of the observer) then the light source would have a modicum of intelligence.
But what if the cave walls glowing misleads the observer and they get lost further? Then it wouldn't seem to be intelligent. In any case, the light source being intelligent occurs in the context that there is an intelligent observer who may be able to escape easier given a light source. The observer sees the light, evaluates it, and makes a decision. Even if it's just instincts, there is a decision made. What might be intelligence is if the cave (or elements of it) acts according to "this person is lost at this location, so light up this specific pathway". If the cave somehow facilitated more functionality, such as leading a person to clean groundwater, it would be more intelligent. In the context of helping the person escape or aiding them in the meantime, the cave tends towards accidental sophisticated technology ("...[which] is indistinguishable from magic"). Functional intelligence, I'll call it. Then perhaps there could be a theory of intelligent agents, who have goals and plans.
Didn't Nest have a >$1B valuation at some point? /s
Juicero was $120M /not s
My thermostat may not be intelligent but it is certainly smart... At least it says so on the box.
Anyway, the strip does contain the state of the world around it: the temperature is modeled by how much the bimetal is bent. I think it is indeed a minimal example of a complex system, one that at first glance defies explanation - it seems to have purpose (keep the temp stable) - until you understand the inside.
Anyway, "Is a virus alive?", "Are these specimens the same species?", ... Us humans like our boxes, but at the edges they almost always go wrong.
It's not just the internal state but the prediction that makes it intelligent.
Your brain is taking in a lot of information at the edges of your awareness; light, sounds, touch, etc. are all getting absorbed and transmitted to your brain. As that information is transmitted along your neurons it's getting summarized, then merged with other summarized information and summarized again. The brain is getting summaries of summaries, and developing a unified categorization of the global state across all its inputs.
Then the brain takes that summary and makes a prediction about the future state. The summarization is energy-efficient. By categorizing all that data into a global state you make decision making possible. "When my boss seems stressed all week, then calls a bunch of people one-by-one into his office on Friday afternoon, I know lay-offs are coming. I better polish up my resume." From "stress/anxiety/unease" in the environment to "danger is coming I need to fight/flight".
Your brain is taking that summary/categorization and figuring out what it needs to do next. If "X" happens then I should do "Y" to "stay-safe/get-food/find-a-mate". The brain is very good at capturing and summarizing data, and making a prediction because that process is much more efficient than doing otherwise. Instead of foraging everywhere for food and hoping I just bump into something that will provide sustenance, I know if X, Y, and Z happen then food will be "here", and I can get lots of it.
You can apply this same model to all actions the brain directs. It also helps make sense of why maladaptive behaviors develop. Sometimes the summary is incorrect, or was formed based on past information that no longer applies, and it may need to be unlearned.
Is this because it is a completely man-made system and not one that evolved slowly over time through natural processes?
I'm also a cocktail philosopher, but isn't consciousness different to just having a model of the world and self within it? Consciousness is the lived experience. The world model and feeling of self appear in consciousness. I think a complex system could plausibly be conscious without having a belief of a self within it. Not sure if consciousness is possible without any world model though.
My impressions about this were strongly influenced by Sam Harris's Waking Up book and app.
I agree that "consciousness is different to just having a model of the world and self within it" indeed. I'm just saying it feels like that modelling ability (which has clear and major evolutionary advantages) is a step towards consciousness, indeed something in the now (as we experience it). A (near) real-time model perhaps that constantly projects and adjusts. I guess this still doesn't require consciousness, but maybe consciousness results from this? Does it require a sense of "now" and identity relative to the world model?
I feel like the Matrix is about to eject me btw.
Thanx, I'm looking for Harris' books right now.
Mixing consciousness with cognition is a mistake. Imagine a conscious experience of constant redness and nothing else - is there any intelligence needed for this?
The reason why this is likely not possible is because consciousness appears to require a notion of time passing by and having a constant experience means that there is no time.
For time to exist some sort of motion is needed, including either internal motion like the perception of breathing or a heartbeat or a train of thought or external ones like a clock or a change of color from red to blue.
We have no way of knowing whether there is any time even with the "notion of time passing", as that notion depends on a notion of memory of a past, and we have no way of knowing if that is real, including whether or not there is a train of thought or any change.
You are misunderstanding what I wrote.
I don't think so. The point being that we can't tell whether we're having a "constant experience" or not, and we can't tell from that whether or not there's any time or no time, or punctuated time, nor can we for that matter tell whether consciousness requires any notion of time.
It's all pure speculation because we have no way of assessing it outside of our own frame of reference. E.g. I see you in another answer saying that "the fact is that all kinds of state changes happen", but we don't know if any state changes ever happen.
We have no way of telling a dynamic universe apart from a static one where we're all just suspended in a single unchanging moment.
We can choose to handwave a lot of this away ordinarily because as long as the experience is reasonably consistent it makes sense to assume it is objectively real, because we have nothing better. It doesn't matter if the world didn't exist a second ago when e.g. considering whether gravity works, because it appears as if it did.
But when trying to determine the nature of consciousness we run headlong into the fact that our observation of consciousness is direct only in the case of ourselves, and even then heavily limited. We have no direct measure that puts us in a position to prove consciousness even in other people. We can show that within our perceived world we can measure brain activity that correlates to thought, but not whether that reflects merely the operation of an automaton, or the existence of consciousness, or if there is a distinction at all.
As such, it's meaningless to suggest we have a way of saying much about the limitations of consciousness at all.
I am curious to know why you think so. What would you say about repeatably observable causality, in that case?
If the universe is static, then there is no "repeatably observable causality" or indeed any causality at all. In that scenario any perception of time and causality would inherently have to just be our perception lying to us about a past that we have had no part in, if it exists in any sense at all. If so, we have not had this conversation, and your experience of it is just a static momentary perception of having had it.
Maybe time is a spatial dimension, and there are infinite moments of consciousness fixed in the same spatial location with no time passing.
Consider how you'd tell if a drawing is an individual drawing or a frame from a cartoon if all you have to go by is that single frame. You can argue that the drawing hints convincingly at motion, but that does not require that this motion has taken place.
Or consider a trace of a simulation, sliced and diced into snapshots of individual moments. We can argue that it's unlikely any entities in such snapshots would have consciousness, but if we're arguing on the basis that we appear to experience the motion of time, we'd equally make that argument if we were wrong about consciousness and indeed locked in snapshots of individual moments. We can even construct simulations where individual parts look causally connected but where the existence of one calculated frame tells us nothing about whether any individual other frames have even ever been instantiated (e.g. imagine a very complex function over time, where only punctuated values have ever been calculated).
I'm not saying I believe that is our situation - I'm saying we can't distinguish between that and an infinite set of other possible options, because "from the inside" there is an infinite set of possibilities that could all look the same from our vantage point. We can debate which possibilities seem more likely, but they will always be speculation as long as we're locked inside the asylum, so to speak...
Incidentally, this is an argument for a materialistic view of the universe, not against it, on the basis that absent a way of "peeking out" and seeing our situation from the outside, it's meaningless to treat the world as anything but what we measure and perceive - it doesn't matter whether or not this world is what it seems like to us or not as long as it is the only view we have of it. We just can't say if it is some inherently true objective view of the universe, and most likely it won't make any difference to us.
It only makes a difference when we tinker with philosophy around the edges, like these conversations about whether what we experience can tell us anything about the experience of other entities.
Is it possible to have perception in a static environment? It seems like perception requires flux of some sort.
Clarification: meaning the machinery of the perceiver must have flux, otherwise it's not perception, it's just static state.
Is it? If we are in a static environment, then it would mean it is possible, and that this intuition is wrong. Since we don't have a way of determining experimentally whether it is wrong or not, then at least for now it does not even help us quantify the odds. If we're not in a static environment, then maybe, maybe not - we don't know what the subjective experience of consciousness is at all.
We so far have no way of splitting perception or conscience down in slices of ever shorter moments to see where it stops being whatever it is and becomes something "inert", but even if we did, we would not know whether that was an inherent limitation of objective reality or of our subjective reality and whether those two are the same or not.
IMO those are details; we could drop the constancy and just ask - what is required to have a single moment of redness? Or even simpler, a single moment of one tingle.
BTW, experienced meditators apparently can experience no time and no space; no idea what that feels like.
Sure, they might feel so, but the fact is that all kinds of state changes happen, so time goes on.
I think it's arguable that this "conscious experience of nothing but constant redness" might not be actually possible. We can imagine it, of course, but we can also imagine many things that can't actually exist in the world. It's similar to the idea about p-zombies: just because a concept exists doesn't mean it's physically realizable.
It's very much possible that what we would call consciousness can only exist in certain types of minds that have a particular level of intelligence, but also a particular structure.
Of course, it's also possible that you're right and the two are entirely separate, and that consciousness doesn't require intelligence (and possibly vice-versa).
Perhaps, that's the question. What is required for the simplest possible consciousness?
Good question, being investigated in the field of consciousness studies. Like Thomas Metzinger's minimal phenomenal selfhood: https://youtu.be/8f4ykI9har8
I think all these terms are too loaded and deep to answer your question; almost all words in the sentences we exchange are subject to subtle interpretation and definition differences.
I still enjoy the process though, which perhaps also doesn't require consciousness, yet here I am.
At least when it comes to human level consciousness, I agree it's something 'more than', but what exactly?
If we look at our advancing world models in things like self-driving cars, when would we consider them conscious by our own rules? Such a car has a sensor network that keeps track of its own internal states. It has external sensors that monitor the external world and adjust its internal state. The internal state attempts to predict future states and take actions to avoid negative outcomes (a toy version of this loop is sketched below).
So when does this loop become something we consider consciousness?
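To make the loop concrete, here is a toy Python sketch of the sense-update-predict-act cycle described above. Every name and threshold is invented for illustration; it's obviously nothing like a real driving stack.
---
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    state: dict = field(default_factory=dict)

    def update(self, sensor_readings: dict) -> None:
        # External and internal sensors adjust the internal state.
        self.state.update(sensor_readings)

    def predict(self) -> dict:
        # Naive one-step prediction: the gap to the obstacle shrinks by the current speed.
        speed = self.state.get("speed", 0.0)
        gap = self.state.get("gap_to_obstacle", float("inf"))
        return {"gap_to_obstacle": gap - speed}

    def act(self) -> str:
        # Choose an action that avoids the predicted negative outcome.
        return "brake" if self.predict()["gap_to_obstacle"] < 1.0 else "continue"

model = WorldModel()
model.update({"speed": 2.0, "gap_to_obstacle": 2.5})
print(model.act())  # "brake" - the predicted gap would fall below 1.0
---
Nothing about running this loop, at any scale, obviously crosses into experiencing anything, which I take to be the point of the question.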
One possibility at least is that "the experience" is not something that really happens. That is, it's possible that we don't actually "feel" anything, and our impression that we do is just the story that our self-model comes up with to explain (and help predict) our reactions to ourselves. Just like our world model has ideas like "the rock wants to fall down", it's possible that our self-model does too.
We already know that our self-model can be entirely wrong about our feelings. People with paralyzed or even missing limbs often believe at first that they just don't want to move that limb. So, they think they are having one experience, but they are wrong about their own internal experience: in fact, they are not moving that limb because they can't. And there are many other similar examples of people being wrong about their own internal experiences, typically but not exclusively because of some illness.
So, it's possible that our internal experiences are in fact only a model in which one part of our brain interprets the actions of other parts of our brain, often retroactively.
Note: I'm not claiming this is the truth or silly things like "if you believe in science you have to believe this". It's just another cocktail philosopher's story of what consciousness might be. Other stories are just as plausible, and just as consistent with the little we do know in this area.
I think it's rather the opposite, they aren't wrong about their internal experience, it's just that their internal experience doesn't match the objective reality of their body (which in this sense is external).
I think it is indeed entirely possible that our self-model can fool us about the realities of various situations, even those with our own body/emotions/etc, but I'm not sure how one could then derive the conclusion that the experience doesn't exist. It would just be instead that the experience is rather inaccurate/confabulated.
I don't think it's correct to call this an external experience. It's an experience about will and desire and direct control of the body - what can be more internal than that?
It's not like an optical illusion, where you think you are misinterpreting an external stimulus. This is a situation where you are trying to control a limb, not getting any reply, and concluding that "you don't want to move" instead of "I can't move".
The experience is internal, the body (the limb that's missing) is external to the experience. The confabulatory experience of "I don't want to move" is an internal experience and cannot itself be an illusion, it's simply an incorrect assessment of the actual state of the body.
Sure, the limb is external. But the experience "I don't want to move" is not wrong because the limb is actually missing, it is wrong because I did actually want to move. "I" did the exact same thing as every time I try to move (i.e. the brain sent the exact same signals to the limb).
Sure, none of what you said there would lead to the conclusion that the "experience is not something that really happens," though it's also possible there's just a failure of communication here and I'm not understanding what you meant.
It definitely seems like aspects of conscious experience are just more parts of the world model, e.g. we experience having a continual visual view of the world through our eyes, like watching a movie, but we know that's not what is actually happening physically.
Where I get confused though is this sentence: "...the story that our self-model comes up to explain (and help predict) our reactions to ourselves". What is "ourselves" here? That implies some observer that the self-model is explaining things to.
Speaking in the context of thinkers like Harris this is the reason why awareness is differentiated from consciousness. Consciousness is the ability to grasp the world "out there". Awareness is the blank canvas where consciousness is experienced. The analogy here is a movie projected onto a movie screen. Consciousness is the movie being played. The screen is the blank space of experience where the movie is projected. I think the confusion is there is no logically consistent differentiation of concepts like intelligence, consciousness, awareness in the sciences and they end up being used interchangeably.
Thanks, the distinction between awareness and consciousness is definitely something I've overlooked. I tend to think of them interchangeably.
That is a good point, language is pretty important in these things. I just meant "ourselves" as the overall brain/mind. That is, the self-model component just feeds these models into the planning component, so that the planning component can decide the next actions to take.
> That is, it's possible that we don't actually "feel" anything, and our impression that we do is just the story that our self-model comes up with to explain
It seems to me that you are trying to banish things from existence by a reductionism-based exorcism ritual.
If we know the structure of some phenomenon, it doesn't mean the phenomenon doesn't exist. We know that life is chemistry, and there is no life in the periodic table. But that is not a reason to conclude that life doesn't exist.
If my feelings come into existence as a result of a complex information processing, it doesn't mean my feelings do not exist or are not real.
If "I" is just a retroactively written story, then what? Does it mean "I" is not real?
It depends what you mean by "is real". Of course in some sense it is still real, just like triangles are real. But it's not real in the same sense that rivers or rocks are real.
The relevance in this context is that it gets rid of some privileged position for consciousness. If this explanation I gave were true (I have no reason to believe it is!), then there is no problem in reconciling qualia with materialism. Qualia are just an aspect of the modeled "I", not a raw experience that can't be explained. P-zombies are all that exists.
> Of course in some sense it is still real, just like triangles are real. But it's not real in the same sense that rivers or rocks are real.
Rivers and rocks are less real than consciousness. Our minds can work with mental objects only; these objects are often built as images of something outside of our minds. Most of the time the images have nothing in common with the underlying reality of the imaged phenomenon: for example, we need to remind ourselves that you cannot cross the same river twice, because our model is a static object, while really a river is a dynamic process of H2O molecules falling out of the sky and then rolling down to an ocean.
But our consciousness is the model of ourselves. The object (me) and its model (my consciousness) are much closer; there is no long chain of causes and effects passing information from an external river to neurons. And moreover, consciousness is pure magic among other images: I can believe as hard as I can that a river must be made of milk, and it will not change the river. But if I believe something about myself, it leads to real changes. Doesn't it mean that "I" is more real than any river?
Or... scratch that. I can agree with you we have confused naive understanding of consciousness mixing causes and effects. But I'm not sure that the rest is as simple as you describe. I'm sure a dualist will find a way to explain qualia as something that can't be reproduced in AI.
These things are never easy and they never have a definite answer.
Also an armchair philosopher and I enjoy thinking about these things a lot. The theory you've described is the one that seems to have the most explanatory power.
Andy Clark's "Surfing Uncertainty-Prediction, Action, and the Embodied Mind" is a book-length exploration of this idea. Highly recommend.
So what would it mean for us to not really have sensations of color, sound, etc? People can be wrong about some of their experiences, but those would be called illusions or hallucinations, not lack of experience. Illusions and hallucinations are themselves kinds of experiences, even if the experiences are faulty. You can dream about all sorts of ridiculous things, but it would be very odd to use that to claim that you don't actually experience dreaming.
If our internal experiences are the result of self-interpreting an internal model, that doesn't make the experiences go away. It means there's something about that activity which causes subjectivity.
You can say that. You can say a lot of things to explain consciousness in a materialistic sense, as in how it could've emerged. But I cannot fathom how material interacting with other material and forces gives rise to subjective experience. It simply makes no sense to me. If I create a copy of my brain, it would be conscious, but with its own unique subjective experience. This makes sense so far, but what exactly is this subjective experience, and how can "mere" mechanical matter create such an entity?
So in short: I cannot understand what is the actual substance of subjective experience.
Have you ever been under anesthesia like propofol?
I feel like most of what we call "consciousness" is converting short-term memory into selected long-term memory, facilitated by language. Because when you're under, you can even be "interactive", but you're not conscious of it because your short-term memory has been disabled.
As to "human intelligence", honestly, I think that human languages, which let us convert our "consciousness" into a shared hallucination, are the key evolutionary advantage. Human intelligence comprises a hive mind in the sense that our experience of the world is hugely affected by the shared social experience where language transfers memory from person to person.
But why should it be like something to convert short term memory to long term memory? Computational systems can convert data between different forms of memory but we don't typically believe consciousness emerges from this operation.
Basically in this argument there is no such thing as what we call experiences or consciousness. It's an imagined concept, one that we think we have but we are merely misunderstanding.
Basically in this model, there is no present. The brain just invented the concept "I feel this" as a model of its own inner workings, just like we invented the concept "object" to define certain aspects of physical reality that aren't 100% there.
So, basically you are imagining your experience, when in reality you are just an automaton. Same as how you imagine a robot is happy or sad when in reality it has no feelings.
This is precisely my hypothesis. We imagine that our human consciousness differs from animals' when in fact it is exactly the same, but with a more-expressive language system on top.
Not to say this addresses the concern, but in that case a computational system is being driven by a pure line signal (the power), is ultimately dependent on that for operation, and requires a high level of stability of that line to be "effective".
The alternative in this case is imagining that you have a biological process with an adaptive energy response controlling the movement from short-term to long-term memory, itself optimizing based on the performance trade-off of input stimulus overload.
In that case, the process of deciding what and when to move from short-term to long-term memory (in a feedback loop) could start to approach more of the appearance of consciousness - choosing what to process, what to remember, adjusting the input rate and types of input based on the limitations of processing and retention (a toy sketch of this follows below).
Many animals have consciousness as well so I think it's important to be more open to reducing the bare essentials of what a process of "awareness" is or could be rather than looking at it from a hyper evolved language/information influenced state of humans.
Note I agree that it is not consciousness, but I think it's an interesting exploration to look at modeling it in computational form, since we are made of non-conscious atoms and still made it here.
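For what it's worth, here is a toy Python sketch of that kind of short-term-to-long-term feedback loop. The buffer size, salience scores, and threshold rule are all made up; it's only meant to make the idea concrete, not to model a brain.
---
from collections import deque

class MemorySystem:
    def __init__(self, short_term_capacity: int = 5):
        self.short_term = deque(maxlen=short_term_capacity)
        self.long_term: list = []

    def perceive(self, item: str, salience: float) -> None:
        self.short_term.append((item, salience))
        self.consolidate()

    def consolidate(self) -> None:
        # The fuller the buffer, the pickier consolidation becomes -
        # a crude stand-in for adapting to input overload.
        threshold = 0.5 + 0.1 * len(self.short_term)
        for item, salience in list(self.short_term):
            if salience > threshold:
                self.long_term.append(item)
                self.short_term.remove((item, salience))

mem = MemorySystem()
for event, salience in [("hallway", 0.2), ("loud bang", 0.95), ("doorframe", 0.4)]:
    mem.perceive(event, salience)
print(mem.long_term)  # ['loud bang']
---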
I don't find this convincing. If you asked someone under such anesthesia if they were currently conscious, they'd probably say "yes". Doesn't it seem more likely that you just don't remember having been conscious? If I magically wiped away a year's worth of your memory, does that retroactively mean you had actually been unconscious during that year?
But their saying "Yes" doesn't necessarily make it so. I've made comparable claims while asleep. (Conversely, I've believed myself completely awake while actually asleep and unaware of what my body was doing.)
There would need to be a deeper test than either long-term memory or a yes-or-no question asked in the moment.
I feel like saying that one is in a sense "conscious" while dreaming..? Though this use of "conscious" seems to conflict with another use which has "asleep" as a primary example of "not conscious"...
I guess maybe this calls for another word. Not sure what word is best.
This problem just goes away if you assume that there is no dividing line between the "experience" of you and the "experience" of any other computational system. Actually try to think, what does a computer "experience"? An atom? What does it feel like to be a standing desk?
Panpsychism is going mainstream again.
It seems like the only solution that's compatible with materialism. If you accept that it "feels like something" to be you, and you accept that your intelligence is just a computational process which differs from others only in the specific computations involved (as I think you should, especially in the age of LLMs), then you must infer that it "feels like something" to be any number of computational processes. If you infer otherwise, you're positing that there's an unmotivated, invisible, in-kind difference between human intelligence and all other physical phenomena which seem to mirror its abilities.
The conclusion I come to, which doesn't actually seem that strange, is that what's special about our experience is the complexity of our thought, our ability to turn extremely high-dimensional raw experience into low-dimensional models of reality, our ability to maintain a sense of mostly-stable identity over a long period of time, our capacity for emotion and sensation. It may "feel like something" to be a rock or a standing desk, but it isn't a very interesting or complicated feeling.
I'm sympathetic to panpsychism, but it isn't obvious to me that intelligence should be what's behind awareness.
I've been blackout drunk before, and my brain was capable of intelligence, I got home just fine, but I had no conscious awareness. Same for a dreamless sleep and being put under for surgery.
Additionally, I, the observer, can observe and be aware of intelligence, which seems to indicate to me that it's something outside of what I'd call "me".
I think you are still conscious during the blackout, you just don't have a memory of it. Amnesia is the effect here, which is a little different than actually being unconscious the whole time.
When in surgery you really are unconscious though, hopefully at least.
I would compare this to forgetting what exactly you did on this exact day and this exact time one year ago. You can't remember it (most likely), but it doesn't mean you were unconscious during that time.
If we take the behavior of matter as the observable signs of how it feels — this is true if sensation motivates action — then many rich states of sensation have been observed in subatomic particles.
The computational process of an organism is evolved to operate the organism, toward survival of the species. I don’t think these sensations would necessarily be more complex than those of subatomic particles, but they would need to be directed by the system to represent the state of the system and the observed environment, to link to actions the system could take, and to bias actions toward survival of the system.
If we wrote software to do this, but we were so incompetent that we couldn't fill in the model with correct data, we might just say "who gives a fuck, fill that with random garbage and we'll fix it in a later version". And then we never do.
Your subjective experience is that incompetent model. Your model doesn't know how to correctly judge human character, so you misread people and always wonder why they say one thing but another is true, and it doesn't click that they're lying the whole time. You can't keep track of time because the internal clock just isn't implemented, so the whole day seems to drag on, or maybe fly by too quickly.
It's all just really shitty software. Layers upon layers. And because humans believe this to be some mystical thing, rather than trying to fix it from the inside, they assume that it's awesome, necessary, and why would anyone want to fix it?
No small fraction of it is simply because our memory is faulty. The only time you ever remember the event itself is the first time you remember it; every later access is really you remembering the last time you remembered it. Each access is lossier than the last, and confabulation is guaranteed. This seems to be true even moments after the event.
If it was anyone other than evolution who wrote your code, you could sue them for criminal negligence.
And that's before we even get to the part where you find out you're not even you. Inside your skull is another being, an intelligent one, with desires and goals. But you can't see, hear, or feel this being. It's invisible. The "you" that I'm talking to, exists because this being once upon a time needed to simulate the other humans around him, so he could anticipate them well enough to not be out-competed. He has a pretty good idea what they'd say if he asked them questions, how they'd respond to threats and challenges, what sets them off (so he can avoid fights). And, by mistake or design, he used this simulator to simulate himself (maybe to bootstrap it? if the simulation's output matches his known answers, it's working correctly?).
You're the simulation. When the judge asks the psycho teenager why he put the cat in the microwave and he says "I dunno" he's telling the truth. He does not know why. When your girlfriend cheats on you, and she's crying hysterically and can't tell you why she did it, she's not just lying (either to hurt you or to spare feelings)... she doesn't know. It was that other being in their skulls doing these things. They're just the simulations.
Now, you've either been poking around in your own head, seeing little glimpses of what I'm talking about, making you wonder if I'm not on to something, or you're incapable of that. I've met both kinds of people. If you're the former, you're wondering just how much of it I understand, because some of the glimpses paint a slightly different picture from what I describe. That's because our minds weren't built the same way. No two are alike, not in a special snowflake way, but instead like no two shacks in shantytown have the same kind of leaky cardboard roofs. And, if you're the latter...
I never bought into this subconscious thing. If my actions follow my internal voice, it seems unfathomable to me to say, "no, you did that because of other reasons". A simple example: I go to the fridge because I am hungry. If I were to cheat on somebody, there sure as hell were a lot of voices in my head that had to do with either lust, anger, feeling neglected/unloved, or something.
Sure, there are deeper reasons why my voice is what it is, but that's something else entirely. I think it's more that as we evolve, we forget our initial motivations for thinking the way we do, so that when we do reflect on our past, and are honest with ourselves, they do somewhat re-emerge.
Your internal voice is making up justifications after the fact. You ask yourself "why did I do that"... and it doesn't know, but has to have an answer anyway.
You were taught well. Teachers and parents and adults don't like it when the child says "I don't know why I did it". They punish that answer. You learned the lesson soon, well, and maybe without even having to be punished.
Oh, and your internal voice is just another one of those simulations.
Except many millions of people go to the fridge, and they're provably, objectively not hungry. They say "because I'm hungry" too. If you had to make up a fictitious-but-reasonable reason why people go to fridges, that would be at the top of the list. That's why you say it. Few people, if any at all, are even aware of their hunger. In the west, few have even ever experienced hunger, and if they have, almost certainly only a few times in their lives.
None of this is anything that you are really aware of. You can't be. You can't even express how you think your mind works without using the word "voice". I used to think people like you were schizos when I was a child and hadn't figured all of this out yet. It was clear that it wasn't just a failure of vocabulary, I'd met many people even just out in public who didn't seem to know enough words to describe things, this wasn't that. And early on I could tell that it wasn't a joke, eventually that it wasn't even a metaphor. You do not know the reasons you do things, and when forced to explain, you make up stuff and choose to believe that's why. The "I don't know" answer becomes painful to adults. It nags at you. You're supposed to know yourself well enough to have reasons most of the time. It's one of those scary glimpses. There's a ghost inside your skull, moving the furniture around when you're not looking.
Also, I think a large part of "I don't know" is purely the fact that the mind is driven by chemicals. I can't explain for example why I can't focus or why I am tired, or why I no longer feel something that I used to.
There just isn't a perceivable reason for everything, as we can't fully perceive our internal state. But if you're aware of these things, they can be explained too. For example, you may identify that when you eat greasy foods you become lethargic, and that if you don't eat at all you become irritable.
That's assuming I need to make up a justification for why I did something to protect my ego. Most actions I take in my life don't need that reason, and that includes going to the fridge because I am, in fact, hungry. In fact, I'd say 99% of the actions throughout my day are super simple decisions. There are some things where I am conflicted and unsure, that I don't fully understand myself, sure.
You don't think "I am going to go left" and go right, after all. You don't think "I am going to grab this" and open up your fists. Saying your voice has NO relation to your actions or your true self is just absurd.
For more complex actions that have more emotional depth, again it really depends on how honest you are with yourself. You may not understand everything, but you have some idea or feelings you should be aware of.
Also consider the technical actions we do make. If I am doing my job, I can fully explain my rationale behind why I did the things I did. Why would it be any different for the actions I take in my personal life? I felt insecure about not having a relationship, so I started thinking about how I can improve, so I started thinking about my body, so I started going to the gym. I bought low calorie coke because I am trying to stay fit, because of the previous reasons. You're telling me that's all just complete bs and I did all these things for some completely different, unknown reason?
Or: I've really been hating my job. I started playing piano and I really connected with it, and I see it's something I am really good at. So I quit my job and enrolled in music school. Complete bs? When I was sitting there at work and had a bunch of negative thoughts about it, I actually didn't feel that way?
I think this is a pretty fictitious statement to make. Yeah, most people in the West have not experienced "true" hunger, but we all feel hungry every day, as we do in fact need to eat every day just to sustain ourselves. I am not overweight; I am barely holding onto the weight I have. If anything I should be eating more. Yet I feel very hungry several times throughout the day.
I see "I don't know" largely as "I don't want to confront my feelings or publicly tell you". If what you were saying is true, there would be zero point in self reflection or therapy. There would be no point in judging peoples personalities, listening to what they say, or the promises they make.
I don't think you're entirely wrong, in that if you go deep enough asking why, you'll get lost. But thats not because of your unconscious self, thats because you completely forgot/blocked out/retold yourself some other story.
I buy into this theory, and the other one about consciousness being a step or two behind and fabricating some cohesive explanation for why you did what you did.
If you are unfortunate enough to experience some human body failure modes you can get a glimpse into this process. The cohesive "veil" breaks down and you realize there's more "you" than that voice in your head. The distributed nature of the brain/body peeks through.
It only makes sense if
- you can represent complex state in a distributed way, so each neuron only encodes a small part of a larger signal
- the system has a working model of the environment, including our value judgements for all states, which are basically our emotions
Such a system can have experience because it has a latent space to encode experience in. It feels like something to be an agent because of the external environment and internal models of the environment, which include imagination and emotions. And this feeling is essential in choosing our actions, so there is a feedback loop action-to-emotion, then emotion-to-action. Our feelings are causal.
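A rough sketch of those two ingredients in ordinary neural-net terms, with the "emotion" reduced to a single value judgement over the latent state. The weights here are random and the whole thing is illustrative only, not a claim about how brains implement it.
---
import numpy as np

rng = np.random.default_rng(0)

# Distributed representation: an observation is spread across a latent vector,
# so no single unit carries the whole signal.
W_encode = rng.normal(size=(16, 4))   # 4 raw inputs -> 16 latent units
w_value = rng.normal(size=16)         # latent state -> scalar "value"/emotion

def encode(observation: np.ndarray) -> np.ndarray:
    return np.tanh(W_encode @ observation)

def value(latent: np.ndarray) -> float:
    # The "emotion": a single judgement of how good this state looks.
    return float(w_value @ latent)

def choose_action(observation: np.ndarray, actions: dict) -> str:
    # The feedback loop: imagine the effect of each action, feel out
    # (evaluate) the resulting latent state, and act on that feeling.
    imagined = {name: value(encode(observation + delta))
                for name, delta in actions.items()}
    return max(imagined, key=imagined.get)

obs = np.array([0.5, -0.2, 0.1, 0.0])
actions = {"approach": np.array([0.1, 0.0, 0.0, 0.0]),
           "retreat": np.array([-0.1, 0.0, 0.0, 0.0])}
print(choose_action(obs, actions))
---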
Experience isn’t the right way to put it. The system you describe is straightforward and makes total sense how it can understand and act upon its environment. What is unexplainable, and perhaps always will be, is the awareness that I am. Even if I meditate and remove all aspects of the surrounding world from me what I’m left with is a pure awareness that groups of neurons simply can’t explain.
Why am I present at all and not just a mindless automaton? Surely an automaton could evolve and be just as successful simply by being a complex neural network. This is why I, and lots of others, think that awareness is fundamental.
You can never remove internal body sensations even in meditation. The "self" is just a stationary point in this inner space, an element referenced in most situations, but not all, such as when listening to great music we can forget about ourselves.
So my idea was - we have an actual system for representing sensations, a system for imagination and a system for value prediction. We have the hardware for it, and we train our consciousness hardware on environment data.
If you're thinking "how can simple neurons or protein interactions do this?" - it's the same with a neural net. We don't understand how it can tell cats from dogs, or why it can answer our questions so well. Yet it does. Hard to believe unless you see it, and even then it's hard to accept, unless you come up with your own private inputs and test it in a way that is sure not to be memorized.
No I’m saying it makes total sense that neurons can model the environment and act upon them. That isn’t at all hard to grasp. What is unexplainable is the fundamental awareness of being a thing at all.
Essentially why is it like something to be this system of neurons modeling the environment and acting on it.
I mean, you could write a program with a "mind" object that receives a bunch of data through various sensory "experiences". From the perspective of the "mind", the data is "subjective", and the mind is "implemented" in exactly such a way that it can represent itself as an entity, "I".
I don't think the biological reality is conceptually any more complicated, except that the mind and data are complex in exactly a way that completely hides the abstraction, roughly by being very good at ignoring meaningless artifacts of abstraction.
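Taken literally, that program might look something like this toy sketch; the names and structure are invented, and it's only meant to show how cheap the "I"-as-an-entity part is to build.
---
class Mind:
    def __init__(self, name: str):
        self.world_model = {}
        # The mind represents itself as just another entity: the "I".
        self.world_model["I"] = {"name": name, "recent_experience": []}

    def experience(self, sense: str, data) -> None:
        # From the mind's perspective this data is "subjective": it only
        # ever sees what arrives through its own senses.
        self.world_model.setdefault(sense, []).append(data)
        self.world_model["I"]["recent_experience"].append((sense, data))

    def report(self) -> str:
        last = self.world_model["I"]["recent_experience"][-1]
        return f"I ({self.world_model['I']['name']}) just experienced {last}."

m = Mind("toy-mind")
m.experience("vision", "red patch")
print(m.report())
---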
The hard part isn't imagining such a subjectivity, but imagining that I am that.
But, where does the color red come from? The actual first person experience of seeing red. Not eyes seeing a specific wavelength, but your mind seeing red.
What I find so frustrating in these conversations is exactly this mismatch. There's nothing special about the wavelength that creates the color red, but the color red is completely distinct from that. They're barely even related!
Frankly, your non-understanding is not unique. We're all rowing the same boat :)
What makes sense to me is that consciousness is not an emergent property but a core of all things, with the additional property that it is replicative/additive. That is, smaller consciousnesses can form larger consciousnesses.
As to what it is, or why it exists at all, I don't think there will ever be answer to that. It just is.
It's definitely a strange thought, but it seems more likely to me than neurons or whatever other brain matter somehow producing consciousness out of thin air as soon as they reach some level of composition.
Pure materialism also seems very ill-defined to me. The physical world is, after all, only observable, detectable, and open to study through conscious experience. At best we can say that what is real is what is universally agreed upon by all observing conscious agents. If hypothetically there were only two of us, and I said "There is no ball in front of us" and you said "There is", then what is the meaning of physical truth/reality?
You can say let's use a detector. But then again, if I experienced the detector as saying false and you said it reads true, what do we do?
It seems unavoidable that reality is a part of conscious experience, and not the other way around.
Something something LLMs can only predict the next word.
I hate to spin up this trendy debate again, but it's always funny to me to see the dissonance when talking about the exact same things in biological and mathematical cases.
LLMs don't even come close to the complexity of the human mind though. They're a pastiche of human language, a fuzzy jpeg of the Internet.
The human mind is _so much more_ than a prediction machine, and incredibly complex... All that's before you get into the way the endocrine system interacts with your mind.
A single neuron has an average of 250000 connections in some parts of the brain. The speed at which neuronal signals travel varies neuron to neuron from 2.5m/s to 200m/s.
Human minds are more than just prediction. The anterior lateral prefrontal cortex has the sole responsibility of prediction (not that nothing else does, just that the ALPC seems solely dedicated to that task) and is extremely good at it. Prediction can influence all sorts of mental processes such as most all forms of perception... But it is _not_ the same as _being_ all forms of perception. If something unpredictable enough happens in front of you: you'll still see it.
Sure there are limits to that: when focused on a task the predictive parts of sight tend to filter out visual data that doesn't match the signal you're looking for (see: basketball players passing the ball and a moon walking man in an ape suit) but if every basketball player turned into Spaghetti-os and started screaming you'd still hear the sounds and see the O's.
So sure: LLMs do a good job at basic prediction but they're nowhere near the complexity of the human mind, of which prediction is only a small piece.
(And let's not even get into efficiency... A brain runs on 20W of power)
At the end of the day this is mostly meaningless.
Biological creatures have to be efficient because they need to harvest energy from the environment around them. But to limit machines to that would be like saying we have to feed biomass to airplanes.
Their point was the insane efficiency, not the low consumption. Modern LLMs are many orders of magnitude less energy efficient; despite all our progress, biological computation makes our silicon look primitive on some metrics.
And they are generated by their host piece by piece (brain growth). And they last longer than AI processors, with basic maintenance anyone can do (eat/sleep). And they usually continue working when their programs break (e.g. hardware failures or gamma rays), while NNs and GAs don't.
There's hardly a comparison. If they want to, I'd like to see them start with using dynamic chunking instead of tokenization like the mind does. Also, they can tell me what pre-training method runs on 20W for 1+ trillion parameters. I could use that right now.
Of course, it's only a small piece of the puzzle, but I do wonder what kind of existing organism would roughly match it by complexity, at least in terms of problem solving capacity.
A bunch of cells likely have more than a few GB of data between them, but most of it is just instructions on how to fold proteins to replicate themselves. Is that intelligent behavior or just mindlessly executing a script, albeit in DNA form? I would lean towards the latter.
I suppose it's hard to compare biological intelligent systems directly since they all have to process all available sensory info in a timely manner, not just very narrow text input. Once there are more LMMs trained on vision, sound, and kinematic data it should get far easier to draw comparisons.
Unless of course someone had set up a Somebody else's problem field generator around them :)
Man, why do people working the most menial tech jobs have such an obsession with suggesting some shitty "research" fad in CS as a solution to centuries-old complex problems in all other science fields? It's cringe, reeks of ignorance, and the comparisons are flat-out wrong most of the time.
It's especially worse when low-quality popular science journalism promotes this notion, like this Quanta article about the human vision system working just like transformers do.
How do you know what job moffkalast has, and why does it matter? This reads like pure snobbery to me.
(Also: moffkalast did not in fact suggest that anything is a solution to a centuries-old problem. "Some common rhetoric about LLMs is too simplistic" is a far cry from "LLMs resolve all the perplexities about human consciousness and thought".)
This is the right term to use here.
If human brains have a model, then is language the transport layer on top of that? Is trying to get to intelligence via language no better than trying to get to "google" by modeling its TCP/IP traffic?
You can generate all kinds of sentences like this all day long in your consciousness. That does not make them any more true.
There is zero evidence for existence of physical matter/materialism.
The only thing we know for sure that exists is consciousness.
And you suggest the complete opposite with zero evidence.
You may want to read up on critiques of Descartes from the last century. Not even your own consciousness is a given.
Obviously there is no you. Just the experience. I thought that was pretty obvious.
Again, I suggest you read some more recent philosophy. This is a good start:
https://en.wikipedia.org/wiki/Cogito,_ergo_sum#Critique
There is also zero "evidence", by this extremely restrictive standard of "evidence", for existence of any consciousness aside one's own. This rhetorical strategy thus has a weakness: who or what exactly are you trying to convince?
Yes. That is correct.
Ego dissolution/self-realisation takes time. It's my goal in life, but I haven't achieved it yet. Your comment does point out, though, that I should not waste time engaging in things like this, which build up the ego more.
I'm not even a cocktail biologist, but my understanding is that cells effectively operate via a web of complex chemical reactions, so the notion of a cell holding primitive models might be analogous to the way a CPU executes an assembly instruction: not because it "thinks" but because, given the way it's wired, it's (nearly - barring solar radiation, I suppose, which incidentally also goes for cells) inevitable that it will react to a stimulus in a predefined way (even though the way cells react to stimuli is far more advanced than a CPU).
In a similar way, "anticipating events" could involve an analogue to computer memory: the processes that have run so far have led to certain state being saved to memory that will now influence how the system reacts to stimuli in a way that's different from how it reacted before (e.g. sum a value with the value stored in a register).
CPUs are anticipating all the time how the future will evolve. They have caches (to be specific, expiration strategies), branch predictors, and speculative execution. Albeit for a very different purpose: to enhance processing speed, not to react to external events.
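For anyone who hasn't met those terms, this is roughly what the simplest kind of branch predictor does: a 2-bit saturating counter that "anticipates" whether a branch will be taken, purely to keep the pipeline busy. Illustrative only, not any particular CPU's design.
---
class TwoBitPredictor:
    def __init__(self):
        self.counter = 1  # 0-1 -> predict not taken, 2-3 -> predict taken

    def predict(self) -> bool:
        return self.counter >= 2

    def update(self, taken: bool) -> None:
        # Nudge the counter toward the observed outcome, saturating at 0 and 3.
        self.counter = min(3, self.counter + 1) if taken else max(0, self.counter - 1)

predictor = TwoBitPredictor()
for outcome in [True, True, False, True, True]:
    print("predicted taken:", predictor.predict(), "| actually taken:", outcome)
    predictor.update(outcome)
---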
This CPU analogy of yours doesn’t comport very well with the article we’re commenting on, which detailed some specific experiments that show cells are not reacting in a predefined way that is due to their ‘wiring’, contrary to previous and maybe incomplete understanding of how cells work. I don’t know if the RAM analogy helps since the surprise is that non-brain cells do have memory and do cooperate with other non-brain cells to solve certain problems, and these collections of non-brain cells can apparently remember solutions to problems over time. So yes, memory can help with anticipating events, but that really supports the idea that cells are dynamic and doing some non-trivial processing vs the possibly outdated notion that they’re hard-wired and deterministic.
I think these are likely different only by way of their level of complexity. We simply substitute a word like "think" when the reactions to stimuli are far too complex and numerous for us to track fully. But ultimately said "thinking" is made up of many, many cells following those same stimulus/reaction patterns.
That web of complex reactions is sufficient. See Anticipatory Systems (Rosen) https://en.wikipedia.org/wiki/Anticipatory_Systems
What if it is wired to think?
> Anyway, cells can hold (super) primitive models of the world and maintain internal balance in the face of anticipated events.
I've occasionally run into science podcasts, going back almost a decade, where some researcher talks about the computational power of cell membranes and how synapses apparently evolved from these mechanisms. Amoebas and paramecia navigate their environments, sense, and react through their cell membranes.
The upshot of this for AI, is that the neural network model may be drastically incomplete, with far more computation actually happening inside actual individual neurons.
Nobody is attempting a one-to-one correspondence between neurons and artificial "neurons". The fact that a single biological neuron does much more doesn't imply some limitation or incompleteness (as long as the same computations can be implemented simply by having more of them, and as far as we understand, that seems to be the case). The choice is primarily due to how our hardware parallelization works: we'd prefer to implement the exact same behavior with 1000 structurally identical simple "neurons" rather than have a single more complex "emulated neuron" that requires more complicated logic that can't be straightforwardly reduced to massive matrix multiplication.
> that requires more complicated logic that can't be straightforwardly reduced to massive matrix multiplication
What is the rationale for thinking it doesn't matter? Or is it the case that researchers think a larger number of simpler neurons could be equivalent to a smaller number of "complex emulated neurons"?
Yes, computation as such is sufficiently universal that it's very reasonable to assume that a larger number of simpler neurons would be equivalent to a smaller number of "complex emulated neurons".
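A concrete (if artificial) illustration of that equivalence: a single "complex" unit with a bump-shaped response can be reproduced exactly by four plain ReLU units plus fixed output weights, i.e. by nothing more than matrix arithmetic. The particular curve and weights are arbitrary, chosen only so the match is exact.
---
import numpy as np

def complex_neuron(x):
    # Pretend this is the rich, hard-to-vectorize behavior of one "real" neuron.
    return np.clip(x, 0, 1) - np.clip(x - 2, 0, 1)

def relu(x):
    return np.maximum(0, x)

def simple_neurons(x):
    # Four simple units plus fixed output weights reproduce the same curve.
    h = relu(np.stack([x, x - 1, x - 2, x - 3]))
    return h[0] - h[1] - h[2] + h[3]

xs = np.linspace(-1, 4, 11)
print(np.allclose(complex_neuron(xs), simple_neurons(xs)))  # True
---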
As another cocktail philosopher, I think everyone should ponder this when thinking about how they think - as if they are the one doing the thinking at all.
"Man can do what he wills but he cannot will what he wills.” ― Arthur Schopenhauer, Essays and Aphorisms
Just tried will willing ... success!
And yet, whatever it is that you willed is inexplicable. Why didn’t you will something else?
Ah - dare I say, "free"?
We have a deep-seated belief that the atom is the core of reality.
And everything emerges from there.
This materialism stems from René Descartes and his fellow philosophers.
And in the West it's often subconsciously combined with evolutionary theory: consciousness developed because it was useful somehow. However, that's a very big leap to make.
Both theories have good arguments going for them, but they are very theoretical and need a lot more proof. Yet they form the basis for pretty much all Western thought.
From a scientific perspective we have no idea how to create new consciousness or what it is.
From a human's experience it's more the other way around, reality is an emerging property of consciousness.
At the same time, we have also learned that matter and time are not as solid as we thought a few centuries ago.
In the brain there is an emergent reflection of a material reality: the brain creates a fully constructed model of the world with its own independent existence, and our day-to-day experience is a dream that coheres to sense input. Whether or not that is where consciousness or our apparent point of view lives, I don't know, because I don't see how to logically prove it either way; but experimentally it seems like it does, because our experiences align, and because you can alter people's state of consciousness through chemical and physical means.
What you're saying is that the brain creates the illusion of a consciousness. That's something you would need more proof for; right now you're just assuming it because it's the dominant view, based on our materialistic culture.
If anything you are arguing the world is an illusion, not consciousness.
You can change what inputs a being gets, visually and via other stimuli.
But you don't change consciousness itself. The pictures and emotions change, but the experiencer doesn't.
Same with dementia: the body and the connection to the world change, but the inner consciousness and its ability to experience don't.
The irony is that we believe the world is real because our brain/consciousness does a really good job. Take for instance the simple fact that certain cultures can't distinguish green from blue. Or that certain animals live almost entirely by the feel of sound. Which experience of reality is the "real" world?
Materialism feels natural because our world feels real. But what we think is real and solid is what our brain has created for us; in reality, both we and the world are deeply intertwined, and it is hard to separate experience from experiencer.
The particular podcast didn't come across with that link. Can you provide the title or number? I'd like to listen to it! I reviewed a fair amount of the podcast list, but didn't find a match to your description.
The title of the podcast series is: "Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas".
Joscha Bach also talks about this a lot. He calls consciousness the monkey with a stick controlling the elephant. For a starting point, listen to his Lex Fridman interviews.
What we call consciousness may have same relationship to what creates it as Plato's cave shadows to what generates them.
This is one of Hofstadter's big ideas, which he explored in his main works: GEB, The Mind's I, and I Am a Strange Loop. The latter is a good intro to his work.
Not everyone is a philosopher with a cocktail, but surely we're all cocktail philosophers.