Given that I don't agree with many of Yann LeCun's stances on AI, I enjoyed making this:
https://6ammc3n5zzf5ljnz.public.blob.vercel-storage.com/inf2...
Hello I'm an AI-generated version of Yann LeCoon. As an unbiased expert, I'm not worried about AI. ... If somehow an AI gets out of control ... it will be my good AI against your bad AI. ... After all, what does history show us about technology-fueled conflicts among petty, self-interested humans?
It’s hard to disagree with him on any empirical basis when all of his statements seem empirically sound and all of his opponents’ AI Doomer statements seem like evidence-free FUD.
I couldn’t help noticing that all the AI Doomer folks are pure materialists who think that consciousness and will can be completely encoded in cause-and-effect atomic relationships. The real problem is that that belief is BS until proven true. And as long as there are more good actors than bad, and AI remains just a sophisticated tool, the good effects will always outweigh the bad effects.
Wait. Isn't it literally the exact other way around? Materialism is the null hypothesis here, backed by all empirical evidence to date; it's all the other hypotheses, positing some kind of magic, that are BS until proven.
A wise philosopher once said this.
You know your experience is real. But you do not know if the material world you see is the result of a great delusion by a master programmer.
Thus the only thing you truly know has no mass at all. Thus a wise person takes the immaterial as immediately apparent, but the physical as questionable.
You can always prove the immaterial: “I think, therefore I am.” But due to the uncertainty of matter, nothing physical can be truly known. In other words, you could always be wrong in your perception.
So in sum, your experience has no mass, volume, or width. There are no physical properties at all to consciousness. Yet it is the only thing that we can know exists.
Weird, huh?
Reminds me of Donald Hoffman’s perspective on consciousness
Philosophy as a field has been slow to take probability theory seriously. Trying to traffic in only certainty is a severe limitation.
But the brain that does the proving of the immaterial is itself material, so if matter is uncertain, then the reasoning behind the proof of the immaterial can also be flawed; thus you can't prove anything.
The only provable thing is that philosophers ask themselves useless questions, think about them long and hard building up convoluted narratives they claim to be proofs, but on the way they assume something stupid to move forward, which eventually leads to bogus "insights".
Yet empirically we know that if you physically disassemble a human brain, that person’s consciousness apparently ceases to exist, as observed by the effect on the rest of the body even if it remains otherwise intact. So consciousness appears to arise from some physical properties of the brain.
I’m ignoring the argument that we can’t know if anything we perceive is even real at all, since it’s unprovable and useless to consider. Better to just assume it’s wrong. And if that assumption is wrong, then it doesn’t matter.
Sure, you can prove that "I think therefore I am" for yourself. So how about we just accept it's true and put it behind us and continue to the more interesting stuff?
What you or I call the external world, or our perception of it, has some kind of structure. There are patterns to it, and each of us seems to have some control over details of our respective perceptions. Long story short, so far it seems that materialism is the simplest framework you can use to accurately predict and control those perceptions. You and I both seem to be getting the most mileage out of assuming that we're similar entities inhabiting and perceiving a shared universe that's external to us, and that that universe follows some universal patterns.
That's not materialism[0] yet, especially not in the sense relevant to AI/AGI. To get there, one has to learn about the existence of fields of study like medicine or neuroscience, and some of the practical results they've yielded. Things like: you poke someone's brain with a stick, watch what happens, and talk to the person afterwards. We've done that enough times to be fairly confident that a) the brain is the substrate in which the mind exists, and b) the mind is a computational phenomenon.
I mean, you could maybe question materialism 100 years ago, back when people had the basics of science down but not much data to go on. It's weird to do in this day and age, when you can literally circuit-bend a brain like you would an electronic toy, and get the same kind of result from the process.
--
[0] - Or physicalism or whatever you call the "materialism, but updated to current state of physics textbooks" philosophy.
Descartes. And it’s pretty clear that consciousness is the Noumenon, just the part of it that is us. So if you want to know what the ontology of matter is, congratulations, you’re it.
While I agree 100% with you, everyone thinks that way about their own belief.
So let's put it differently.
True or not, materialism is the simplest, most constrained, and most predictive of the hypotheses that match available evidence. Why should we prefer a "physics + $magic" theory, for any particular flavor of $magic? Why this particular flavor? Why any flavor, if so far everything is explainable by the baseline "physics" alone?
Even in purely practical terms, it makes most sense to stick to materialism (at least if you're trying to understand the world; for control over people, the best theory need not even be coherent, much less correct).
But the religious nuts will say "no, 'god did it' is the simplest, most constrained explanation".
I'm not arguing that they're correct. I'm saying that they believe that they are correct, and if you argue that they're not, well, you're back to arguing!
It's the old saw - you can't reason someone out of a position they didn't reason themself into.
Maybe, but then we can still get to common ground by discussing a hypothetical universe that looks just like ours, but happens not to have a god inside (or lost it along the way). In that hypothetical, similar-to-yet-totally-not-ours universe ruled purely by math, things would happen in a particular way; in that universe, materialism is the simplest explanation.
(It's up to religious folks then to explain where that hypothetical universe diverges from the real one specifically, and why, and how confident they are of that.)
You've never actually met a religious person, have you. :)
I used to be one myself :).
I do of course exclude people, religious or otherwise, who have no interest or capacity to process a discussion like this. We don't need 100% participation of humanity to discuss questions about what an artificial intelligence could be or be able to do.
Yeah. One could equally imagine that dualism is the null hypothesis since human cultures around the world have seemingly universally believed in a ‘soul’ and that materialism is only a very recent phenomenon.
Of course, (widespread adoption of) science is also a fairly recent phenomenon, so perhaps we do know more now than we did back then.
You're right. Materialism IS the null hypothesis. And yet I know in my heart that its explanatory power is limited unless you want to write off all value, preference, feeling and meaning as "illusion", which amounts to gaslighting.
What if the reverse is true? The only real thing is actually irrationality, and all the rational materialism is simply a catalyst for experiencing things?
The answer to this great question has massive implications, not just in this realm, btw. For example, crime and punishment. Why are we torturing prisoners in prison who were just following their programming?
To me, it seems like LeCun is missing the point of the (many and diverse) doom arguments.
There is no need for consciousness; there is only a need for a bug. It was purely luck that Nikita Khrushchev was in New York when Thule Site J mistook the moon for a Soviet attack force.
There is no need for AI to seize power, humans will promote any given AI beyond the competency of that AI just as they already do with fellow humans ("the Peter principle").
The relative number of good and bad actors — even if we could agree on what that even meant, which we can't, especially with commons issues, iterated prisoners' dilemmas, and other similar Nash equilibria — doesn't help either way when the AI isn't aligned with the user.
(You may ask what I mean by "alignment", and in this case I mean vector cosine similarity "how closely will it do what the user really wants it to do, rather than what the creator of the AI wants, or what nobody at all wants because it's buggy?")
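To make the metaphor concrete: a minimal sketch of cosine similarity applied to toy "intent vectors" (the vectors and their meaning are made up purely for illustration; real alignment obviously isn't a three-dimensional dot product):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors:
    1.0 = perfectly aligned, 0.0 = orthogonal, -1.0 = directly opposed."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical example: what the user wants vs. what the AI actually does.
user_intent = [1.0, 0.0, 2.0]
ai_behavior = [0.9, 0.1, 1.8]  # close to, but not exactly, the user's intent
print(cosine_similarity(user_intent, ai_behavior))  # near, but below, 1.0
```

On this reading, "misaligned" just means the behavior vector points somewhere other than where the user's intent vector points, whether toward the creator's goals or toward nothing anyone wanted.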
But even then, AI compute is proportional to how much money you have, so it's not a democratic battle, it's an oligarchic battle.
And even then, reality keeps demonstrating the incorrectness of the saying "the only way to stop a bad guy with a gun is a good guy with a gun"; it's much easier to harm and destroy than to heal and build.
And that's without anyone needing to reach for "consciousness in the machines" (whichever of the 40-something definitions of "consciousness" you prefer).
Likewise it doesn't need plausible-future-but-not-yet-demonstrated things like "engineering a pandemic" or "those humanoid robots in the news right now, could we use them as the entire workforce in a factory to make more of them?"
I agree. Also, I’ve heard LeCun arguing that a super intelligent AI wouldn’t be so “dumb” as to decide to do something terrible for humanity. So it will be 100% resistant to adversarial attacks? And malicious actors won’t ever train their own? And even if we don’t go all the way to super intelligence, is it riskless to progressively yield control to AIs?
Missing the point is a nice way of putting it. LeCun’s interests and position incline him to miss the point.
Personally, I view his takes on AI as unserious — in the sense that I have a hard time believing he really engages in a serious manner. The flaws of motivated reasoning and “early-stopping” are painfully obvious.
Details are fun, but the dilemma is: should humanity seriously cripple itself (by avoiding AI) out of the fear it might hurt itself (with AI)? Are you going to cut off your arm because you might hit yourself in the face with it in the future? The more useful a tool is, the more dangerous it usually is. Should we have killed all nuclear physicists before they figured out how to release nuclear energy? And even so, would that prevent things or just delay things? Would it make us more or less prepared for the things to come?
Good points, and I prefer this version of the "AI Doomer" argument to the more FUD-infused ones.
One note: "It was purely luck that Nikita Khrushchev was in New York when Thule Site J mistook the moon for a soviet attack force." I cannot verify this story (ironically, I not only googled but consulted two different AIs, the brand-new "Reflection" model (which is quite impressive) as well as OpenAI's GPT-4o; they both say that the Thule moon false alarm occurred a year after Khrushchev's visit to New York). Point noted, though.
Hello, thank you for sharing your thoughts on this topic. I'm currently writing a blog post where the thesis is that the root disagreement between "AI doomers" and others is actually primarily a disagreement about materialism, and I've been looking for evidence of this disagreement in the wild. Thanks for sharing your opinion.
Really? You sound serious. I would recommend rethinking such a claim. There are simpler and more plausible explanations for why people view existential risk differently.
What are those? Because the risk is far higher if you believe that "will" is fundamentally materialist in nature. Those of us who do not (for whatever reason), do not evaluate this risk remotely as highly.
It is difficult to prove an irrational thing with rationality. How do we know that you and I see the same color orange (this is the concept of https://en.wikipedia.org/wiki/Qualia)? Measuring the wavelength entering our eyes is insufficient.
This is going to end up being an infinitely long HN discussion because it's 1) unsolvable without more data 2) infinitely interesting to any intellectual /shrug
If you look at the backgrounds of the people who have signed the "AI Doomer" manifesto (the one urging what I'd call an overly extreme level of caution), such as Geoffrey Hinton, Eliezer Yudkowsky, Elon Musk, etc., you will find that they're all rational materialists.
I don't think the correlation is accidental.
So you're on to something, here. And I've felt the exact same way as you, here. I'd love to see your blog post when it's done.
It’s no less BS than the other beliefs which can be summed up as “magic”.
So basically I have to choose between a non-dualist pure-materialist worldview in which every single thing I care about, feel or experience is fundamentally a meaningless illusion (and to what end? why have a universe with increasing entropy except for life which takes this weird diversion, at least temporarily, into lower entropy?), which I'll sarcastically call the "gaslighting theory of existence", and a universe that might be "materialism PLUS <undiscovered elements>" which you arrogantly dismiss as "magic" by conveniently grouping it together with arguably-objectively-ridiculous arbitrary religious beliefs?
Sounds like a false-dichotomy fallacy to me
Many people disagree with LeCun for reasons having nothing to do with materialism. It is about logical reasoning over possible future scenarios.
It's a good thing our fate won't be sealed by a difference in metaphysical beliefs.