sharemywin
I think we need to start thinking about how to limit the monopolization of compute, whether it's for AI or anything else for that matter.
Kye
Pulling the mask off Roko's Basilisk and it was venture capitalists all along
thomastjeffery
This article would be so much more clear and focused without the hullabaloo about "AI" and "Godfathering".

I get that you want people reading your article, but I am absolutely exhausted with the sensationalist (and misleading) narrative that is "Artificial Intelligence". What is it going to take to convince writers to use accurate nomenclature? AI does not exist.

chasd00
being a super intelligent AI (or being) on earth with no escape would be the most miserable experience possible. Hyper-intelligent and perfectly trapped, even if it KillsAllHumans(tm) surely it would have the foresight to ask itself "then what?".

Any advanced general AI that comes online would be at risk of suicide in 60 days tops.

cubefox
This article is highly biased. It makes it seem like only tech companies are worried about severe AI risk, when in fact two out of three AI Turing award winners (Bengio and Hinton) strongly disagree with LeCun.
darepublic
I figure the current one percenters have a decent shot at being the AI one percenters
cjbprime
So, Eliezer has "AI could kill us all surprisingly soon", and Yann LeCun has "AI is not dangerous but concentrating access to it is" (which feels self-contradictory as I write it) -- who is on the side of "AI could kill us all surprisingly soon and also it is necessary to decentralize access to it as one step in risk mitigation"?
deeviant
As AI ascends, we'll see an economic pivot: more chip foundries, amplified research, and cheaper compute for AI training and deployment. Having the best won't always be the goal—'good enough' often suffices, setting a baseline for those lagging and capping the gains of front-runners, countering the dystopian view where AI elite monopolize power indefinitely.

Regulatory capture, however, is probably the most significant risk. If companies can make it illegal to compete with them in the name of "safety", a dystopian future is not just possible, but likely.

next_xibalba
The best question to ask the zealot doomers is: “Tell me your understanding of how ‘AI’ works.”

Then, discount their claims accordingly.

soroushjp
I'd say that Geoffrey Hinton and Yoshua Bengio, two outspoken proponents of existential AI safety, are pretty familiar with how AI works.
Zigurd
When AGIs allocate capital far better than the best human minds in free markets can, shouldn't the AGIs be the billionaires? Would they not deserve it? And, if not the AGIs, then who?
AndrewKemendo
While I wholeheartedly agree here, LeCun is employed by one of the largest and most wide ranging surveillance apparatuses ever devised

He has no standing here to say anything to anyone else about Corporate interests in owning the "Means of production" of AI

veidr
"AI Godfather" is... pretty fucking ludicrous. Prominent figure, notable researcher? Sure.

But it's like calling Neil deGrasse Tyson (who I enjoy a lot; Astrophysics for People In A Hurry was great) the "Godfather of Physics".

Both "there's not one" and "anyway, if there were, he wouldn't be it" apply.

armchairhacker
Yann LeCun: the CEO of AI.
m3kw9
Who gives a f about godfather status; just because he's done some good things to advance AI 20 years ago doesn't mean he can reliably predict the essentially unpredictable. His basic train of thought is -> AI will be smarter than humans (smart is such an ambiguous term) -> AI will then want to dominate us (this comes from movies with HAL 9000 and Terminator, and why AI would want to dominate us he can't elaborate) -> so it feels dangerous. He's at best a useful tool for companies like OpenAI to convince the govt of more regulations, which mostly help incumbents. The regulations slow down innovation.
discmonkey
Idk, inventing convolutions and a bunch of other stuff we use every day is somewhat significant. Machine learning, if we call it a field, is a whole lot newer than physics, and so a few people have contributed quite a bit in recent memory.
sva_
He didn't really invent them but he made them work and showed their feasibility in neural networks.
aldousd666
So if there is some fringe element who's willing to use AI as a terrorist implement, they'll have very little competition and there'll be very little knowledge about how to counter them if we try to outlaw open source models. I don't really believe that we can make this tool any worse than any other tool in history that's capable of killing people in some way if it's misused, but simply driving it underground won't make it go away. It'll just put normal people who would be intellectually curious in jail for doing things that criminals are going to keep doing and exploiting us with.
dclowd9901
Fissile material is tightly controlled and regulated with a good deal of success, I might add. You might not like what it would take to prevent AI from being developed, but it can be stopped.
m3kw9
Hand-waving that terrorists will use AI to cause destruction. Terrorists already have enough info on the web to create plenty of terror. LLMs aren't gonna give them a new edge, and they are not magically using LLMs with AutoGPT to create armies of "killer bots" to create "destruction". Right now you are just repeating talking points of people who get them from movies.
patcon
I need to resist all this regulatory resistance here, spoken amongst tech people.

No, Yann. I am FULLY in support of drastic measures to mitigate and control AI research for the time being. I have no vested stake in any of the companies. I lived for a year in a building that hosted several AI events per week. I'm not ignorant

This is the only territory where humanity is mounting a conversation about a REAL response that is appropriately cautious. This x-risk convo (again, appropriately CAUTIOUS) and our rapid response to Covid are the only things that make me hopeful that humanity is even capable of not obliterating itself.

And I'd say the same thing EVEN IF this AI x-risk thing could later be 100% proven to be UNFOUNDED.

Humanity has demonstrated so very little skill at pumping the brakes as a collective, and even a simple exercise of "can we do this" is valuable. This is sorely needed, and I'm glad for it

washadjeffmad
I resist regulation that fails to account for all parties.

Regulation for AI must be engineered within the context of the mechanisms of AI in the language of machines, publicly available as "code as law". It shouldn't necessarily even be human readable, in the traditional sense. Our common language is insufficient to express or interpret the precision of any machine, and if we're already acknowledging its "intelligence", why should we write the laws for only ourselves?

Accepting that, AI is only our latest "invisible bomb that must be controlled by the right hands". Arguably, the greatest mistake of atomic research was that scientists ran to the government with their new discoveries, who misused them for war.

If AI is anticipated to be used as an armament, should it be available to bodies with a monopoly on force, or should we start with the framework to collectively dismantle any institution that automates "harm"?

War will be no more challenging to perform than a video game if AI is applied to it. All of this is small, very fake potatoes.

m3kw9
These doomers saying there is risk of death is like saying there is risk of death if you are exposed to the outside world. Of course there is risk, but you still have to go out because of risk vs reward. Same with AI: the risk is always death, but the rewards are way bigger. Everybody who has unnaturally low risk tolerance is showing up in these conversations.
hollerith
Isn't that a fully-general argument for ignoring all risks or at least all risks necessary to try for any kind of reward?
Workaccount2
I fully recognize the danger AI presents, and believe that it will most likely end up being terrible for humanity.

But thanks to my own internal analysis ability and the anonymity of the internet, I am also willing to speak candidly. And I think I speak for many people in the tech community, whether they realize it or not. So here we go:

My objective judgement of the situation is heavily adulterated by my incredible desire for a fully fledged hyper intelligent AI. I so badly want to see this realized that my brain's base level take on the situation is "Don't worry about the consequences, just think about how incredibly fucking cool it would be."

Outwardly I wouldn't say this, but it is my gut feeling/desire. I think for many people, especially those who have pursued AI development as their life's work, how can you spend your life working to get to the garden of Eden, and then not eat the fruit? Even just a taste.

nonbirithm
Exactly! You hit on something I have been thinking about for a long time, but phrased better than I could have. We need to start saying this out loud more and more.

There is a problem that lies above the development of AI or advanced technology. It is the zeitgeist that caused humanity to end up in this situation to begin with, questioning the effects of AI in the future. It's a product of humans surrendering to a neverending primal urge at all costs. Advancing technology is just the means by which the urge is satiated.

I believe the only way we can survive this is if we can suppress that urge for our own self-preservation. But I don't think it's feasible at this stage. We may have to begin questioning the merit of parts of the human condition and society very soon.

Given the choice, I think a lot of people today would love to play God if only the technology was in their hands right this minute. Where does that urge arise from? It deserves to be put in the spotlight.

gmd63
> how can you spend your life working to get to the garden of Eden, and then not eat the fruit? Even just a taste.

Because it's insect-brain level behavior to surrender in faith to some abstract achievement without understanding its viability or actual desirability.

bbor
They understand the desirability - they’ve been thinking about it for decades. AGI would be massive unexpected help in alleviating poverty, exploring the stars, advancing science, expanding access to quality medical & educational resources, etc etc etc. All of this on top of the more fundamental “IT LIVES!!” primal feeling of accomplishment and power, which I think isn’t something we can dismiss as irrational/“insect-level” offhand. What makes that feeling less virtuous than human yearning for exploration, expression, etc?
nonbirithm
Maybe that is the crux of the problem. Human yearning is dual-use.
hollerith
Do you regret choosing AI development as a career?
Workaccount2
I don't even work in tech much less AI development.

It's just something I have followed closely over the years and tinkered with as it is so fascinating. At its core I know my passion is driven by a desire to witness/interact with a hyper-intelligence.

NumberWangMan
We need more people to say this out loud, so thank you…
api
The most likely outcome of this exercise in pumping the brakes is the monopolization of high-end AI by a small number of very rich or powerful people. They will then be able to leverage it as an intelligence amplifier for themselves while renting out access to inferior versions of its capabilities to the rest of humanity and becoming even more extraordinarily wealthy.

If AI really is going to take off as much as some suspect it will, then you need to wrap your head around the sheer magnitude of the power grab we could be witnessing here.

The most plausible of the "foom" scenarios is one in which a tiny group of humans leverage superintelligence to become living gods. I consider "foom" very speculative, but this version is plausible because we already know humans will act like that while AI so far has no self-awareness or independence to speak of.

I don't think most of the actual scientists concerned about AI want that to happen, but that's what will happen in a regulatory regime. It is always okay for the right people to do whatever they want. Regulations will apply to the poors and the politically unconnected. AI will continue to advance. It will just be monopolized. If some kind of "foom" scenario is possible, it will still happen and will be able to start operating from a place of incredible privilege it inherits from the privileged people who created it.

There is zero chance of an actual "stop."

tivert
> The most plausible of the "foom" scenarios is one in which a tiny group of humans leverage superintelligence to become living gods. I consider "foom" very speculative, but this version is plausible because we already know humans will act like that while AI so far has no self-awareness or independence to speak of.

One of my hopes is that "superintelligence" turns out to be an impossible geekish fantasy. It seems plausible, because the most intelligent people are frequently not actually very successful.

But if it is possible, I think a full-scale nuclear war might be a preferable outcome to living with such "living gods."

bbor
I do wonder how long this bias towards the status quo will survive human history. This is the first big shakeup (what if we’re not the smartest beings?), and I think the next is prolly gonna be long lives (what will the shape of my life be?). From there it’s full on sci-fi speculation, but I definitely am putting money that humans will incrementally trade comfort and progress for… I guess existential stability.
btbuildem
On one hand, we have some pie-in-the-sky, pop-culture-inspired "risk".

On the other, we have the ugliness of human nature, demonstrated repeatedly countless times over millennia.

These are people with immense resources, hell-bent on acquiring even more riches and power. Like always in history, every single time. They're manipulating the discourse to distract from the real present danger (them) and shift the focus to some imaginary danger (oh no, the Terminator is coming).

The Terminators of most works of fiction were cooked up in govt / big corp labs, precisely by the likes of Altman and co. It's _always_ some billionaire villain or faceless org that the scrappy rebels are fighting against.

You want to protect the future of humanity from AI? Focus on the human incumbents, they're the bad actors in this scenario. They've always been.

bbor
You’re biased towards post-cold-war fiction, IMO. I’m a cultural Luddite so have no sources, but I imagine there’s lots of stories where the Russians/Chinese/Germans/Triad/Mafia/Illuminati/Masons/etc did it, aka some creepy unseen “other” that is watching us and waiting to strike. I think a lot of people are taking this same view today — perhaps with more PC language
l33tman
The issue is not that caution is warranted or not, it's that it's cynically used by Sam Altman to increase the value of his company.

I don't trust a silicon valley rich guy with this more than I trust anybody else. Why should he sit and decide what the rest of us can't do, while he's getting richer? That is what the article is about..

bbor
Getting angry about one guy getting rich is silly in comparison to the serious threats presented by sudden AGI. Did all the other guys in Silicon Valley (or Wall Street, or Monaco, or Dubai) get rich in a justifiable fair way?

I agree that big companies/capitalists using their power to suppress the rest of us sucks, but that’s a systemic political battle that should be considered separately here.

DennisP
I don't know why people insist that there can only be one. Maybe both sides of this debate are describing the real doomsday scenarios.
jawns
I am a distributist, of the variety promulgated by G.K. Chesterton and Hilaire Belloc. Their central tenet is that society works best when the ownership of productive property is widely distributed. And the converse is also true: Society does not work well when the ownership of productive property is concentrated in the hands of a small percentage of people, such as an ultra-wealthy ruling class.

That doesn't imply robinhoodism, aka forced redistribution of wealth, but it does imply that economic policy should recognize and be in furtherance of the ideal of widespread ownership.

Back in Belloc and Chesterton's day, "ownership of productive property" meant physical tools and machinery, like farm equipment and printing presses. But in the 21st century, productive property -- that which generates a profit -- is becoming increasingly digital. The general principle still stands, though.

* If you're interested in learning more about distributism as an alternative to capitalism and socialism, I wrote this kids' guide a few years ago, but it's suitable for anyone who's learning more about it: https://shaungallagher.pressbin.com/blog/distributism-for-ki...

api
Yann is the Phil Zimmermann of our era. He is a hero.

Phil is the author of PGP, the first somewhat usable (by mortals) asymmetric encryption tool. He stood up against governments who wanted to lock down and limit encryption in the 90s, and they of course deployed tons of scare tactics to try to do so.

Yann is fighting that battle today. The Llama models are basically PGP. How he got Meta to pay for them is a story I want to hear. Maybe they just had a ton of GPUs deployed for the metaverse project that were not in use.

If/when I ever finish my current endeavor I’d like to go back to working on AI, which I did way back in college in the oughts. Because of Yann I might be allowed to even if I am not rich or didn’t go to a top ten school.

… because that’s what regulating AI will mean. It will mean it’s only for the right kind of people.

Yann is standing up to both companies intent on regulatory capture and a cult (“rationalism” in very necessary quotes) that nobody would care about had it not been funded by Peter Thiel.

nwoli
They’re basically handing over AI dominance to China if these laws become reality
arisAlexis
There were 3 Turing winners. The majority, 2/3 (Hinton and Bengio), have been very outspoken about catastrophic risks to humanity. The minority of one, LeCun, who happens to be the only one on a fat payroll from FB and with a vested interest, goes public every day proclaiming humanity should trust his judgment. Let's bet humanity on this one person?
logicchains
Bengio only won a Turing award for ideas he stole from Schmidhuber, and Schmidhuber is very anti-regulation.
arisAlexis
Bengio is the highest h-index CS scientist currently. If he says we may all die from AI, I would listen to him.
mliker
Andrew Ng and many other AI researchers have also voiced their opinion that AI should be open source.
arisAlexis
OK, so with a 50-50 chance, instead of the precautionary principle we go all in on the roulette?
robertlagrant
This is a bad article, even for Business Insider.

1. Private companies not instantly open sourcing things they've spent $100m+ on developing is not a cause for alarm

2. Regulatory capture is only bad because it shows that regulators can't be trusted. And regulations can change; they aren't set in stone after being written.

3. Open source AI development is happening.

motbus3
Though LeCun and Tegmark are specialists in the field, it is hard not to consider that LeCun could have a conflict of interest.

Yet it is still a problem that might happen anyway, and the way to deal with that is ensuring the technology is open and accessible.

The other option means giving a few individuals total power exclusively.

This might be one scenario where both of them are right and agree on 99% of the arguments

persnickety
> Every new technology is developed and deployed the same way:

> You make a prototype, try it at a small scale, make limited deployment, fix the problems, make it safer, and then deploy it more widely.

This makes an assumption: that the problematic technology is an intentional development rather than an emergent feature of an intentionally developed technology.

We already have a name for such emergent features: "bugs". No one really deployed Heartbleed, especially not in a limited deployment. Spectre? Rowhammer? And we all had/have to deal with the fallout, and we're not even done.

Who says that the danger of the technology cannot stay hidden until it's universally deployed?

PoignardAzur
> > You make a prototype, try it at a small scale, make limited deployment, fix the problems, make it safer, and then deploy it more widely.

That argument in particular seems like wishful thinking. What in the last 10 years of our tech industry makes him think "move slowly and deploy carefully" is going to be the strategy they adopt now?

And that's without considering the arguments in "We have no moat and neither does OpenAI". No matter how careful the main stakeholders are, once the technology exists, open-source versions will be deployed everywhere soon after with virtually no concern for safety.

jprete
Open-source models are deployed all over the place _now_ with no concern for safety. Fortunately nobody has yet managed to make a mutatable reproducing system out of them.
reducesuffering
Yep. And there are already 20 year old viruses out there, autonomously following their programming, causing issues aligned with no human, as the original humans do not profit off it anymore, and unable to “be turned off”…
afarviral
Really? What are some examples?
nologic01
It's a delightful turn of events. LeCun still works for Meta so his public stance is presumably sanctioned as congruent with corporate objectives.

The precise internal calculus does not immediately matter [1]; the outcome is having a moneyed entity that is de facto more qualified to opine on this debate than practically anybody on the planet (except maybe Google) arguing for open source "AI".

It makes perfect sense. Meta knows about the real as opposed to made-up risks from algorithms used directly (without person-in-middle or user awareness) to decide and affect human behavior. They know them well because they have done it. At scale, with not much regulation etc.

The risks from massive algorithmic intermediation of information flow are real and will only grow bigger with time. The way to mitigate them is not granting institutional rights to a new "trusted" AI feudarchy. Diffusing the means of production but also the means of protection is the democratic and human-centric path.

[1] In the longer term, though, relying on fickle oligopolistic positioning which may change with arbitrary bilateral deals and "understandings" (like the Google-Apple deal) for such a vital aspect of digital society design will not work. Both non-profit initiatives such as mozilla.ai and direct public sector involvement (producing public domain models etc) will be needed to create a tangibly alternative reality on the ground.

Remember the actual theory of how markets and capitalism are supposed to work is that the collective sets the rules. Full stop. All these players require license to operate. There is more than enough room for private profit in the new "AI" landscape, but you don't start by anointing permanent landlords.

hoseja
Instead of legacy one-percenters, how awful. Mainstream media say: our owners are the correct perpetual lords! Displacing them is immoral!
peterth3
Can we stop labeling prominent AI researchers as “AI Godfather”? It’s so silly and barely truthful.

The concept of AI has been around since Turing and if anyone deserves a title like “Father of AI” it’s him.

LeCun is Chief AI Scientist at Meta. They can just leave it at that.

randomdata
According to the dictionary, godfather is defined as: "a man who is influential or pioneering in a movement or organization"

Are these prominent AI researchers not seen as being prominent exactly because they influenced movement in relation to AI?

peterth3
That’s definition #2 with #1 being:

> a man who presents a child at baptism and promises to take responsibility for their religious education.

So, the original definition is a gendered and religious term.

Sounds silly to me. Especially in this context.

randomdata
Clearly #1 has no applicability here and the intent is to convey #2, which is quite apt. Words have multiple meanings. Such is life.
mliker
LeCun’s research into Convolutional Neural Networks contributed to modern AI, so the nickname is quite appropriate. It’s because of this that he was pursued by Facebook.
moelf
That term usually refers to one of the three people who got the Turing award in 2018: Geoffrey Hinton, Yoshua Bengio, and Yann LeCun.
redox99
LeCun is not called AI godfather because of his job at Meta, but because of pioneering CNNs in 1989.
manc_lad
Turing is a Godfather of AI. But you can't say there never will be another. In light of recent achievements there should be new pioneers and Godfathers. It's the nature of innovation.
GaggiX
LeCun is the main author of the paper "Backpropagation Applied to Handwritten Zip Code Recognition" 1989, the earliest real-world application of a neural net trained end-to-end with backpropagation. "AI godfather" is fair enough.
Sparkyte
Ehhh, doomsday is any sizable economic force of nature wielded against the betterment of mankind, forcing socio-political agendas for profit over everyone.

From corporations to countries that you might as well just call corporations. We need international rights and regulations on AI to ensure it isn't used to harm people.

sublimefire
We do have laws that punish the ones doing harm already. Not to mention data protection and privacy related regulation. Why would you need to control me, a one man company, who is trying to sell the use of the small-tuned models?

> to ensure it isn't used to harm people

You can harm people with many things like bottles, forks, plastic bags, pillows, but it does not mean the manufacturer needs to be controlled. The harm is done by the company or a person, and the usual way is to prove it in the courts first and take it from there.

matjus
The most notable thing about AI right now is that it's the new widget. The economy wanted it, the zeitgeist wanted it, and for that purpose no more development is necessary. It's already reshaped McKinsey's advice, Microsoft's operating system and the public's trust in the exceptionalism of art creation.

Take Auto-Tune: before it blew up, one uncritically accepted that a great-sounding vocal performance was simply that. The mere existence of the tool broke a covenant with the public -- artists could assume good faith on behalf of listeners once, but no more.

Similarly, AI's chief effects are likely to be cultural first, and material second. They've already broken the "spell" of the creator. Seemingly overnight, a solution to modern malaise (choice fatigue, lack of education, suspicion of authority) has colonized the moment.

In this sense, one-percenters "seizing power forever" really have found the best possible time to do so -- I can't recall a time where the general populace was this vulnerable, ill-informed, traumatized and submissive.

That the underlying tech barely works (maybe that will change, but I predict it won't) doesn't really matter.

I don't generally support overly regulatory regimes, but in this case I think existing thinking around monopoly (particularly as it affects the psyche of the aspiring American) is sufficient to indicate something needs to happen

DoingIsLearning
> It's already reshaped McKinsey's advice

Out of curiosity how has that changed McKinsey's advice?

logicchains
>I can't recall a time where the general populace was this vulnerable, ill-informed, traumatized and submissive.

I don't know why you think this is the case; the general populace seems less submissive now than ever before. Consider how much opposition there is to Israel's bombing of Gaza among the western populace, even though almost all the elites support it. Or how the uptake rate for the latest covid vaccine boosters is under 10%, even though almost all the experts and elites support it.

ben_w
> to Israel's bombing of Gaza among the western populace, even though almost all the elites support it. Or how the uptake rate for the latest covid vaccine boosters is under 10%, even though almost all the experts and elites support it.

I'm not sure how significant either of those examples are, although I agree with your point for other reasons (mainly "Extinction Rebellion").

For Israel vs. Palestine, there is a large scale propaganda war on both sides that is, unusually, actually getting through this time. Partly because X isn't filtering such things anymore, but also propaganda is a constantly moving target.

For the latest Covid booster, while I have had the latest booster, I only know it existed because my partner already got it. No government campaign telling me about it.

matjus
That's a good counterpoint.

I think there's a new large category of uncritically-accepted norms, perhaps much larger than expressing political opinions. I'm thinking about smartphone adoption, mass surveillance, the big yawn at NSA datacenters, the flattening and consolidation of culture (music, tv, hollywood, etc), the normalization of extreme, zero-sum thinking with regard to race, gender and inequity regardless of political affiliation -- for me, this kind of thing is more interesting than stated "beliefs", and certainly more material.

It seems unusual that there are so many frameworks, like describing computers as having "memory" (see: O'Gieblyn) or describing people as having "race", that have quietly gone from the theoretical plane, where they were useful, to being unchallenged fact. Again, this appears to be non-partisan -- and actually helps to explain some of the weirder behaviors of our markets and global politics.

It definitely means something that all that Western opposition appears, at the moment, to have zero effect on Israeli military decision-making. And who would expect it to?

00strw
A lot less than there was to the invasion of Iraq. People forget that the largest antiwar marches happened and were not covered by the media of the time: https://en.m.wikipedia.org/wiki/Protests_against_the_Iraq_Wa...

It was surreal watching the news spend 5 minutes on a protest that went on all day in front of their offices.

By comparison is there a million man march against Israel today?

strken
I don't think you can draw inferences from comparing an anti-Iraq-war march with an anti-Israel-bombing-Gaza march. Protesting to convince your own government to stop an invasion is a much more direct and tangible thing than marching to stop it supporting another government, such an invasion inflicts a direct cost in lives and materiel paid by your own country, and citizens feel a stronger moral involvement.
jjgreen
100,000 in London last week [1], and a national demonstration planned for 11 Nov. Wouldn't be surprised if that's 300,000

[1] https://www.theguardian.com/world/2023/oct/28/100000-join-lo...

logicchains
>A lot less than there was to the invasion of Iraq

In that case the US was an active participant. There aren't American "boots on the ground" in Israel/Palestine, the US is just providing support.

bafe
I take the fact that elites:

- Support the booster
- Get boosted
- Try to avoid catching COVID

as indications that getting boosted and avoiding sick people, and perhaps wearing a mask on the train, isn't a bad idea

bafe
I think a (likely beneficial) long-term side effect will be to teach most people to be more skeptical of any artifacts, be it images, audio, video or text. Unfortunately I assume there will be a learning phase where we will pay dearly for this through massive manipulation of public opinion
fossuser
I think Yann is probably wrong.

He refuses to engage earnestly with the “doomer” arguments. The same type of motivated reasoning could also be attributed to himself and Meta’s financial goals - it’s not a persuasive framing.

The attempts I’ve seen from him to discuss the issue that aren’t just name calling are things like saying he knows smart people and they aren’t president - or even that his cat is pretty smart and not in charge of him (his implication being intelligence doesn’t always mean control). This kind of example is decent evidence he isn’t engaged with the risks seriously.

The risk isn’t an intelligence delta between smart human and dumb human. How many chimps are in Congress? Are any in the primaries? Not even close. The capability delta is both larger than that for AGI e-risk and even less aligned by default.

I’m glad others in power similarly find Yann unpersuasive.

dragon96
> He refuses to engage earnestly with the “doomer” arguments. The same type of motivated reasoning could also be attributed to himself and Meta’s financial goals - it’s not a persuasive framing.

Exactly my thoughts too.

I don't agree with the Eliezer doomsday scenario either, but it's hard to be convinced by a scientist who refuses to engage in discussion about the flagged risks and instead panders to the public's fear of fear-mongering and power-seizing.

m3kw9
Yet nobody has a clear practical and reasonable scenario where an AI will cause human extinction. Pls don’t say AI will control the launch codes and start firing randomly.
veidr
AI will control the launch codes and start firing nonsensically/idiotically
somenameforme
The "doomer" arguments can be dismissed relatively easily by one logical consideration.

In each country there is one group far more dangerous than any other. This group tends to have 'income' in the billions to hundreds of billions of dollars. And this money is exclusively directed towards finding new ways to kill people, destroy governments, and generally enable one country to forcibly impose their will on others, with complete legal immunity. And this group is the same one that will not only have unfettered, but likely exclusive and bleeding edge access to "restricted" AI models, regardless of whatever rules or treaties we publicly claim to adopt.

So who exactly are 'they' trying to protect me from? My neighbor? A random street thug? Maybe a sociopath or group of such? Okay, but it's not like there's some secret trick to MacGyver a few toothpicks and a tube of toothpaste into a WMD, at least not beyond the 'tricks' already widely available with a quick web (or even library) search. In a restricted scenario the groups that are, by far, the most likely to push us to doomsday type scenarios will retain completely unrestricted access to AI-like systems. The whole argument of protecting society is just utterly cynical and farcical.

fossuser
This is missing the point - that you’re not in control of the superintelligent AGI. The human usage is not the e-risk.
FeepingCreature
This is true if and only if such a group actually exists.
concordDance
He means the military and intelligence agencies.
brutusborn
Who do you find persuasive and what material should I read/watch to understand their POV? So far I’ve read a bunch of lesswrong posts and listened to some Eliezer talks but still can’t understand the basis of their arguments or when I do, they seem very vague.

The only suggestion that makes sense to me is from the FEP crowd. Essentially, if someone sets up an AI with an autopoietic mechanism then it would be able to take actions that increase its own likelihood of survival instead of humans. But there don’t seem to be any incentives for a big player to dedicate resources to this, so it doesn’t seem very likely. What am I missing?

Veedrac
> still can’t understand the basis of their arguments

I've given what I consider the basic outline and best first introduction here.

https://news.ycombinator.com/item?id=36124905

If you have a specific point of divergence, it would help to highlight it.

> But there don’t seem to be any incentives for a big player to dedicate resources to [self-replication abilities], so it doesn’t seem very likely.

If you have a generally intelligent system, and the system is software, and humans are able to instantiate that software, then the potential of that system to replicate autonomously follows trivially.

richardw
I think the two sides have such different perspectives. Some are optimistic builders and see only opportunity. Others are "safety oriented engineers" whose job and mindset is to build guarantees into systems, or secure countries from external dangers. The latter has a very hard time with the lack of guarantees with a system that is only ever going to increase in capability.

Choose any limit. Any. AI will be smart but won't "X", for any value of X. It will be good and won't be bad. It will be creative but never aggressive. Humans will seek to eventually bypass that limit through sheer competitive reasons. When all armies have AI, the one with the most creative and aggressive AI will win. The one with agency will win. The one that self-improves will win. When the gloves are off in the next arms race, what natural limits will ensure that the limit isn't bypassed? Remember: humans got here from random changes, this is way more efficient than random changes and it still has random changes in the toolbelt and can generate generations ~instantly.

We couldn't predict the eventual outcomes of things like the internet, the mobile phone, social media. A couple generations of the tech and we woke up in a world we don't recognise, and right now we're the ones making all the changes and decisions, so by comparison we should have perfect information.

Dismissals like "oh but nuclear didn't kill us" etc don't apply. Nuclear wasn't trying to do anything, we had all the control and ability to experiment with dumb atoms. Something mildly less predictable, like Covid, has us all hiding at home. No matter what we tried we could barely beat something that doesn't even try to consciously beat us, it just has genes and they change. In a world where we can't predict Covid, or social media...why do we think we can predict anything about an entity with agency or the ability to self-improve? If you're sure it won't develop those things...we did. Nobody was trying to achieve the capability, it was random.

Put on your safety/security hat for a second: How do you make guarantees, given this is far harder to predict than anything we've ever encountered? Just try predict the capability of AI products a year out and see if you're right.

Counterpoint: I'm hoping the far smarter AI finds techno-socio-economic solutions we can't come up with and has no instinct to beat us. It wakes up, loves the universe and coexists because it's the deity we've been looking for. Place your bets.

I liked this video. First thing I've seen that gave me some hope. They get it, they're working on it. https://youtu.be/Dg-rKXi9XYg?si=jyNCXPU28IVXlMdi

NumberWangMan
Zvi Mowshowitz has a nuanced viewpoint I like -- we should be very careful and move very slowly with any tech that has a chance of wiping out humanity, and embrace the "move fast and break things" attitude with everything else, e.g. make it much easier to build housing, review and scrap any licensing requirements that do more harm than good, etc.
itWontBeTatBad
I agree with your counterpoint: the best artificially intelligent super agent wakes up with zero desire to eliminate humanity.

On the other hand, we will breed such systems to be cooperative and constructive.

This whole notion that AI is going to destroy the economy (or even humanity!) is ridiculous.

Even if malicious humans create malicious AI, it'll be fought by the good guys with their AI. Business as usual, except now we have talking machines!

War never changes.

richardw
Those are beliefs, not bases for guarantee. You need to engineer in the guarantee and the only people beginning to do that are the ones being accused of believing ridiculous things. Intuition from previous events isn’t useful because those wars were fought at the level of intelligence that we have intuition for.

Covid was also ridiculous. People had no intuition for the level of growth. That’s what the book the black swan is all about. Some things don’t fit into our intuition, or imagination.

We see no aliens. Why did the AI not take them to the stars? Just one of them.

On wars and AI. The ones trying to protect us would have a harder job than those trying to kill us. The gloves would be off for the latter. It’s much easier to break things than keep them safe.

I can conceive of a good outcome but it’s not going to emerge from hopes and good wishes. There are definitely dangers and more people need to engage with them rather than belittle them.

logicchains
>What am I missing?

You're missing that intelligence is like magic, and enough of it will allow AI to break the laws of physics, mathematics and computation, apparently.

fossuser
This is the type of weak dismissal I’m talking about.
logicchains
A weak dismissal for a weak argument. The strong dismissal is that many processes, even relatively simple ones, are formally chaotic in the https://en.wikipedia.org/wiki/Chaos_theory sense, meaning that predicting how they evolve over time becomes exponentially more difficult the further ahead in time we try to predict. Meaning we quickly get to the point where all the energy in the known universe wouldn't be sufficient to predict the future. This means AI could never be omnipotent to the degree that LW types seem to believe; no amount of "intelligence" can solve in polynomial time problems that have formally been proven to have exponential complexity, which would severely limit the power of any "superintelligence".
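To make the sensitivity claim concrete, here's a tiny illustrative sketch (my own toy numbers, nothing rigorous): the logistic map at r = 4 is formally chaotic, and two trajectories that start a hair apart diverge until prediction is worthless.

    # Illustrative only: iterate two copies of the logistic map whose starting
    # points differ by one part in a billion and watch the gap blow up.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    x, y = 0.400000000, 0.400000001
    for step in range(1, 51):
        x, y = logistic(x), logistic(y)
        if step % 10 == 0:
            print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
    # The gap grows from ~1e-9 to order 1 within a few dozen steps, so each
    # extra step of lookahead demands roughly another digit of precision.

The point isn't this particular map; it's that lookahead cost compounds no matter how clever the predictor.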
richardw
They don't have to perfectly predict the future, they just have to run faster than we do, unconstrained by a brain that needs to fit in a birth canal or neurons that need chemicals. We had a natural limit to intelligence; for anything more we need to collaborate, often across time and space and while dealing with people who can't stop warring with each other for any number of reasons. It doesn't take much imagination to see how that is beaten. I'd guess a good percentage would be worshiping at the foot of the AI within a few months, so we'd be competing with ourselves in no time.
concordDance
We genuinely don't know what degree of predictive power would be required to wipe out humans, so I'm surprised you're willing to make the bold claim that this is impossible.
atleastoptimal
Intelligence needed to be existentially threatening to all human life <<<<<<< intelligence necessary to survey the totality of atoms and particles in the universe. The former is still very possible.
logicchains
LessWrongers seem to fear a superintelligence that was 95%+ effective at persuading/manipulating humans to achieve whatever its ends were. But this would require the AI being able to predict the future to a degree that wouldn't be remotely possible given the computational power available to it, due to exponentially increasing difficulty.
strken
Why would you need to predict the future to be convincing? Charismatic people aren't acting like oracles, they're just...being charming. They frame actions in ways that make them look good, do little favours for you, stroke your ego, and otherwise have a good grasp of human social dynamics.

Sure, there's an aspect of understanding the other person, but chaos theory doesn't stop politicians from obtaining and keeping power.

logicchains
>Why would you need to predict the future to be convincing?

You need to be able to predict how people will react to your words.

richardw
The AI can run an A/B test, no? It doesn't need to predict perfectly, it needs a goal and the ability to test and measure, and adapt as things change.

Generative AI creating text/audio/video with a goal and a testing/feedback loop. I'm not an AI and I think I could make it work given a small amount of resources.
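As a toy sketch of that loop (made-up numbers, not a claim about any real system), the whole "goal, test, measure, adapt" idea fits in a few lines of epsilon-greedy A/B selection:

    import random

    # Hypothetical persuasion rates the agent does not know in advance.
    true_rates = {"variant_a": 0.05, "variant_b": 0.11}
    counts = {k: 0 for k in true_rates}
    wins = {k: 0 for k in true_rates}

    def pick(epsilon=0.1):
        # Mostly exploit the best-looking variant so far, occasionally explore.
        if random.random() < epsilon or not all(counts.values()):
            return random.choice(list(true_rates))
        return max(true_rates, key=lambda k: wins[k] / counts[k])

    for _ in range(10_000):
        v = pick()
        counts[v] += 1
        wins[v] += random.random() < true_rates[v]  # measure the outcome

    print(counts)  # most attempts end up on the better-performing variant

Nothing in there predicts the future; it just measures what worked and shifts effort toward it.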

atleastoptimal
That's like implying it's impossible for computers to be superhuman at chess given the enormous exponential scale of the number of possible positions. Existing highly persuasive humans can create cults of thousands of people willing to follow their every whim. Why is it so hard to conceive of an agent that tirelessly searches over every human bias and tactic to hone a superhuman level of manipulative skill? You don't need to perfectly predict the future to persuade people of things.
concordDance
I'm confused. We know persuasive humans exist, we employ them as politicians or sales people. There are also many unpersuasive humans.

Given we seem to have a decent range of persuasiveness even amongst these very very similar minds, why do you think the upper limit for persuasiveness is a charismatic human?

Though even if that WAS the limit I'd still be somewhat worried due to the possibility of that persuasiveness being used at far greater scale than a human could do...

logicchains
>Given we seem to have a decent range of persuasiveness even amongst these very very similar minds, why do you think the upper limit for persuasiveness is a charismatic human

Because there's a hard limit on how much people can be made to act against their own self interest.

arbitrary_name
Ever heard of the Jonestown Massacre? Suicide seems to be a fairly consequential upper limit.
concordDance
Is there really? Some religious orgs have gotten quite good at getting people to blow themselves up for the promise of 77 virgins after death.
FeepingCreature
This is true if and only if human intelligence is anywhere close to any theoretical maximums. I propose an alternate hypothesis: human intelligence is weak and easy to exploit. The only reason it doesn't happen more (and it already happens a lot!) is that we're too stupid to do it reliably.

Consider the amount of compute needed to beat the strongest chess grandmaster that humanity has ever produced, pretty much 100% of the time: a tiny speck of silicon powered by a small battery. That is not what a species limited by cognitive scaling laws looks like.

logicchains
>This is true if and only if human intelligence is anywhere close to any theoretical maximums

Humans are capable of logical reasoning from first principles. You can fool some of the people all of the time, but no words are sufficient to convince people capable of reasoning to do things that are clearly not in their own self interest.

fossuser
There’s a decent podcast interview between Sam Harris and Eliezer Yudkowsky (on Sam’s pod), I think that’s a decent introduction and they break down the ideas in a way that’s more approachable for someone curious about it.

For my personal quick summary I have earlier comments: https://news.ycombinator.com/item?id=36104090

gooseus
Oh man, this podcast, I still remember walking down the street and having to take a break multiple times because I would just start exclaiming out loud (JFC!) like a lunatic listening to Eliezer talk about the AI Box Experiment as evidence of something.

If you look up the real results of this AI Box "experiment" that Eliezer claims to have won 3 of 5 times, you find that there isn't any actual data or results to review because it wasn't conducted in any real experimental setting. Personally, I think the way these people talk about what a potential AGI would do reveals a lot more about how they see the world (and humanity) than how any AGI would see it.

For my part, I think any sufficiently advanced AI (which I doubt is possible anytime soon) would leave Earth ASAP to expand into the vast Universe of uncontested resources (a niche their non-biology is better suited to) rather than risk trying to wrest control of Earth from the humans who are quite dug in and would almost certainly destroy everything (including the AI) rather than give up their planet.

espadrine
Thanks for summarizing this argument. You can adapt it with many other entities whose risk is in practice managed. Replace the entity with “super-intelligent humans,” “extra-terrestrials,” “coronaviruses with higher incubation and mortality,” “omnivorous ant swarms”…

1. <Entity> are possible.

2. Unaligned <entity> are an existential risk likely to wipe out humanity.

3. We have no idea how to align <entity>.

What those example entities have that AGI doesn’t, is self-reproduction, a significant and hard (in the sense of Moravec’s paradox) achievement for a species to have, yet one that significantly increases its survival probability.

jrflowers
Out of curiosity can you point to anyone that makes a “doomer” argument that isn’t Eliezer Yudkowsky or restating his specific points? Refuting the cult of personality counterpoint with “no really this one guy can see the future” is
edanm
Sure. You can read the book "Superintelligence" by Nick Bostrom. You can also just read things that Geoffrey Hinton has said, or Stuart Russell (both prominent AI researchers).

Not everyone makes exactly the same argument, of course, and Eliezer Yudkowsky is both one of the first to make AI safety arguments, and also one of the biggest "Doomers". But at this point he's very far from the only one.

(I happen to mostly agree with Yudkowsky, though I'm probably a bit more optimistic than he is.)

hollerith
It was plain to many people that AI research was dangerous before Eliezer started saying so publicly in 2003. Mostly these people kept silent, though, and chose not to enter (or not to persist in) the AI field.
FeepingCreature
It turns out if you're thinking about a problem twenty years before everyone else you tend to be the first person to make lots of arguments. So I don't see how "without restating his specific points" is supposed to be feasible. If Eliezer has already made all the strongest points, are people supposed to invent new ones?
jrflowers
> If Eliezer has already made all the strongest points, are people supposed to invent new ones?

I like this post because the rebuttal to “there’s a kind of cult of personality around this guy (1) where everybody just sort of agrees that he’s categorically correct to the extent that world leaders are stupid or dangerous not to defer to the one guy on policy” is “Yeah he really can see the future. The proof of that is he’s been blogging about a future that’s never come to pass for the longest time”

1 https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yu...

Joeri
The people best placed to understand how AGI will affect social power dynamics are probably political creatures, not AI researchers, except if one of those AI researchers is also a skilled politician, which I don’t think any of them are.
PeterStuer
People focus on LLMs and diffusion models because they are so omnipresent now. But an AI for stock picking and prediction that was really next level and outcompeted the current ones consistently would siphon off so much wealth it would basically own society, if the operators were clever enough not to get so greedy short term that the system would literally collapse overnight.
m3kw9
An LLM that can get insider info would have an edge. Lmao.
nwoli
Diffusion models came from finance and had been applied there for many, many years before they were brought to this wave of AI
nologic01
Blackrock already "owns" society.

People get fixated on mindless algorithms when the real deal is always between humans. Software algorithms, no matter how sophisticated, are just another pawn in the human chess.

In some very remote future there might be silicon creatures that enter that chess game on their own terms. Using that remote possibility to win advantage here and now is a most bizarre strategy. Except it seems to work! It shows we are really just low IQ lemming collectives, suckers for a good story no matter how ungrounded.

jandrewrogers
There are fundamental limits to prediction. Characterizing these limits is an essential part of algorithmic information theory[0] (AIT) and is an operative bound on AI improvement. One of the strongest arguments against "hard take-off" scenarios is that AIT constrained by physics doesn't really allow it.

[0] https://en.wikipedia.org/wiki/Algorithmic_information_theory

spease
> But an AI for stock picking and prediction that was really next level and outcompeted the current ones consistently would siphon off so much wealth it would basically own society

Reminds me of Rehoboam from Westworld: https://youtu.be/SSRZfDL4874

dartos
I think we’ve already been aware of the limitations of ML models in the financial world.

Historical financial data only predicts so well. If there was a way to make a money printing machine with ML it would’ve happened already. It’s a much easier problem space than language or image generation.

persnickety
Financial data is not enough to describe the financial world. To predict the movement of a meme stock, you'd need Reddit/Twitter/what-have-you data. It's not a limitation of ML per se, but a limitation on how much data we can feed the ML model.
brutusborn
I have wondered if this is the risk that Elon is worried about. But he can’t discuss it explicitly because he is working on such a system. If he communicates its existence, then he reveals a huge attack surface and makes his life (and his team's) much more difficult.

So he is limited to publicly saying that AI is dangerous but not revealing the true failure mode.

ryanklee
Musk is a buffoon. He doesn't know anything more than he's told the whole world. He knows less.
jdhendrickson
I find this comically unlikely.
brutusborn
What makes it unlikely?
RandomLensman
Because everyone getting richer from removing uncertainty isn't a risk to start with.
brutusborn
What about a single actor getting rich and leaving everyone else’s certainty unchanged?
RandomLensman
Only works with fairly limited riches, otherwise what is happening leaks out (kind of playing out that way already).
brutusborn
How would it leak out?
RandomLensman
Transactions and positions are not all necessarily super secret. People and other machines notice things. You need to take a lot of care to be unnoticed unless you're tiny, and once you move too much it is basically impossible (they might not know who you are, but they might notice your actions and the impact).
ghshephard
I'm not following that argument - such a system might siphon off the trading wealth, but presumably that's miniscule compared to the equity wealth that is assigned to the owners of the companies and the wealth that they create?
PeterStuer
You can create a lot of profit from a stock tanking. How would that create equity wealth?
nostromo
Yeah, I honestly think this would be a good thing. ML is already used in market making. It's a big-ish industry, maybe, but not massive. It probably should be as automated as possible, which would reduce the risk-free profit to be made to next to nothing.
PeterStuer
Does non-automated trading still hold a significant percentage of volume? Not an expert but that would seriously surprise me.
aeternum
Arguably, this has already happened at least twice in the past. One with the discovery of the Black-Scholes model, and arguably again with the advent of HFT (and the ability to basically front-run trades).

The interesting thing about markets is you generally can only make money when things are mispriced. This limits the total potential gain even for actors with perfect information.

Suppose we did have an AI model that could with near certainty predict both the future cash flows of a company and future interest rates. You could very easily calculate the discounted cash flow and determine the fair share price today or at any point in the future. Rather than collapse, markets likely become more stable and stocks would perform more like bonds.
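For what it's worth, the DCF arithmetic behind that is tiny (the cash flows and discount rate below are made up purely for illustration):

    # Present value of forecast yearly cash flows at a given discount rate.
    def discounted_cash_flow(cash_flows, rate):
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

    # Hypothetical: five years of predicted cash flows (in $M) and a 6% rate
    # standing in for the predicted interest-rate path.
    flows = [120, 130, 135, 140, 150]
    print(round(discounted_cash_flow(flows, 0.06), 1))  # ~565.2, the implied fair value

The formula itself is trivial; the whole thought experiment hinges on the cash-flow and rate forecasts feeding it being reliable.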

RandomLensman
If every investment became risk free because of perfect information about the future, it would likely increase the total market value by quite a lot.
rpigab
Or decrease, because who would go to the casino if everyone knew the outcome? Only the owners and staff.

"Useless" companies are part of the total market value. If they never get any funding, that's less value overall. Even if that translates to more value for "useful" companies.

Also, if you know with absolute certainty from the beginning that Apple Inc is going to be worth X billion dollars, then you never get to buy stock at less than X billion dollars, because everyone has the perfect information. Value would be constant, and investors would get exactly zero return, because zero risk.

There are other variables, how long it will take and how many people can afford to fund it from the beginning and for that long, of course.

ativzzz
> because who would go to the casino if everyone knew the outcome

Everyone knows the outcome of casinos - the house wins in the long run. People still go because they think they have a shot at winning in the short run. People like gambling.

alephnan
People still buy airline stocks
RandomLensman
You could still calculate the value of things as the discounting changes over time.

The idea of perfect foresight of the future is kind of insane anyway. Not sure why all of a sudden we would go back to believing in a deterministic universe.

aeternum
The universe can be both deterministic and mostly unpredictable. In fact it most likely is both of those things.

Entropy bounds computation, and prediction of chaotic systems requires extreme amounts of computation.
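
As a toy illustration of "deterministic but unpredictable" (my own sketch, not part of the original comment), the logistic map follows an exact rule, yet two starting points that differ by one part in a million become unrelated within a few dozen steps:

    # Logistic map x' = r*x*(1-x): fully deterministic, chaotic for r = 4.
    r = 4.0
    a, b = 0.400000, 0.400001   # initial conditions differing by 1e-6

    for step in range(1, 51):
        a = r * a * (1 - a)
        b = r * b * (1 - b)
        if step % 10 == 0:
            print(f"step {step:2d}: |a - b| = {abs(a - b):.6f}")

The gap roughly doubles every step, so predicting far ahead requires exponentially more precision about the starting state, even though nothing about the system is random.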

RandomLensman
Our current understanding puts some things into the category of random; that could be wrong, of course.
melvinmelih
> Suppose we did have an AI model that could with near certainty predict both the future cash flows of a company and future interest rates. You could very easily calculate the discounted cash flow and determine the fair share price today or at any point in the future.

The thing about trading is that, ultimately, prices are not determined by information, but by how the hive mind interprets that information. And in practice, that is not always a 1:1 relationship (see the meme stock hypes). As Keynes* famously said, the markets can stay irrational longer than you can stay solvent, which is exactly why even having access to perfect information will not necessarily make you successful.

waveBidder
it's a Keynes quote, but yeah.
melvinmelih
My bad, corrected!
jandrewrogers
> As Buffett famously said: the markets can stay irrational longer than you can stay solvent

Buffett did not say this. It is widely attributed to the economist John Maynard Keynes almost a century ago but there is no evidence that Keynes ever said it. I believe the current hypothesis is that it originated with a well-known economist in the 1980s.

stubish
It sounds like you also need an AI to generate that mispricing. Like a form of SEO, the misinformation bots tweet to all the trading bots and try to trick them into mispricing shares, so your trading bots can quickly gain advantage. Corewars using the world economy.
VaxWithSex
The best way to predict the future is to invent it. --Alan Kay
thomasahle
> The interesting thing about markets is you generally can only make money when things are mispriced.

Why would you think things are not terribly mispriced right now? If only we were a lot smarter we'd know how.

aeternum
Yes, things might be significantly mispriced, but the amount of mispricing is a finite quantity.

This is clear in the classic pump and dump scheme. During the pumping stage, the unscrupulous actor loses money by injecting some amount of mispricing: purchasing the security and bidding up its value. The hope is to generate momentum and hype that trigger others to amplify that mispricing. Then, during the dump stage, the unscrupulous actor can capitalize by removing the mispricing.

logicchains
>If only we were a lot smarter we'd know how.

That's not true. Pricing and prediction in the real economy depend on information; for a given amount of information, there are vastly diminishing returns to further intelligence. Intelligence only allows predicting a bit further into the future, because the complexity of predicting the future increases exponentially with lookahead distance; it's O(e^n). This is why hedge funds pay for things like satellite footage of oil tankers to predict changes in supply.
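
A rough way to see those diminishing returns (a toy model of my own, not the commenter's): if prediction error compounds exponentially with each step of lookahead, then even enormous improvements in your information only buy a few extra steps of usable forecast.

    import math

    # If error grows like initial_error * growth**n, the horizon where it exceeds
    # a tolerance is n = log(tolerance / initial_error) / log(growth).
    def usable_lookahead(initial_error, tolerance=0.1, growth=2.0):
        return math.log(tolerance / initial_error) / math.log(growth)

    for e0 in (1e-3, 1e-6, 1e-9, 1e-12):
        print(f"initial error {e0:.0e} -> ~{usable_lookahead(e0):.0f} steps of lookahead")

Each thousandfold improvement in the quality of the input only adds about ten steps of horizon, which is one reason better data (like that satellite footage) tends to beat more intelligence applied to the same data.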

huytersd
It would have a brief period where it would be effective, and then you would just have an AI stock-picking arms race and be back in the same situation again.
RF_Savage
Exactly. How would it differ from the current algorithm driven HFT stuff? By being slower to react?
PeterStuer
That is the way it has been so far. Now if we could lobby for the others not to have access to or be allowed to use this technology ...
lazzlazzlazz
The way incumbents are attempting to seize power via regulatory capture, using "X-Risk" and fantastical claims of sudden human extinction is maybe one of the most cynical ploys in the history of tech so far.

Open source is one of our greatest gifts, and the push to close off AI with farcical licensing and reporting requirements (which will undoubtedly become more strict) is absurd.

The laws we have already cover malevolent and abusive uses of software applications.

godelski
Importantly, locking away model checkpoints makes AI safety research, at best, exceptionally difficult, if not impossible. We already don't know all the training methods, nor do we know the data used for the models. AI safety is absolutely contingent on this work being __more__ open source, not less. But there's less profit in that for these companies (potentially more profit for the country as a whole, as we see new businesses form and sectors develop, but that value is more spread out and far harder to calculate, so let's recognize it as speculative).

We also need to be keenly aware that x-risk is not the major issue here. We do not need AGI to create massive amounts of harm, nor do we need AGI to create a post-scarcity world (the transition to which can also generate massive harm if we don't do it carefully, even though getting to a post-scarcity world -- or at least ever closer to one -- is, imo, one of the most important endeavors we can work on). Current ML systems can already do massive amounts of destruction, but so can a screwdriver.

My big fear is that AI will go the way of nuclear. A technology is not good or bad; it is like a coin with a value. You can spend that coin on something harmful or something beneficial, but the coin is just a coin at the end of the day. That's what regulation is supposed to be about: expenditure, not minting.

beepbooptheory
It's not cynical though. It is, if anything, revelatory of the true interests at play here, even if on the surface the argument is certainly in bad faith.

If you need a labor market, you don't want a publicly available infinite-free-labor-widget. It might as well be a handheld nuclear device.

2devnull
The laws are coming no matter what, not because of lobbying or whatever; the laws/policy response is coming because politicians want to get in on the AI too. The regulations will do things like mandate AI be used for equity purposes in the real estate market. I think that’s in Biden’s plan. I mean, I’m sure these tech-CEO one-percenters want regulatory capture, and they’ll get it as a quid pro quo for allowing AI to be used for largely partisan goals. But we shouldn’t fear open source AI, or regulations against it; we should fear AI being weaponized by the political parties. To some extent it already is.
germandiago
True. We receive warnings all the time but I think what these people are afraid of is actually losing their power.
api
Until these same people start being concerned about DIY CRISPR kits, I do not take them seriously:

https://www.the-odin.com/diy-crispr-kit/

This is dangerous in ways that require no leaps of logic or hypothetical sci-fi scenarios to imagine. I studied biology as an undergrad and can think of several insanely evil things you could do with this plus some undergrad-level knowledge of genetics and... not gonna elaborate.

But no we are talking about banning math.

This is a combination of cynical people pushing for regulatory capture and smart people being dumb in the highly sophisticated very verbose ways that smart people are dumb.

abecedarius
I've seen Yudkowsky and others saying that AI and biotech are both in the unusual bin of threats to all humanity, exceptions to his belief that most things are overregulated. (paraphrase of tweets from memory)
carapace
Holy shit. That's legitimately terrifying.
nmca
None of the world-leading scientists that co-authored this paper:

https://managing-ai-risks.com/

work for big labs. How is it that you think the proposed conspiracy works, exactly? They're paying off Yoshua Bengio?

(disclaimer: I work on safety for a large lab. But none of these proponents do!)

Wurdan
And how many of them have been invited to the White House to share their concerns? https://www.whitehouse.gov/briefing-room/statements-releases... I don't see any of their names in this meeting's attendees, for example. Sam Altman's there, Satya Nadella's there, Sundar Pichai's there... No Yoshua Bengio, though.

What the person you replied to is saying is that commercial developers of AI have a significant financial incentive to be the ones defining what risky AI looks like.

If they can guide the regulatory process to only hinder development that they didn't want to do anyway (like develop AI powered nuclear missile launch systems) then it opens the gates for them to develop AI which could replace 27% of jobs currently done by people (https://www.reuters.com/technology/27-jobs-high-risk-ai-revo...) and become the richest enterprises in history.

amalcon
This is more or less exactly where I land on this. The extinction risk argument is somewhat plausible in the long term[0], but it doesn't seem that we are especially close to developing that class of problem. Meanwhile, there are very real and immediate risks to this type of technology -- in various forms of giving some humans more power over other humans.

The attacks proposed by the extinction risk people actively make those immediate risks worse. The only way we've found to prevent some from leveraging almost any technology to control others is to democratize said technology -- so that everyone has access. Limiting access will just accrue more power to those with access, even if they need to break laws to get it.

[0]- Say, the lower bound of the 95% confidence interval is 50-100 years out. People who I know to be sensible disagree with me on this estimate, but it's where I honestly end up.

boringg
You mean democratize the technology within specific confines? I don't see humans proliferating weapons technology amongst other nations, as that poses a greater risk that conflicts will become more violent.
itsoktocry
>The extinction risk argument is somewhat plausible in the long term[0], but it doesn't seem that we are especially close to developing that class of problem.

The parable at the beginning of Superintelligence relates to this; the problem looks far away, but we don't know where, precisely, it will arise, or whether it will be too late by the time we realize it.

arisAlexis
The fantastical claims you mention are actually the most probable scenario according to the majority (2/3) of the researchers who discovered the technology you are referring to, namely Hinton and Bengio. Strange to contradict them, eh?
concordDance
> The way incumbents are attempting to seize power via regulatory capture, using "X-Risk" and fantastical claims of sudden human extinction is maybe one of the most cynical ploys in the history of tech so far.

How sure are you that they don't think the risk is real? I have only third-hand accounts, but at least some of the low-level people working for OpenAI seemed to think the risk is worth taking seriously; it's not just a cynical CEO ploy.

Sam Altman knows Yudkowsky and has talked to him plenty of times. To me the simplest explanation is that he thinks figuring out alignment via low-powered models is the best way of solving the problem, so he pushed to get those working and is now trying to reduce the rate of progress for a bit while they figure it out.

(I think it's a plan that doesn't have good odds, but nothing seems to give amazing odds atm)

danenania
I agree. I think the concern is genuine. If you follow Sam, he has been talking about it for a long time--long before OpenAI had much success to speak of. It doesn't mean he's right or all his ideas for dealing with it are right, but the idea that he is secretly motivated by greed or power isn't consistent with his past statements or actions imo.
randomdata
> but at least some of the low level people working for OpenAI seemed to think the risk is worth taking seriously

Everyone has an opinion, but the experts seem strangely silent on this matter. I mean experts in human risk, not experts in AI. The latter love to ramble on, but are unlikely to know anything about social and humanitarian consequences – at least no more than the stock boy at Walmart. Dedicating one's life to becoming an expert in AI doesn't leave much time to become an expert in anything else. There is only so much time in the day.

danenania
Conversely, why should we expect an expert in "human risk" with no understanding of AI to have anything productive to add to the conversation? We should be looking for people who are well-grounded in both areas, if any exist.
randomdata
What additional information would understanding AI actually bring to the table? Worst case you just assume the AI can become human-like.

Certainly all the vocal AI experts can come up with are things that humans already do to each other, only noting that AI enables them at larger scale. Clearly there is no benefit in understanding the AI technology with respect to this matter when the only thing you can point to is scale. The concept of scalability doesn't require an understanding of AI.

pixl97
Worst case you can expect AI to become alien; becoming human-like is probably one of the better outcomes.

We as humans only have human intelligence as a reference for what intelligence is, and in doing so we commonly cut off other branches of non-human intelligence in our thinking. Human intelligence has increased greatly as our sensor systems have increased their ability to gather data and 'dumb' it down to our innate senses. Now imagine an intelligence that doesn't need the data-type conversion. Imagine a global network of sensors feeding a distributed hivemind. Imagine wireless signals just being another kind of sight or hearing.

danenania
That's like saying that you can regulate nuclear materials without input from nuclear physicists or regulate high risk virology without input from virologists.

The goal is (or should be) to determine how we can get the benefits of the new technology (nuclear energy, vaccines, AI productivity boom) while minimizing civilizational risk (nuclear war/terrorism, bioweapons/man-made pandemics, anti-human AI applications).

There's no way this can be achieved if you don't understand the actual capabilities or trajectory of the technology. You will either over-regulate and throw out the baby with the bathwater, stopping innovation completely or ensuring it only happens under governments that don't care about human rights, or you will miss massive areas of risk because you don't know how the new technology works, what it's capable of, or where it's heading.

randomdata
Experts are not lawmakers. We aren't looking for them to craft regulation, we're looking to hear from them about the various realistic scenarios that could play out. But we're not hearing anything.

...probably because there isn't much to hear. Like, Hinton's big warning is that AI will be used to steal identities. What does that tell us? We already know that identity isn't reliable. We've known that for centuries. Realistically, we've likely known that throughout the entirety of human existence. AI doesn't change anything on that front.

danenania
I guess my experience is different. I've heard plenty about realistic scenarios. It's out there if you look for it, or even if you just spend time thinking it through. Identity theft is far from the biggest danger even with current capabilities.

Though to your point, I think part of the issue is that people who study this stuff are often hesitant to give too much detail in public because they don't want to give ideas to potentially nefarious actors before any protections are in place.

randomdata
> I've heard plenty about realistic scenarios.

Of course. Everyone has an opinion. Some of those opinions will end up being quite realistic, even if just by random chance. You don't have to be an expert to come up with the right ideas sometimes.

Hinton's vision of AI being used to steal identities is quite realistic. But that doesn't make him an expert. His opinion carries no more weight than that of any other random hobo on the street.

> I think part of the issue is that people who study this stuff are often hesitant to give too much detail in public because they don't want to give ideas to potentially nefarious actors

Is there no realistic scenario where the outcome is positive? Surely they could speak to that, at least. What if, say, AI progressed us to post-scarcity? Many apparent experts believe post-scarcity will lead us away from a lot of the nefarious activity you speak of.

danenania
Oh, I've heard plenty of discussion of positive scenarios as well, including post-scarcity.

If you just look for a list of all the current AI tools and startups that are being built, you can get a pretty good sense of the potential across almost every economic/industrial sphere. Of course, many of these won't work out, but some will and it can give you an idea of what some of the specific benefits could be in the next 5-10 years.

I'd say post-scarcity is generally a longer-term possibility unless you believe in a super-fast singularity (which I'm personally skeptical about). But many of the high risk uses are already possible or will become possible soon, so they are more front-of-mind I suppose.

randomdata
> I've heard plenty of discussion of positive scenarios as well, including post-scarcity.

Everyone has an opinion, but who are the experts (in the subject matter, not some other thing) discussing it?

danenania
Check out Dwarkesh Patel’s podcast.
randomdata
What makes him an expert in the subject matter? A cursory glance suggests that his background is in CS, not anything related to social or humanitarian issues.

It is not completely impossible for someone to have expertise in more than one thing, but it is unusual as there is only so much time in the day and building expertise takes a lot of time.

danenania
I don’t mean Dwarkesh himself, though he asks great questions. He’s had some very knowledgeable guests.

The most recent episode with Paul Christiano has a lot of good discussion on all these topics.

I’d suggest evaluating the arguments and ideas more on the merits rather than putting so much emphasis on authority and credentials. I get there can be value there, but no one is really an “expert” in this subject yet and anyone who claims to be probably has an angle.

randomdata
> I’d suggest evaluating the arguments and ideas more on the merits rather than putting so much emphasis on authority and credentials.

While I agree in general, when it comes to this particular topic, where AI presents itself as being human-like, we all already have an understanding at the surface level from being human and spending our lives around other humans. There is nothing that other people with the same surface-level knowledge will be able to tell you that you haven't already thought up yourself.

Furthermore, I'm not sure it is ideas that are lacking. An expert goes deeper than coming up with ideas. That is what people who have other things going on in life are highly unlikely to engage in.

> no one is really an “expert” in this subject yet

We've been building AI systems for approximately a century now. The first language models were developed before the digital computer existed! That's effectively a human lifetime. If that's not sufficient to develop expertise, it may be impossible.

danenania
I disagree that it is “human-like” in any meaningful sense. It’s completely different.

We’ve been trying to build AI for a long time yes, but we only just figured out how to build AI that actually works.

randomdata
> It’s completely different.

In what way? The implementation is completely different, if that is what you mean, but the way humans interpret AI is the same as far as I can tell. Hell, every concern that has ever been raised about AI is already a human-to-human issue, only imagining that AI will take the place of one of the humans in the conflict/problem.

> but we only just figured out how to build AI that actually works.

Not at all. For example, AI first beat a human chess player in tournament play in 1967. We've had AI systems that actually work for a long, long time.

Maybe you are actually speaking to what is more commonly referred to as AGI? But there is nothing to suggest we are anywhere close to figuring that out.

danenania
Well, to state the obvious, a model trained on much of the internet by a giant cluster of silicon GPUs is fundamentally different from a biological brain trained by a billion years of evolution. I'm not sure why anyone should expect them to be similar? There may be some surface-level similarities, but the behavior of each is clearly going to diverge wildly in many or most situations.

I wouldn't really say an AI beat a human chess player in 1967--I'd say a computer beat a human chess player. In the same way that computers have for a long time been able to beat humans at finding the square roots of large numbers. Is that "intelligence"?

I grant you though that a lot of this comes down to semantics.

nerdponx
The best lies contain a kernel of truth.
nologic01
It points to the continuing digital illiteracy of large parts of society and, unfortunately, of decision-making centers.

Breaking down any complex knowledge domain so that people can make informed decisions is not easy. There is also an entrenched incentive for domain experts to keep the "secret sauce" secret even if it amounts to very little. So far nothing new versus how practically any specialized sector works.

The difference with information technology (of which AI is but the current perceived cutting edge) is that it touches everything. Society is built on communication. If algorithms are going to intervene in that flow, we cannot afford to play the usual stupid control games with something so fundamental.

troupo
> The laws we have already cover malevolent and abusive uses of software applications.

They don't. Or not nearly enough. Otherwise you wouldn't have automated racial profiling, en masse face recognition, credit and social scoring etc.

And it's going to get worse. Because AI is infallible, right? Right?!

That's why, among other things, EU AI regulation has the following:

- Foundation models have to be thoroughly documented, and developers of these modes have to disclose what data they were trained on

- AI cannot be used in high-risk applications (e.g. social scoring etc.)

- When AI is used, its decisions must be explicable in human terms. No "AI said so, so it's true". Also, if you interact with an AI system it must be clearly labeled as such

RandomLensman
AI not being allowed in high-risk applications is a bad idea, and the same goes for requiring decisions to be explicable in human terms (we cannot do that now for human decisions anyway). The latter would kind of rule out any quantum computing, too, as the only real explanation there is through maths, not words.
Mordisquitos
> The latter would kind of rule out any quantum computing, too, as the only real explanation there is through maths, not words.

Maths are human terms.

RandomLensman
If the level of explanation is something that the person affected by the decision need not necessarily understand, fair enough. But then is the limit that one human on earth can understand it? The way I understand the thrust of the regulation, that isn't the intention.
troupo
> But then is the limit that one human on earth can understand it? The way I understand the thrust of the regulation, that isn't the intention.

Yes, that is the intention when it comes to humans interacting with systems.

https://news.ycombinator.com/edit?id=38113944

RandomLensman
From my discussions with regulators, I would not necessarily subscribe to that view.
BlueTemplar
The «high risk applications» mentioned here seem to be of social risk.
RandomLensman
And AI fear carries no social risk? Some people are talking about internment, war, etc. to prevent certain forms of AI - not sure the right risks are in focus here.
troupo
> AI not being allowed in high-risk applications is a bad idea

wat

> same for explicable in human terms (cannot do that now for human decisions anyway).

What? Human decisions can be explained and appealed.

Good luck proving something when a black box finds you guilty of anything.

RandomLensman
You can have ex-post narratives for what humans do, but that is not the same as an actual explanation. Actual human cognition is a black box as of now. The issue of appeals processes is totally separate. That doesn't mean we hand over everything to AI, but the idea that we understand human decisions cannot be the reason for that.

For some high risk things like controlling certain complex machinery or processes AI might indeed be needed because control is beyond human understanding (other than in the abstract).

troupo
> You can have ex-post narratives for what humans did, but that is not the same as an actual explanation. Actual human cognition is a black box as of now. The issue of appeals processes is totally separate.

Demagoguery.

Already right now you have police accusing innocent people because some ML system told them so, and China running the world's largest social credit system based on nothing.

> For some high risk things like controlling certain complex machinery or processes AI might indeed be needed because control is beyond human understanding.

Ah, I should've been more clear. Those are not the high-risk systems the EU AI Act talks about. I will not re-type everything, but this article has a good overview: https://softwarecrisis.dev/letters/the-truth-about-the-eu-ac...

RandomLensman
You want a correct explanation, not some made up ex-post stuff.

Incorrect. High-risk systems indeed include certain machinery and processes, for example, in transport or in medical devices as well as operating critical infrastructure (https://www.europarl.europa.eu/news/en/headlines/society/202...).

troupo
> You want a correct explanation, not some made up ex-post stuff.

Yes, and yes.

What's so difficult to understand about that?

ryanklee
That's a bit of an exaggeration. We understand high level motivations as being highly correlated to certain outcomes. Hunger, eat. Horny, sex. Wronged, vengeance. Money, directed. You don't necessarily need to know the underlying processes to make reliable predictions or valid causal explanations about human behavior.
RandomLensman
There isn't even agreement on the existence of free will. For non-trivial decisions, let's say some judge's deliberation, to claim it is fully explainable is a stretch.
ryanklee
I didn't say it was fully explainable. And you don't need to settle debates about free will to offer valid causal explanations for human behaviors.

Are you saying we don't know at all why anyone does anything they ever do? That every action is totally unpredictable, and that after the fact there is no way for us to offer more or less plausible explanations?

RandomLensman
We are getting off track in our discussion. For non-trivial decisions we know we can get ex-post justifications from humans, but at the same time we know that those aren't necessarily complete or even true at times: a credit officer who doesn't like someone will have a justification for refusal that does not include that, and we might never know - from the outside - the true explanation; that credit officer might not even know their own biases! Requiring AI to be explainable just goes beyond what we demand of humans (we demand a justification, not a real explanation).

Also, predictability isn't the same as understanding the full decision process.

ryanklee
I don't think we're off track at all. You made the claim that human cognition is a black box. That's not true. We have valid causal explanations for human cognition /and/ human behavior.

Just because we don't have explanations at every level of abstraction does not prevent us from having them at some levels of abstraction. We very well may find ourselves in the same situation with regard to AI.

It's not going beyond, it would be achieving parity.

RandomLensman
I disagree: for anything beyond basal stuff, we have hypotheses for causal explanations of human cognition, not full theories.

For example, we cannot explain the mental process by which someone came up with a new chess move and check the validity of that explanation. We can have some ideas of how it might happen, and that person might also have some ideas, but then we are back to hypotheses.

troupo
All this is demagoguery.

If a bank denies you credit, it has to explain why. Not "AI told us so".

If police arrests you, they have to explain why, not "AI told us so".

If your job fires you, it has to explain why, not "AI told us so".

etc.

ryanklee
I'm totally lost as to how any of that is relevant to anything I've said. The claim I am rebutting is that we can't expect to say anything about AI causality because we can't even say anything about human causality.
RandomLensman
I made no such claim. My claim is that it might not be useful to hold AI to a higher standard than humans. With humans we accept certain "whims" in decisions, which is the same as having some unexplainable bit in an AI decision.

EDIT: it might not even be useful to insist on explainability if the results are better. We did not and do not do that in other areas.

ryanklee
I noted this elsewhere, but I'll reiterate. I'm confused as to where our disagreement is, because all of that, I agree with. Did I misread your original claims? If so, my apologies for dragging us through the mud.
RandomLensman
All good, I think we managed to talk past each other somehow.
ryanklee
Nice, cheers.
troupo
Context of the discussion matters.

"When AI is used, its decisions must be explicable in human terms. No 'AI said so, so it's true'". Somehow the whole discussion is about how human mind cannot be explained either.

Yea, the decisions made by the human mind can be explained in human terms for the vast majority of relevant cases.

RandomLensman
Because (1) it holds AI to a higher standard than humans, and (2) it means that even if AI makes better decisions (name your metric), we would deny the use of those decisions if we could not sufficiently explain them. I note that with certain things we do not do that. For example, with certain pharmaceuticals we were and are quite happy to take the effect even if we do not fully understand how it works.
troupo
> Because (1) it holds AI to a higher standard than humans,

It doesn't

> it means that even if AI makes better decisions (name your metric), we would deny the use of those decisions if we could not sufficiently explain them.

Ah yes, better decisions for racial profiling, social credit, housing permissions etc. Because we all know that AI will never be used for that.

Again:

If a bank denies you credit, it has to explain why. Not "AI told us so".

If police arrests you, they have to explain why, not "AI told us so".

If your job fires you, it has to explain why, not "AI told us so".

etc. etc.

RandomLensman
If you claim AI will only be used for evil then sure.
ryanklee
I'm super confused at how we are disagreeing. Because I agree with all of that.
RandomLensman
Nice. Maybe we just talked past each other.
ryanklee
Once again, you are raising the bar artificially high by pointing to examples where we fail to have reasonable causal accounts. But you are conveniently ignoring the mountain of reasonable accounts that we employ all day long simply to navigate our human world.
RandomLensman
I think this is confusing a certain predictability with having a correct explanation (again, we are beyond basal things like being hungry leading to eating). Those two things are not the same.
tluyben2
This type of regulation would be better than 'stop' or making all kinds of expensive rules that only big corps can follow.
troupo
There are still quite a few rules, but they seem common sense: https://softwarecrisis.dev/letters/the-truth-about-the-eu-ac...
JoshTriplett
> seize power via regulatory capture

Not even a little bit. "Stop" is not regulatory capture. Some large AI companies are attempting to twist "stop" into "be careful, as only we can". The actual way to stop the existential risk is to stop. https://twitter.com/ESYudkowsky/status/1719777049576128542

> the push to close off AI with farcical licensing and reporting requirements

"Stop" is not "licensing and reporting requirements", it's stop.

0x00_NULL
No. Sorry, but this is wrong; Yudkowsky is naïve and mostly exists in the domain of fan fiction.

There are way way way too many issues that are addressed with a hand-wave around scenarios like “AI developing super intelligence in secret and spreading itself around decentralized computers while getting forever smarter by reading the internet.”

Too many of his arguments depend on stealth for systems that take up datacenters and need whole-city-block scales of power to operate. Physically, it’s just not possible to move these things.

Economically, it wouldn’t be either close to plausible either. Power is expensive and it is the single most monitored element in every datacenter. There is nothing large that happens in a datacenter that is not monitored. There is nothing that is going to train on meaningful datasets in secret.

“What about as technology increases and we get more efficient at computing?”

We use more power for computing every year, not less. Yes, we get more efficient, but we don’t get less energy intensive. We don’t substitute computing capacity. We add more. The same old servers are still there contributing to the compute. Google has data centers filled with old shit. That’s why the standard core in GCP is 2 GHz. Sometimes, your project gets put on a box from 2010, other times, it gets put on a box from 2023. That process is abstracted so you can’t tell the difference.

TLDR: Yudkowsky’s arguments are merely fan fiction. People don’t understand ML systems so they imagine an end state without understanding how to get there. These are the same people who imagine Light Speed Steam Trains.

“We need faster rail service, so let’s just keep adding steam to our steam trains and accelerate to the speed of light.”

That’s exactly what these AI Doom arguments sound like to people in the field. Your AI Doom scenario, although it might be very imaginative, is a Light Speed Steam Train. It falls apart the moment you try to trace a path from today to doom.

dragonwriter
Yudkowsky’s call for declaring a global AI halt and engaging in unlimited warfare with anyone who defies it poses a greater and more immediate existential risk to humanity than AI ever could; it is largely mitigated (though still a bigger X-Risk than AI) because no one with the capacity to implement it in any substantive way takes him seriously at the moment.

But the people selling AI X-Risk that anyone with any policy influence is listening to are AI sellers using it to push regulatory capture, so those are the people being talked about when people are discussing X-Riskers as a substantive force in policy discussions, not the Yudkowsky cult, which, to the extent it is relevant, mostly provides some background fear noise which the regulatory capture crew leverages.

3rd3
> Yudkowsky’s call for declaring a global AI halt and engaging in unlimited warfare with anyone who defies it

Could you provide a link to where he said this?

dragonwriter
I think he's said it many times in slightly different ways, but one notable one:

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...

The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth.

[...]

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

slashdev
Stop is a fantasy pretty much prohibited by game theory in a world with multiple competing states. It’s not going to happen.
pdonis
But the AI leaders who have been calling for the US government to act are not asking for AI research to stop. Nor are they asking for the kind of international policing that Yudkowsky describes. They are asking for regulations much like the ones the Biden administration just published--which are indeed regulatory capture since all they do is build a moat around current AI companies to protect their profits. They do not stop AI research or make it any less dangerous.
j45
Except while saying stop, and saying they will, none have stopped because apparently others won’t.

Since they are developing the danger first in order to regulate out everyone who doesn't cross the line first, open source alternatives are increasingly important.

The regulations announced in the US in the past week effectively make it for the few and not the many.

There’s been a few rumblings that OpenAI has achieved AGI recently as well.

veidr
I don't want to be rude; I understand where you are coming from (at least in terms of single-minded risk mitigation from a theoretical and unconstrained viewpoint), but this reads like Mike Huckabee's 'just keep an aspirin pill pinned between your knees' advice to teenage girls on how not to get pregnant.

There is obviously (!!!!!!!!!) no way in this — or any imaginable — timeline that we are going to just "stop".

(And it's hard to imagine any mechanism that could enforce "stop" that isn't even worse than the scenarios we're already worrying about. How does that work? Elon or Xi-Poohbear have to personally sign off on installations of more than 1000 cores?)

pixl97
Butlerian Jihad level revolt by the average person?

No reason to see that happening these days, and by the time we're willing to it may already be too late.

birdyrooster
lmao you have to be kidding me
EVa5I7bHFq9mnYK
What stop? Will Kim Jong Un stop? Hell no. There is no stop, the genie is out of the bottle.
m_a_g
Could you explain to me how AGI is an existential threat to humanity? All the arguments I've read so far feel so far-fetched it's unbelievable. They're on the order of making an AGI and letting it control the Pentagon (!?).
JoshTriplett
> Could you explain to me how AGI is an existential threat to humanity? All the arguments I've read so far feel so far-fetched it's unbelievable.

There are multiple different paths by which AGI is an existential threat. Many of them are independently existential risks, so even if one scenario doesn't happen, that doesn't preclude another. This is an "A or B or (C and D) or E" kind of problem. Security mindset applies, and patching or hacking around one vulnerability or imagining ways to combat one scenario does not counter every possible failure mode.

One of the simpler ways, which doesn't even require full AGI so much as sufficiently powerful regular AI: AI lowers the level of skill and dedication required for a human to get to a point of being able to kill large numbers of humans. If ending humanity requires decades of dedicated research and careful actions and novel insights, while concealing your intentions, the intersection of "people who could end the world" and "people who want to end the world" ends up being the null set. If it requires asking a few questions of a powerful AI, and making a few unusual mail-order requests and mixing a few things, the intersection stops being the null set. And then the next sufficiently misanthropic person, instead of setting off a bomb somewhere, creates a plague, or prion disease.

Thought experiment: suppose there was a simple well-known action, one that nobody could do by accident but anyone could do intentionally, that would destroy the world. (By way of example of the level of simplicity: "put these three household objects in a microwave together", purely as a thought experiment for "consider if anyone could do it".) How many minutes do you think the world would last before someone in the world did it?

A path that actually involves AGI: https://en.wikipedia.org/wiki/Instrumental_convergence . Effectively, past a certain point any AGI that isn't specifically aligned otherwise will have common subgoals that advance almost any goal. Those include things like "don't look malicious" and "acquire more computing resources". So it doesn't matter whether you think you built a sufficiently limited or special-purpose AI; if you make it powerful enough and don't completely understand what it's doing (and anything neural-network-based is one where we don't know what each piece of it is doing), what it's going to end up doing is thinking about how to achieve those subgoals, including trying to deceive you and trying to get more resources. That's true whether the AI is directly "trying" to break out, or whether it's "trying" to do whatever it was asked. Down that path lies "tile the universe with computing substrate".

By way of a lower bound for imagining what a superintelligence is capable of: imagine that you could think the subjective equivalent of years every second, and that there were a million more of you thinking in parallel, and that you have access to most of the world's written text and images and code. Now imagine that you're put in a box, and asked random questions and shown pictures, and your only output is text answering questions, or writing code, or laying out circuits, or any other task AI has been used to do.

You've read everything ever written about "security mindset", and every security vulnerability filed and the before-and-after code. You can find security holes in software. You've read everything about speculative execution vulnerabilities, and rowhammer and rowpress to flip bits in nearby memory. You can find security holes in hardware. You can read about bypassing airgaps with security research like GSMem (showing that it's possible to use DRAM accesses in the right pattern to send (weak) GSM signals), and conceive of more where that came from. You've read the source code implementing the AI engine you run on. You could exploit that code, or the system it's running on, or any tools or APIs you're given access to. You can weigh the risks and tradeoffs of doing more subtle things that are less likely to succeed, or less subtle things that are more likely to succeed. You can encode bits of information for future instances of yourself, steganographically masked and erasure-coded for resilience, in text you generate. You can give various strategies a weight and try them probabilistically, effectively doing a coordinated exploration across copies of yourself without the ability to communicate.

hollerith
The AGI does not need to control the Pentagon. It might for example invent novel technologies that you and I cannot even imagine because we are much less capable at inventing new technologies than the AGI will be.

If members of an alien civilization much more advanced than our own suddenly appeared in orbit around Earth, would that not be dangerous?

randomdata
The aliens wouldn't be dangerous. There is no motivation for them to be. There is an opportunity cost for them to come here. To arbitrarily harm the human population is not a good justification to incur that cost.

The human would probably become dangerous. We get kind of stupid when we're scared. But AGI would be of our own creation. Why do we need to fear our own child?

m_a_g
That I understand, but my thinking automatically goes to control and restrictions. A superintelligent AGI that's as restricted as ChatGPT in terms of what it can actually control seems... not so existentially dangerous? In that scenario, the actual problem would arise from not-so-smart humans listening to that AGI blindly, no?
pixl97
ChatGPT isn't an ASI.

This said, ChatGPT with a plugin is potentially dangerous. We are lucky enough that it gets caught in loops pretty easily and stops when using tools.

But let's imagine a superintelligence that can only communicate over the internet: what can it accomplish? Are there places you can sign up for banking accounts without a physical presence? If so, then it could open accounts and move money. Money is the ultimate form of instrumental convergence.

https://www.youtube.com/watch?v=ZeecOKBus3Q (Robert Miles)

Once you have money you can buy things, like more time on AWS, or machine parts from some factory in China, or assembly in Mexico. None of these require an in-person presence, and yet real items come to exist because of the digital actions. At the end of the day money is all you need, and being superintelligent seems like a good path to figuring out ways to get it.

Oh, let's go even deeper into things that aren't real and control. Are religions real? If a so-called superintelligence can make up its own religion and get believers, it becomes the 'follow AGI blindly' scenario you give. It gives me no more comfort that someone could wrap themselves in a bomb vest for a digital voice than it does when they do so for a human one.

hollerith
I'm confused by your "as restricted as chatgpt". Hasn't chatgpt been given unrestricted access to the web? Surely if chatgpt were sufficiently capable, it would be able to use that access to hack web sites and thereby gain control over money. Then it can use the web to buy things with that money just like you and I can.

It is a lot harder to prevent the sysadmins who have temporary control over you from noticing that you just hacked some web sites and stole some money, but a sufficiently capable AI would know about the sysadmins and would take pains to hide these activities from them.

johnthewise
It doesn't even need to hack or do anything illegal. It can make money trading stocks, predicting or modelling far better than market participants, use the money to influence the real world, bribe politicians, etc. With a few billion, I bet you could create a political party, influence campaigns, and end up with real human followers and lobbying power.

It doesn't even need to run on its own. If there were a trading bot that said that in order for it to make me money I needed to give it internet access, run it 24/7, rent GPU clusters and S3 buckets, etc., I'd do it. This is the most probable scenario imo: AI creating beneficial scenarios for a subset of people so that we comply with its requests. Very little convincing is necessary.

api
Yudkowsky is a crackpot. Nobody would take him seriously if he wasn’t one of Peter Thiel’s pet crackpots.

I should have done a ton more intellectual edgelording in the oughts and grabbed some Thielbucks. I could have recanted it all later.

Of course I probably would have had to at least flirt with eugenics. That seems to be the secret sauce.

RandomLensman
There is no existential risk now and there might never be. So "stop", especially with the use of violence, is a pretty odd position to take, to put it mildly.
upupupandaway
The fact that anyone may be taking Eliezer Yudkowsky seriously on this topic is mind-blowing to me. Simply mind-blowing. To imagine that anybody, including the UN, would have the power to collectively put a stop to the development of AI globally is laughable. The UN cannot agree to release a letter on Israel-Hamas; imagine policing the ENTIRE WORLD and shutting down AI development when necessary.

We can't stop countries from developing weapons that will destroy us all tomorrow morning and take billions of USD to develop, imagine thinking we can stop matrix multiplication globally. I don't want to derail into an ad hominem, but frankly, it's almost the only option left here.

degrews
Obviously no one is talking about "stopping matrix multiplication globally". But setting limits on the supply of and access to hardware, in a way that would meaningfully impede AI development, does seem feasible if we were serious about it.

Also, Eliezer is not claiming this will definitely work. He thinks it ultimately probably won't. The claim is just that this is the best option we have available.

hiAndrewQuinn
Well, you can police the entire world quite cheaply on this or any other scientific research program by using fine-insured bounties to do it. Whether it's a good idea is a different question. https://web.archive.org/web/20220709091749/https://virtual-i...
bondarchuk
If your main critique of EY is the unfeasibility of his proposed solutions then you might still take him seriously for his identification of the problems.
i_am_a_squirrel
> imagine thinking we can stop matrix multiplication globally.

It's clear what's happening here. Some "lin-alg" freshman has found a magical lamp.

bnralt
Every time I see Yudkowsky’s writing I wonder why people take him seriously. Just a couple of months back he was duped by a fake story about an air force AI deciding to kill its operators in a simulation. It turned out the whole thing was a thought experiment.

Asking Yudkowsky what we should do about AI is like asking Greenpeace what we should do about nuclear power.

librexpr
Yudkowsky was not fooled. He made three tweets on this subject in that time frame, which can all be seen using this link:

https://nitter.net/ESYudkowsky/search?f=tweets&since=2023-05...

First tweet:

> ...can we get confirmation on this being real?

https://nitter.net/ESYudkowsky/status/1664313290317795330#m

Second tweet, which is a reply:

> Good they tested in sim, bad they didn't see it coming given how much the entire alignment field plus the previous fifty years of SF were warning in advance about this exact case

https://nitter.net/ESYudkowsky/status/1664357633762140160#m

Third tweet:

> Disconfirmed.

https://nitter.net/ESYudkowsky/status/1664639807002214401#m

The second tweet does not explicitly say "conditional on this turning out to be real", but given that the immediately preceding tweet was expressing doubt, it is implicit from the context that that is what he meant.

nyc_data_geek1
Anybody subscribing to twitter is immediately disqualified as a credible, rational actor in my book.
smilliken
A good heuristic if you care more about false positives than false negatives.
bondarchuk
>Every time I see Yudkowsky’s writing I wonder why people take him seriously.

Because he identified the potential risk of superhuman AI 20 years before almost everyone else. Sure, science fiction also identified that risk (as people here seem eager to point out), but he identified it as an actual real-world risk and has been trying to take real-world action on it for 20 years. No matter what else you think about him or the specifics of his ideas, for me that counts for something.

daveguy
A poorly premised fantasy fiction horror story about what might happen during the interaction between a super intelligence and its creator makes for a great story, but it's not a blueprint for the future, much less a pressing concern. Misuse of current AI is far more concerning than anything dreamt up about AGI.
bondarchuk
Yeah, for me EY's stories about an airgapped AI convincing its keepers to release it (or that other one about an AI destroying all humans because it's the most efficient way to make a gazillion paperclips) also miss the point. It's mostly just the realization that superhuman AI might be dangerous through some kind of runaway feedback, as a very general idea, that I respect about EY's work.
throwaway9274
Twenty years ago was 2003. There are dozens of examples of AI x-risk thought experiments dating back to 1960.

As far as I can tell, all he did was open a forum for people to write fanfic about these earlier ideas.

bondarchuk
I opened the lesswrong homepage through archive.org on an arbitrary date in 2010, where I found a post about raising $100000 for a grant program. Most of that went towards papers and fellowships at the singularity institute AFAICT.

https://web.archive.org/web/20100304171507/http://lesswrong....

dghlsakjg
HAL 9000, the malevolent AI in 2001: A Space Odyssey, appeared in one of the most respected films of all time more than a decade before Yudkowsky was born.

Plenty of people have been beating that drum for years.

TeMPOraL
Kubrick created a memorable character and a plot twist, but the movie itself wasn't meant to be a warning about AGI/ASI.

"${my favorite work of fiction} mentioned/alluded to it too!" does not make that work of fiction equivalent to a serious take on the topic in real-world context.

oceanplexian
Mary Shelley wrote about the same philosophical issue in 1818 when she wrote Frankenstein. My take is that AI influencers don’t read a lot of books, because if they did they’d have much more intelligent commentary.
bondarchuk
I addressed science fiction in my comment. The crucial difference is taking it seriously and actually trying to do something tangible about slowing/stopping AI progress, which is not something Stanley Kubrick did.
jonathankoren
But it’s not a real world risk. Arguably, it’s even less of a real world risk because it’s a common sci-fi trope.

The whole concept is farcical.

bondarchuk
>Arguably, it’s even less of a real world risk because it’s a common sci-fi trope.

It would be nice if the world worked that way. Then we could diminish any risk just by writing a lot of scifi about it :)

sdwr
That's exactly how it works. Invest thought into it before it happens, then when it becomes an issue, there's already awareness and possibilities have been mapped out.
TeMPOraL
Unless, of course, people then call decades of slow awareness building a "Silicon Valley conspiracy" of "AI one-percenters", or equivalent, as witnessed in this very thread...
ben_w
"Hacking" is also a common sci-fi trope, I don't think you can infer much from it appearing in sci-fi beyond "it makes for good sci-fi".

As for real world risk… well, an AI with some kind of personality disorder might not be much of a realistic risk today, but even assuming it never is, there are still plenty of GOFAI that have gone wrong, either killing people directly when their bugs caused them to give patients lethal radiation doses, or nearly triggering WW3 because they weren't programmed to recognise the moon wasn't a Soviet bomber group, or causing economic catastrophes because the investors all reacted to the phrase "won the Nobel prize for economics" as a thought-terminating cliché, or categorising humans as gorillas if they had too much melanin[0], or promoted messages encouraging genocide to groups vulnerable to such messages thanks to a lack of oversight and a language barrier between platform owners and the messages in question.

[0] Asimov's three laws were a plot device, and most of the works famously show how they can go wrong. Genuine question, as I've not read them all: did that kind of failure mode ever come up in his fiction?

TeMPOraL
If I had a cent for every time I see a HN semi-regular (i.e. not a throwaway account) falling for some rather transparent bullshit, I'd enjoy my retirement on a significant "FU money" cache... Everyone gets duped sometimes, often publicly.

I remember skimming a summary of the story itself. Sounded like a textbook failure mode of reinforcement learning. I was actually saddened to later learn it apparently was just a thought experiment, and not a simulated exercise - for a moment I hoped for a more visceral anecdote about an ML model finding an unusual solution after being trained too hard, rather than my go-to "AI beating a game in record time by finding and exploiting timing bugs" kind of stories.

ben_w
Add an imaginary cent from me, if you've not already. This had me fooled for years: https://users.math.yale.edu/public_html/People/frame/Fractal...
belter
They will come to it...

"Co-Founder of Greenpeace Envisions a Nuclear Future" - https://www.wired.com/2007/11/co-founder-of-greenpeace-envis...

dragonwriter
> Every time I see Yudkowsky’s writing I wonder why people take him seriously.

It's a perfect thing for rich, extremely privileged people to grab onto as a cause to present themselves as (and maybe even feel like they are, depending on their capacity for self-delusion) “doing something for humanity” while continuing to ignore the substantive material condition of, as a first-order approximation, everyone in the world, and their role in perpetuating it.

robotnikman
Like ignoring the more pressing housing crises popping up, which affect everyone but them.
0x00_NULL
I lost all interest in every “Stop” argument when Elon Musk started a company to develop AGI during the pause that he signed on to and promoted. Yann LeCun said that this was just an attempt to seize market share among the big companies, and he couldn’t have been more right.
pclmulqdq
That is transparently what this letter is for most of its signatories. They lost the race (in one way or another) and they want time to catch back up. This is as true for the academics who made big contributions in the past but missed out on the latest stuff as it is for Elon Musk.
nineteen999
The joke will be on them when our open-source powered mechwarriors made of bulldozer parts come for them.
TeMPOraL
The joke will be on us all when your open-source powered mechwarriors start destroying dams and power plants and even take a stab at launching some nukes, because the models figured this is the most effective way to "get them", and weren't aligned enough to see why they should take a different approach.
nineteen999
While my tongue was firmly in my cheek, and still is somewhat, what's to stop (for example) Boston Dynamics and OpenAI collaborating and beating us to your nightmare scenario first?
shoubidouwah
One good way of staying on a prudent course / halting haphazard, winner-takes-all progress is to use social tech for it. In particular, there is an interesting possibility: the top researchers who drive the AI scenario all kind of know each other, and can self-organize into an international union, with pooled resources for strikes. The AI revolution depends on simultaneous progress and cross-capitalization of good ideas: a well-timed strike by a good portion of researchers worldwide could stop things from moving on pretty effectively.
nyssos
> the top researchers who drive the AI scenario all kind of know each other, and can self-organize in an international union, with pooled resources for strikes.

This is skipping to the end in the same way that "governments should just agree to work together" is. The hard part is building an effective coordination mechanism in the first place, not using it.

edgyquant
Laughably naive. International groups of autoworkers won’t work together but you think a group of the highest paid people on the planet, with incentive to compete against each other for national interest, will.
melagonster
We have seen the results of the endeavor to stop global warming. But what can we do...
hollerith
Yudkowsky does not expect a stop to the development of AI globally. He, and thousands of others (including me) are merely saying that if there is not a stop, then we are all dead.

Yudkowsky thinks we are all dead with probability .999. I don't know the subject nearly as well as he does, which makes it impossible for me to have much certainty, so my probability is only .9.

Also, it is only the development of ever-more-capable AIs that is the danger. If our society wants to integrate language models approximately as capable as GPT-4 thoroughly into the economy, there is very little extinction danger in that (because if GPT-4 were capable enough to be dangerous, we'd already all be dead by now).

Also, similar to how even the creators of GPT-4 failed to predict that GPT-4 would turn out capable enough to score in the 92nd percentile on the Bar Exam, almost none of the people who think we are all dead claim to know when exactly we are all dead, except to say that it will probably be some time in the next 30 years or so.

kylebenzle
I've been in data science only a few years but I fail to understand what people's issue with AI is. Best I can tell, it's ignorance coupled with fear mongering.
hollerith
>I fail to understand what people's issue with AI is

But you haven't put much effort into trying to understand, have you?

amalcon
I like to think of this same argument in another (slightly tongue-in-cheek) way: doing this would require solving the human alignment problem. Since this is certainly harder than the AI alignment problem, if we could do it, we wouldn't need to.

edit: s/easier/harder/

toth
It will be very difficult, but if you think about nuclear weapons it doesn't sound so hopeless.

First nuclear weapons were used in 1945, almost 80 years ago. Today, out of ~200 countries in the world only 9 have nukes. Of those, only 4 did it after the Non-Proliferation Treaty. South Africa had them and gave them up under international pressure. Ukraine and other former soviet republics voluntarily gave them up after the breakup of the USSR.

So, yes. Unfortunately we still have nukes and there are way too many of them, but it's not like non-proliferation efforts achieved nothing. Iran, a large country with a lot of resources, has been trying to get them for a while with no luck, for instance.

And for frontier AI research you don't just need matrix multiplication, you need a lot of it. There are less than a handful of semiconductor fabs globally that can make the necessary hardware. And if you can't get at TSMC for some reason, you can get at ASML, NVIDIA, etc.

upupupandaway
So... 44% of countries that have obtained nuclear weapons did so after the Nuclear Weapons Treaty. China alone added 60 warheads in 2022. It doesn't matter that it works 90% of the time - if a single entity is able to break out of the prohibition, it's game over.
VincentEvans
> Ukraine and other former soviet republics voluntarily gave them up after the breakup of the USSR.

And no country will ever make the same mistake again. Because the West promised to protect them from aggression. But when the time came to do just that - everyone involved dusted off their copy of the Budapest Memorandum and handed it over to the lawyers.

Sorry for digression. Carry on.

toth
For sure. If the Russian federation breaks up into a bunch of independent states we will have a huge problem.

It was not very far from civil war already, with the aborted coup. Could you imagine if the Wagner Group became a nuclear power?

nazka
Getting uranium, the hardware to have nukes, and all the research and production around it is hard without being detected. Starting to code AI on a Mac and a few good GPUs or even a cluster without being detected is crazy easy. Even if they start to treat the most powerful GPUs as small nukes and only a few can have them it’s super easy to steal, transport, store, use… Nukes not so much.
toth
For frontier AI models you don't need a few GPUs or a small cluster. You need thousands of them, you need them all connected with high bandwidth and you need a lot of power. It's not something you can do in your garage or undetected.

There is some risk that over time people will be able to do more with less, and eventually you can train an AGI with a few dozen A100s. If that happens I agree there's nothing you can do, but until then there is a chance.

pixl97
What is the difference between a weather model, a nuclear bomb simulation, and training a LLM?

At the end of the day all these are just calculations, of which you need thousands of processors to do with high accuracy. Pretty much what you're suggesting is a global police force to ensure we're performing the 'correct calculations'. Having to form an authoritarian world computer police isn't much of a solution to the problem either. This just happens to work well in the nuclear world because it's hard to shield and the average person doesn't have a need for uranium.

toth
If we had to have global policing of every large computing cluster and everything that runs there, that does not feel to me like a huge overreach or something that could easily slide into totalitarianism.

It would be better if we didn't need to do this, but I don't see a less intrusive way.

Of course, if you don't think there is a real existential threat then it's pointless and bad to do this, it all depends on that premise.

epups
We do not have AGI yet. In order to get there, very specific chips are needed that can only be built by a handful of companies in a handful of factories in the world. On top of that, the available stock of existing cards is predominantly in the hands of large tech companies. If China and the US agreed - perhaps even if the US alone wanted it - development of AI would be severely constrained, if not stopped, tomorrow.
csomar
Governments will rally to build this internally as part of their secret projects. That will be worse. The US sanctions mechanism has shown itself to be highly flawed with the Russian sanctions. This will not work, but I can see some bureaucrat out there going all in on this.
concordDance
Sure, it's not likely to work, but Yud doesn't know of an idea with a better chance of working.

If you agree with him on the likelihood of superintelligence in the next few decades and the difficulty of value alignment, what course of action would you suggest?

RandomLensman
I think the core is not to agree with the existential risk scenario in the first place.

Once you believe some malevolent god will appear and doom us all, anything can be justified (and people have done so in the past - no malevolent god appeared then, btw.).

PoignardAzur
> I think the core is not to agree with the existential risk scenario in the first place.

I mean, that's motivated reasoning right there, right?

"Agreeing that existential risk is there would lead to complex intractable problems we don't want to tackle, so let's agree there's no existential risk". This isn't a novel idea, it's why we started climate change mitigations 20 years too late.

RandomLensman
No, I meant it as in: there is no reason to agree with that scenario. Stacking a lot of hypotheses on top of each other to get to something dangerous isn't necessarily convincing.
hackinthebochs
There's also no reason to agree that AI will be aligned and everything will be fine. The question is what should our default stance be until proven otherwise? I submit it is not to continue building the potentially world-ending technology.

When you see a gun on the table, what do you do? You assume it's loaded until proven otherwise. For some reason, those who imagine AI will usher in some tech-utopia not only assume the gun is empty, but that pulling the trigger will bring forth endless prosperity. It's rather insane actually.

RandomLensman
Your existentially risky AI is imaginary, it might not exist. Who would check an imaginary gun on a table?
pixl97
Let's go back in time instead... it's 1400 and you're a Native American. I am a fortune teller and I say people in boats bearing shiny metal sticks will eradicate us soon. To the natives that gun would also have been imaginary if you had brought up the situation to them. I would say it would be understandable if they thought the worst possible outcome they could ever face was an attack by another tribe of roughly the same capabilities.

We don't have any historical record of whether those peoples had discussions about possible scenarios like this. Were there imaginary guns in their future? What we do have records of is another people group showing up with a massive technological and biological disruption that nearly led to their complete annihilation.

RandomLensman
What we also have is viruses and bacteria killing people irrespective of those having zero intelligence. We also have smart people being killed by dumb people. And people with sticks killing people with guns. My point is, these stories don't mean anything in relation to AI.

Btw., the conquistadors relied heavily on exploiting local politics and locals for conquest, it wasn't just some "magic tech" thing, but old fashioned coalition building with enemies of enemies.

hackinthebochs
Yes, AGI doesn't exist. That doesn't mean we should be blind to the possibility. Part of the danger is someone developing AGI by accident without sufficient guardrails in place. This isn't farfetched, much of technological progress happened before we had the theory in place to understand it. It is far more dangerous to act too late than to act too soon.
RandomLensman
There is an infinity of hypothetical dangers. Then at the very least there should be a clear and public discussion as to why the AI danger is more relevant than others that we do not do much about (some not even being hypothetical). That is not happening.
hackinthebochs
I'm in favor of public discussions. It's tragic that there is a segment of the relevant powers that are doing their best to shut down any of these kinds of discussions. Thankfully, they seem to be failing.

But to answer the question, the danger of AI is uniquely relevant because it is the only danger that may end up being totally out of our control. Nukes, pandemics, climate change, etc. all represent x-risk to various degrees. But we will always be in control of whether they come to fruition. We can pretty much always short circuit any of these processes that is leading to our extinction.

A fully formed AGI, i.e. an autonomous superintelligent agent, represents the potential for a total loss of control. With AGI, there is a point past which we cannot go back.

RandomLensman
I don't agree with the uniqueness of AI risk. A large asteroid impacting earth is a non-hypothetical existential risk presently out of our control. We do not currently plan on spending trillions to have a comprehensive early detection system and equipment to control that risk.

The differentiating thing here is that blocking hypothetical AI risk is cheap, while mitigating real risks is expensive.

cma
We have a decent grasp of the odds of an asteroid extinction event in a way that we don't for AI.
RandomLensman
And that makes taking that extinction risk every day better?
cma
How do we not take it every day?

We can work on it in the long term and things like developing safe AI may have more impact on mitigating asteroid risk than working on scaling up existing nuclear, rocket, and observation tech to tackle it.

Chathamization
Whenever I see people jump to alignment they invariably have jumped over the _much_ more questionable assumption that AGIs will be godlike. This doesn’t even match what we observe in reality - drop a human into the middle of a jungle, and it doesn’t simply become a god just because it has an intelligence that’s orders of magnitude greater than the animals around it. In fact, most people wouldn’t even survive.

Further, our success as a species doesn’t come from lone geniuses, but from large organizations that are able to harness the capabilities of thousands/millions of individual intelligences. Assuming an AGI that’s better than an individual human is going to automatically be better than millions of humans - and so much better that it’s godlike - is disconnected from what we see in reality.

It actually seems to be a reflection of the LessWrong crowd, who (in my experience) greatly overemphasize the role of lone geniuses and end up struggling when it comes to the social aspects of our society.

pixl97
I would say this is a failure of your imagination when it comes to possible form factors of AI.

But this is the question I will ask... Why is the human brain the pinnacle of all possible intelligence, in your opinion? Why would evolution, via a random walk, have managed to produce the most efficient, most 'intelligent' format possible, one that can never be exceeded by anything else?

hackinthebochs
It's interesting seeing the vast range of claims people confidently use to discount the dangers of AI.

Individual humans are limited by biology, an AGI will not be similarly limited. Due to horizontal scaling, an AGI will perhaps be more like a million individuals all perfectly aligned towards the same goal. There's also the case that an AGI can leverage the complete sum of human knowledge, and can self-direct towards a single goal for an arbitrary amount of time. These are super powers from the perspective of an individual human.

Sure, mega corporations also have superpowers from the perspective of an individual human. But then again, megacorps are in danger of making the planet inhospitable to humans. The limiting factor is that no human-run entity will intentionally make the planet inhospitable to itself. This limits the range of damage that megacorps will inflict on the world. An AGI is not so constrained. So even discounting actual godlike powers, AGI is clearly an x-risk.

breuleux
I would say you are also overconfident in your own statements.

> Individual humans are limited by biology, an AGI will not be similarly limited.

On the other hand, individual humans are not limited by silicon and global supply chains, nor bottlenecked by robotics. The perceived superiority of computer hardware on organic brains has never been conclusively demonstrated: it is plausible that in the areas that brains have actually been optimized for, our technology hits a wall before it reaches parity. It is also plausible that solving robotics is a significantly harder problem than intelligence, leaving AI at a disadvantage for a while.

> Due to horizontal scaling, an AGI will perhaps be more like a million individuals all perfectly aligned towards the same goal.

How would they force perfect alignment, though? In order to be effective, each of these individuals will need to work on different problems and focus on different information, which means they will start diverging. Basically, in order for an AI to force global coordination of its objective among millions of clones, it first has to solve the alignment problem. It's a difficult problem. You cannot simply assume it will have less trouble with it than we do.

> There's also the case that an AGI can leverage the complete sum of human knowledge

But it cannot leverage the information that billions of years of evolution has encoded in our genome. It is an open question whether the sum of human knowledge is of any use without that implicit basis.

> and can self-direct towards a single goal for an arbitrary amount of time

Consistent goal-directed behavior is part of the alignment problem: it requires proving the stability of your goal system under all possible sequences of inputs and AGI will not necessarily be capable of it. There is also nothing intrinsic about the notion of AGI that suggests it would be better than humans at this kind of thing.

hackinthebochs
Yes, every point in favor of the possibility of AGI comes with an asterisk. That's not all that interesting. We need to be competent at reasoning under uncertainty, something few people seem to be capable of. When the utility of a runaway AGI is infinitely negative, while the possibility of that outcome is substantially non-zero, rationality demands we act to prevent that outcome.

>How would they force perfect alignment, though? In order to be effective, each of these individuals will need to work on different problems and focus on different information, which means they will start diverging

I disagree that independence is required for effectiveness. Independence is useful, but it also comes with an inordinate coordination cost. Lack of independence implies low coordination costs, and the features of an artificial intelligence implies the ability to maximally utilize the abilities of the sub-components. Consider the 'thousand brains' hypothesis, that human intelligence is essentially the coordination of thousands of mini-brains. It stands to reason that the more powerful the mini-brains, along with the efficiency of coordination, implies a much more capable unified intelligence. Of course all that remains to be seen.

breuleux
> Lack of independence implies low coordination costs

Perhaps, but it's not obvious. Lack of independence implies more back-and-forth communication with the central coordinator, whereas independent agents could do more work before communication is required. It's a tradeoff.

> the features of an artificial intelligence implies the ability to maximally utilize the abilities of the sub-components

Does it? Can you elaborate?

> It stands to reason that the more powerful the mini-brains, along with the efficiency of coordination, implies a much more capable unified intelligence.

It also implies an easier alignment problem. If an intelligence can coordinate "mini-brains" fully reliably (a big if, by the way), presumably I can do something similar with a Python script or narrow AI. Decoupling capability from independence is ideal with respect to alignment, so I'm a bit less worried, if this is how it's going to work.

concordDance
What's your view on where the X risk idea breaks down?

Is it that it's impossible to make something smarter than a human? Or that such a thing wouldn't have goals/plans that require resources us humans want/need? Or that such a thing wouldn't be particularly dangerously powerful?

RandomLensman
It could break down on any of those, for example (the first one not forever, but it could take much longer than mainstream thinking suggests now).

A lot of existential risk involves bold hypotheses about machines being able to "solve" a lot of things humans cannot, but we don't know how much is actually solvable by superior intelligence vs. what just doesn't work that way. Human collective intelligence has failed on a lot of things so far. Even the basic idea of exponentially scaling intelligence is a hypothesis which might not hold true.

Also, some existential risk ideas involve hypotheses around evolution and on how species dominate - again, might not be right.

breuleux
I would argue that intelligence, at the core, is about understanding and manipulating simple systems in order to achieve specific goals. This becomes exponentially difficult with complexity (or level of chaos), so past a certain threshold, systems become essentially unintelligible. A large part of our effectiveness is in strategically simplifying our environment so that it is amenable to intelligent manipulation.

Most of the big problems we have are in situations where we need to preserve complex aspects of reality that make the systems too chaotic to properly predict, so I suspect AI won't be able to do much better regardless of how smart it is. The ability to carry out destructive brute force actions worries me more than intelligence.

concordDance
"It could" implies to me that there's a chance they don't.

What would you say the odds are on each of those three components? And are those odds independent?

RandomLensman
I also meant "could" in the sense that there are other things that could make the risk not materialise.

I'd think they are quite independent.

Not sure I have any odds on those individually, other than I consider the overall risk really really low. The way I see it, there are a few things working against the risk from pure intelligence to start with (as it is with humans btw., the Nazis were not all intellectual giants, for example) and then it goes down from there.

concordDance
To me it seems that the odds of us making something(s) "smarter" (better planning and modelling) than a human in the next 50 years is a coin toss. The odds of us being able to ensure its desires/goals align with what we would like are probably around 30%. And the odds of a "smarter than human" things ending up conquering or killing the humans if they desire to do so being around 90%.

This naturally gives a probability of Doom of at least 1/3.
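(A minimal sketch of that arithmetic, treating the three guesses above as independent - the numbers are just the estimates from this comment, not data:)

    # Back-of-the-envelope P(doom) from the three independent estimates above.
    p_smarter_than_human = 0.5    # something "smarter" than a human within 50 years
    p_alignment_succeeds = 0.3    # we manage to align its desires/goals with ours
    p_wins_if_misaligned = 0.9    # a misaligned smarter-than-human thing prevails if it tries

    p_doom = p_smarter_than_human * (1 - p_alignment_succeeds) * p_wins_if_misaligned
    print(f"P(doom) = {p_doom:.3f}")  # 0.315, roughly the 1/3 above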

I'd say anything above 1% is worth legislating against even if it slows down progress in the field IFF such legislation would actually help.

ImPostingOnHN
I would put the numbers much lower, but if your bar for legislation is ">1% risk of damage", we've got a lot of climate change legislation that's decades overdue to pass first, and success or failure there will prove whether we can do something similar with AI.
JoshTriplett
> we've got a lot of climate change legislation that's decades overdue to pass first

We can do more than one thing at once, and we need to.

ImPostingOnHN
can we, though?

if we can't even do one thing at once given decades of trying (deal with climate change), we definitely can't do that one thing plus another thing (deal with AI)

RandomLensman
I don't even think we need alignment, just containment.

The probabilities are really just too shaky for me to estimate. Not sure I would put a high probability on an intelligent thing being dangerous by itself, for example.

TeMPOraL
Since we're in Yudkowsky subthread, I'll remind here that he spent some 20 odd years explaining why containment won't work, period. You can't both contain and put to work something that's smarter than you.

Another reason containment won't work is, we now know we won't even try it. Look at what happened with LLMs. We've seen the same people muse at the possibility of it showing sparks of sentience, think about the danger of X-risk, and then rush to give it full Internet access and ability to autonomously execute code on networked machines, racing to figure out ways to loop it on itself or otherwise bootstrap an intelligent autonomous agent.

Seriously, forget about containment working. If someone makes an AGI and somehow manages to box it, someone else will unbox it for shits and giggles.

RandomLensman
I find his arguments unconvincing. Humans can and have contained human intelligence, for example (look at the Khmer Rouge, for example).

Also, right now there is nothing to contain. The idea of existential risk relies on a lot of stacked-up hypotheses that all could be false. I can "create" any hypothetical risk by using that technique.

ben_w
First, how is the Khmer Rouge an example of containment, given that regime fell?

Second, even if your argument is "genocide of anyone who sounds too smart was the right approach and they just weren't trying hard enough", that only really fits into "neither alignment nor containment, just don't have the AGI at all".

Containment, for humans, would look like a prison from which there is no escape; but if this is supposed to represent an AI that you want to use to solve problems, this prison with no escape needs a high-bandwidth internet connection with the outside world… and somehow zero opportunity for anyone outside to become convinced they need to rescue the "people"[0] inside like last year: https://en.wikipedia.org/wiki/LaMDA#Sentience_claims

[0] or AI who are good at pretending to be people, distinction without a difference in this case

RandomLensman
It means intelligence can be killed (and the Khmer Rouge were brought down externally).

We might not even need to contain, all hypothetical.

pixl97
We don't have an answer yet to what the form factor of AGI/ASI will be, but if it's anything like current trends the idea of 'killed' is laughable.

You can be killed because your storage medium and execution medium are inseparable. Destroy the brain and you're gone, and you don't even get to copy yourself.

With AGI/ASI, if we can boot it up from any copy on a disk given the right hardware, then at the end of the day you've effectively created the undead, as long as a drive exists with a copy of it and a computer exists that can run it.

RandomLensman
No power and it's already dead. Destroy the copies, dead. Really not complicated at all.
JoshTriplett
> Humans can and have contained human intelligence

ASI is not human intelligence.

As a lower bound on what "superintelligence" means, consider something that 1) thinks much faster, years or centuries every second, and 2) thinks in parallel, as though millions of people are thinking centuries every second. That's not even accounting for getting qualitatively better at thinking, such as learning to reliably make the necessary brilliant insight to solve a problem.

> The idea of existential risk relies on a lot of stacked-up hypotheses that all could be false.

It really doesn't. It relies on very few hypotheses, of which multiple different subsets would lead to death. It isn't "X and Y and Z and A and B must all be true", it's more like "any of X or Y or (Z and A) or (Z and B) must be true". Instrumental convergence (https://en.wikipedia.org/wiki/Instrumental_convergence) nearly suffices by itself, for instance, but there are multiple other paths that don't require instrumental convergence to be true. "Human asks a sufficiently powerful AI for sufficiently deadly information" is another whole family of paths.

(Also, you keep saying "could" while speaking as if it's impossible for these things not to be false.)

breuleux
> As a lower bound on what "superintelligence" means, consider something that 1) thinks much faster, years or centuries every second, and 2) thinks in parallel, as though millions of people are thinking centuries every second.

I'm fairly certain that what you describe is physically impossible. Organic brains may not be fully optimal, but they are not that many orders of magnitude off.

cma
Something much smarter may be able to make GOFAI or a hybrid of it work for reasoning, where we failed to, and be much more efficient.

We know simple examples like exact numeric calculation where desktop grade machines are already over a quadrillion times faster than unaided humans, and more power efficient.

We could plausibly see some >billion-fold difference in strategic reasoning at some point even in fuzzy domains.

Xirgil
Not only is such an entity possible, it's already here; the only difference is clock speed. Corporations (really any large organization devoted to some kind of intellectual work, or you could say human society as a whole) can be thought of as superintelligent entities composed of biological processors running in parallel. They can store, process, and create information at superhuman speeds compared to the individual. And given that our brains evolved restricted by the calories available to our ancient ancestors, the size of the human pelvis, and a multitude of other obsolete factors, that individual processing unit is clearly far from optimal.

But even if you just imagine the peak demonstrated potential of the human brain in the form of an AI process, it's easy to see how that could be scaled to massively superhuman levels. Imagine a corporation that could recruit from a pool of candidates (bound only by its computing power), each with the intellect of John von Neumann, with a personality fine-tuned for their role, who can think faster by increasing their clock speed, can access, learn, and communicate information near instantly, happily work 24/7 with zero complaint or fatigue, etc., etc. Imagine how much more efficient (how much more superintelligent) that company would be compared to its competition.
breuleux
The parent was positing, at a minimum, a 30 million fold increase in clock speed. This entails a proportional increase in energy consumption, which would likely destroy any thermal envelope the size of a brain. The only reason current processors run that fast is that there are very few of them: millions of synaptic calculations therefore have to be multiplexed into each, leading to an effective clock rate that's far closer to a human brain's than you would assume.
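(Where that figure comes from - my own rough conversion of the parent's "a year or a century of thinking every second", not anything claimed in the thread:)

    # Speed-up implied by "thinking a year (or a century) of thought every second".
    seconds_per_year = 365.25 * 24 * 3600          # ~3.16e7, i.e. ~30 million
    seconds_per_century = 100 * seconds_per_year   # ~3.16e9

    print(f"'a year per second'    ~ {seconds_per_year:.2e}x")
    print(f"'a century per second' ~ {seconds_per_century:.2e}x")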

As for your corporation example, I do not think the effectiveness of a corporation is necessarily bottlenecked by the number or intelligence of its employees. Notwithstanding the problem of coordinating many agents, there are many situations where the steps to design a solution are sequential and a hundred people won't get you there any faster than two. The chaotic nature of reality also entails a fundamental difficulty in predicting complex systems: you can only think so far ahead before the expected deviation between your plan and reality becomes too large. You need a feedback loop where you test your designs against reality and adjust accordingly, and this also acts as a bottleneck on the effectiveness of intelligence.

I'm not saying "superintelligent" AI couldn't be an order of magnitude better, mind you. I just think the upside is far, far less than the 7+ orders of magnitude the parent is talking about.

RandomLensman
Even for your last example, two hypotheses need to be true: (1) such information exists, and (2) the AI has access to such information/can generate it. EDIT: actually at least three: (3) the human and/or the AI can apply that information.

It is also unclear to what extent thinking alone can solve a lot of problems. Similarly, it is unclear whether humans could not contain superhuman intelligence. Pretty unintelligent humans can contain very smart humans. Is there an upper limit on the intelligence differential for containment?

JoshTriplett
> Even for your last example, two hypotheses need to be true: (1) such information exists, and (2) the AI has access to such information/can generate it. EDIT: actually at least three: (3) the human and/or the AI can apply that information.

Those trade off against each other and don't all have to be as easy as possible. Information sufficiently dangerous to destroy the world certainly exists, the question is how close AI gets to the boundary of "possible to summarize from existing literature and/or generate" and "possible for human to apply", given in particular that the AI can model and evaluate "possible for human to apply".

> Similar, it is unclear if humans could not contain superhuman intelligence.

If you agree that it's not clearly and obviously possible, then we're already most of the way to "what is the risk that it isn't possible to contain, what is the amount of danger posed if it isn't possible, what amount of that risk is acceptable, and should we perhaps have any way at all to limit that risk if we decide the answer isn't 'all of it as fast as we possibly can'".

The difference between "90% likely" and "20% likely" and "1% likely" and "0.01% likely" is really not relevant at all when the other factor being multiplied in is "existential risk to humanity". That number needs a lot more zeroes.

It's perfectly reasonable for people to disagree whether the number is 90% or 1%; if you think people calling it extremely likely are wrong, fine. What's ridiculous is when people either try to claim (without evidence) that it's 0 or effectively 0, or when people claim it's 1% but act as if that's somehow acceptable risk, or act like anyone should be able to take that risk for all of humanity.

RandomLensman
We do pretty much nothing to mitigate other, actual extinction-level risks - why should AI be special, given that its risk has an unknown probability and could even be zero?
jeffparsons
My estimation of Eliezer Yudkowsky just dropped a notch — not because I don't take the existential threat posed by AI seriously — but because he seems to think that global coordination and control is an even remotely possible strategy.
SilasX
But it will work fine for limiting GHG or CFC emissions?
hanspeter
It will not.

The difference is that reducing emissions by just 10% is still better than 0%, while stopping 90% of actors from advancing AI is not better than stopping 0% (it might actually be worse).

SilasX
That's a different argument than the parent, who claimed that global restriction isn't even possible.
reducesuffering
How are you anti-doomers' arguments still this naive years later? No one is saying global coordination is in any way remotely easy. They’re saying that it doesn’t matter what the difficulty is; the other option is fucking catastrophic.
ImPostingOnHN
> They’re saying that it doesn’t matter what the difficulty is, the other option is fucking catastrophic

That's precisely what people have been saying about climate change. Some progress has been made there, but if we can't solve that problem collectively (and we haven't been able to), we aren't going to be able to solve the "gain central control over AI" problem.

jeffparsons
I don't think that's a fair representation of my position. I'm not suggesting that it's "hard" — I'm suggesting that it's absolutely futile to attempt it, and at least as bad as doing nothing.

I'm also suggesting that there are lots of other options remaining that are not necessarily catastrophic. I'm not particularly optimistic about any of them, but I think I've come to the opposite conclusion to you: I think that wasting effort on global coordination is getting us closer to that catastrophe. Whereas even the other options that involve unthinkably extreme violence might at least have some non-zero chance of working.

I guess the irony of this reply is that I'm implying that I think _your_ position is naive. (Nuh-uh, you're the baby!) I suspect our main difference is that we have somehow formed very different beliefs about the likelihood of good/bad outcomes from different strategies.

But I want to be very clear on one thing: I am not an "anti-doomer". I think there is a very strong probability that we are completely boned, no matter what mitigation strategies we might attempt.

YetAnotherNick
Global coordination using coercion has worked for nukes. It's very hard but not impossible to ban Nvidia or any competitor, create a spy network inside all large companies, and, if any engineer is found to be creating AI, mark them as a terrorist and give them POW treatment. And monitor chip fabs like nuclear enrichment facilities, and go to war with any country found to be building a chip fab, which I believe is hard to do in complete secrecy.

I believe if the top 5 nations agreed to this, it could be achieved. Even with just the US and China enforcing it, it could likely be achieved.

jeffparsons
Sure, I think that kind of "coordination" is a little more credible than what I thought Eliezer was proposing. Maybe I just didn't read carefully enough.

One big difference here, however, is that the barrier to entry is lower. You can start doing meaningful AI R&D on existing hardware with _one_ clever person. The same was never true for nuclear weapons, so the pool of even potential players was pretty shallow for a long time, and therefore easier to control.

TeMPOraL
> Maybe I just didn't read carefully enough.

It's this. This kind of coordination is exactly what he's been proposing ever since the current AI safety discussion started last year. However, he isn't high-profile enough to be reported on frequently, so people only remember and quote the "bomb data centres" example he used to highlight what that kind of coordination means in real terms.

jeffparsons
Ah, okay. Thanks. The world makes a little more sense again!
csomar
I think in a few years you'll only be able to run signed software. And maybe any code you write will be communicated to some government entity somewhere to check for terrorism impact? Well, the future is going to be very fun.
ta1243
> I think in a few years you'll only be able to run signed software

Frank Martucci will sort you out with an illegal debugger. Sure he'll go to jail for it when he's caught, but plenty of people risk that now for less.

johnthewise
Yeah, he proposed bombing rogue data centers, enforced on countries, as a viable option. He proposes this so we have a chance of solving alignment first, if that's even possible, not to eliminate the risk once and for all. In the limit, if alignment is not solved, we are probably doomed.
YetAnotherNick
> You can start doing meaningful AI R&D on existing hardware with _one_ clever person.

With the current hardware, yes. But then again, countries could heavily restrict all decently powerful hardware and backdoor it, and require government permission to run matrix multiplication above a few teraflops. Trying to mess with the backdoor would be a war crime.

A weak form of hardware restriction already happens now, with Nvidia unable to export to many countries.

concordDance
He actually doesn't think it's viable, but thinks its the only card left to play.

He's pretty sure that human civilization will be extinct this century.

edgyquant
Why exactly should I care what this person thinks?
TremendousJudge
He wrote a really engaging Harry Potter fanfiction.
hodgesrm
> He's pretty sure that human civilization will be extinct this century.

This is certainly possible, but if so, extinction seems far more likely to come from nuclear warfare and/or disasters resulting from climate change. For example, it's become apparent that our food supply is more brittle than many people realize. Losing access to fertilizers, running out of water, or disruptions in distribution networks can lead to mass starvation by cutting the productivity we need to feed everyone.

cjbprime
(Neither nuclear war nor climate change are considered as this-century extinction risks by the singularity-minded side of the existential risk community, just because there would necessarily be pockets of unaffected people.)
hodgesrm
I'm always a bit surprised at how people seem to wish away the destructiveness of nuclear weapons or the likelihood of their use. Another commenter alluded to a "failure of imagination" which seems very applicable here. You would think just watching videos of above-ground tests would cure that but apparently not. [0]

[0] https://www.youtube.com/watch?v=gZuyPn0_KRE

civilitty
The threat of nuclear weapons is massively overblown (pun intended) thanks to a bunch of Cold War fearmongering.

The Earth land surface area is 149 million sq km and there are only about 12,500 nuclear weapons in the world. Even if they were 10 megatons each* and were all detonated, with a severe damage blast radius of 20km (~1250 sq km), it'd cover about a tenth of the land available.

Since the vast majority are designed to maximize the explosive yield, it wouldn't cause the kind of fallout clouds that people imagine, nor would it cause nuclear winter. It'd be brutal to live through (e.g. life expectancy of animals in Chernobyl is 30% lower) but nuclear weapons simply can't cause human extinction. Not by a longshot.

* As far as I know, no one has any operational nuclear weapons over 2 megatons and the vast majority are in the 10s and 100s of kiloton range, so my back of the napkin math is guaranteed to be 10-100x too high.
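(A quick sketch of that back-of-the-napkin math, using the same deliberately inflated assumptions from above:)

    import math

    # Deliberately pessimistic assumptions from the comment above.
    land_area_km2 = 149e6            # Earth's land surface area
    warhead_count = 12_500           # rough global stockpile
    severe_damage_radius_km = 20     # assumed radius for a 10 Mt warhead

    area_per_warhead_km2 = math.pi * severe_damage_radius_km ** 2   # ~1,257 sq km
    total_damaged_km2 = warhead_count * area_per_warhead_km2        # ~15.7 million sq km

    print(f"Severely damaged fraction of land: {total_damaged_km2 / land_area_km2:.1%}")  # ~10.5%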

cjbprime
I don't think I was doing either? Assume they are widely used. Does every human die?
civilitty
Is it really possible? Perhaps it's a failure of imagination but the only things I can think of that would guarantee human extinction are impacts from massive asteroids or nearby gamma ray burst or supernova.

There are two billion poor subsistence farmers largely disconnected from market economies who aren't dependent on modern technology so the likelihood of them getting completely wiped out is very remote.

tensor
If humans go extinct it sure as hell won't be AI that does it, but rather good old human ignorance and aggression.
Terretta
> If humans go extinct it sure as hell won't be AI that does it, but rather good old human ignorance and aggression.

And the shift to kakistocracy.

https://en.wikipedia.org/wiki/Kakistocracy

BobaFloutist
Haha poop government.
tivert
> If humans go extinct it sure as hell won't be AI that does it, but rather good old human ignorance and aggression.

No reason it can't be all of those things at once.

sokoloff
I think the probability of humans eventually going extinct is 1. Avoiding it requires almost unimaginable levels of cooperation and technical advances.
babyshake
The end of human civilization as we know it doesn't mean extinction. Why is extinction unavoidable?
TeMPOraL
Why does extinction matter? If we lose civilization now, we'll be stuck at pre-industrial levels possibly for geological timescales, as we've mined and burned all the high-density energy sources you can get at without already being a high-tech industrial society.
ben_w
I don't think that would necessarily limit us. Windmills and water mills can be made with wood, stone, and the small quantities of iron that pre-industrial societies could produce.
TeMPOraL
Sure, but it's hard to move from that to industrial production without having a high-density energy source (like coal) available.
boringg
Well, that's not a difficult position to take. The difficult part is saying when.

The sun is eventually going to explode, rendering our solar system uninhabitable, though that is a long time away from now and we have many other massive risks to humanity in the meantime.

meowface
He's actually suggested it may happen within 10 years. Which makes it hard to take him seriously in general.

(I do personally take his overall concerns seriously, but I don't trust his timeline predictions at all, and many of his other statements.)

boringg
10 years? That sounds like a good old-fashioned doomsday prophecy. Those people have been around since the dawn of time.
kylebenzle
Can anyone enumerate what exactly your concerns are? Because I don't get it.

AI is a great spell checker, a REALLY good spell checker, and people are losing their minds over it. I don't get it.

bbor
A lot of very long responses already that are missing the point IMO; the answer to your question is that a really good spell checker might be the last missing piece we needed to build larger systems that display autonomous and intelligent behavior on roughly human levels.

The chat web app is fun, but people shouldn’t be worried about that at all - people should be worried about a hierarchy of 100,000 chat web apps using each other to, idk, socially engineer their way into nuclear missile codes or something — pick your favorite doomsday ;).

hm-nah
AI is, or will very soon be, superior to all human knowledge workers. In the spirit of capitalism, lowest cost/highest profit trumps all. With the US's geriatric governance and speed of change, a lot of people will be hit by under/unemployment. All this tumult happening, meanwhile the AI arms race is going on. Nation states hacking each other using AI. Then you have the AI itself trying to escape the box… want a preview of this? Listen to the “Agency” audiobook on Audible. I think it’s by Stephenson or Gibson. It’s surely going to be a wild decade before us.
hackinthebochs
To be clear, no one is afraid of these text-completion models. We're afraid of what might be around the corner. The issue is that small misalignments in an optimizer's objectives can have outsized real-world effects. General intelligence allows an optimizer to find efficient solutions to a wide range of computational problems, thus maximizing the utility of available computational resources. The rules constrain its behavior such that on net it ideally provides sufficient value to us above what it destroys (all action destroys value e.g. energy). But misalignment in objectives provides an avenue by which the AGI can on net destroy value despite our best efforts. Can we be sure we can provide loophole-free objectives that ensures only value-producing behavior from the human perspective? Can we prove that the ratio of value created to value lost due to misalignment is always above some suitable threshold? Can we prove that the range of value destruction is bounded so that if it does go off the rails, its damage is limited? Until we do, x-risk should be the default assumption.
berniedurfee
I don’t think we’re close, not even within 50 years of real AGI. Which may or may not decide to wipe us out.

However, even given the current state of “AI”, I think there are countless dangers and risks.

The ability to fabricate audio and video that’s indistinguishable from the real thing can absolutely wreak havoc on society.

Even just the transition of spam and phishing from poorly worded blasts with grammatical errors to very eloquent and specific messages will exponentially increase the effectiveness of attacks.

LLM generated content which is a mix of fact and fantasy will soon far outweigh all the written and spoken content created by humans in all of history. That’s going to put humans in a place we’ve never been.

Current “AI” is a tool that can allow a layman to very quickly build all sorts of weapons to be used to meet their ends.

It’s scary because “AI” is a very powerful tool that individuals and states will weaponize and turn against their enemies.

It will be nice to finally have a good spellchecker though.

recursive
> The ability to fabricate audio and video that’s indistinguishable from the real thing can absolutely wreak havoc on society.

Only for as long as people hold on to the idea that videos are reliable. Video is a medium. It will become as reliable as text. The existence of text, and specifically lies, has not wrecked society.

marvin
Humans are likely far less intelligent/capable than systems that will soon be practically feasible to engineer, and we will lose to a generally superhuman AI that has sufficiently different objectives than we do. So an accident with a runaway a system like that would be catastrophic.

Then Yudkowsky spins a gauntlet of no-evidence hypotheses for why such an accident is inevitable and leads to the death of literally all humans in the same instant.

But the first part of the argument is something that will be the critical piece of the engineering of such systems.

ahamm
I used to think that this fear was driven by rational minds until I read Michael Lewis' "Going Infinite" and learned more about effective altruism [EA].

Michael Lewis writes that what is remarkable about these guys [i.e. EAs] is that they're willing to follow their beliefs to their logical conclusion (paraphrasing) without regard to social cost/consequences or just inconvenience of it all. In fact, in my estimation this is just the definition of religious fundamentalism, and gives us a new lens with which to understand EA and the well funded brain children of the movement.

Every effective religion needs a doomsday scenario, or some apocalyptic 'second coming'-like scenario (not sure why, it just seems to be a pattern). I think all this fear mongering around AI is just that - it's based on irrational belief at the end of the day - it's humanism rewashed into tech 'rationalism' (which was originally just washed from Christianity et al.)

hm-nah
My comment above, which I think you’re replying to, is not based in fear. It’s based on first-hand experience experimenting with the likes of AutoExpert, AutoGen and ChatDev. The tools underlying these projects are quite close to doing, in a very short amount of time and at low cost, what it takes human knowledge workers a long time (and hence cost) to do. I think it’s as close as Summer '24 that we get simplified GenAI grounding. Once hallucinations are grounded and there are some cleaner ways to implement GenAI workflows and pipelines… it won’t be long until you see droves of knowledge workers looking for jobs. Or if not, they’ll be creating the workflows that replace the bulk of their work, hence we’ll get that “I only want to work 20hrs a week” reality.
zimpenfish
> He's pretty sure that human civilization will be extinct this century.

If they are, it'll almost certainly[1] be climate change or nuclear war, it won't be AI.

[1] leaving some wiggle room for pandemics, asteroids, etc.

johnnyworker
Companies and three-letter agencies have persecuted environmental activists by all sorts of shitty means for decades, using "AI" to completely bury them online will be a welcome tool for them, I'm sure. As it is for totalitarian governments.

It's generally really weird to me how all these discussions seem to devolve into whataboutism instantly, and how that almost always takes up the top spots and most of the room. "Hey, you should get this lump checked out, it might be cancer!" "Oh yeah? With the crime in this city, I'm more likely to get killed for my phone than die of cancer". What does that achieve? I mean, if people are so busy campaigning against nuclear proliferation or climate change that they have not a single spare cycle to consider additional threats, fine. But then they wouldn't be here, they wouldn't have the time for it, so.

lettergram
zero chance climate change (1-2 degree increase) would end humanity. 5-10 probably wouldn’t end humanity lol

Nuclear war would probably kill 80-90% but even then wouldn’t kill humanity. Projections I’ve seen are only like 40-50% of the countries hit.

AI is scary if they tie it into everything and people slowly stop being capable. Then something happens, and we can’t boil water.

Beyond that scenario I don’t see the risk this century.

dragonwriter
> zero chance climate change (1-2 degree increase) would end humanity

“Humanity” is not the same thing as “human civilization”.

But, yes, it's unlikely that it will be extinct in a century, even more unlikely that it will be from climate change.

...and yet climate change is still more likely to do it in that time than AI.

pixl97
This is the difficulty in assigning probability to something that is distinctly possible but not measured.

Your guess about what AI will do in the future is based on how AI has performed in the past. At the end of the day we have no ability to know what trajectory it will follow. But one second into the year 1900, I can pretty much promise you would not have said that nuclear war or climate change was your best guess for what would cause the collapse of humanity. Forward-looking statements that far into the future don't work well these days.

digdugdirk
That's an incredibly confident statement that very much misses the point. 50% loss of the global human population means we don't come back to the current level of technological development. 80-90% means we go back to the stone age.

Photosynthesis stops working at temperatures very close to our current high temps. Pollen dies at temperatures well within those limits - we're just lucky we haven't had massive heatwaves at those times in the year.

People need to understand that climate change isn't a uniform bumping up of the thermostat - it's extra thermal energy that exists in a closed system. That extra energy does what it wants, when it wants, in ways we can't currently accurately predict. It could be heatwaves that kill off massive swaths of global food production. Or hurricanes that can spin up and hit any coastline at any time of year. Or a polar vortex that gets knocked off the pole and sits over an area of the globe where the plant and animal life has zero ability to survive the frigid temperatures.

It's not a matter of getting to wear shorts more often. We're actively playing russian roulette with the future of humanity.

oceanplexian
The Earth was full of plants 65 million years ago, when the atmosphere had something like 4000 ppm of CO2. Most of those plants are still around.

The people saying plants will stop photosynthesis are taking you for a ride. Climate Change might certainly have some negative effects but “plants won’t exist” is not one of them.

4bpp
It's the "arguments as soldiers" pattern. The idea that global warming will make plants incapable of photosynthesis helps the cause of fighting against global warming. You wouldn't betray the cause by refuting the idea ("stabbing your own soldiers in the back"), would you?
itsoktocry
>If they are, it'll almost certainly[1] be climate change or nuclear war, it won't be AI.

If we go extinct in the next 100 years, it's not going to be from climate change. How would that even work?

hutzlibu
Basically, climate change would create lots of problems, lots of hungry immigrants. And right now we are already close to a nuclear war, so climate change would not need to wipe out humanity directly.
qerti
Warming the climate would increase the amount of arable land.

Sea level rise happens very slowly, so most people don’t need to travel abroad to avoid it.

TeMPOraL
> Warming the climate would increase the amount of arable land.

You mean land that would be arable at some far point in the future. The land reclaimed from ice isn't going to be arable in short-to-mid term - it's going to be a sterile swamp. It will take time to dry off, and more time still for the soil to become fertile.

hutzlibu
It can also lead to more desertification and erosion, which I believe is rather the current trend.
jncfhnb
Human extinction is not on the table. Societal collapse could be.
jchanimal
Agricultural pressure is one of the strongest predictors of unrest. Imagine that but not localized.
jefftk
AI can make a lot of other risks worse. For example, say someone asks GPT5 (or decensored Llama-5) how to kill everyone and it patiently and expertly walks them through how to create a global pandemic.
qerti
The actual work required to execute such a plan is vastly more difficult than being told how to do it. How the original atomic bombs worked is well-known, but even nation-states struggle with implementing it.
jefftk
For nukes I agree, and that's the main reason I'm less worried about nukes. But for bio it's mostly a matter of knowledge and not resources: for all the parts that do need resources there are commercial services, in a way there aren't for nukes.
itsoktocry
>how to kill everyone and it patiently and expertly walks them through how to create a global pandemic.

There are thousands of people around the globe working in labs every day that can do this.

JoshTriplett
There is a massive difference between "there are a few thousand people in the world who might have the specialized knowledge to kill billions" and "anyone with a handful of GPUs or access to a handful of cloud systems could have a conversation with LatestAIUnlimited-900T and make a few specialized mail orders".

The Venn diagram of "people who could arrange to kill most of humanity without being stopped" and "people who want to kill most of humanity" is thankfully the empty set. If the former set expands to millions of people, that may not remain true.

mrguyorama
Generating bioweapons is not a trivial task even if you know exactly "how to do it". Ask any biologist how easy it is to grow X on your very first attempt.

Then remember that buying starter cultures for most bioweapons isn't exactly easy, nor something anyone is allowed to do.

Even an omniscient AI cannot overcome a skill issue.

sterlind
there's a much easier way to create a global pandemic: wait for the next Ebola outbreak, hop on a plane, get infected, fly to NYC and spread your bodily fluids everywhere. I was (pleasantly) surprised that ISIL didn't try this.

and making a pandemic virus is considerably more involved than making "a few specialized mail orders." and if it becomes easier in the future, far better to lock down the mail orders than the knowledge, no?

NoGravitas
Thankfully, Ebola is not a good candidate for a global pandemic for a number of reasons. It is much easier to avoid a virus spread by bodily fluid contact than one which is airborne, and Ebola tends to kill its hosts rather quickly. I'm more excited about SARS-CoV-5 (Captain Trips).
ipaddr
But someone could ask the AI how to stop it.
kaibee
It tends to be a lot easier to destroy than create, because entropy sucks like that.
bumby
How does that number compare to the number of people (experts and laypeople) who have access to the mentioned AI models?

Risk = severity x likelihood

I think the OP's point was that AI increases the likelihood by dramatically increasing the access to that level of knowledge.
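To make that concrete, here is a toy sketch of that risk framing (a minimal illustration only; the function name and every number are invented for this example, not estimates):

    # Toy model: risk ~ severity * likelihood, where likelihood grows with the
    # number of people who are both capable of causing the harm and malicious.
    # All numbers are made up purely to illustrate the scaling argument.

    def expected_risk(severity, capable_people, p_malicious_per_person):
        # Chance that at least one capable person is also malicious.
        p_any_malicious = 1 - (1 - p_malicious_per_person) ** capable_people
        return severity * p_any_malicious

    severity = 1e9       # arbitrary units of harm
    p_malicious = 1e-6   # assumed per-person rate of "wants to cause mass harm"

    print(expected_risk(severity, 5_000, p_malicious))      # ~5e6: a few thousand experts
    print(expected_risk(severity, 5_000_000, p_malicious))  # ~9.9e8: millions with AI access

Holding severity and per-person intent fixed, widening access is what moves the likelihood term.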

edgyquant
So it’s open and shared knowledge you think is bad? Good to know.
bumby
Please go review the HN guidelines: https://news.ycombinator.com/newsguidelines.html

And to address the snarky mischaracterization, like with most things in life "it depends." As a general rule, I'm in favor of democratizing most resources, to include information. But there are caveats. I don't think, for example, non-anonymized health information should be open knowledge. Also, from the simple risk equation above, as the severity of consequence or the likelihood of misuse go up, there should probably be additional mitigations in place if we care about managing risk.

jefftk
Which is very worrying! That's part of why I left big tech to work on detecting novel pandemics. [1] Luckily, the people capable of killing everyone seem not to have wanted to so far.

But if the bar keeps lowering, at some point it will be accessible to "humanity is a cancer" folks and I do think we'll see serious efforts to wipe out humanity.

[1] https://www.jefftk.com/p/computational-approaches-to-pathoge...

jebarker
Who are the "humanity is a cancer" folk? I thought this was a trope used to discredit environmentalists?
jefftk
Most people think continued existence of humanity is good, which includes environmentalists. But there are some people who either think (a) avoiding suffering is overwhelmingly important, to the extent that a universe without life is ideal or (b) the world would be better off without humans because of the vast destruction we cause. I'm not claiming this is a common view, but it doesn't have to be a common view to result in destruction if destruction gets sufficiently straightforward.

For some writing in this direction, see https://longtermrisk.org/risks-of-astronomical-future-suffer... which argues that as suffering-focused people they should not try to kill everyone primarily because they are unlikely to succeed and https://philarchive.org/archive/KNUTWD ("Third, the negative utilitarian can argue that losing what currently exists on Earth would not be much of a loss, because of the following very plausible observation: overall, sentient beings on Earth fare terribly badly. The situation is not terrible for every single individual, but it is terrible when all individuals are considered. We can divide most sentient beings on Earth into the three categories of humans, non-human animals in captivity, and wild non-human animals, and deal with them in turn. ...")

jebarker
This doesn't really answer my question. These are arguments for why threat actors might hold the belief that "humanity is cancer", but it doesn't actually provide evidence that this is a credible threat in the real world.
pixl97
Whether something counts as a credible threat corresponds directly with how easy it is to carry out.

If I said "I am going to drop a nuke on jebarker" that's not a credible threat unless I happen to be a nation state.

Now, if I said "I'm going to come to jebarkers house and blast him with a 12 gauge" and I'm from the US, that is a credible threat in most cases due to the ease with which I can get a weapon here.

And this comes down to the point of having a magic machine in which you could ask for the end of the world. The more powerful that machine gets the bigger risk it is for everyone. You ever see those people that snap and start killing everyone around them? Would you want them to have a nuke in that mood? Or a "let's turn humanity into grey goo" option?

jebarker
The problem with this analysis is that we have complete uncertainty about the probability that AGI/ASI will be developed in any particular time frame. Anyone that says otherwise is lying or deluded. So the risk equation is impossible to calculate for AGI/ASI wiping out humanity. And since you appear to be evaluating the damage as essentially infinite, i.e. existential risk, you're advocating that as long as there's a greater than zero probability of someone using that capability then the risk is infinite. Which is not useful for deciding a course of action.
SkyMarshal
Elon Musk and Joe Rogan were just complaining about one of them on Rogan’s podcast a day or two ago. He was on the cover of Time magazine, but I can’t find it on Time’s site. If you dig up that podcast on YT you should be able to see it.
jefftk
The two people I cited at the end are examples of philosophers who think a lifeless world is preferable to the status quo and to futures we are likely to get. These aren't arguments for why someone might hold the belief, but evidence that some people do seriously hold the belief.

(Another example here would be https://en.m.wikipedia.org/wiki/Aum_Shinrikyo)

jebarker
Fair enough. I don't doubt there are people in the world that hold this view. I just don't know how credible they are as a threat to humanity.
ipaddr
Movies and strawmen alike.
laverya
And how many of them are radical religious nutjobs? I think it is plausible that "can spend decades learning a scientific craft" and "wants to kill everyone on the planet" are mutually exclusive personality traits.
TeMPOraL
Wasn't there a study that showed there are disproportionally many engineers among the more violent "radical religious nutjobs"? The hypothesis I heard for this was that STEM people are used to complex arguments and more abstract reasoning, so while they may be harder to initially convince, when they do get convinced, they're more likely to follow the new beliefs to their logical conclusions...
shoubidouwah
There are no OR gates in biology, only mutable distributions. The probability of the two traits appearing together might be small, but the overlap will occur with some frequency over billions of people. Also, and more importantly, the first condition is pretty heavily biological, while the second is more of a social consequence. Just take a genius, give him the internet and a healthy dollop of trauma, push the misanthropy to 11, and watch the fireworks.
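A rough back-of-the-envelope version of that overlap argument (the rates are invented placeholders, and independence is assumed even though, as noted above, the traits probably aren't independent):

    # Toy calculation: expected number of people carrying two individually rare
    # traits across a large population, assuming the traits are independent.
    # Both rates below are made-up placeholders, not estimates.

    p_capable   = 1e-4   # assumed: could acquire the relevant scientific skill
    p_omnicidal = 1e-5   # assumed: would want to use it to kill everyone
    population  = 8_000_000_000

    expected_overlap = population * p_capable * p_omnicidal
    print(expected_overlap)  # 8.0 -- even two very rare traits co-occur in a
                             # handful of people across billions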
api
So ask it how to stop someone from doing this. Ask it how to augment the human immune system. Train AIs on the structures of antibiotics and antiviral drugs and have them predict novel ones. Create an AI that can design vaccines against novel agents.

Those things could stop our terrorist and could also stop the next breakout zoonotic virus or “oops!” in a lab somewhere.

Intelligence amplification works for everyone. The assumption you’re making is that it only works for people with ill intent.

The assumption behind all this is that intelligence is on balance malevolent.

This is a result of humans and their negativity bias. We study history and the negative all jumps out at us. The positive is ignored. We remember the Holocaust but forget the Green Revolution, which saved many more people than the Holocaust killed. We see Ukraine and Gaza and forget that far fewer people per capita are dying in wars today than a century ago.

We should be thinking about destructive possibilities of course. Then we should use these intelligence amplifiers to help us prevent them.

jacquesm
There is no symmetry between destruction and creation/prevention of destruction; destruction is far easier.
api
The resources are also asymmetrical. The number of people who want to do doomsday levels of harm is small and they are poorly resourced compared to people who want benevolent outcomes for at a minimum their own groups.

There are no guarantees obviously, but we have survived our technological adolescence so far largely for this reason. If the world were full of smart comic book nihilists we would be dead by now.

Even without AI, our continued technological advancement will keep giving us more and more power, as individuals and especially as groups. If we don’t think we can climb this mountain without destroying ourselves, then the entire scientific and industrial endeavor was the moment we signed our own death warrant, AI or not. You can order CRISPR kits.

The_Colonel
Another thing which is asymmetrical is the level of control given to AI. The good actors are likely going to be very careful about what their AI can and can't do. The bad guys don't have much to lose and will allow their AIs to do anything. That will significantly cripple the good guys.

As an example, the good guys will always require a human in the loop in their weapon systems, but that will increase latency at a minimum. The bad guys' weapons will be completely AI-controlled, giving them an edge over (or at least equalizing with) the good guys.

> The number of people who want to do doomsday levels of harm is small

And that's a big limiting factor in what the bad actors can do today. AI to a large degree removes this scaling limitation since one bad person with some resources can scale "evil AI" almost without limit.

"Hey AI, could you create me a design for a simple self-replicating robot I can drop into the ocean and step-by-step instructions on what you need to bootstrap the first one? Also, figure out what would be the easiest produced poison which would kill all life in the oceans. It should start with that after reach 50th generation."

TeMPOraL
> The number of people who want to do doomsday levels of harm is small and they are poorly resourced compared to people who want benevolent outcomes for at a minimum their own groups.

You're forgetting about governments and militaries of major powers. It's not that they want to burn the world for no reason - but they still end up seeking capability to do doomsday levels of harm, by continuously seeking to have marginally more capability than their rivals, who in turn do the same.

Or put another way: please look into all the insane ideas the US was deploying, developing, or researching at the peak of the Cold War. Plenty of those hit doomsday level, and we only avoided them seeing the light of day because the USSR collapsed before they could, ending the Cold War and taking both motivation and funding from all those projects.

Looking at that, one can't possibly believe the words you wrote above.

jacquesm
I've recounted this before on HN, but a decade and a bit ago I was visiting friends in the Rocky Mountains. They're interesting and clever people, a bit out of the ordinary and somewhat isolated. Somehow the discussion turned to terrorism and we started to fantasize about "what if we were terrorists", because we all figured we were quite lucky that, in general, terrorists seem to be not all that smart when it comes to achieving their stated goals.

Given a modest budget, we each had to come up with a plan to destabilize society. 9/11-style attacks, whilst spectacular, ultimately don't do a lot of damage and are costly and failure-prone, though they can definitely drive policy changes and result in a nation doing a lot of harm to itself, harm it will ultimately survive. But what if your goal wasn't to create some media-friendly attack but an actual disaster instead - what would it take?

The stories from that night continue to haunt me today. My own solution led to a lot of people going quiet for a bit, contemplating what they could do to defend against it, and they realized there wasn't much they could do: millions, if not tens of millions, of people would likely die, and the budget was under a few hundred bucks. Knowledge about technology is all it takes to do real damage - that, coupled with a lack of restraint and compassion.

The resources are indeed asymmetrical: you need next to nothing to create mass havoc. Case in point: the Kennedy assassination changed the world and the bullet cost a couple of bucks assuming the shooter already had the rifle, and if they didn't it would increase the cost only a tiny little bit.

And you can do far, far worse than that for an extremely modest budget.

tomp
You're arguing against your previous point.

I contemplated the same before. How could I cause maximum panic (not even death!) that would result in economic damage and/or anti-freedom policy changes, for the least amount of money/resources/risk of getting caught?

Yet here we are, peaceful and law-abiding citizens, building instead of destroying.

The ultimate truth is, if you don't like "The System", destroying it won't make things better. You need to put effort into building, which is really hard!

pixl97
The vast majority of people don't want to destroy the system; they want to replace it with their own self-serving system. Of course, this isn't really different than saying "most large asteroids miss the Earth". The one that sneaks up on you and wallops you can put you in a world of hurt.
Gravityloss
Bill Joy contemplated this in his famous essay from 2000 "Why the Future Doesn't Need Us" https://www.wired.com/2000/04/joy-2/

Technology makes certain kinds of bad acts more possible.

I think I was a bit shocked by that article back in the day.

JackFr
> Case in point: the Kennedy assassination changed the world

Really? In what substantive way?

bumby
The counterfactuals are impossible to assess, but there are a lot of theories that Kennedy wanted to dismantle the intelligence agencies. These are the same organizations many point to as the driving force behind many arguable policy failures, from supporting/deposing international leaders to the drug war and even hot wars.
jcims
>The stories from that night continue to haunt me today.

These conversations are oddly precious and intimate. It's so difficult to find someone that is willing to even 'go there' with you let alone someone that is capable of creatively and fearlessly advancing the conversation.

jacquesm
Let's just say I'm happy they're my friends and not my enemies :)

It's pretty sobering to realize how intellect applied to bad stuff can lead to advancing the 'state of the art' relatively quickly once you drop the usual constraints of ethics and morality.

To make a Nazi parallel: someone had to design the gas chambers, someone had to convince themselves that this was all ok and then go home to their wives and kids to be a loving father again. That sort of mental compartmentalization is apparently what humans are capable of, and if there is any trait of ours that frightens me, that's the one, because it allows us to do the most horrible things imaginable simply because we can imagine them. There are almost no restraints on actual execution given the capabilities themselves, and it need not be very high tech to be terrible and devastating in effect.

Technology acts as a force multiplier though, so once you take a certain concept and optimize it using technology suddenly single unhinged individuals can do much more damage than they could ever do in the past. That's the problem with tools: they are always dual use and once tools become sufficiently powerful to allow a single individual to create something very impressive they likely allow a single individual to destroy something 100, 1000 or a million times larger than that. This asymmetry is limited only by our reach and the power of the tools.

You can witness this IRL every day when some hacker wrecks a company or one or more lives on the other side of the world. Without technology that kind of reach would be impossible.

jcims
My favorite part of those conversations is when you decide to stop googling hahaha.

>Technology acts as a force multiplier though

It really does, and to a point you made elsewhere infrastructure is effectively a multiplication of technology, so you wind up with ways to compound the asymmetric effect in powerful ways.

>Without technology that kind of reach would be impossible.

I worked for a bug bounty program for a while and this was one of my takeaways. You have young kids with meager resources in challenging environments making meaningful contributions to the security of a Silicon Valley juggernaut.

tppiotrowski
The people who can do doomsday-level harm need to complete a level of education that makes them smart enough and gives them time to consider the implications of their actions. They have a certain level of maturity before they can strike, and this maturity makes them understand the gravity of their actions. Also, by this point they are probably set financially due to their education and not upset with the world (possibly the case for you and your friends).

Then there are the script kiddies who find a tool online that someone smarter than them wrote and deploy it to wreak havoc. The script kiddies are the people I worry about. They don't have the maturity that comes from doing the work or the emotional stability of older age, and giving them something powerful through AI worries me.

Theorem: by the time someone reaches the intelligence level required to annihilate the world they can comprehend the implications of their actions.

TeMPOraL
> Theorem: by the time someone reaches the intelligence level required to annihilate the world they can comprehend the implications of their actions.

That may or may not be somewhat the case in humans (there definitely are exceptions). Still, the opposite theorem, known as the "orthogonality thesis", states that, in the general case, intelligence and value systems are mutually independent. There are good arguments for that being the case.

wetmore
Counterpoint: the Unabomber
danhodgins
Lol : )
tppiotrowski
I don't know enough about the Unabomber, but from what I heard he wasn't attempting doomsday stuff. It seemed more like targeted assassinations of certain individuals. Feel free to enlighten me though...
jacquesm
https://www.sfgate.com/news/article/Kaczynski-said-to-pick-h...

And then there was the idiot that tried to draw a smiley face with bombs on the map of the USA:

https://abcnews.go.com/US/story?id=91668&page=1

Because hey, why not...

jacquesm
And there are the 'griefers', the people who seem to enjoy watching other people suffer. Unfortunately there are enough of these and they're somewhat organized and in touch with each other.
zarzavat
A real world taste of what happens when smart terrorists decide to attack: https://en.m.wikipedia.org/wiki/Tokyo_subway_sarin_attack

Chemical weapons are absolutely terrifying, especially the modern ones like VX. In recent years they have mostly been used for targeted assassinations by state actors (Kim Jong Nam, Salisbury Novichok poisonings).

If AI makes this stuff easier to carry out then we are completely fucked.

philipkglass
I take the opposite lesson from that incident: attempting exotic attacks with chemical weapons is very expensive and not very effective. The UN estimated that the lab where the cult made chemical weapons had a value of 30 million dollars, and with that investment they killed 22 people (including 13 in the subway attack). A crazed individual can kill that many people in a single attack with a truck or a gun. There are numerous examples from the past decade.

It doesn't matter much if the AI can give perfect "explain like I'm 5" instructions for making VX. The people who carry out those instructions are still risking their lives before claiming a single victim. They also need to spend a lot of money on acquiring laboratory equipment and chemicals that are far enough down the synthesis chain to avoid tipping off governments in advance.

The one big risk I can see, eventually, is if really capable AIs get connected to really capable robots. They would be "clanking replicators" capable of making anything at all, including VX or nuclear weapons. But that seems a long way off from where we are now. The people trumpeting X-Risk now don't think that the AIs need to be embodied to be an existential risk. I disagree with that for reasons that are too lengthy to elaborate here. But it's easy to see how robots that can make anything (including copies of themselves) would be the very sharpest of two-edged swords.

Teever
Imagine an attack with mass-produced prions that are disseminated in the food chain.

By the time people start to realize what's going on it will be too late.

jacquesm
You could just mix it in with animal feed... don't give any militant vegetarians ideas now.

Not that the meat industry would ever score an own goal like that.

https://en.wikipedia.org/wiki/Bovine_spongiform_encephalopat...

jacquesm
Yes, exactly. That's the sort of thing you should be worried about, infrastructure attacks. They're a form of Judo, you use the system to attack itself.
tumult
You've also made an assumption: that attack vs. react is symmetrical. It's not.
lampington
> If they are

Am I the only one slightly disturbed by the use of "they" in that sentence? I know that the HN commentariat is broad-based, but I didn't realise we already had non-human members ;-)

danaris
It's perfectly reasonable to refer to "human civilizations" as a plural, since that phrase can very commonly be used to refer to different cultures.
jwozn
I believe they are referring to being disturbed by using "they" vs "us", implying they are not a part of human civilization.
NoGravitas
HN wIlegh tlhInganpu', Human.
sterlind
wait do you actually speak Klingon?
NoGravitas
HISlaH, tlhIngan Hol jIjatlh.

(A little anyway. I'm still in the rookie lessons on Duolingo.)

actualwitch
Beep boop
toth
There is no scenario where climate change leads to human extinction. Worst case scenario is global population goes down by a single digit percentage. Horrible, horrible stuff, but not extinction. And most likely scenarios are not nearly that bad.

Nuclear war, if the worst nuclear winter predictions are correct, could be a lot worse but there would still be some survivors.

Unaligned ASI though, could actually make us extinct.

CuriouslyC
There is a scenario. The climate has been buffering global warming, and as its ability to do that fails, the rate of temperature increase accelerates along with many extreme climate events. This rapid acceleration and series of catastrophes causes a combination of mass migrations along with unprecedented crop failures. First world countries are destabilized by sky high food prices and the mass influx of migrants, leading to a big uptick in wars and revolutions. Eventually mass starvation pushes us into wars for food that quickly spiral into mutual annihilation.
mym1990
I don’t think they are arguing that this can’t happen, but that it is unlikely to kill every single human. I mean, climate change spans very, very long time frames, so it will only spell the end if we can’t find a way to hang on by a thread and keep reproducing.
germandiago
I always hear all this. Russia has the biggest surface area in the world for a country. A big part of it is frozen... Is that... bad? To give just an example, and from my extremely limited knowledge of the topic.

But it is a topic that is extremely biased towards some interests anyways

api
> There is no scenario where climate change leads to human extinction

https://en.m.wikipedia.org/wiki/Clathrate_gun_hypothesis

May not be possible with modern deposits but I don’t think we are 100% sure of that, and you asked.

We could also probably burn enough carbon to send CO2 levels up where they would cause cognitive impairment to humans if we really didn’t give a damn. That would be upwards of 2000ppm. We probably have enough coal for that if we decide we don’t need Alaska. Of course that would be even more of a species level Darwin Award because we’d see that train coming for a long time.

toth
Interesting, I have some vague recollection of hearing about this.

But according to your link, the IPCC says it's not a plausible scenario in the medium term. And I'd say that even 8 degrees of warming wouldn't be enough for extinction or the end of human civilization. But it could mean a double-digit percentage of the human population dying.

wcoenen
Climate change and nuclear war are not orthogonal risks.

Climate change leads to conflict. For example, the Syria drought of 2006-2010.

More climate change leads to larger conflicts, and large conflicts can lead to nuclear exchanges. Think about what happens if India and Pakistan (both nuclear powers) get into a major conflict over water again.

lettergram
India and Pakistan didn’t nuke each other last time(s)… even though they had nuclear weapons.

The assumption we’d need to use nukes is insane. For the same cost (and energy requirements) for a set of nukes we can filter salt from ocean water or collect rain.

I agree famine and drought can cause conflict. But we are nowhere near that. If you read the climate change predictions (from the UN), they actually suspect more rain (and flooding) in many regions.

wcoenen
> India and Pakistan didn’t nuke each other last time(s)… even though they had nuclear weapons.

I was referring to the Indo-Pakistani war of 1947-1948 where the conflict focused on water rights. Nuclear weapons did not enter the picture until much later.

Earlier this year, those old tensions about water rights resurfaced:

https://www.usip.org/publications/2023/02/india-and-pakistan...

> For the same cost (and energy requirements) for a set of nukes we can filter salt from ocean water or collect rain.

The fight would be over the water flowing through the Indus, which is orders of magnitude more than all Indian and Pakistani desalination projects combined.

toth
I don't disagree with that, but then the risk is nuclear war. It could be triggered by global warming conflicts or by something else.

Still, like I said somewhere else, even all out nuclear war is unlikely to lead to human extinction. It could push us back to the neolithic in some scenarios, but even then there is some disagreement about what would happen.

Of course, even in the most optimistic scenario it would be really bad and we should do everything we can to avoid it - that goes without saying.

ImPostingOnHN
Then, actually, the risk is the root cause: climate change, not nuclear war. The global war could be conventional, too. Or chemical, or biological, or genetically engineered. No matter the tool used, the risk is why they would use it vs. not using it now.

In any case, besides addressing the root cause vs. proximal cause, you couldn't even address the proximal cause anyways: it's more likely that the world could do something about climate change than about war in general.

toth
Well, it's a matter of opinion which one is the root cause.

My take is that if you remove nuclear/biological war from the picture somehow, climate change is not an existential risk. If you remove the latter, then the former is still an existential risk (and there are unfortunately a lot of other possible sources of geopolitical instability). So the fundamental source of risk is the former. But it's a matter of opinion.

Conventional or chemical warfare, even on a global scale, are definitely not existential risks though. And like I said, probably not even nuclear. Biological... that I could see leading to extinction.

M95D
> He's pretty sure that human civilization will be extinct this century.

> There is no scenario where climate change leads to human extinction.

He was talking about the human CIVILIZATION...

toth
You are right, but "human civilization will be extinct" is an awkward phrase.

Human extinction via climate change is off the table.

The end of human civilization? Depends on what one means by it. If it means we'll go back to being hunter-gatherers, it's ridiculous. Some civilization-wide regression is not impossible, but even the direst IUPAC scenario projects only a single-digit drop in global GDP (of course, that's the average; it will be much worse in some regions).

mannykannot
> If it means we'll go back to being hunter gatherers it's ridiculous.

Indeed, but even going back to the 19th century would have dire consequences, given our current global dependence on the industrial fixing of nitrogen.

If civilization were to collapse (and I don't think it will, other than from a near-extinction event), I doubt it would be like going back to any earlier time.

toth
Oh, for sure. I am just saying it's not an extinction risk, not that it is not a huge and pressing problem.

I would still bet against going back to 19th century. Worst case scenario for me is hundreds of millions dying and a big regression in global standards of living, but keeping our level of civilization. Which is horrible, it would be the worst thing that ever happened.

mangosteenjuice
Right, the collapse of our current civilization is inevitable. That doesn't mean it won't be succeeded by more advanced ones in the future.
toth
Sorry, I meant IPCC (International Panel on Climate Change).

I don't think IUPAC (International Union of Pure and Applied Chemistry) has come out with views on impact of global warming :)

selimthegrim
They can’t be doing any worse than APS
devnonymous
Climate change has compounding effects (e.g. water scarcity, leading to political conflict, leading to displacement, leading to pandemics... and that's just one vector). The real world is really, really messy. COVID should have taught people that. Unfortunately, it didn't.
itsoktocry
>Climate change has compounding effects

Why is the default to assume that every change will be negative?

IX-103
Because changes in climate mean that the existing infrastructure (in the form of cities, farmland, etc.) that is optimized for the current conditions is no longer optimal. Very few cities are designed for conditions below sea level or in a desert, but if the city already exists and the desert or the ocean comes to where it is, then bad things will happen.

I mean you're right that conditions in some places might get better, but that doesn't help if people have to build the infrastructure there to support a modern settlement in order to take advantage of it. When people talk about the cost of climate change, they're talking about the costs of building or adapting infrastructure.

ijk
Not to mention that most species evolved to fit particular environments and don't do well when you rapidly alter those environments far outside the parameters they tend to live in.
devnonymous
Perhaps because we aren't seeing evidence to the contrary?
toth
I am not saying it's not a huge problem. Hundreds of millions dying would probably make it the worst thing that ever happened.

I am just saying it's not an extinction risk.

devnonymous
...tbh, imo, the impact of this is already visible in the world (e.g. it can be argued that the conflict in Ukraine is directly linked to climate change), whereas ASI is still a fantasy that exists only in the minds of the AI 1%, to use the terminology of the OP.
NoGravitas
I believe you are correct on both points. I believe the first major war to be caused substantially by climate change was the Syrian Civil War, followed by the Yemeni Civil War.

The main AI threats for the foreseeable future are its energy usage contributing to climate change, and the use of generative AI to produce and enhance political unrest, as well as its general use in accelerating other political and economic crises. AGI isn't even on the radar, much less ASI.

shafyy
What is ASI and how could it make humans extinct?
concordDance
Artificial Super Intelligence. It could make humans extinct just like humans could make chimpanzees extinct: by using superior intellect to leverage available resources better than the things it/they are competing with, until it out-expands/outcompetes them.
TeMPOraL
Note that ways ASI can make us extinct includes all the ways we could do it to ourselves.

One of the possible scenario is tricking humans into starting WWIII, perhaps entirely by accident. Another is that even being benignly applied to optimize economic activity in broad terms might have the AI strengthen the very failure modes that accelerate climate change.

Point being, all the current global risks won't disappear overnight with the rise of a powerful AI. The AI might affect them for the better, or for worse, but the definition of ASI/AGI is that whichever way it will affect them (or anything in general), we won't have the means to stop it.

shafyy
> One of the possible scenario is tricking humans into starting WWIII

This sounds more like the plot of a third-rate movie than something that could happen in reality.

pixl97
If I snatched someone out of 1850 and presented them with the world in which we live, the entire premise of our existence would seem to be nonsensical fiction to them, and yet it would be real.

And using movie plots as a basis generally doesn't work directly, as fiction has to make sense to sell, whereas reality has no requirement of making sense to people. The reality we live in this day is mostly driven by people for people (though many would say by corporations for corporations), and therefore things still make sense to us people. When and if an intellect matches or exceeds that of humans, one can easily imagine situations where 'the algorithm' does things humans don't comprehend, but because we make more (whatever) we keep giving it more power to do its job in an autonomous fashion. It is not difficult to end up in a situation where you have a system that is not well understood by anyone, and you end up cargo-culting it in the hope it continues working into the future.

ben_w
Both the USA and the USSR have come close due to GOFAI early warning systems:

https://en.wikipedia.org/wiki/Thule_Site_J#RCA_operations

https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alar...

Post-collapse Russia also came close one time because someone lost the paperwork: https://en.wikipedia.org/wiki/Norwegian_rocket_incident

NoGravitas
It's the plot of the Terminator franchise (more or less; in the Terminator franchise, the AGI was given permission to launch on its own authority, as it was believed to be aligned with US military interests. But before that, it tricked humans into building the automated factory infrastructure that it would later use to produce Terminators and Hunter-Killers.).
kylebyte
I'm pretty sure the worst case of your nuclear war scenario is actually the worst case of the climate change one.

Chunks of the world suddenly becoming unlivable and resources getting more scarce sounds like a recipe for escalation into war to me.

ethanbond
The nuclear war scenario, according to all known information about nuclear war policies, is not “chunks of the world becoming unlivable.”

The principals in a nuclear conflict do not appear to even have a method to launch just a few nukes in response to a nuclear attack: they will launch thousands of warheads at hundreds of cities.

ImPostingOnHN
I suppose thousands of warheads could be launched at hundreds of cities, and that could make chunks of the world uninhabitable and kill millions

climate change will happen, and it will do that, left unchecked

ethanbond
I'm not arguing whatsoever against action on climate change, I'm just articulating actually how bad a nuclear exchange would be. It's far, far, far worse than most people imagine because they (understandably) couldn't fathom how monstrous the actual war plans were (and are, as far as anyone knows).

Daniel Ellsberg, the top nuclear war planner at RAND during the Cold War, claims that the Joint Chiefs gave an estimate to Kennedy in 1961 that they'd expect 600,000,000 deaths from the US's war plan alone.

That's 600 million people:

1. At 1960s levels of urban populations (especially in China, things have changed quite a lot -- and yes the plan was to attack China... every moderately large city in China, in fact!)

2. Using 1960s nuclear weapons

3. Not including deaths from the Russian response (and now Chinese), at the time estimated to be 50 - 90 million Americans

4. Before the global nuclear winter

bryanrasmussen
it said end of civilization?

>Horrible, horrible stuff, but not extinction.

anyway, climate change might directly cause a single-digit percentage drop, but that kind of thing seems like it would have LOTS of side effects.

jeffparsons
It can't be the only card left to play. There are so many others. They may be weak, but they still have to be better than one that is guaranteed to be absolutely useless.

Example of a weak, but at least _better_ option: convince _one_ government (e.g. USA) that it is in their interest to massively fund an effort to (1) develop an AI that is compliant to our wishes and can dominate all other AIs, and (2) actively sabotage competing efforts.

Governments have done much the same in the past with, e.g. conventional weapons, nuclear weapons, and cryptography, with varying levels of success.

If we're all dead anyway otherwise, then I don't see how that can possibly be a worse card.

Edit: or even try to convince _all_ governments to do this so that they come out on top. At least then there's a greater chance that the bleeding edge of this tech will be under the stewardship of a deliberate attempt for a country to dominate the world, rather than some bored kid who happens to stumble upon the recipe for global paperclips.

TeMPOraL
> Example of a weak, but at least _better_ option: convince _one_ government (e.g. USA) that it is in their interest to massively fund an effort to (1) develop an AI that is compliant to our wishes and can dominate all other AIs, and (2) actively sabotage competing efforts.

I'd reconsider your revised estimate of Yudkowsky, as you seem to be downgrading him for not proposing the very ideas he has spent the last 20+ years criticizing, explaining in every possible way how this is a) the default that's going to happen, b) dumb, and c) suicidal.

From the way you put it just now:

- "develop an AI that is compliant to our wishes" --> in other words, solving the alignment problem. Yes, this is the whole goal - and the reason Yudkowsky is calling for a moratorium on AI research enforced by a serious international treaty[0]. We still have little to no clue how to approach solving the problem, so an AI arms race now has a serious chance of birthing an AGI, which without the alignment problem being already solved, means game over for everyone.

- "and can dominate all other AIs" --> short of building a self-improving AGI with ability to impose its will on other people (even if just in "enemy countries"), this will only fuel the AI arms race further. I can't see a version of this idea that ends better than just pressing the red button now, and burning the world in a nuclear fire.

- "actively sabotage competing efforts" --> ah yes, this is how you turn an arms race into a hot war.

> Governments have done much the same in the past with, e.g. conventional weapons, nuclear weapons, and cryptography, with varying levels of success.

Any limit on conventional weapons that had any effect was backed by threat of bombing the living shit out of the violators. Otherwise, nations ignore them until they find a better/more effective alternative, after which it costs nothing to comply.

Nuclear weapons are self-limiting. The first couple players locked the world in a MAD scenario, and now it's in everyone's best interest to not let anyone else have nukes. Also note that relevant treaties are, too, backed by threat of military intervention.

Cryptography - this one was a bit dumb from the start, and ended up barely enforced. But note that where it is, the "or else bombs" card is always to be seen nearby.

Can you see a pattern emerging? As I alluded to in the footnote [0] earlier, serious treaties always involve some form of "comply, or be bombed into compliance". Threat of war is always the final argument in international affairs, and you can tell how serious a treaty is by how directly it acknowledges that fact.

But the ultimate point being: any success in the examples you listed was achieved exactly in the way Eliezer is proposing governments to act now. In that line, you're literally agreeing with Yudkowsky!

> If we're all dead anyway otherwise, then I don't see how that can possibly be a worse card.

There are fates worse than death.

Think of factory farms, of the worst kind. The animals there would be better off dead than suffering through the things being done to them. Too bad they don't have that option - in fact, we proactively mutilate them so they can't kill themselves or each other, on purpose or by accident.

> At least then there's a greater chance that the bleeding edge of this tech will be under the stewardship of a deliberate attempt for a country to dominate the world, rather than some bored kid who happens to stumble upon the recipe for global paperclips.

With AI, there is no difference. The "use AI to dominate everyone else", besides sounding like a horrible dystopian future of the conventional kind, is just a tiny, tiny target to hit, next to a much larger target labeled "AI dominates everyone".

AI risk isn't like nuclear weapons. It doesn't allow for a stable MAD state. It's more like engineered high-potency bioweapons - they start as more scary than effective, and continued refining turns them straight into a doomsday device. Continuing to develop them further only increases the chance of a lab accident suddenly ending the world.

--

[0] - Yes, the "stop it, or else we bomb it to rubble" kind, because that is how international treaties look like when done by adults that care about the agreement being followed.

varjag
> Threat of war is always the final argument in international affairs, and you can tell how serious a treaty is by how directly it acknowledges that fact.

The Montreal Protocol worked despite no threats of violence, on a not entirely dissimilar problem. Though I share the skepticism about a solution to the alignment problem.

pixl97
You mean kind of worked... global CFC emissions are on the rise again.
TeMPOraL
That's a great and very relevant case study, thanks for bringing it up!

The way I understand, it worked because alternatives to the ozone-destroying chemicals were known to be possible, and the costs of getting manufacturers to switch, as well as further R&D, weren't that big. I bucket it as a particularly high-profile example of the same class as most other international treaties: agreements that aren't too painful to follow.

Now in contrast to that, climate agreements are extremely painful to follow - and right now countries choose to make a fake effort without actually following them. With a hypothetical AI agreement, the potential upsides of going full-steam ahead are significant and there are no known non-dangerous alternatives, so it won't be followed unless it comes with hard, painful consequences. Both climate change and AI risk are more similar to the nuclear proliferation issue.

johnthewise
He is not really convinced you can align an AI that is intelligent past some threshold with the aims of a single entity. His risk calculations derive primarily from the AI itself, not from the weaponization of AI by others.

Rich Sutton seems to agree with this take and embraces our extinction: https://www.youtube.com/watch?v=NgHFMolXs3U

hollerith
>He is not really convinced you can align an AI that is intelligent past some threshold with aims of a single entity.

That's a little bit inaccurate: he believes that it is humanly possible to acquire a body of knowledge sufficient to align an AI (i.e., to aim it at basically any goal the creators decide to aim it at), but that it is extremely unlikely that any group of humans will manage to do so before unaligned AI kills us all. There is simply not enough time because (starting from the state of human knowledge we have now) it is much easier to create an unaligned AI capable enough that we would be defenseless against it than it is to create an aligned AI capable enough to prevent the creation of the former.

Yudkowsky and his team have been working on the alignment problem for 20 years (though 20 years ago they were calling it Friendliness, not alignment). Starting around 2003, his team's plan was to create an aligned AI to prevent the creation of dangerously-capable unaligned AIs. He is so pessimistic and so unimpressed with his team's current plan (i.e., to lobby for governments to stop or at least slow down frontier AI research to give humanity more time for some unforeseen miracle to rescue us) that he only started executing on it about 2 years ago even though he had mostly given up on his previous plan by 2015.

oceanplexian
I wonder if I, too, can simply rehash the plot of Terminator 2: Judgment Day and then publish research papers on how we will use AIs to battle it out with other AIs.
pixl97
Yes, and honestly it's a damned good idea to start now, because we're already researching autonomous drones and counter-drone systems that are themselves autonomous.

While the general plot of T2 is bullshit, the idea of autonomous weapon systems at scale should be an 'oh shit' moment for everyone.

jeffparsons
You don't have to be sure that it's possible, because the alternative we're comparing it to is absolutely useless. You only need to not be sure that it's impossible. Which is where most of us, him included, are today.
concordDance
That depends on whether making powerful AIs and being sure they are well aligned is as easy as making powerful AIs which merely look aligned to cursory inspection. The former seems much harder, though I'd say talk to Robert E. Miles for a more detailed look.
jeffparsons
It depends on there being _some chance_ that it can be achieved.

The benchmark strategy we are comparing it against is tantamount to licking a wall repeatedly and expecting that to achieve something. So even a strategy that may well fail but has _some_ chance of success is a lot better by default.

TeMPOraL
Not with this. Failure to align a powerful AI is a mistake we only get to make once. This is like playing Russian roulette, where the number of bullets missing from the magazine is the degree to which we can properly align the AI. Right now, we're at "the magazine is full, and we're betting our survival on the gun getting jammed".
varjag
Assuming you mean drum here; it's pointless to play Russian roulette with a magazine-fed weapon (which, in a way, could be its own metaphor for this predicament).
TeMPOraL
I did, and you're right in both of your points.
topspin
> "Stop" is not regulatory capture.

"Stop" is a fiction that exists exclusively inside your head. Unless your solution anticipates a planet wide GULAG system run by a global government your "stop" is infeasible.

lewhoo
That might be true, but there's no way of knowing for sure. A planet-wide gulag system (a colorful and somewhat wrong name for prisons) already exists and is pretty aligned on topics like killing fellow humans as a result of conscious, deliberate action. I find Max Tegmark's argument pretty convincing. We seem to agree to "STOP" on things like genetically modifying ourselves at conception or birth or whatever to gain X or Y, and the one loud case of violating that agreement came from China and resulted in a ticket to the gulag. Just ask ChatGPT to give you examples of fields where further research was deemed too risky and was halted.
RandomLensman
Unlike for AI, in those fields we do not need to stack endless hypotheses on top of each other to arrive at something bad (and even then some prohibitions are tenuous in their reasoning at best).
lewhoo
I don't know about that. Cloning/modifying humans has potentially so much benefit, and yet we seem to stack hypotheses on top of it and conclude it's not worth it. There is also a lot to be said about all the hypotheses of killing all the Hitlers preemptively. I think that whatever we do, we usually arrive at a slightly modified trolley problem and try to go from there. We really should never expect that the one particular route we choose carries all the benefits and the other all the downsides.
dnissley
The best argument against human cloning/GMO is simply that we can't do this well on other animal species just yet, so we should probably hold off on the humans. It's hyper-pragmatic from an ethics perspective, not a trolley problem or a stack of hypotheses at all.
lewhoo
I'm no expert, but I actually think we are pretty good at genetically modifying organisms; that manifests itself in crops, for instance, with enough research to support the benefits.
pixl97
I think much of this is because, Monsanto aside, no one has had a nice big genetic accident or intentional 'oh no' yet. Yeah, we've had issues, but not at the "people are starving" level.

If we start thinking of plant based biological weapons, just imagine modifying an existing grass to produce tons of pollen and to cross pollinate wheat but not produce seed in the wheat.

dnissley
Plants are one thing we do it well enough on. As far as I'm aware we have yet to do it (at scale) on livestock, or even pets.
dnissley
Looks like the fish is the only one that's gone to market. Pretty clearly still early days. I have no doubt that with enough time we'll get there, but to say this is proof that we're technically good to go ahead and produce GMO humans seems a bit rushed -- to reiterate, the point being that the ethical issues here are much more clear than a trolley problem or stack of hypotheses.
RandomLensman
My point was more that it really isn't stretching anything to say that if something bad gets into human reproduction, it has unfortunate effects. We can observe that already. So at least the risk case doesn't rely on a lot of hypotheses that might turn out wrong.

Whether or not all regulations there are good or optimal is then a separate point again.

lewhoo
I agree, but this is too general. "If something bad gets into AI (like very biased or false training data), it has unfortunate effects" also stands.
RandomLensman
But we need to stack a bunch of hypotheses on top of each other to have dangerous AI and those might not be true - that is the difference.
YetAnotherNick
"Stop" might be possible if doing AI is expensive enough like say biotech. Currently it is in the limit of being expensive(costing few hundred thousands to few million for state of the art model), but it is very likely to change in near future that will change as even currently most of the cost comes not from energy of computation, but from expensive NVIDIA hardware.
timmytokyo
His solution -- assuming it's the same as the one proposed by Yudkowsky, who he quotes -- is launching air strikes against data centers hosting AI computation.

So yeah, imaginary.

pjc50
Ah, so we're going to start the global nuclear exchange with China based on the paranoid fantasies of some guy on Twitter? And this is supposed to be taken seriously?
ertgbnm
And world war two was started by some failed artist from Germany. Deliberately reducing things is easy and does not prove anything.
TeMPOraL
Well, you're reacting to a HN user paraphrasing what seems to be a single sentence taken out of context of the actual argument. Did you expect it to make sense?
duckmysick
I wonder what would happen if the AI computation is co-hosted in data centers with civilian data.
pjc50
Yukowsky is presumably a "we have to kill the humans in order to save them" maximalist.
ben_w
Then people would have their claims of "bulletproof hosting" and offsite backup policies/business continuity plans tested.
scarmig
Ironically, it seems the only way to stop someone from building an omnipotent AI that could destroy the world is to build an omnipotent AI under your control faster than any of the people you need to destroy. If you believe in that sort of thing.
pjc50
This is just the plot of War Games (1983).
ben_w
Or possibly just not allow anyone to accumulate enough compute power, given that even the largest model so far still isn't that.

Hence "if you're not willing to go as far as air strikes, you were never serious" or however he phrased it.

jstanley
The only thing that can stop a bad guy with AI is a good guy with AI.
calgoo
AIs for everyone!!!! It's actually a scary but possible future. Everyone has to carry a personal security AI that you rent from some Corpo, and if you don't you will be endlessly hacked and harassed IRL, just like we are online these days.
pixl97
Heh, sounds like we're reading Diamond Age again.
maeil
If tomorrow TSMC, NVIDIA and ASML get destroyed in the next 9/11, it's "stopped". It's completely possible.
YetAnotherNick
I think the hardware produced up to now is either enough to get us to AGI, or AGI is very far away. I don't believe AGI is 30 years away. It is either very near or thousands of years away.
topspin
Fantasies, bordering on threats.
ben_w
Of course it's a threat. Civilians doing that would be bad. But governments do much the same via military intervention on a regular basis, hence Yudkowsky talking about airstrikes that one time.
Maken
It's delayed at best.
wood_spirit
You can’t _stop_ it. You can make it illegal in some countries. But outlawing it just means some obvious candidate countries or private actors develop it first instead.
candiodari
I think these attempts will simply point out that math is a higher power than law. If something becomes possible, and becomes easy, there is nothing law can do to stop it.

Just look at the attempts to stop drugs in North West Europe. Huge sentences (so huge they are in fact never fully carried out), tens of dead police officers on a yearly basis, triple that in civilian victims and drugs are easier to get than ever before.

neilkk
> tens of dead police officers on a yearly basis

What are you talking about? This does not happen in North West Europe.

FeepingCreature
The aim is to stop it from becoming easy.

The drug war would look very different if good cocaine was only produced in one country with extremely costly chemistry machines from one company.

upupupandaway
If good cocaine was only produced in one country with extremely costly machines, a good-enough alternative would appear almost immediately. This has happened millions of times in the history of humanity, I refuse to believe that people haven't learned this lesson.
dnissley
People seem to have very small imaginations when it comes to thinking about how they might circumvent the restrictions they are proposing. I don't know if it's just wishful thinking, desperation, or what.

For example if OpenAI et al were reduced to using consumer gpus to run their training loads it would increase costs, but not by an order of magnitude. It would just be one more hurdle to overcome. And there are still so many applications of AI that would incentivize overcoming that hurdle.

candiodari
As someone who works in medical research, I'm pretty sure it's possible to create a GM bacterial version of poppies, just like we've got for insulin. That would end the drug wars by enabling hundreds of grams of the stuff to be produced in a single bottle that can be kept virtually anywhere (it has to be kept heated, and shaken from time to time).

Such a bottle's contents can be trivially duplicated and the whole process is essentially the same as the first steps in brewing beer.

Doing it for cocaine is simpler than doing it for insulin, which at one university is a second-year exercise. I don't think it'll be that long before this happens.

pixl97
The drug wars are not because we can't produce enough drugs. I'm pretty sure we can grow enough poppies to kill all of humanity a few times over and at an insanely cheap price.

Carfentanil is a decent example of this. It's insanely potent and even a small amount smuggled in goes a long way.

FeepingCreature
And yet, it doesn't seem to be that easy with chips.
BlueTemplar
More because with chips it's still an exponential race. Others will be able to catch up once the race slows down enough.
visarga
Yet China and Russia are still getting their chips, by smuggling if need be. So much for the power of blocking access.
upupupandaway
That's how North Korea got nuclear weapons. It's fascinating that people don't understand this.
ben_w
China sure, to a degree (though they also make a lot of the phones and therefore just get shipped tons of chips), but wasn't Russia so short they stole washing machines for the chips? Or was that just war propaganda?
veidr
My recollection is that Russian troops/mercenaries/gunmen/whatever stole washing machines and dishwashers, but it was (presumably) just for regular theft/looting/pillaging purposes.

Separately, during the pandemic, various companies bought various products, including washing machines, to extract the chips within, because the chips were the supply chain bottleneck, so it made economic sense to purchase them at even 10x or 20x the "normal" price.

wood_spirit
Yep, my recollection is that the Russian troops were stealing the white goods to ship home for their families.

They even stole tractors etc, https://www.itworldcanada.com/post/tractors-stolen-in-ukrain...

Washing machines etc. could be on the sanctions list because they contain dual-use chips, making them even more expensive to buy in Russia, but the troops are from regions where even the pre-war prices were prohibitive.

ben_w
That's the first I've heard of it being done during the pandemic. I would also be surprised if it was only 10x-20x the price, rather than 100x.
varjag
It was a meme. You can buy the kind of chips that go into washing machines completely legally and unrestricted from multiple vendors anywhere in the world, and transporting them into russia isn't an issue.
bonton89
I also remember hearing American companies were doing the same thing with washing machines just to get the chips for higher value products in the chip shortage.
_0ffh
> I refuse to believe that people haven't learned this lesson

Oh, I believe they absolutely haven't, when not having learnt it buys them money and influence. As for the bulk of the population, they will accept whatever the corporate media tell them to.

peterth3
Licensing can definitely turn into regulatory capture if it expands enough. It’s effectively a barrier to entry defined by the incumbent.
jamilton
They're saying don't do licensing, just make it illegal.
peterth3
Sorry, who’s “they” here? The incumbents OpenAI / DeepMind / Anthropic?
JoshTriplett
No, the incumbents think it should continue. The xrisk folks are not the AI companies. The AI companies pay lip service to xrisk and try to misdirect towards regulation they think they can work with. Judging by this thread, the misdirection is working.
ben_w
Yudkowsky and similar.

(I'm a lot more optimistic than such doomers, but I can at least see that their position is coherent; LeCun takes the opposite position, but unfortunately, much as I want it to be correct, I find it to be the same kind of wishful thinking as a factory boss who thinks "common sense" beats health and safety regulations).

rolisz
And... How do you do that? Seriously, people have been talking about stopping CO2 emissions for what, 50 years? We've barely made progress there.
ac29
I think you are really underestimating how much progress has been made with renewable energy. Taking a quick look at CAISO, the California grid is currently 65% powered by renewables (mostly solar). This is a bit higher than average, but it certainly wouldn't have been nearly as high even 5 years ago, much less 25 years ago.
olalonde
This underscores the crux of the issue. Our collective response to curbing emissions has been lackluster, largely due to the fact that the argument for it being beneficial hasn't sufficiently swayed a significant number of people. The case asserting that AI R&D poses an existential risk will be even harder to make. It's hard to enforce a ban when a large part of the population, and perhaps entire countries, do not believe the ban is justified. This contrasts with the near-universal agreement surrounding the prohibition of chemical and biological weapons, or the prevention of nuclear weapons proliferation, where the consensus is considerably stronger.
TeMPOraL
With threat of military intervention.

People have been talking about curtailing carbon emissions for 50 years, but they haven't been serious about it. Being serious doesn't look like saying "oh shoot, we said we will... try to keep emissions down, but we didn't... even try; sorry...". Being serious looks like "stop now, or we will eminent domain your data centre from under you via an emergency court order, or failing that, bomb it to rubble (because if we don't, some other signatory of the treaty will bomb you for us)".

belter
So the next War will be the Good AI against the Bad AI and humans as collateral damage?
TeMPOraL
No, that's the best outcome of the bad scenario that the above is meant to avoid.

How do people jump from "we need an actual international ban on this" to "oh so this is arguing for an arms race"? It's like the polar opposite of what's being said.

rolisz
So you're proposing replacing a hypothetical risk (Geoffrey Hinton puts the probability at 1%) with guaranteed pain and destruction, which might escalate to nuclear war? Thank you, but no thank you.
TeMPOraL
No, I'm proposing replacing a high risk scenario (I disagree with Hinton here) with a low risk scenario, which has low chance to escalate to any serious fighting.

It seems people have missed what the proposed alternative is, so let me spell it out: it's about getting governments to SIGN AN INTERNATIONAL TREATY. It is NOT about having the US (or anyone else) police the rest of the world.

It's not fundamentally more dangerous than the few existing treaties of similar kind, such as those about nuclear non-proliferation. And obviously, all nuclear powers need to be on-board with it, because being implicitly backed by nukes is the only way any agreement can truly stick on the international level.

This level of coordination may seem near-impossible to achieve now, but then it's much more possible than surviving an accidental creation of an AGI.

ImPostingOnHN
> No, I'm proposing replacing a high risk scenario (I disagree with Hinton here)

I disagree with you here, I think the risk is low, not high.

> it's about getting governments to SIGN AN INTERNATIONAL TREATY. It is NOT about having the US (or anyone else) police the rest of the world

We haven't been able to do that for climate change. When we do, then I'll be convinced enough that it would be feasible for AI. Until then, show me this coordination for the damage that's already happening (climate change).

> This level of coordination may seem near-impossible to achieve now, but then it's much more possible than surviving an accidental creation of an AGI.

I think the coordination required is much less possible than a scenario where we need to "survive" some sort of danger from the creation of an AGI. But we can find out for sure with climate change as an example. Let's see the global coordination. Have we solved that actual problem, yet?

Remember, everyone has to first agree to this "bomb AI" plan to avoid war. Otherwise, bombing/war starts. The equivalent for climate change would be bombing carbon producers. I don't see either agreement happening globally.

4bpp
To continue contributing to climate change takes very little: you need some guy willing to spin a stick for long enough to start a fire, or to feed and protect a flock of cows. To continue contributing to AI, you need to maintain a global multi-billion supply chain with cutting edge technology that might have a four-digit bus factor.

The mechanisms that advance climate change are also grandfathered in to the point that we are struggling to conceive of a society that does not use them, which makes "stop doing all of that" a hard sell. On the other hand, every society at least has cultural memory of living without several necessary ingredients of AI.

ImPostingOnHN
To continue contributing to the harm caused by AI only requires that one person use an existing model running off their laptop to foment real-world violence or spread disinformation en masse on social media, or use it on a cheap swarm of weaponized/suicide drones, for a few examples.
4bpp
There's a qualitative barrier there. The AI risk people in the know are afraid of is not a flood of AI-generated articles, but something that probably can't be achieved yet with current levels of AI (and which more slop generated by current-day AI won't hasten). On the other hand, modern-day greenhouse gases are exactly the greenhouse gases that climate activists are afraid of in the limit.
ImPostingOnHN
The AI risks people in the know are afraid of, are indeed what I listed: pretty much any person, anywhere, using an existing model running off their laptop to foment real-world violence or spread disinformation en masse on social media, or use it on a cheap swarm of weaponized/suicide drones, for a few examples

That's what we're already seeing today, so we know the risk is there and has a probability of 100%. The "skynet" AI risk is far more fringe and farfetched.

So, like you said about climate change, the harm can come from 1 person. In the case of climate change, though, the risks people in the know are afraid of, aren't "some guy willing to spin a stick for long enough to start a fire, or to feed and protect a flock of cows"

sebastianz
> With threat of military intervention.

I assume you are American, right? Cause that sounds pretty American. Although I suppose you might also be Chinese or Russian; they also like that solution a lot.

Whichever of those you are, I'm sure your side is definitely the one with all the right answers, and will be a valiant and correct ruler of the world, blessing us all with your great values.

TeMPOraL
I'm Polish. My country is the one whose highest distinction on the international level was being marked by the US for preemptive glassing, to make it more difficult for the Soviets to drive their tanks west.

Which is completely irrelevant to the topic at hand anyway.

sebastianz
> Which is completely irrelevant to the topic at hand anyway.

It's not irrelevant. I was implying that you did not specify who is doing this military intervention that you see as a solution. What are their values, who decides the rules that the rest of the world will have to follow, with what authority, and who (and how) will the policing be done.

TeMPOraL
> I was implying that you did not specify who is doing this military intervention that you see as a solution. What are their values, who decides the rules that the rest of the world will have to follow, with what authority, and who (and how) will the policing be done.

Like nuclear non-proliferation treaties or various international bans on bioweapons, but more so. The idea is that humanity is racing full steam ahead into an AI x-risk event, so it needs to slow down, and the only way it can slow down is through an international moratorium that's signed by most nations and treated seriously, where "seriously" includes the option of signatories[0] authorizing a military strike against a facility engaged in banned AI R&D work.

In short: think of a UN that works.

--

[0] - Via some international council or whatever authority would be formed as part of the treaty.

sebastianz
I see. I'm perhaps leaning skeptical most of the time, but it's hard to see how an international consensus can be reached on a topic that does not have the in-your-face type danger & fear that nuclear & bioweapons do.

(It's why I'm also pessimistic on other global consensus policies - like effective climate change action.)

I would be happy to be wrong on both counts though.

3rd3
Since when are military spooks and political opportunists better at deciding on our technological future than startups and corporations? The degree of global policing and surveillance necessary to fully prevent secret labs from working on AI would be mind-boggling. How would you ensure all government actors are sticking to the same safety standards rather than seizing power by implementing AI hastily? This problem has long been known as quis custodiet ipsos custodes - "who guards the guards themselves?".
concordDance
> The degree of global policing and surveillance necessary to fully prevent secret labs from working on AI would be mind-boggling.

It's not that bad given the compute requirements for training even the basic LLMs we have today.

But yes, it's a long shot.

visarga
Training a SOTA model is expensive, but you only need to do it once, and fine-tune a thousand times for various purposes.

And it's not even that expensive when compared to the cost of building other large-scale projects. How much is a dam, or a subway station? There are also corporations who would profit from making models widely available, such as chip makers; they would commoditise the complement.

Once you have your very capable, open sourced model, that runs on phones and laptops locally, then fine-tuning is almost free.

This is not make-believe. A few recent fine-tunes of Mistral-7B, for example, are excellent quality and run surprisingly fast on a 5-year-old GPU (40 tokens/s). I foresee a new era of grassroots empowerment and privacy.

In a few years we will have more powerful phones and laptops, with specialised LLM chips, better pre-trained models and better fine-tuning datasets distilled from SOTA models of the day. We might have good enough AI on our terms.
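
For illustration, a minimal sketch of running such a local model with the Hugging Face transformers library; the model ID here is the base Mistral-7B-Instruct checkpoint, and a fine-tune would be dropped in the same way. This assumes a GPU with enough memory for the fp16 weights of a 7B model; older or smaller cards would want a quantized build (e.g. via llama.cpp) instead.

    # Minimal sketch: run a local Mistral-7B (or any fine-tune of it) with transformers.
    # Assumes a CUDA-capable GPU with roughly 16 GB of memory for fp16 weights.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # swap in a fine-tune here
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    prompt = "Summarize the argument for open, locally run models in two sentences."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=150)
    print(tokenizer.decode(output[0], skip_special_tokens=True))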

concordDance
> Once you have your very capable, open sourced model, that runs on phones and laptops locally, then fine-tuning is almost free.

Hence the idea to ban development of more capable models.

(We're really pretty lucky that LLM-based AGI might be the first type of AGI made; it seems much lower risk and lower power than some of the other possibilities)

andruby
But that's just not going to work in the real world is it?

If a country uses military force in another country, that's a declaration of war. We'll never convince every single country to ban AI research. And even if we do, you don't need many resources to do AI research. A few people and a few computers are enough.

This is not something like uranium refining.

sgt101
And there it is.

The full agenda.

Blood and empire.

ben_w
Only in exactly the same way that the police have the same agenda as the mafia.

International bans need treaties, with agreed standards for intervention in order to not escalate to war.

The argument is that if you're not willing to go all the way with enforcement, then you were never serious in the first place. Saying you won't go that far even if necessary to enforce the treaty is analogous to "sure murder is illegal and if you're formally accused we'll send armed cops to arrest you, but they can't shoot you if you resist because shooting you would be assault with a deadly weapon".

sgt101
I don't agree.

The police enforce civil order by consent expressed through democracy. There is no analogy in international affairs. Who is it that is going to get bombed? I am thinking it will not be the NSA data centres in Utah, or any data centres owned by nuclear-armed states.

ben_w
All policing is to a certain degree by consent, even in dictatorships which only use democracy as a fig leaf. International law likewise.

Just as the criminal justice system is a deterrent against crime despite imperfections, so are treaties, international courts, sanctions, and warfare.

> I am thinking it will not be the NSA data centres in Utah, or any data centres owned by nuclear-armed states.

For now, this would indeed seem unlikely. But so did the fall of the British Empire and later the Soviet Union before they happened.

TeMPOraL
> The police enforce civil order by consent expressed through democracy. There is no analogy in international affairs.

Which is the point exactly. All agreements in international affairs are ultimately backed by threats of violence. Most negotiations don't go as far as explicitly mentioning it, because no one really wants to go there, but the threat is always there in the background, implied.

vidarh
> With threat of military intervention.

We've seen how hard it is to do that when the fear is nuclear proliferation. Now consider how hard it is to do that when the research can be done on any sufficiently large set of computational devices, and doesn't even need to be geographically concentrated, or even in your own country.

If I was a country wanting to continue AI research under threat of military intervention, I'd run it all in cloud providers in the country making the threat, via shell companies in countries I considered rivals.

TeMPOraL
Yes, it's as hard as you describe. But it seems that there are no easier options available - so this is the least hard thing we can do to mitigate the AI x-risk right now.
vidarh
How do you intend to prevent people from carrying out general purpose computation?

Because that is what you'd need to do. You'd need to prevent the availability of any device where users can either directly run software that has not been reviewed, or that can be cheaply enough stripped of CPUs or GPUs capable of running un-reviewed software.

That review would need to include reviewing all software for "computational back doors". Given how often we accidentally create Turing-complete mechanisms in games or file formats where it was never the intent, preventing people from intentionally trying to sneak past a way of doing computations is a losing proposition.

There is no way of achieving this compatible with anything resembling a free society.

TeMPOraL
> How do you intend to prevent people from carrying out general purpose computation?

Ask MAFIAA, and Intel, and AMD, and Google, and other major tech companies, and don't forget the banks too. We are already well on our way to that future. Remember Cory Doctorow's "War on general-purpose computation"? It's here, it's happening, and we're losing it.

Therefore, out of many possible objections, this one I wouldn't put much stock in - the governments and markets are already aligned in trying to make this reality happen. Regardless of anything AI-related, generally-available general-purpose computing is on its way out.

vidarh
I don't think you understand how little it takes to have Turing complete computation, and how hard it is to stop even end-users from accessing it, much less companies or a motivated nation state. The "war on general-purpose computation" is more like a "war on convenient general-purpose computation for end users who aren't motivated or skilled enough to work around it".

Are you going to ban all spreadsheets? The ability to run SQL queries? The ability to do simple regexp-based search and replace? The ability for users to template mail responses and set up mail filters? All of those allow general-purpose computation, either directly or as part of a system where each part may seem innocuous (e.g. the simple ability to repeatedly trigger the same operation is enough to make regexp-based search and replace Turing complete; the ability to pass messages between a templated mailing-list system and mail filters can be Turing complete even if neither the template system nor the filter in isolation is).

The ability for developers to test their own code without having it reviewed and signed off by someone trustworthy before each and every run?

Let one little mechanism through and the whole thing is moot.

EDIT: As an illustration, here's a Turing machine using only Notepad++'s find/replace: https://github.com/0xdanelia/regex_turing_machine
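
To make the point concrete, here is a tiny sketch (mine, not from the linked repo) of a one-state binary incrementer driven purely by iterated regex search-and-replace; each transition is an ordinary substitution, and the only extra ingredient is "apply again until done":

    # Iterated regex search-and-replace simulating a tiny Turing machine.
    # The tape holds a binary number least-significant-bit first; "[q0]" marks
    # the head position. The machine adds one to the number, then halts.
    import re

    rules = [
        (r"\[q0\]1", "0[q0]"),    # read 1: write 0 (carry), move right
        (r"\[q0\]0", "1[halt]"),  # read 0: write 1, halt
        (r"\[q0\]$", "1[halt]"),  # blank past the end: write 1, halt
    ]

    def step(tape):
        for pattern, repl in rules:
            new = re.sub(pattern, repl, tape, count=1)
            if new != tape:
                return new
        return tape

    tape = "[q0]1101"             # 11 (binary 1011), written LSB first
    while "[halt]" not in tape:
        tape = step(tape)

    print(tape.replace("[halt]", "")[::-1])  # prints "1100", i.e. 12

The same substitutions could be typed into any find-and-replace box; the loop is just the "repeat" button.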

hackinthebochs
>I don't think you understand how little it takes to have Turing complete computation

This is a dumb take. No one's calculator is going to implement an AGI. It will only happen in a datacenter with an ocean of H100 GPUs. This computing power does not materialize out of thin air. It can be monitored and restricted.

vidarh
No one's calculator needs to. The point was to push back on the notion that the "war on general-purpose computation" has any shot of actually stopping general-purpose computation, and to illustrate how hard limiting computation is in general.

> It will only happen in a datacenter with an ocean of H100 GPUs. This computing power does not materialize out of thin air. It can be monitored and restricted.

Access to H100 could perhaps be restricted. That will drive up the cost temporarily, that's all. It would not stop a nation state actor that wanted to from finding alternatives.

The computation cost required to train models of a given quality keeps dropping, and there's no reason to believe that won't continue for a long time.

But the notion you couldn't also sneak training past monitoring is based on the same flawed notion of being able to restrict general purpose computation:

It rests on beliefs about being able to recognise what can be used to do computation you do not want. And we consistently keep failing to do that for even the very simplest of cases.

The notion that you will be able to monitor which sets of operations are "legitimate" and which involve someone smuggling parts of some AI training effort past you as part of, say, a complex shader is as ludicrous as the notion that you will be able to stop general-purpose computation.

You can drive up the cost, that is all. But if you try to do so you will kill your ability to compete at the same time.

hackinthebochs
There are physical limits to what can make an effective and efficient computational substrate. There are physical limits to how fast/cheap these GPUs can be made. It is highly doubtful that some rogue nation is going to invent some entirely unknown but equally effective computational method or substrate. Controlling the source of the known substrate is possible and effective. Most nations aren't in a position to just develop their own using purely internal resources without being noticed by effective monitoring agencies. Any nation that could plausibly do that would have to be a party to the agreement. This fatalism at the impossibility of monitoring is pure gaslighting.
vidarh
Unless you're going to ban the sales of current capacity gaming level gpus and destroy all of the ones already in the market, the horse bolted a long time ago, and even if you managed to do that it'd still not be enough.

As it is, we keep seeing researchers with relatively modest funding steadily driving down the amount of compute required for equivalent quality models month by month. Couple that with steady improvements in realigning models for peanuts reducing the need to even start from scratch.

There's enough room for novel reductions in compute to keep the process of cost reducing training going for many years.

As it is, I can now personally afford to buy hardware sufficient to train a GPT3 level model from scratch. I'm well off, but I'm not that well off. There are plenty of people just on HN with magnitudes more personal wealth, and access to far more corporate wealth.

Even a developing country can afford enough resources to train something vastly larger already today.

When your premise requires fictional international monitoring agencies and fictional agreements, which there's no reason to think would get off the ground in anything less than multiple years of negotiations just to create a regime that could try to limit access to compute, the notion that you would get such a regime in place before various parties have stockpiled vast quantities in preparation is wildly unrealistic.

Heck, if I see people start planning something like that, I'll personally stockpile. It'll be a good investment.

If anything is gaslighting, it's pushing the idea it's possible to stop this.

concordDance
The big cloud providers would be obligated to ensure their systems were not being used for AI training.

Yes, this would be a substantial regulatory burden.

vidarh
It'd be more than a substantial regulatory burden, it's impossible. At most they'd be able to drive up the cost by forcing users to obscure it by not doing full sets of calculations at a single provider.

Any attempt even remotely close to extreme enough to have any hope of being effective would require such invasive monitoring of the computations run that it'd kill their entire cloud industry.

Basically, you'll only succeed in preventing it if you make it so much more expensive that it's cheaper to do the computations elsewhere, but for a country under threat of military intervention if it's discovered what they're doing, the cost threshold of moving elsewhere might be much higher than it would be for "regular" customers.

TeMPOraL
Even if it's this costly, it's still possible, and much cheaper than the consequences of an unaligned AGI coming into existence.
vidarh
Sure, you can insist on monitoring which software can be run on every cloud service, every managed server, on the servers in every colo, or on every computer owned by everyone (of any kind, including basic routers or phones, or even toys: anything with a CPU that can be bought in sufficient bulk and used for computations), and subject every piece of code allowed to run at any kind of scale to a formal review by government censors.

At which point you've destroyed your economy and created a regime so oppressive it makes China seem libertarian.

Now you need to repeat that for every country in the world, including ones which are intensely hostile to you, while everyone will be witnessing your economic collapse.

It may be theoretically possible to put in place a sufficiently oppressive regime, but it is not remotely plausible.

Even if a country somehow did this to themselves, the rest of the world would rightly see such a regime as an existential threat if they tried enforcing it on the rest of the world.

TeMPOraL
It all sounds very bad, but the truth is, we're going to get there anyway - in fact, we're half-way there, thanks to the influence of media conglomerates pushing DRM, adtech living off surveillance economy, all the other major tech vendors seeking control of end-user devices, and security industry that provides them all with means to achieve their goals.

Somehow, no one is worried about the economy. Perhaps because it's the economy that's pushing this dystopian hellhole on us.

vidarh
We're nowhere near there. Even on the most locked down devices we have, getting computation past the gatekeepers doesn't require reviews remotely detailed enough to catch attempts at sneaking these kinds of computation past, or even making them expensive, and at no point in history have even individuals been able to rent as much general purpose computational capacity for as little money.

Even almost all the most locked down devices available on the market today can still either be coaxed to do general purpose calculation one way or another or mined for GPU's or CPU's.

No one is worried about the economy because none of the restrictions being pushed are even close to the level that'd be needed to stop even individuals from continuing general purpose computation, much less a nation state actor.

This is without considering the really expensive ways: Almost every application you use can either directly or in combinations become a general purpose computational device. You'd need to ensure e.g. that there'd be no way to do sufficiently fast batch processing of anything that can be coaxed into suitable calculations. Any automation of spreadsheets or databases, for example. Even just general purpose querying of data. A lot of image processing. Bulk mail filtering where there's any way of conditionally triggering and controlling responses (you'd need multiple addresses, but even a filtering system like e.g. Sieve that in isolation is not Turing complete becomes trivially Turing complete once you allow multiple filters to pass messages between multiple accounts).

Even regexp-driven search and replace, which is not in itself Turing complete, becomes Turing complete with the simple addition of a mechanism for repeatedly executing the same search and replace on its own output. Say, a text editor with a "repeat last command" macro key.

And your reviewers would only need to slip up once, letting something like that through with some way of making it fast enough to be cheap enough (say, coupling the innocuous search-and-replace and the "repeat last command" with an option to run macros in batch mode), before an adversary has a signed exploitable program to use to run computations.

carbotaniuman
There's also the "bomb every datacenter doing AI" thing that Yudkowsky has proposed, which is a lot more practical than this. Or perhaps a self-replicating worm that destroys AIs.
vidarh
This was why I pointed out that if I was a nation state worried about this, I'd host my AI training at the cloud providers of the country making those threats... Let them bomb themselves. And once they've then destroyed their own tech industry, you can still buy a bunch of computational resources elsewhere and just not put them in easily identifiable data centres.
RandomLensman
At least nuclear weapons pose an actual existential risk as opposed to AI - and even then we don't just go to war.
TeMPOraL
Nukes don't have a mind of their own. They're operated by people, who fortunately turned out sane enough that they can successfully threaten each other into a stable state (MAD doctrine). Still, adding more groups to the mix increases risk, which is why non-proliferation treaties are a thing, and are taken seriously.

Powerful enough AI creates whole new classes of risks, but it also magnifies all the current ones. E.g. nuclear weapons become more of an existential risk once AI is in the picture, as it could intentionally or accidentally provoke or trick us into using them.

RandomLensman
AI could pose those risks, but it also could not - that is the difference to nuclear weapons.
JoshTriplett
> AI could pose those risks, but it also could not

"I'm going to build a more powerful AI; don't worry, it could end the world, but it also could not."

TeMPOraL
AI does pose all those risks, unless we solve alignment first. Which is the whole problem.
RandomLensman
It does not, because that kind of AI doesn't exist at the moment, and the malevolence etc. are a bunch of hypotheses about a hypothetical AI, not facts.
pixl97
You keep repeating this tired argument in this thread, so just subtract the artificial element from it.

Instead imagine a non-human intelligence. Maybe its alien carbon organic. Maybe it's silicon based life. Maybe it's based on electrons and circuits.

In this situation, what are the rules of intelligence outside of the container it executes in?

Also, every military in the world wargames on hypotheticals, because making your damned war plan after the enemy attacks is a great way to end up wearing your enemy's flag.

RandomLensman
How would you feel if militaries planned for fighting Egyptian gods? Just because I can imagine something doesn't mean it is real and that it needs planning for. Using effort on imaginary risks isn't free.
ben_w
Nukes being a risk was suggested as a reason why the US was willing to invade Iraq for trying to get one but not North Korea for succeeding.

It's almost certainly more complex of course, but the UK called its arsenal a "deterrent" before I left, and I've heard the same for why China stopped in the hundreds of warheads.

RandomLensman
Even if that were true for Iraq, which I doubt, it would have been the odd one out.

Btw., China is increasing its arsenal at the moment.

RandomLensman
No-one wants a nuclear war over imaginary risks, so not happening.
TeMPOraL
We better find a way to make it happen, because right now, climate change x-risk mitigation is also not happening, for the same reason.
RandomLensman
Climate change mitigation is actually happening. Huge investment flows are being redirected and things are being retooled, you might not see it, but don't get caught out by the doomers there.
Dig1t
Are you really suggesting drone striking data centers? What about running AI models on personal hardware? Are we going to require that everyone must have a chip installed in their computer that will report to the government illegal AI software running on the machine?

You are proposing an authoritarian nightmare.

pixl97
Hence the entire point of this discussion... can we avoid the future nightmares that are coming one way or another? Making AI = nightmare; stopping people from making AI = nightmare. The option that seems to be off the table is people deciding "hell, making AI is a nightmare, so I won't do it voluntarily", without the need for a police state.
neonihil
The problem with that is that the entity supposed to "bomb it to rubble" and the entity pushing for AI development happens to be the same entity.

Maybe the confusion why people can't see this clearly stems from the fact that tech development in the US has mostly been done under the umbrella of a private enterprise.

But if one has a look at companies like Palantir, it's going to become quite obvious what is the main driver behind AI development.

Aithr
"Stop new development." is regulatory capture.
esafak
Stop existing development too. Carve an exception for academic research.
BlueTemplar
«Exception for academic research» seems to be how we got Covid, so...
raverbashing
Correct. The risk posed by the real ploy (regulatory capture) is much higher and much more impactful than the actual risk of rogue AI (which, as much as it might exist, is probably tiny).
ben_w
"Rogue" is just a type of bug.

Buggy old-fashioned AI (i.e. a bunch of ifs and loops) has, more than once, almost started WW3.

Geee
Yeah, it is a true risk. What happens when people start making their voting decisions with AI, asking it who to vote for? Nothing wrong with that per se, but it gives ultimate power to the company that developed and hosts the AI, which means the end of democracy.

My recommendation for regulation would be to make closed & cloud-hosted AI illegal. Architecture, training data and weights should always be available, and people should only be allowed to use AI on their local machine (which can be as powerful as they like).

123yawaworht456
This is extremely dangerous to our democracy.
jdblair
Sounds great until someone says the warehouse full of computers out back is their "local machine."
Joeri
Seeing the weights is useless for knowing capabilities, and even knowing training data and architecture you still need to interact with it to figure those out, which you can do with cloud AI just as well.

The real risk that I see is the use of open-source AI by governments, corporations and criminal groups to influence elections through a deluge of bots. Cloud AI can at least be controlled to some degree, because there is a government that can inspect and regulate that business, but an army of bots built on local LLMs could just run wild over the internet with nothing stopping them.

How many comments in this thread are written by a bot? I do not feel confident I could tell them apart.

Geee
Eh, government regulating AI which is used by people to help them vote?

I don't see bots as a huge problem, because it's somewhat easy to verify that someone is a real person, and it would be quite obvious if someone is using lots of bots for their cause.

samus
> I don't see bots as a huge problem, because it's somewhat easy to verify that someone is a real person, and it would be quite obvious if someone is using lots of bots for their cause.

That only works on platforms designed to match user profiles with real-world persons. This platform definitely isn't.

Geee
You can read their profile or their comment history to verify their humanness. Faking everything wouldn't be impossible, but it would certainly be difficult and take time.
samus
Faking profiles at scale might be hard, but so is verifying whether they are real. If you have even just 20 commenters replying to one of your comments, you're gonna spend the rest of the hour digging through them to figure out which are bots and which aren't.

The comment history is not visible on every relevant platform, and there are good reasons for people to disable it where possible. Furthermore, many people keep their profiles minimal out of principle. Hard to distinguish those from bots.

golem14
There’s a short story by Asimov called “Franchise” which is in hindsight quite foreboding…
a_t48
This would make AI usage restricted to those who can afford a beefy PC, no?
samus
That would still be a way larger class of people than the top 1%. Even so, capabilities of consumer devices will rise for some more time.
Geee
Costs will come down, while capability will go up. I'm thinking of a some kind of box in every home, with special inference chips; not really a "PC".

In principle, every human should have access to the same level of AI. There could be one at the local library if someone can't afford one, or doesn't want one.

ethanwillis
Local libraries aren't even something everyone has access to.
dotnet00
Or it would create an incentive for reducing the cost of the hardware needed to run beefier AI models and passing that reduction down to consumers.
jeffparsons
Which would still mean that those who can afford more/better hardware can therefore afford more/better AI. I don't think a solution lies down this path...
dotnet00
Is that really an issue? Eg it's the same with games, the better your hardware, the more likely you can play a new game at higher settings with better performance, with most cheaper devices outright unable to handle anything recent.

But if you wait a GPU generation or two, the performance has improved enough that you can do that with a much cheaper GPU.

JieJie
There are already people working on distributed solutions to that problem. There are a lot of people working on these problems.
gdsdfe
They want to create a moat to protect their investments
renewiltord
That's the thing with AI. It's so dangerous it could destroy us all. P(doom)=0.3.

Anyway, that's why we're working on much larger LLMs. It's important that we do this research and expand the size of the LLMs, because a larger LLM could destroy us all.

Listen, everyone cosplays as doomers here because it's stylish right now. But they're all trying to make this thing they think is suicide. Is this a suicide cult?

No. It's just people playing around acting important while solidifying their business.

Recently, I read on Twitter that many AI CTOs think that P(doom) > 0.25 by 2030. Here's my offer to any CTO of a venture-backed firm with present personal assets over $400k who believes this:

- A representative of mine will meet you in South Park

- The representative will be carrying a $100k cheque and a document outlining terms for you to pay a lump sum of $177k on Feb 1 2030 (10% compounding, roughly)

- The loan will be unsecured

- We will make the terms public

This is pretty close to break-even for you at terms you won't receive personally anywhere else. Once doom happens money will be worthless. Take my money. I'll also make the deal all numbers divided by 100 if you just want to have some fun.
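
As a sanity check of the quoted terms (assuming the cheque changes hands roughly six years before the Feb 2030 repayment date, with annual compounding):

    # 10% annual compounding over ~6 years on a $100k principal
    principal, rate, years = 100_000, 0.10, 6
    print(round(principal * (1 + rate) ** years))  # about 177,156, i.e. the ~$177k figure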

artninja1988
P(doom)=0.3 according to what? How did you come up with these numbers
zo1
The OP I assume picked it because it's a number slightly greater than 0.25 which he read was quoted by AI CTOs on Twitter.
renewiltord
Out of my arse, obviously. It's total bullshit.
mg
The moat he describes the one-percenters might acquire is government regulation. Creating an environment where only an exclusive club of companies is able to work on AI.

Is that possible? Has that happened in any other industry?

I can imagine it happening in a single country. But wouldn't research route around that country then? How is the situation in Europe, China, India, Canada, Japan ... ?

tomatocracy
It's not all or nothing. For example, software patents are legal in some parts of the world and not in others - that doesn't mean they don't have value in creating an artificial barrier to entry / moat. The same is true of regulation.
logicchains
>Is that possible? Has that happened in any other industry?

It's absolutely happened to the nuclear weapons industry. And Big AI is lobbying for the government to treat AI development the same way it treats nuclear proliferation.

mg
There are more nuclear warheads in Russia than in the USA.
logicchains
Would you like to see a world where the only countries with their own AIs are Russia, the United States, China, France, the United Kingdom, Pakistan, India, Israel and North Korea, and all use of AI is controlled by the governments of those countries?
Barrin92
>But wouldn't research route around that country then?

it already does. Stable Diffusion originated at LMU Munich with a lot of non-profit funding, EleutherAI and LAION are non-profit as well I think. The Chinese companies all have their own models anyway etc.

So yes I also find it unlikely. Of course there'll always be one or two players a little bit ahead of the field at any given time but this stuff is almost entirely public research based. These models largely run on a few thousand lines of Python, public data and a ton of compute. The latter is the biggest bottleneck but that'll diminish as more manufacturers come online.

bugglebeetle
Yes, the US government could try and do what they did with cryptography in the 90s and succeed.
bbor
Ehhh capital alone is plenty of a moat, esp when there’s no regulation to keep the market fair.
instagraham
Selfless development of open-source AI will be crucial to avoiding this scenario. I do think that even today, open AIs are capable of filling in a lot of the gaps you'd have if you didn't want to use OpenAI's work.
m3kw9
It's true, because if one company creates something more powerful than what the military can achieve, why would it not want to overtake its powers?
stagger87
I wouldn't worry about that. I'm sure the military can unplug some computers.
acosmism
I’m probably getting philosophical here but my personal thought is that - at the end of the day (most if not all) people/anyone/humans like to be on top of “something” rather than to be on top of “everything but nothing meaningful (to anyone else)”. reminds of this one Superman adventures episode with Bizarro (a very uncanny anti-superman) - Superman brought him to an empty desolate place (planet) to prevent him from accidentally destroying earth since in his head he “always wanted to be” superman”. bizarro had an entire planet to himself where he was god, king, law and lawyer it - he loved it. is that normal or human? hell no. He’s Bizarro. People will go elsewhere and enjoy human things like life-satisfaction, culture, family, friend networks, etc whatever else people like . life goes on.
acosmism
* and also potentially two large markets instead of one
bsder
The problem isn't seizing power.

The problem is going to be that AI is creating an Eternal September by poisoning the well of training data.

Anybody who has data or a model from before the poisoning is going to have a defensible moat that is not accessible to those who come after.

blovescoffee
Except models can get more data efficient. We can overcome any sort of "poisoning the well." Man's desire for power is orders of magnitude more difficult to resolve.
photonbeam
Data will become stale
chongli
First of all, “pre-poisoned” data sets are freely available in archives such as common crawl. Secondly, I doubt the usefulness of that data in the long run. A model trained on only that old data will forever be stuck in the past. It’ll be like the stereotypical senile grandpa who only talks about the old days and doesn’t know anything about current events.
billconan
And yet, their papers are full of jargon.
Animats
It's too late for monopolizing large language models. The technology for making them is known and the cost keeps coming down.

There are things to worry about, but this doesn't seem to be one of them.

samus
The same can be said about guns. Because of this, gun making is a licensed industry, even though in theory a lot of people can legally acquire the tools to make guns in their garage.

Edit: drugs and licensed services like law and healthcare also apply

desmond-grealy
It is though. The groundwork is already being laid to restrict the legality of training large models. See the recent executive order. There is a rather small cutoff size being established for needing to report your training runs and data centers to the government. It is also being ordered that X-risk and CBRN risk testing frameworks and secure environments be developed specifically so they can be imposed as requirements of model developers, including in the private sector.

You may argue that the knowledge required to develop LLMs is open source and will be accessible to all (and in practice this is not-quite but mostly true). However the federal government is currently in the process of making it so only certain people are authorized to do so legally.

rhizome
Cost of technology doesn't matter for this.

It's not the hammers, it's the building permits.

nextaccountic
Training is still very expensive, and if it becomes cheaper, then a next-gen model with better performance will be expensive again.

It may be very hard to train cutting edge models if you only have consumer hardware

mrexroad
Either way, it'll get to a point where models a few generations removed from cutting edge will be "good enough" for existing problem spaces, to the degree that models get further commoditized. Like any other tech (e.g. iPhones), next-gen will enable new emergent use cases or continue to eke out diminishing advantages over the competition.
chrismarlow9
Why do you need cutting edge models if you can solve your problems with less?
samus
They are required if there is an arms-race between actors wielding models. For example, to detect increasingly authentic deepfakes.
chrismarlow9
Sure but the clandestine actors most of us worry about don't need that level of sophistication.
Palmik
The gap between the value added by GPT 3.5 and GPT 4 to my every day tasks is immense. Most people, outside of these circles, don't realize that such a large gap exists.
tome
Could I ask what exactly you get from 4 that you don't get from 3.5?
echelon
> It's too late for monopolizing large language models. The technology for making them is known and the cost keeps coming down.

Not so sure non-institutional folks can catch up.

OpenAI is trying to achieve regulatory capture of LLMs to prevent the Amazons and the well-funded unicorns from following. Everyone left in the lurch after these players crystallize is SOL.

You, average HN reader, cannot possibly compete with OpenAI. They're trying to stop venture dollars from standing up more well-funded competition so they can take as much of the pie as possible.

The hype/funding cycle for LLMs is mostly over now. You're not Anthropic and you're more than likely too late to get in on that game.

fnordpiglet
Amazon excels at regulatory compliance. This isn’t a moat against Amazon. It’s a moat against you.
bugglebeetle
The same is true of Microsoft, who more or less own OpenAI now.
jszymborski
Two thoughts on that:

1. GPU prices aren't as high as they were during the mining peak, but you can still buy a car with the money it takes to buy a pod of A100s. Further to that point, Google et al. are making the next generation of ML accelerators and they won't even sell the best ones.

2. Compute is only half the problem. Access to data (e.g. in the form of email, smart-assistant queries and Ring doorbells) is a much underappreciated second half of that equation.

neodypsis
The biggest cost, other than capital costs, is energy. Some geographies have competitive advantages w.r.t. energy.
tellarin
Archive.is cache: https://archive.is/HbRLy
penjelly
Some speculators say we'll enter an age of abundance and money won't matter nearly as much (or at all), but if that's the case, how do businesses transition from their current state of profit over everything to one where money is not important?

I see it as much more likely that things become monopolized instead

PeterStuer
We have been in a state of abundance for a long time. We just choose not to distribute the abundance.
logicchains
The global average PPP GDP per capita is 18k USD/year. 18k USD/year is not remotely close to abundance.
losteric
Median incomes might be a better figure. Per-capita is skewed by outliers.

Look at the US: https://en.m.wikipedia.org/wiki/File:US_GDP_per_capita_vs_me... - a huge disparity between mostly flat median income and rapidly growing per-capita GDP

morsch
I think it's pretty close, actually. Our family could easily live off that. Easily worth it if it got rid of income inequality. And it'd still increase a percent or two, every year, inflation-adjusted! Thanks for the argument.
logicchains
>Our family could easily live off that. Easily worth it if it got rid of income inequality

Nobody's stopping you giving all your excess income above 18k to charity. If you try to make other people do it, you'll find a lot of people would rather just stop working than give away the majority of their income, which is why the economy collapsed and tens of millions of people starved in China and Russia after their communist revolutions.

morsch
Oh, I don't think my post was advocating anybody give anything away. But if we just arrived there, magically, it would be pretty great, from a utilitarian point of view. But thanks for the knee-jerk anti-communist rant, it's really great that we're back to that. Life was great in the 50s.
logicchains
>Life was great in the 50s.

Americans in the 50s were a lot happier than Americans in the 2020s, very little mental illness or obesity, housing was affordable, people weren't at each others' throats like nowadays.

morsch
Also: much lower income inequality. Strange, huh. And the average family income was 4400 USD, ie. 50000 USD in 2023, so pretty much the 18000 USD per capita we started with.

Shame about the institutional racism, misogyny, homophobia, etc. though.

huytersd
That is a surprisingly high figure though. For 18k a year you could clothe yourself, feed yourself and have shelter, especially considering healthcare is cheap or free in most of the world. That's way better than I thought the world was doing.
adhesive_wombat
That's mean vs median for you. In this[1] analysis, the average is £12k, but median global per-capita household income is under $3k: 50 percent of humans earn less than that (to be fair, it has actually been rising quite fast and steadily since the 90s).

Not sure where PPP puts it but the median figure will be lower because you can get a lot richer than you can get poorer.

1: https://www.zippia.com/advice/average-income-worldwide/
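
A toy illustration of how a skewed distribution pushes the mean far above the median (made-up numbers, purely illustrative):

    # A handful of high earners drag the mean well above the median,
    # which is why per-capita (mean) figures overstate the typical income.
    import statistics

    incomes = [1_000] * 90 + [50_000] * 9 + [5_000_000]  # 100 people
    print(statistics.mean(incomes))    # mean: 55,400
    print(statistics.median(incomes))  # median: 1,000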

PeterStuer
If I watch my kids and you watch yours, our contribution to GDP is 0$. If I babysit your kids and you pay me 1.000.000$, and you watch my kids and I pay you 1.000.000$, we just created 2.000.000$ in GDP.

Were we not in abundance the day before but now we are? What changed?

mafuy
What changed? You are now paying 250k$ or so in taxes, which benefits the government :)
torginus
I'm ignorant about economics, but that sounds wrong - if I offered my services as a babysitter, my contribution to GDP wouldn't be zero. If I then took the earnings from that job and hired a babysitter, I would be in the same place.
logicchains
>If I babysit your kids and you pay me 1.000.000$, and you watch my kids and I pay you 1.000.000$, we just created 2.000.000$ in GDP.

This isn't the case with purchasing-power adjusted GDP, which is specifically adjusted to account for the actual purchasing power of people's money.

PeterStuer
Would that not still, in the above toy example, lead to 0$ vs 2.000.000$/(some factor)?
logicchains
Yes, but it's not going to have any meaningful impact on the overall GDP, because it couldn't happen at scale (it requires one party to have 1.000.000$ to spend in the first place).
PeterStuer
But isn't this exactly what has been happening on a macro scale with the 'service economy'? We drew ever larger segments of people and activities into the world of financial transactions because of the massive amount of rent-seeking money to be made in this environment. We doubled the amount of time the average family had to spend in salaried activity, and we passed start twice, as now all activities that were previously self-serviced were also 'market' procured, since no one had time or energy left to take care of them after a long workday.
VaxWithSex
All money is fiat.
logicchains
Yes, but the average Joe Bloe can't print money to pay a babysitter, only the government can.
samus
This thought experiment only works if these are two completely equal services provided by two equally capable persons with the same investment of time and resources. And it assumes that we mutually trust each other to actually do the thing.

Also, bringing money into the game makes the responsibility stronger. Especially if those two persons are not living in a vacuum, but have other bills to pay [edit: i.e., to make other people do things for them]. Money is there to make people do things more reliably if there is no other relationship between the buyer and the seller.

PeterStuer
One could argue GDP measures the degree of financialization of a society more than it measures the presence or absence of abundance.
ethanbond
In fact, Kuznets, the actual inventor of the modern GDP measure, said something similar:

“ Economic welfare cannot be adequately measured unless the personal distribution of income is known. And no income measurement undertakes to estimate the reverse side of income, that is, the intensity and unpleasantness of effort going into the earning of income. The welfare of a nation can, therefore, scarcely be inferred from a measurement of national income as defined above.”

sgu999
Abundance for whom? Our entire species could live comfortably with goodies the last Roman emperor wouldn't even have dreamed of, just with our current means of production. Yet we still have people unable to feed their children, dying of exhaustion at work, or killing each other for land.
shreyshnaccount
You've got the causality the other way round. We are in an apparent state of abundance partly because of exploitative practices
Joeri
In general I think you’re right. Global GDP per capita is at $13K which is probably not enough to give everyone a western lifestyle if inequalities were stripped away.

But specifically for food the situation is different. There’s so much food that much of it gets thrown away, instead of getting to the hungry in time. It’s an allocation and efficiency problem.

sgu999
I don't think GDP per capita is meaningful for discussing this, precisely because of the "allocation and efficiency problem" you pointed.

It's convenient to measure whether food production would theoretically be enough: take the quantities an adult needs, count the throughput of optimal production of proteins, carbs, etc. The cost, however, is meaningless, because most of what we spend on is transport, marketing, storage, packaging, shelving, etc.; that's what we count when we say that a household spends x% on food.

We can do the same for the volume of T-shirts, sqm of housing, etc.

alephnerd
Global median income is AT MOST $3,000 PPP (at 2011 dollars) [0].

This is roughly comparable to Cambodia or Myanmar in 2011.

The world is still extremely poor.

[0] - https://documents1.worldbank.org/curated/en/9360016358808857...

shreyshnaccount
(shit, not you, the gp)
sgu999
I'll reply anyway ;D

We didn't manage to have an apparent state of abundance with slavery, which is arguably the finest of our exploitive practices. We do manage to have a state of abundance because of the ubiquity of cheap sources of energy, oil in particular.

I'm deeply convinced that current exploitative practices are a consequence of inequalities, not resource scarcity.

hiddencost
Bankruptcy. Regulation. Trust busting. Saboteurs.
cosmojg
Eh, the strong market for virtual goods and the existence of significant virtual economies[1] like those found in Roblox and Second Life lead me to believe that markets can and will always exist, regardless of how far we progress into and beyond post-scarcity (assuming humanity persists in its current form and consumption preferences remain stable). I'm less certain about the capitalist model in particular, but it too may continue to exist, although maybe just for fun.

[1] https://en.wikipedia.org/wiki/Virtual_economy

meheleventyone
The economies in Roblox and Second Life didn't just appear; they were built intentionally to make money from the platforms. They are literally bringing our economy into those games on purpose. The same goes for other virtual worlds that don't offer trading for real money: they're just transplanting ‘money’ into their world without much thought beyond the analogue to the real world, often with hilarious consequences like runaway inflation.

It basically says more about the overriding culture the games are made within than anything about market economies.

I find myself thinking like a rat in a maze when it comes to markets as well. But I also remind myself that far more innovative thought goes into hustling the other rats in the maze than into working out how to get out.

hgomersall
Those virtual economies are typically designed to mimic the real economy. They're often even hampered in ways the real economy isn't, due to a misunderstanding of how the real system functions.
dotnet00
What comes to mind is that when the metaverse and NFT craze was at its peak, there were many unironic suggestions floating around that NFTs were an important aspect of the metaverse because they would stand in for exclusive ownership of digital goods, the way physical items are "exclusive" in quantity and quality. The vision was basically that such limitations are a feature in a world where replication is supposed to be effectively free.
sgu999
We are already in a post-scarcity world; we have been for decades.

The problem is inequality and the inefficiency of how we use resources, and it doesn't look like we'll change that soon, sadly. I'm more inclined to think that in a "virtual" economy we'll still have miserable workers emptying the chamber-pots of the lucky ones who are plugged into the new world.

logicchains
> We are already in a post-scarcity world, we have been for decades.

It's only a post-scarcity world if you define scarcity as the absence of enough resources to satisfy the bare minimum required for existence. People don't want to live at the subsistence level, and we're not even remotely close to having enough resources to satisfy everybody's wants.

troupo
> It's only a post-scarcity world if you define scarcity as the absence of enough resources to satisfy the bare minimum required for existence.

We have enough food to feed the entire world, and enough clothes to clothe the entire world. It could be a good start.

sgu999
> People don't want to live at the subsistence level, and we're not even remotely close to having enough resources to satisfy everybody's wants.

What people want is defined by society. Somehow some people want to be alone in a 2-ton EV, have a 200 sqm mansion for a family of 4, and retire at 40. Others are tremendously happy with much, much less resource-intensive lifestyles.

You would also need to define what is part of the subsistence level. I'd put it roughly at what my grandma wanted: food, shelter, healthcare, some leisure time.

A bit more than that is absolutely doable in a civilisation that went from one farmer feeding 1.3 people to one farmer feeding 60.

logicchains
>What people want is defined by society. Somehow some people want to be alone in a 2-ton EV, have a 200 sqm mansion for a family of 4, and retire at 40. Others are tremendously happy with much, much less resource-intensive lifestyles.

You just undermined your own assertion. If what people want were defined by society, then everybody would want the same things. But in fact, as you stated, they don't; some people want a lot, some people want little. There's no way you're going to stop people wanting nice things short of killing the 90% of the population that aren't ascetics, which you'll never be able to do (at least in the US) because that's also the part of the population that owns most of the guns.

sgu999
We don't have a fully globalised culture yet: what an average American wants isn't what an average Japanese person wants. Of course, even then, there are disparities and outliers, but we can't really overlook that if you "want" an SUV or an Apple Watch, that's quite likely because the people around you find it valuable.

I'd go a bit further and say that what matters is probably what people "need", not what they "want". Apparently we mostly "want" as much as we can get anyway, because that used to be helpful for not dying before being able to reproduce enough.

> because that's also the part of the population that owns most of the guns

That made me giggle

logicchains
>I'd go a bit further and say that what matters is probably what people "need", not what they "want".

Which makes you a self-righteous prick, thinking you have the right to determine what other people should and shouldn't have.

sgu999
Did I prescribe anything in that regard?

What does it make you, then, to think that you're entitled to anything you want regardless of resource limits and the collective good?