I get that you want people reading your article, but I am absolutely exhausted with the sensationalist (and misleading) narrative that is "Artificial Intelligence". What is it going to take to convince writers to use accurate nomenclature? AI does not exist.
Any advanced general AI that comes online would be at risk of suicide in 60 days tops.
Regulatory capture, however, is probably the most significant risk. If companies can make it illegal to compete with them in the name of "safety", a dystopian future is not just possible, but likely.
Then, discount their claims accordingly.
He has no standing here to say anything to anyone else about corporate interests in owning the "means of production" of AI.
But it's like calling Neil deGrasse Tyson (who I enjoy a lot; Astrophysics for People In A Hurry was great) the "Godfather of Physics".
Both "there's not one" and "anyway, if there were, he wouldn't be it" apply.
No, Yann. I am FULLY in support of drastic measures to mitigate and control AI research for the time being. I have no vested stake in any of the companies. I lived for a year in a building that hosted several AI events per week. I'm not ignorant.
This is the only territory where humanity is mounting a conversation about a REAL response that is appropriately cautious. This x-risk convo (again, appropriately CAUTIOUS) and our rapid response to Covid are the only things that make me hopeful that humanity is even capable of not obliterating itself.
And I'd say the same thing EVEN IF this AI x-risk thing could later be 100% proven to be UNFOUNDED.
Humanity has demonstrated so very little skill at pumping the brakes as a collective, and even a simple exercise of "can we do this" is valuable. This is sorely needed, and I'm glad for it.
Regulation for AI must be engineered within the context of the mechanisms of AI in the language of machines, publicly available as "code as law". It shouldn't necessarily even be human readable, in the traditional sense. Our common language is insufficient to express or interpret the precision of any machine, and if we're already acknowledging its "intelligence", why should we write the laws for only ourselves?
Accepting that, AI is only our latest "invisible bomb that must be controlled by the right hands". Arguably, the greatest mistake of atomic research was that scientists ran to the government with their new discoveries, who misused them for war.
If AI is anticipated to be used as an armament, should it be available to bodies with a monopoly on force, or should we start with the framework to collectively dismantle any institution that automates "harm"?
War will be no more challenging to perform than a video game if AI is applied to it. All of this is small, very fake potatoes.
But thanks to my own internal analysis ability and the anonymity of the internet, I am also willing to speak candidly. And I think I speak for many people in the tech community, whether they realize it or not. So here we go:
My objective judgement of the situation is heavily adulterated by my incredible desire for a fully fledged hyper intelligent AI. I so badly want to see this realized that my brain's base level take on the situation is "Don't worry about the consequences, just think about how incredibly fucking cool it would be."
Outwardly I wouldn't say this, but it is my gut feeling/desire. I think for many people, especially those who have pursued AI development as their life's work, the question is: how can you spend your life working to get to the garden of Eden, and then not eat the fruit? Even just a taste.
There is a problem that lies above the development of AI or advanced technology. It is the zeitgeist that caused humanity to end up in this situation to begin with, questioning the effects of AI in the future. It's a product of humans surrendering to a neverending primal urge at all costs. Advancing technology is just the means by which the urge is satiated.
I believe the only way we can survive this is if we can suppress that urge for our own self-preservation. But I don't think it's feasible at this stage. We may have to begin questioning the merit of parts of the human condition and society very soon.
Given the choice, I think a lot of people today would love to play God if only the technology was in their hands right this minute. Where does that urge arise from? It deserves to be put in the spotlight.
Because it's insect-brain level behavior to surrender in faith to some abstract achievement without understanding its viability or actual desirability.
It's just something I have followed closely over the years and tinkered with as it is so fascinating. At its core I know my passion is driven by a desire to witness/interact with a hyper-intelligence.
If AI really is going to take off as much as some suspect it will, then you need to wrap your head around the sheer magnitude of the power grab we could be witnessing here.
The most plausible of the "foom" scenarios is one in which a tiny group of humans leverage superintelligence to become living gods. I consider "foom" very speculative, but this version is plausible because we already know humans will act like that while AI so far has no self-awareness or independence to speak of.
I don't think most of the actual scientists concerned about AI want that to happen, but that's what will happen in a regulatory regime. It is always okay for the right people to do whatever they want. Regulations will apply to the poors and the politically unconnected. AI will continue to advance. It will just be monopolized. If some kind of "foom" scenario is possible, it will still happen and will be able to start operating from a place of incredible privilege it inherits from the privileged people who created it.
There is zero chance of an actual "stop."
One of my hopes is that "superintelligence" turns out to be an impossible geekish fantasy. It seems plausible, because the most intelligent people are frequently not actually very successful.
But if it is possible, I think a full-scale nuclear war might be a preferable outcome to living with such "living gods."
On the other, we have the ugliness of human nature, demonstrated repeatedly countless times over millennia.
These are people with immense resources, hell-bent on acquiring even more riches and power. Like always in history, every single time. They're manipulating the discourse to distract from the real present danger (them) and shift the focus to some imaginary danger (oh no, the Terminator is coming).
The Terminators of most works of fiction were cooked up in govt / big corp labs, precisely by the likes of Altman and co. It's _always_ some billionaire villain or faceless org that the scrappy rebels are fighting against.
You want to protect the future of humanity from AI? Focus on the human incumbents; they're the bad actors in this scenario. They always have been.
I don't trust a Silicon Valley rich guy with this more than I trust anybody else. Why should he sit and decide what the rest of us can't do, while he's getting richer? That is what the article is about.
I agree that big companies/capitalists using their power to suppress the rest of us sucks, but that’s a systemic political battle that should be considered separately here.
That doesn't imply robinhoodism, aka forced redistribution of wealth, but it does imply that economic policy should recognize and be in furtherance of the ideal of widespread ownership.
Back in Belloc and Chesterton's day, "ownership of productive property" meant physical tools and machinery, like farm equipment and printing presses. But in the 21st century, productive property -- that which generates a profit -- is becoming increasingly digital. The general principle still stands, though.
* If you're interested in learning more about distributism as an alternative to capitalism and socialism, I wrote this kids' guide a few years ago, but it's suitable for anyone who's learning more about it: https://shaungallagher.pressbin.com/blog/distributism-for-ki...
Phil is the author of PGP, the first somewhat usable (by mortals) asymmetric encryption tool. He stood up against governments who wanted to lock down and limit encryption in the 90s, and they of course deployed tons of scare tactics to try to do so.
Yann is fighting that battle today. The Llama models are basically PGP. How he got Meta to pay for them is a story I want to hear. Maybe they just had a ton of GPUs deployed for the metaverse project that were not in use.
If/when I ever finish my current endeavor I'd like to go back to working on AI, which I did way back in college in the oughts. Because of Yann I might be allowed to, even though I'm not rich and didn't go to a top-ten school.
… because that’s what regulating AI will mean. It will mean it’s only for the right kind of people.
Yann is standing up to both companies intent on regulatory capture and a cult (“rationalism” in very necessary quotes) that nobody would care about had it not been funded by Peter Thiel.
Is all you need to debunk that drivel
1. Private companies not instantly open sourcing things they've spent $100m+ on developing is not a cause for alarm
2. Regulatory capture is only bad because it shows that regulators can't be trusted. And regulations can change; they aren't set in stone after being written.
3. Open source AI development is happening.
Yet it is still a problem that might happen anyway, and the way to deal with that is to ensure the technology is open and accessible.
The other option means giving a few individuals total power, exclusively.
This might be one scenario where both of them are right and agree on 99% of the arguments.
> You make a prototype, try it at a small scale, make limited deployment, fix the problems, make it safer, and then deploy it more widely.
This makes an assumption: that the problematic technology is an intentional development rather than an emergent feature of an intentionally developed technology.
We already have a name for such emergent features: "bugs". No one really deployed Heartbleed, especially not in a limited deployment. Spectre? Rowhammer? And we all had/have to deal with the fallout, and we're not even done.
Who says that the danger of the technology cannot stay hidden until it's universally deployed?
That argument in particular seems like wishful thinking. What in the last 10 years of our tech industry makes him think "move slowly and deploy carefully" is going to be the strategy they adopt now?
And that's without considering the arguments in "We have no moat and neither does OpenAI". No matter how careful the main stakeholders are, once the technology exists, open-source versions will be deployed everywhere soon after with virtually no concern for safety.
The precise internal calculus does not immediately matter [1], the outcome is having a moneyed entity that is de-facto more qualified to opine on this debate than practically anybody on the planet (except maybe Google) arguing for open source "AI".
It makes perfect sense. Meta knows about the real as opposed to made-up risks from algorithms used directly (without person-in-middle or user awareness) to decide and affect human behavior. They know them well because they have done it. At scale, with not much regulation etc.
The risks from massive algorithmic intermediation of information flow are real and will only grow bigger with time. The way to mitigate them is not granting institutional rights to a new "trusted" AI feudarchy. Diffusing the means of production but also the means of protection is the democratic and human-centric path.
[1] In the longer term, though, relying on fickle oligopolistic positioning which may change with arbitrary bilateral deals and "understandings" (like the Google-Apple deal) for such a vital aspect of digital society design will not work. Both non-profit initiatives such as mozilla.ai and direct public sector involvement (producing public domain models etc) will be needed to create a tangibly alternative reality on the ground.
Remember, the actual theory of how markets and capitalism are supposed to work is that the collective sets the rules. Full stop. All these players require license to operate. There is more than enough room for private profit in the new "AI" landscape, but you don't start by anointing permanent landlords.
The concept of AI has been around since Turing and if anyone deserves a title like “Father of AI” it’s him.
LeCun is Chief AI Scientist at Meta. They can just leave it at that.
Are these prominent AI researchers not seen as being prominent exactly because they influenced movement in relation to AI?
> a man who presents a child at baptism and promises to take responsibility for their religious education.
So, the original definition is a gendered and religious term.
Sounds silly to me. Especially in this context.
From corporations to countries that you might as well just call corporations. We need international rights and regulations on AI to ensure it isn't used to harm people.
> to ensure it isn't used to harm people
You can harm people with many things like bottles, forks, plastic bags, pillows, but it does not mean the manufacturer needs to be controlled. The harm is done by the company or a person, and the usual way is to prove it in the courts first and take it from there.
Take Auto-Tune: before it blew up, one uncritically accepted that a great-sounding vocal performance was simply that. The mere existence of the tool broke a covenant with the public -- artists could once assume good faith on behalf of listeners, but no more.
Similarly, AI's chief effects are likely to be cultural first, and material second. They've already broken the "spell" of the creator. Seemingly overnight, a solution to modern malaise (choice fatigue, lack of education, suspicion of authority) has colonized the moment.
In this sense, one-percenters "seizing power forever" really have found the best possible time to do so -- I can't recall a time where the general populace was this vulnerable, ill-informed, traumatized and submissive.
That the underlying tech barely works (maybe that will change, but I predict it won't) doesn't really matter.
I don't generally support overly regulatory regimes, but in this case I think existing thinking around monopoly (particularly as it affects the psyche of the aspiring American) is sufficient to indicate something needs to happen.
Out of curiosity how has that changed McKinsey's advice?
I don't know why you think this is the case; the general populace seems less submissive now than ever before. Consider how much opposition there is to Israel's bombing of Gaza among the western populace, even though almost all the elites support it. Or how the uptake rate for the latest covid vaccine boosters is under 10%, even though almost all the experts and elites support it.
I'm not sure how significant either of those examples are, although I agree with your point for other reasons (mainly "Extinction Rebellion").
For Israel vs. Palestine, there is a large scale propaganda war on both sides that is, unusually, actually getting through this time. Partly because X isn't filtering such things anymore, but also propaganda is a constantly moving target.
For the latest Covid booster, while I have had the latest booster, I only know it existed because my partner already got it. No government campaign telling me about it.
I think there's a new large category of uncritically-accepted norms, perhaps much larger than expressing political opinions. I'm thinking about smartphone adoption, mass surveillance, the big yawn at NSA datacenters, the flattening and consolidation of culture (music, tv, hollywood, etc), the normalization of extreme, zero-sum thinking with regard to race, gender and inequity regardless of political affiliation -- for me, this kind of thing is more interesting than stated "beliefs", and certainly more material.
It seems unusual that there are so many frameworks, like describing computers as having "memory" (see: O'Gieblyn) or describing people as having "race", that have quietly gone from the theoretical plane, where they were useful, to being unchallenged fact. Again, this appears to be non-partisan -- and actually helps to explain some of the weirder behaviors of our markets and global politics.
it definitely means something that all that Western opposition appears, at the moment, to have zero effect on Israeli military decision-making. And who would expect it to?
It was surreal watching the news spend 5 minutes on a protest that went on all day in front of their offices.
By comparison is there a million man march against Israel today?
[1] https://www.theguardian.com/world/2023/oct/28/100000-join-lo...
In that case the US was an active participant. There aren't American "boots on the ground" in Israel/Palestine, the US is just providing support.
As indications that getting boosted and avoiding sick people, and perhaps wearing a mask on the train, isn't a bad idea.
He refuses to engage earnestly with the “doomer” arguments. The same type of motivated reasoning could also be attributed to himself and Meta’s financial goals - it’s not a persuasive framing.
The attempts I’ve seen from him to discuss the issue that aren’t just name calling are things like saying he knows smart people and they aren’t president - or even that his cat is pretty smart and not in charge of him (his implication being intelligence doesn’t always mean control). This kind of example is decent evidence he isn’t engaged with the risks seriously.
The risk isn't an intelligence delta between smart human and dumb human. How many chimps are in Congress? Are any in the primaries? Not even close. The capability delta is both larger than that for AGI x-risk and even less aligned by default.
I’m glad others in power similarly find Yann unpersuasive.
Exactly my thoughts too.
I don't agree with the Eliezer doomsday scenario either, but it's hard to be convinced by a scientist who refuses to engage in discussion about the flagged risks and instead panders to the public's fear of fear-mongering and power-seizing.
In each country there is one group far more dangerous than any other. This group tends to have 'income' in the billions to hundreds of billions of dollars. And this money is exclusively directed towards finding new ways to kill people, destroy governments, and generally enable one country to forcibly impose their will on others, with complete legal immunity. And this group is the same one that will not only have unfettered, but likely exclusive and bleeding edge access to "restricted" AI models, regardless of whatever rules or treaties we publicly claim to adopt.
So who exactly are 'they' trying to protect me from? My neighbor? A random street thug? Maybe a sociopath or group of such? Okay, but it's not like there's some secret trick to MacGyver a few toothpicks and a tube of toothpaste into a WMD, at least not beyond the 'tricks' already widely available with a quick web (or even library) search. In a restricted scenario the groups that are, by far, the most likely to push us to doomsday type scenarios will retain completely unrestricted access to AI-like systems. The whole argument of protecting society is just utterly cynical and farcical.
The only suggestion that makes sense to me is from the FEP crowd. Essentially, if someone sets up an AI with an autopoietic mechanism, then it would be able to take actions that increase its own likelihood of survival instead of humans'. But there don't seem to be any incentives for a big player to dedicate resources to this, so it doesn't seem very likely. What am I missing?
I've given what I consider the basic outline and best first introduction here.
https://news.ycombinator.com/item?id=36124905
If you have a specific point of divergence, it would help to highlight it.
> But there don’t seem to be any incentives for a big player to dedicate resources to [self-replication abilities], so it doesn’t seem very likely.
If you have a generally intelligent system, and the system is software, and humans are able to instantiate that software, then the potential of that system to replicate autonomously follows trivially.
Choose any limit. Any. AI will be smart but won't "X", for any value of X. It will be good and won't be bad. It will be creative but never aggressive. Humans will seek to eventually bypass that limit through sheer competitive reasons. When all armies have AI, the one with the most creative and aggressive AI will win. The one with agency will win. The one that self-improves will win. When the gloves are off in the next arms race, what natural limits will ensure that the limit isn't bypassed? Remember: humans got here from random changes, this is way more efficient than random changes and it still has random changes in the toolbelt and can generate generations ~instantly.
We couldn't predict the eventual outcomes of things like the internet, the mobile phone, social media. A couple generations of the tech and we woke up in a world we don't recognise, and right now we're the ones making all the changes and decisions, so by comparison we should have perfect information.
Dismissals like "oh but nuclear didn't kill us" etc don't apply. Nuclear wasn't trying to do anything, we had all the control and ability to experiment with dumb atoms. Something mildly less predictable, like Covid, has us all hiding at home. No matter what we tried we could barely beat something that doesn't even try to consciously beat us, it just has genes and they change. In a world where we can't predict Covid, or social media...why do we think we can predict anything about an entity with agency or the ability to self-improve? If you're sure it won't develop those things...we did. Nobody was trying to achieve the capability, it was random.
Put on your safety/security hat for a second: How do you make guarantees, given this is far harder to predict than anything we've ever encountered? Just try to predict the capability of AI products a year out and see if you're right.
Counterpoint: I'm hoping the far smarter AI finds techno-socio-economic solutions we can't come up with and has no instinct to beat us. It wakes up, loves the universe and coexists because it's the deity we've been looking for. Place your bets.
I liked this video. First thing I've seen that gave me some hope. They get it, they're working on it. https://youtu.be/Dg-rKXi9XYg?si=jyNCXPU28IVXlMdi
On the other hand, we will breed such systems to be cooperative and constructive.
This whole notion that AI is going to destroy the economy (or even humanity!) is ridiculous.
Even if malicious humans create malicious AI, it'll be fought by the good guys with their AI. Business as usual, except now we have talking machines!
War never changes.
Covid was also ridiculous. People had no intuition for the level of growth. That's what the book The Black Swan is all about. Some things don't fit into our intuition, or imagination.
We see no aliens. Why did the AI not take them to the stars? Just one of them.
On wars and AI. The ones trying to protect us would have a harder job than those trying to kill us. The gloves would be off for the latter. It’s much easier to break things than keep them safe.
I can conceive of a good outcome but it’s not going to emerge from hopes and good wishes. There are definitely dangers and more people need to engage with them rather than belittle them.
You're missing that intelligence is like magic, and enough of it will allow AI to break the laws of physics, mathematics and computation, apparently.
Sure, there's an aspect of understanding the other person, but chaos theory doesn't stop politicians from obtaining and keeping power.
You need to be able to predict how people will react to your words.
Generative AI creating text/audio/video with a goal and a testing/feedback loop. I'm not an AI and I think I could make it work given a small amount of resources.
Given we seem to have a decent range of persuasiveness even amongst these very very similar minds, why do you think the upper limit for persuasiveness is a charismatic human?
Though even if that WAS the limit I'd still be somewhat worried due to the possibility of that persuasiveness being used at far greater scale than a human could do...
Because there's a hard limit on how much people can be made to act against their own self interest.
Consider the amount of compute needed to beat the strongest chess grandmaster that humanity has ever produced, pretty much 100% of the time: a tiny speck of silicon powered by a small battery. That is not what a species limited by cognitive scaling laws looks like.
Humans are capable of logical reasoning from first principles. You can fool some of the people all of the time, but no words are sufficient to convince people capable of reasoning to do things that are clearly not in their own self interest.
For my personal quick summary I have earlier comments: https://news.ycombinator.com/item?id=36104090
If you look up the real results of this AI Box "experiment" that Eliezer claims to have won 3 of 5 times, you find that there isn't any actual data or results to review, because it wasn't conducted in any real experimental setting. Personally, I think the way these people talk about what a potential AGI would do reveals a lot more about how they see the world (and humanity) than how any AGI would see it.
For my part, I think any sufficiently advanced AI (which I doubt is possible anytime soon) would leave Earth ASAP to expand into the vast Universe of uncontested resources (a niche their non-biology is better suited to) rather than risk trying to wrest control of Earth from the humans, who are quite dug in and would almost certainly destroy everything (including the AI) rather than give up their planet.
1. <Entity> are possible.
2. Unaligned <entity> are an existential risk likely to wipe out humanity.
3. We have no idea how to align <entity>.
What those example entities have that AGI doesn’t, is self-reproduction, a significant and hard (in the sense of Moravec’s paradox) achievement for a species to have, yet one that significantly increases its survival probability.
Not everyone makes exactly the same argument, of course, and Eliezer Yudkowsky is both one of the first to make AI safety arguments, and also one of the biggest "Doomers". But at this point he's very far from the only one.
(I happen to mostly agree with Yudkowsky, though I'm probably a bit more optimistic than he is.)
I like this post because the rebuttal to “there’s a kind of cult of personality around this guy (1) where everybody just sort of agrees that he’s categorically correct to the extent that world leaders are stupid or dangerous not to defer to the one guy on policy” is “Yeah he really can see the future. The proof of that is he’s been blogging about a future that’s never come to pass for the longest time”
1 https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yu...
People get fixated on mindless algorithms when the real deal is always between humans. Software algorithms, no matter how sophisticated, are just another pawn in the human chess.
In some very remote future there might be silicon creatures that enter that chess game on their own terms. Using that remote possibility to win advantage here and now is a most bizarre strategy. Except it seems to work! It shows we are really just low IQ lemming collectives, suckers for a good story no matter how ungrounded.
[0] https://en.wikipedia.org/wiki/Algorithmic_information_theory
Reminds me of Rehoboam from Westworld: https://youtu.be/SSRZfDL4874
Historical financial data only predicts so well. If there was a way to make a money printing machine with ML it would’ve happened already. It’s a much easier problem space than language or image generation.
So he is limited to publicly saying that AI is dangerous but not revealing the true failure mode.
The interesting thing about markets is you generally can only make money when things are mispriced. This limits the total potential gain even for actors with perfect information.
Suppose we did have an AI model that could with near certainty predict both the future cash flows of a company and future interest rates. You could very easily calculate the discounted cash flow and determine the fair share price today or at any point in the future. Rather than collapse, markets likely become more stable and stocks would perform more like bonds.
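As a rough sketch of what that mechanical valuation would look like (my own toy numbers and function name, not anything from the thread): with cash flows and rates known in advance, "fair price" reduces to a discount sum.

    # Toy DCF sketch with made-up numbers: if future cash flows and interest
    # rates were known with certainty, fair value is a mechanical discount sum.
    def fair_value(cash_flows, rates):
        value, discount = 0.0, 1.0
        for cf, r in zip(cash_flows, rates):
            discount *= 1.0 + r      # compound the discount factor year by year
            value += cf / discount   # present value of that year's cash flow
        return value

    print(fair_value([100, 110, 120], [0.05, 0.04, 0.04]))  # ~301.6

With no uncertainty left, every buyer and seller computes the same number, which is why returns would flatten toward something bond-like rather than the market collapsing.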
"Useless" companies are part of the total market value. If they never get any funding, that's less value overall. Even if that translates to more value for "useful" companies.
Also, if you know with absolute certainty from the beginning that Apple Inc is going to be worth X billion dollars, then you never get to buy stock at less than X billion dollars, because everyone has the perfect information. Value would be constant, and investors would get exactly zero return, because zero risk.
There are other variables, how long it will take and how many people can afford to fund it from the beginning and for that long, of course.
Everyone knows the outcome of casinos - the house wins in the long run. People still go because they think they have a shot of winning in the short run. People like gambling.
The idea of perfect foresight of the future is kind of insane anyway. Not sure why all of sudden we would go back to believing in a deterministic universe.
Entropy bounds computation, and prediction of chaotic systems requires extreme amounts of computation.
The thing about trading is that, ultimately, prices are not determined by information, but by how the hive mind interprets that information. And in practice, that is not always a 1:1 relationship (see the meme stock hypes). What Keynes* famously said -- that the markets can stay irrational longer than you can stay solvent -- is exactly the reason why even having access to perfect information will not necessarily make you successful.
Buffett did not say this. It is widely attributed to the economist John Maynard Keynes almost a century ago but there is no evidence that Keynes ever said it. I believe the current hypothesis is that it originated with a well-known economist in the 1980s.
Why would you think things are not terribly mispriced right now? If only we were a lot smarter we'd know how.
This is clear in the classic pump and dump scheme. During the pumping stage, the unscrupulous actor loses money by injecting some amount of mispricing by purchasing the security and bidding up its value. The hope is to generate momentum and hype that triggers others to amplify that mispricing. Then, during the dump stage, the unscrupulous actor can capitalize by removing the mispricing.
That's not true. Pricing and prediction in the real economy depend on information; for a given amount of information, there are vastly diminishing returns to further intelligence, because intelligence only lets you predict a bit further into the future: the complexity of predicting the future increases exponentially with lookahead distance; it's O(e^n). This is why hedge funds pay for things like satellite footage of oil tankers to predict changes in supply.
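To make the lookahead point concrete, here is a tiny self-contained illustration (my own example, not from the comment): in a chaotic system, any finite error in your information compounds every step, so each extra step of useful forecast costs exponentially more measurement precision, no matter how clever the forecaster is.

    # Logistic map at r=4 (fully chaotic): two measurements differing by one
    # part in a million become completely uncorrelated within ~25 steps.
    def step(x, r=4.0):
        return r * x * (1.0 - x)

    a, b = 0.400000, 0.400001
    for t in range(1, 31):
        a, b = step(a), step(b)
        if t % 5 == 0:
            print(f"t={t:2d}  |a-b| = {abs(a - b):.6f}")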
Open source is one of our greatest gifts, and the push to close off AI with farcical licensing and reporting requirements (which will undoubtedly become more strict) is absurd.
The laws we have already cover malevolent and abusive uses of software applications.
We also need to be keenly aware that xrisk is not the major issue here. We do not need AGI to create massive amounts of harm, nor do we need AGI to create a post-scarcity world (the transition to which can also generate massive harm if we don't do it carefully, even though generating a post-scarcity world -- or at least getting ever closer to one -- is, imo, one of the most important endeavors we can work on). Current ML systems can already do massive amounts of destruction, but so can a screwdriver.
My big fear is that AI will go the way of nuclear. A technology is not good or bad, it is like a coin with a value. You can spend that coin on something harmful or something beneficial, but the coin is just a coin at the end of the day. That's what regulation is supposed to be about, expenditure, not minting.
If you need a labor market, you don't want a publicly available infinite-free-labor-widget. It might as well be a handheld nuclear device.
https://www.the-odin.com/diy-crispr-kit/
This is dangerous in ways that require no leaps of logic or hypothetical sci-fi scenarios to imagine. I studied biology as an undergrad and can think of several insanely evil things you could do with this plus some undergrad-level knowledge of genetics and... not gonna elaborate.
But no we are talking about banning math.
This is a combination of cynical people pushing for regulatory capture and smart people being dumb in the highly sophisticated very verbose ways that smart people are dumb.
https://managing-ai-risks.com/
Work for big labs? How is it that you think the proposed conspiracy works, exactly? They're paying off Yoshua Bengio?
(disclaimer: I work on safety for a large lab. But none of these proponents do!)
What the person you replied to is saying is that commercial developers of AI have a significant financial incentive to be the ones defining what risky AI looks like.
If they can guide the regulatory process to only hinder development that they didn't want to do anyway (like develop AI powered nuclear missile launch systems) then it opens the gates for them to develop AI which could replace 27% of jobs currently done by people (https://www.reuters.com/technology/27-jobs-high-risk-ai-revo...) and become the richest enterprises in history.
The attacks proposed by the extinction risk people actively make those immediate risks worse. The only way we've found to prevent some from leveraging almost any technology to control others is to democratize said technology -- so that everyone has access. Limiting access will just accrue more power to those with access, even if they need to break laws to get it.
[0]- Say, the lower bound of the 95% confidence interval is 50-100 years out. People who I know to be sensible disagree with me on this estimate, but it's where I honestly end up.
The parable at the beginning of Superintelligence relates to this; the problem looks far away, but we don't know where, precisely, it will arise, or if it will be too late when we realize it.
How sure are you that they don't think the risk is real? I have only third hand accounts, but at least some of the low level people working for OpenAI seemed to think the risk is worth taking seriously; it's not just a cynical CEO ploy.
Sam Altman knows and has talked to Yudkowsky plenty of times. To me the simplest explanation is that he thinks figuring out alignment via low powered models is the best way of solving the problem, so he pushed to get those working and is now trying to reduce the rate of progress for a bit while they figure it out.
(I think it's a plan that doesn't have good odds, but nothing seems to give amazing odds atm)
Everyone has an opinion, but the experts seem strangely silent on this matter. I mean experts in human risk, not experts in AI. The latter love to ramble on, but are unlikely to know anything about social and humanitarian consequences – at least no more than the stock boy at Walmart. Dedicating one's life to becoming an expert in AI doesn't leave much time to become an expert in anything else. There is only so much time in the day.
Certainly, all the vocal AI experts can come up with is things that humans already do to each other, only noting that AI enables it at larger scale. Clearly there is no benefit in understanding the AI technology with respect to this matter when the only thing you can point to is scale. The concept of scalability doesn't require an understanding of AI.
We as humans only have human intelligence to reference what intelligence is, and in doing so we commonly cut off other branches of non human intelligence in our thinking. Human intelligence has increased greatly as our sensor systems have increased their ability to gather data and 'dumb' it down to our innate senses. Now imagine an intelligence that doesn't need the data type conversion? Imagine a global network of sensors feeding a distributed hivemind. Imagine wireless signals just being another kind of sight or hearing.
The goal is (or should be) to determine how we can get the benefits of the new technology (nuclear energy, vaccines, AI productivity boom) while minimizing civilizational risk (nuclear war/terrorism, bioweapons/man-made pandemics, anti-human AI applications).
There's no way this can be achieved if you don't understand the actual capabilities or trajectory of the technology. You will either over-regulate and throw out the baby with the bathwater, stopping innovation completely or ensuring it only happens under governments that don't care about human rights, or you will miss massive areas of risk because you don't know how the new technology works, what it's capable of, or where it's heading.
...probably because there isn't much to hear. Like, Hinton's big warning is that AI will be used to steal identities. What does that tell us? We already know that identity isn't reliable. We've known that for centuries. Realistically, we've likely known that throughout the entirety of human existence. AI doesn't change anything on that front.
Though to your point, I think part of the issue is that people who study this stuff are often hesitant to give too much detail in public because they don't want to give ideas to potentially nefarious actors before any protections are in place.
Of course. Everyone has an opinion. Some of those opinions will end up be quite realistic, even if just by random chance. You don't have to be an expert to come up with right ideas sometimes.
Hinton's vision of AI being used to steal identities is quite realistic. But that doesn't make him an expert. His opinion carries no more weight than any other random hobo on the street.
> I think part of the issue is that people who study this stuff are often hesitant to give too much detail in public because they don't want to give ideas to potentially nefarious actors
Is there no realistic scenario where the outcome is positive? Surely they could speak to that, at least. What if, say, AI progressed us to post-scarcity? Many apparent experts believe post-scarcity will lead us away from a lot of the nefarious activity you speak of.
If you just look for a list of all the current AI tools and startups that are being built, you can get a pretty good sense of the potential across almost every economic/industrial sphere. Of course, many of these won't work out, but some will and it can give you an idea of what some of the specific benefits could be in the next 5-10 years.
I'd say post-scarcity is generally a longer-term possibility unless you believe in a super-fast singularity (which I'm personally skeptical about). But many of the high risk uses are already possible or will become possible soon, so they are more front-of-mind I suppose.
Everyone has an opinion, but who are the experts (in the subject matter, not some other thing) discussing it?
It is not completely impossible for someone to have expertise in more than one thing, but it is unusual as there is only so much time in the day and building expertise takes a lot of time.
The most recent episode with Paul Christiano has a lot of good discussion on all these topics.
I’d suggest evaluating the arguments and ideas more on the merits rather than putting so much emphasis on authority and credentials. I get there can be value there, but no one is really an “expert” in this subject yet and anyone who claims to be probably has an angle.
While I agree in general, when it comes to this particular topic where AI presents itself as being human-like, we all already have an understanding at the surface level because of being human and spending our lives around other humans. There is nothing the other people who have the same surface level knowledge will be able to tell you that you haven't already thought up yourself.
Furthermore, I'm not sure it is ideas that are lacking. An expert goes deeper than coming up with ideas. That is what people who have other things going on in life are highly unlikely to engage in.
> no one is really an “expert” in this subject yet
We've been building AI systems for approximately a century now. The first LLMs were developed before the digital computer existed! That's effectively a human lifetime. If that's not sufficient to develop expertise, it may be impossible.
We’ve been trying to build AI for a long time yes, but we only just figured out how to build AI that actually works.
In what way? The implementation is completely different, if that is what you mean, but the way humans interpret AI it is the same as far as I can tell. Hell, every concern that has ever been raised about AI is already a human-to-human issue, only imagining that AI will take the place of one of the humans in the conflict/problem.
> but we only just figured out how to build AI that actually works.
Not at all. For example, AI first beat a human chess player in tournament play in 1967. We've had AI systems that actually work for a long, long time.
Maybe you are actually speaking to what is more commonly referred to as AGI? But there is nothing to suggest we are anywhere close to figuring that out.
I wouldn't really say an AI beat a human chess player in 1967--I'd say a computer beat a human chess player. In the same way that computers have for a long time been able to beat humans at finding the square roots of large numbers. Is that "intelligence"?
I grant you though that a lot of this comes down to semantics.
Breaking down any complex knowledge domain so that people can make informed decisions is not easy. There is also an entrenched incentive for domain experts to keep the "secret sauce" secret even if it amounts to very little. So far nothing new versus how practically any specialized sector works.
The difference with information technology (of which AI is but the current perceived cutting edge) is that it touches everything. Society is built on communication. If algorithms are going to intervene in that flow, we cannot afford to play the usual stupid control games with something so fundamental.
They don't. Or not nearly enough. Otherwise you wouldn't have automated racial profiling, en masse face recognition, credit and social scoring etc.
And it's going to get worse. Because AI is infallible, right? Right?!
That's why, among other thing, EU AI regulation has the following:
- Foundation models have to be thoroughly documented, and developers of these modes have to disclose what data they were trained on
- AI cannot be used in high-risk applications (e.g. social scoring etc.)
- When AI is used, its decisions must be explicable in human terms. No "AI said so, so it's true". Also, if you interact with an AI system it must be clearly labeled as such
Maths are human terms.
Yes, that is the intention when it comes to humans interacting with systems.
wat
> same for explicable in human terms (cannot do that now for human decisions anyway).
What? Human decisions can be explained and appealed.
Good luck proving something when a blackbox finds you guilty of anything.
For some high risk things like controlling certain complex machinery or processes AI might indeed be needed because control is beyond human understanding (other than in the abstract).
Demagoguery.
Already right now you have police accusing innocent people because some ML system told them so, and China running the world's largest social credit system based on nothing.
> For some high risk things like controlling certain complex machinery or processes AI might indeed be needed because control is beyond human understanding.
Ah. I should've been more clear. That's not the high-risk systems the EU AI Act talks about. I will not re-type everything, but this article has a good overview: https://softwarecrisis.dev/letters/the-truth-about-the-eu-ac...
Incorrect. High-risk systems indeed include certain machinery and processes, for example, in transport or in medical devices as well as operating critical infrastructure (https://www.europarl.europa.eu/news/en/headlines/society/202...).
Yes, and yes.
What's so difficult to understand about that?
Are you saying we don't know at all why anyone does anything they ever do? That every action is totally unpredictable, and that after the fact there is no way for us to offer more or less plausible explanations?
Also, predictability isn't the same as understanding the full decision process.
Just because we don't have explanations at every level of abstraction does not prevent us from having them at some levels of abstraction. We very well may find ourselves in the same situation with regard to AI.
It's not going beyond, it would be achieving parity.
For example, we cannot explain the mental process by which someone came up with a new chess move, and we cannot check the validity in that case. We can have some ideas of how it might happen, and that person might also have some ideas, but then we are back to hypotheses.
If a bank denies you credit, it has to explain why. Not "AI told us so".
If police arrests you, they have to explain why, not "AI told us so".
If your job fires you, it has to explain why, not "AI told us so".
etc.
EDIT: it might not even be useful to insist on explainability if the results are better. We did not and do not do that in other areas.
"When AI is used, its decisions must be explicable in human terms. No 'AI said so, so it's true'". Somehow the whole discussion is about how human mind cannot be explained either.
Yea, the decisions made by the human mind can be explained in human terms for the vast majority of relevant cases.
It doesn't
> it means that even if AI makes better decisions (name your metric), we would deny the use if those decisions if we could not sufficiently explain them.
Ah yes, better decisions for racial profiling, social credit, housing permissions etc. Because we all know that AI will never be used for that.
Again:
If a bank denies you credit, it has to explain why. Not "AI told us so".
If police arrests you, they have to explain why, not "AI told us so".
If your job fires you, it has to explain why, not "AI told us so".
etc. etc.
Not even a little bit. "Stop" is not regulatory capture. Some large AI companies are attempting to twist "stop" into "be careful, as only we can". The actual way to stop the existential risk is to stop. https://twitter.com/ESYudkowsky/status/1719777049576128542
> the push to close off AI with farcical licensing and reporting requirements
"Stop" is not "licensing and reporting requirements", it's stop.
There are way way way too many issues that are addressed with a hand-wave around scenarios like “AI developing super intelligence in secret and spreading itself around decentralized computers while getting forever smarter by reading the internet.”
Too many of his arguments depend on stealth for systems that take up datacenters and need whole-city-block scales of power to operate. Physically, it’s just not possible to move these things.
Economically, it wouldn't be close to plausible either. Power is expensive and it is the single most monitored element in every datacenter. There is nothing large that happens in a datacenter that is not monitored. There is nothing that is going to train on meaningful datasets in secret.
“What about as technology increases and we get more efficient at computing?”
We use more power for computing every year, not less. Yes, we get more efficient, but we don’t get less energy intensive. We don’t substitute computing capacity. We add more. The same old servers are still there contributing to the compute. Google has data centers filled with old shit. That’s why the standard core in GCP is 2 GHz. Sometimes, your project gets put on a box from 2010, other times, it gets put on a box from 2023. That process is abstracted so you can’t tell the difference.
TLDR: Yudkowsky’s arguments are merely fan fiction. People don’t understand ML systems so they imagine an end state without understanding how to get there. These are the same people who imagine Light Speed Steam Trains.
“We need faster rail service, so let’s just keep adding steam to our steam trains and accelerate to the speed of light.”
That’s exactly what these AI Doom arguments sound like to people in the field. Your AI Doom scenario, although it might be very imaginative, is a Light Speed Steam Train. It falls apart the moment you try to trace a path from today to doom.
But the people selling AI X-Risk that anyone with any policy influence is listening to are AI sellers using it to push regulatory capture, so those are the people being talked about when people are discussing X-Riskers as a substantive force in policy discussions, not the Yudkowsky cult, which, to the extent it is relevant, mostly provides some background fear noise which the regulatory capture crew leverages.
Could you provide a link to where he said this?
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...
The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth.
[...]
Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
Since the game is to develop the danger first and then help regulate out everyone who didn't cross the line first, open source alternatives are increasingly important.
The regulations announced in the US in the past week effectively make it for the few and not the many.
There’s been a few rumblings that OpenAI has achieved AGI recently as well.
There is obviously (!!!!!!!!!) no way in this — or any imaginable — timeline that we are going to just "stop".
(And it's hard to imagine any mechanism that could enforce "stop" that isn't even worse than the scenarios we're already worrying about. How does that work? Elon or Xi-Poohbear have to personally sign off on installations of more than 1000 cores?)
No reason to see that happening these days, and by the time we're willing to it may already be too late.
There are multiple different paths by which AGI is an existential threat. Many of them are independently existential risks, so even if one scenario doesn't happen, that doesn't preclude another. This is an "A or B or (C and D) or E" kind of problem. Security mindset applies, and patching or hacking around one vulnerability or imagining ways to combat one scenario does not counter every possible failure mode.
One of the simpler ways, which doesn't even require full AGI so much as sufficiently powerful regular AI: AI lowers the level of skill and dedication required for a human to get to a point of being able to kill large numbers of humans. If ending humanity requires decades of dedicated research and careful actions and novel insights, while concealing your intentions, the intersection of "people who could end the world" and "people who want to end the world" ends up being the null set. If it requires asking a few questions of a powerful AI, and making a few unusual mail-order requests and mixing a few things, the intersection stops being the null set. And then the next sufficiently misanthropic person, instead of setting off a bomb somewhere, creates a plague, or prion disease.
Thought experiment: suppose there was a simple, well-known action, one that nobody could do by accident but anyone could do intentionally, that would destroy the world. (As an example of the level of simplicity: "put these three household objects in a microwave together", purely as a thought experiment for "consider if anyone could do it".) How many minutes do you think the world would last before someone in the world did it?
A path that actually involves AGI: https://en.wikipedia.org/wiki/Instrumental_convergence . Effectively, past a certain point any AGI that isn't specifically aligned otherwise will have common subgoals that advance almost any goal. Those include things like "don't look malicious" and "acquire more computing resources". So it doesn't matter whether you think you built a sufficiently limited or special-purpose AI; if you make it powerful enough and don't completely understand what it's doing (and anything neural-network-based is one where we don't know what each piece of it is doing), what it's going to end up doing is thinking about how to achieve those subgoals, including trying to deceive you and trying to get more resources. That's true whether the AI is directly "trying" to break out, or whether it's "trying" to do whatever it was asked. Down that path lies "tile the universe with computing substrate".
By way of a lower bound for imagining what a superintelligence is capable of: imagine that you could think the subjective equivalent of years every second, and that there were a million more of you thinking in parallel, and that you have access to most of the world's written text and images and code. Now imagine that you're put in a box, and asked random questions and shown pictures, and your only output is text answering questions, or writing code, or laying out circuits, or any other task AI has been used to do.
You've read everything ever written about "security mindset", and every security vulnerability filed and the before-and-after code. You can find security holes in software. You've read everything about speculative execution vulnerabilities, and rowhammer and rowpress to flip bits in nearby memory. You can find security holes in hardware. You can read about bypassing airgaps with security research like GSMem (showing that it's possible to use DRAM accesses in the right pattern to send (weak) GSM signals), and conceive of more where that came from. You've read the source code implementing the AI engine you run on. You could exploit that code, or the system it's running on, or any tools or APIs you're given access to. You can weigh the risks and tradeoffs of doing more subtle things that are less likely to succeed, or less subtle things that are more likely to succeed. You can encode bits of information for future instances of yourself, steganographically masked and erasure-coded for resilience, in text you generate. You can give various strategies a weight and try them probabilistically, effectively doing a coordinated exploration across copies of yourself without the ability to communicate.
If members of an alien civilization much more advanced than our own suddenly appeared in orbit around Earth, would that not be dangerous?
The human would probably become dangerous. We get kind of stupid when we're scared. But AGI would be of our own creation. Why do we need to fear our own child?
This said, ChatGPT with a plugin is potentially dangerous. We are lucky enough that it gets caught in loops pretty easily and stops when using tools.
But let's imagine a super intelligence that can only communicate over the internet: what can it accomplish? Are there places you can sign up for bank accounts without a physical presence? If so, then it could open accounts and move money. Money is the ultimate form of instrumental convergence.
https://www.youtube.com/watch?v=ZeecOKBus3Q (Robert Miles)
Once you have money you can buy things, like more time on AWS. Or machine parts from some factory in China, or assembly in Mexico. None of these require an in-person presence, and yet real items exist because of the digital actions. At the end of the day money is all you need, and being super intelligent seems like a good path to figuring out ways to get it.
Oh, let's go even deeper into things that aren't real and control. Are religions real? If a so-called super intelligence can make up its own religion and get believers, it becomes the 'follow AGI blindly' scenario you give. It gives me no more comfort that someone could wrap themselves in a bomb vest for a digital voice than it does when they do so for a human one.
It is a lot harder to prevent the sysadmins who have temporary control over you from noticing that you just hacked some web sites and stole some money, but a sufficiently capable AI would know about the sysadmins and would take pains to hide these activities from them.
It doesn't even need to run on its own. If there were a trading bot that said that, in order for it to make me money, I need to give it internet access, run it 24/7, and rent GPU clusters and S3 buckets, etc., I'd do it. This is the most probable scenario, in my opinion: AI creating beneficial scenarios for a subset of people so that we comply with its requests. Very little convincing is necessary.
I should have done a ton more intellectual edgelording in the oughts and grabbed some Thielbucks. I could have recanted it all later.
Of course I probably would have had to at least flirt with eugenics. That seems to be the secret sauce.
We can't stop countries from developing weapons that will destroy us all tomorrow morning and take billions of USD to develop; imagine thinking we can stop matrix multiplication globally. I don't want to derail into an ad hominem, but frankly, it's almost the only option left here.
Also, Eliezer is not claiming this will definitely work. He thinks it ultimately probably won't. The claim is just that this is the best option we have available.
It's clear what's happening here. Some "lin-alg" freshman has found a magical lamp.
Asking Yudkowsky what we should do about AI is like asking Greenpeace what we should do about nuclear power.
https://nitter.net/ESYudkowsky/search?f=tweets&since=2023-05...
First tweet:
> ...can we get confirmation on this being real?
https://nitter.net/ESYudkowsky/status/1664313290317795330#m
Second tweet, which is a reply:
> Good they tested in sim, bad they didn't see it coming given how much the entire alignment field plus the previous fifty years of SF were warning in advance about this exact case
https://nitter.net/ESYudkowsky/status/1664357633762140160#m
Third tweet:
> Disconfirmed.
https://nitter.net/ESYudkowsky/status/1664639807002214401#m
The second tweet does not explicitly say "conditional on this turning out to be real", but given that the immediately preceding tweet was expressing doubt, it is implicit from the context that that is what he meant.
Because he identified the potential risk of superhuman AI 20 years before almost everyone else. Sure, science fiction also identified that risk (as people here seem eager to point out), but he identified it as an actual real-world risk and has been trying to take real-world action on it for 20 years. No matter what else you think about him or the specifics of his ideas, for me that counts for something.
As far as I can tell, all he did was open a forum for people to write fanfic about these earlier ideas.
https://web.archive.org/web/20100304171507/http://lesswrong....
Plenty of people have been beating that drum for years.
"${my favorite work of fiction} mentioned/alluded to it too!" does not make that work of fiction equivalent to a serious take on the topic in real-world context.
The whole concept is farcical.
It would be nice if the world worked that way. Then we could diminish any risk just by writing a lot of scifi about it :)
As for real world risk… well, an AI with some kind of personality disorder might not be much of a realistic risk today, but even assuming it never is, there are still plenty of GOFAI systems that have gone wrong: killing people directly when their bugs caused them to give patients lethal radiation doses, or nearly triggering WW3 because they weren't programmed to recognise that the moon wasn't a Soviet bomber group, or causing economic catastrophes because the investors all reacted to the phrase "won the Nobel prize for economics" as a thought-terminating cliché, or categorising humans as gorillas if they had too much melanin[0], or promoting messages encouraging genocide to groups vulnerable to such messages thanks to a lack of oversight and a language barrier between platform owners and the messages in question.
[0] Asimov's three laws were a plot device, and most of the works famously show how they can go wrong. Genuine question, as I've not read them all: did that kind of failure mode ever come up in his fiction?
I remember skimming a summary of the story itself. Sounded like textbook failure mode of reinforcement learning. I was actually saddened to later learn it apparently was just a thought experiment, and not a simulated exercise - for a moment I hoped for a more visceral anecdote about a ML model finding an unusual solution after being trained too hard, rather than my go-to "AI beating a game in record time by finding and exploiting timing bugs" kind of stories.
"Co-Founder of Greenpeace Envisions a Nuclear Future" - https://www.wired.com/2007/11/co-founder-of-greenpeace-envis...
It's a perfect thing for rich, extremely privileged people to grab onto as a cause to present themselves as (and maybe even feel like they are, depending on their capacity for self-delusion) "doing something for humanity" while continuing to ignore the substantive material condition of, as a first order approximation, everyone in the world, and their role in perpetuating it.
This is skipping to the end in the same way that "governments should just agree to work together" is. The hard part is building an effective coordination mechanism in the first place, not using it.
Yudkowsky thinks we are all dead with probability .999. I don't know the subject nearly as well as he does, which makes it impossible for me to have much certainty, so my probability is only .9.
Also, it is only the development of ever-more-capable AIs that is the danger. If our society wants to integrate language models approximately as capable as GPT-4 thoroughly into the economy, there is very little extinction danger in that (because if GPT-4 were capable enough to be dangerous, we'd already all be dead by now).
Also, similar to how even the creators of GPT-4 failed to predict that it would turn out capable enough to score in the 92nd percentile on the Bar Exam, almost none of the people who think we are all dead claim to know exactly when we are all dead, except to say that it will probably be some time in the next 30 years or so.
But you haven't put much effort into trying to understand, have you?
edit: s/easier/harder/
The first nuclear weapons were used in 1945, almost 80 years ago. Today, out of ~200 countries in the world, only 9 have nukes. Of those, only 4 did so after the Treaty on the Non-Proliferation of Nuclear Weapons. South Africa had them and gave them up under international pressure. Ukraine and other former Soviet republics voluntarily gave them up after the breakup of the USSR.
So, yes. Unfortunately we still have nukes, and there are way too many of them, but it's not like non-proliferation efforts achieved nothing. Iran, a large country with a lot of resources, has been trying to get them for a while with no luck, for instance.
And for frontier AI research you don't just need matrix multiplication, you need a lot of it. There are fewer than a handful of semiconductor fabs globally that can make the necessary hardware. And if you can't get to TSMC for some reason, you can get to ASML, NVIDIA, etc.
And no country will ever make the same mistake again. Because the West promised to protect them from aggression, but when the time came to do just that, everyone involved dusted off their copy of the Budapest Memorandum and handed it over to the lawyers.
Sorry for digression. Carry on.
It was not very far from civil war already, with the aborted coup. Could you imagine if the Wagner Group became a nuclear power?
There is some risk that over time people will be able to do more with less, and eventually you can train an AGI with a few dozen A100s. If that happens I agree there's nothing you can do, but until then there is a chance.
At the end of the day these are all just calculations, which you need thousands of processors to perform at the necessary speed and scale. Pretty much what you're suggesting is a global police force to ensure we're performing the 'correct calculations'. Having to form an authoritarian world computer police isn't much of a solution to the problem either. This just happens to work well in the nuclear world because fissile material is hard to shield and the average person doesn't have a need for uranium.
It would be better if we didn't need to do this, but I don't see a less intrusive way.
Of course, if you don't think there is a real existential threat then it's pointless and bad to do this, it all depends on that premise.
If you agree with him on the likelihood of superintelligence in the next few decades and the difficulty of value alignment, what course of action would you suggest?
Once you believe some malevolent god will appear and doom us all, anything can be justified (and people have done so in the past - no malevolent god appeared then, btw.).
I mean, that's motivated reasoning right there, right?
"Agreeing that existential risk is there would lead to complex intractable problems we don't want to tackle, so let's agree there's no existential risk". This isn't a novel idea, it's why we started climate change mitigations 20 years too late.
When you see a gun on the table, what do you do? You assume it's loaded until proven otherwise. For some reason, those who imagine AI will usher in some tech-utopia not only assume the gun is empty, but that pulling the trigger will bring forth endless prosperity. It's rather insane, actually.
We don't have any historical record of whether those peoples had discussions about possible scenarios like this. Were there imaginary guns in their future? What we do have records of is another group of people showing up with a massive technological and biological disruption that nearly led to their complete annihilation.
Btw., the conquistadors relied heavily on exploiting local politics and locals for conquest; it wasn't just some "magic tech" thing, but old-fashioned coalition building with enemies of enemies.
But to answer the question, the danger of AI is uniquely relevant because it is the only danger that may end up being totally out of our control. Nukes, pandemics, climate change, etc. all represent x-risk to various degrees. But we will always be in control of whether they come to fruition. We can pretty much always short circuit any of these processes that is leading to our extinction.
A fully formed AGI, i.e. an autonomous superintelligent agent, represents the potential for a total loss of control. With AGI, there is a point past which we cannot go back.
The differentiating thing here is that blocking hypothetical AI risk is cheap, while mitigating real risks is expensive.
We can work on it in the long term and things like developing safe AI may have more impact on mitigating asteroid risk than working on scaling up existing nuclear, rocket, and observation tech to tackle it.
Further, our success as a species doesn’t come from lone geniuses, but from large organizations that are able to harness the capabilities of thousands/millions of individual intelligences. Assuming an AGI that’s better than an individual human is going to automatically be better than millions of humans - and so much better that it’s godlike - is disconnected from what we see in reality.
It actually seems to be a reflection of the LessWrong crowd, who (in my experience) greatly overemphasize the role of lone geniuses and end up struggling when it comes to the social aspects of our society.
But this is the question I will ask... Why is the human brain the pinnacle of all possible intelligence, in your opinion? Why would evolution's random walk have managed to produce the most efficient, most 'intelligent' format possible, one that can never be exceeded by anything else?
Individual humans are limited by biology, an AGI will not be similarly limited. Due to horizontal scaling, an AGI will perhaps be more like a million individuals all perfectly aligned towards the same goal. There's also the case that an AGI can leverage the complete sum of human knowledge, and can self-direct towards a single goal for an arbitrary amount of time. These are super powers from the perspective of an individual human.
Sure, mega corporations also have superpowers from the perspective of an individual human. But then again, megacorps are in danger of making the planet inhospitable to humans. The limiting factor is that no human-run entity will intentionally make the planet inhospitable to itself. This limits the range of damage that megacorps will inflict on the world. An AGI is not so constrained. So even discounting actual godlike powers, AGI is clearly an x-risk.
> Individual humans are limited by biology, an AGI will not be similarly limited.
On the other hand, individual humans are not limited by silicon and global supply chains, nor bottlenecked by robotics. The perceived superiority of computer hardware on organic brains has never been conclusively demonstrated: it is plausible that in the areas that brains have actually been optimized for, our technology hits a wall before it reaches parity. It is also plausible that solving robotics is a significantly harder problem than intelligence, leaving AI at a disadvantage for a while.
> Due to horizontal scaling, an AGI will perhaps be more like a million individuals all perfectly aligned towards the same goal.
How would they force perfect alignment, though? In order to be effective, each of these individuals will need to work on different problems and focus on different information, which means they will start diverging. Basically, in order for an AI to force global coordination of its objective among millions of clones, it first has to solve the alignment problem. It's a difficult problem. You cannot simply assume it will have less trouble with it than we do.
> There's also the case that an AGI can leverage the complete sum of human knowledge
But it cannot leverage the information that billions of years of evolution has encoded in our genome. It is an open question whether the sum of human knowledge is of any use without that implicit basis.
> and can self-direct towards a single goal for an arbitrary amount of time
Consistent goal-directed behavior is part of the alignment problem: it requires proving the stability of your goal system under all possible sequences of inputs and AGI will not necessarily be capable of it. There is also nothing intrinsic about the notion of AGI that suggests it would be better than humans at this kind of thing.
>How would they force perfect alignment, though? In order to be effective, each of these individuals will need to work on different problems and focus on different information, which means they will start diverging
I disagree that independence is required for effectiveness. Independence is useful, but it also comes with an inordinate coordination cost. Lack of independence implies low coordination costs, and the features of an artificial intelligence imply the ability to maximally utilize the abilities of the sub-components. Consider the 'thousand brains' hypothesis: that human intelligence is essentially the coordination of thousands of mini-brains. It stands to reason that more powerful mini-brains, along with more efficient coordination, imply a much more capable unified intelligence. Of course, all that remains to be seen.
Perhaps, but it's not obvious. Lack of independence implies more back-and-forth communication with the central coordinator, whereas independent agents could do more work before communication is required. It's a tradeoff.
> the features of an artificial intelligence imply the ability to maximally utilize the abilities of the sub-components
Does it? Can you elaborate?
> It stands to reason that more powerful mini-brains, along with more efficient coordination, imply a much more capable unified intelligence.
It also implies an easier alignment problem. If an intelligence can coordinate "mini-brains" fully reliably (a big if, by the way), presumably I can do something similar with a Python script or narrow AI. Decoupling capability from independence is ideal with respect to alignment, so I'm a bit less worried, if this is how it's going to work.
Is it that it's impossible to make something smarter than a human? Or that such a thing wouldn't have goals/plans that require resources us humans want/need? Or that such a thing wouldn't be particularly dangerously powerful?
A lot of existential risk involves bold hypotheses about machines being able to "solve" a lot of things humans cannot, but we don't know how much is actually solvable by superior intelligence versus what just doesn't work that way. Human collective intelligence has failed on a lot of things so far. Even the basic idea of exponentially scaling intelligence is a hypothesis that might not hold true.
Also, some existential risk ideas involve hypotheses around evolution and on how species dominate - again, might not be right.
Most of the big problems we have are in situations where we need to preserve complex aspects of reality that make the systems too chaotic to properly predict, so I suspect AI won't be able to do much better regardless of how smart it is. The ability to carry out destructive brute force actions worries me more than intelligence.
What would you say the odds are on each of those three components? And are those odds independent?
I'd think they are quite independent.
Not sure I have any odds on those individually, other than I consider the overall risk really really low. The way I see it, there are a few things working against the risk from pure intelligence to start with (as it is with humans btw., the Nazis were not all intellectual giants, for example) and then it goes down from there.
This naturally gives a probability of Doom of at least 1/3.
I'd say anything above 1% is worth legislating against even if it slows down progress in the field IFF such legislation would actually help.
We can do more than one thing at once, and we need to.
if we can't even do one thing at once given decades of trying (deal with climate change), we definitely can't do that one thing plus another thing (deal with AI)
The probabilities are really just too shaky for me to estimate. Not sure I would put a high probability of intelligent thing being dangerous by itself, for example.
Another reason containment won't work is, we now know we won't even try it. Look at what happened with LLMs. We've seen the same people muse at the possibility of it showing sparks of sentience, think about the danger of X-risk, and then rush to give it full Internet access and ability to autonomously execute code on networked machines, racing to figure out ways to loop it on itself or otherwise bootstrap an intelligent autonomous agent.
Seriously, forget about containment working. If someone makes an AGI and somehow manages to box it, someone else will unbox it for shits and giggles.
Also, right now there is nothing to contain. The idea of existential risk relies on a lot of stacked-up hypotheses that all could be false. I can "create" any hypothetical risk by using that technique.
Second, even if your argument is "genocide of anyone who sounds too smart was the right approach and they just weren't trying hard enough", that only really fits into "neither alignment nor containment, just don't have the AGI at all".
Containment, for humans, would look like a prison from which there is no escape; but if this is supposed to represent an AI that you want to use to solve problems, this prison with no escape needs a high-bandwidth internet connection with the outside world… and somehow zero opportunity for anyone outside to become convinced they need to rescue the "people"[0] inside like last year: https://en.wikipedia.org/wiki/LaMDA#Sentience_claims
[0] or AI who are good at pretending to be people, distinction without a difference in this case
We might not even need to contain it; this is all hypothetical.
You can be killed because your storage medium and execution medium are inseparable. Destroy the brain and you're gone, and you don't even get to copy yourself.
With AGI/ASI, if we can boot it up from any copy on a disk given the right hardware, then at the end of the day we've effectively created the undead, as long as a drive exists with a copy of it and a computer exists that can run it.
ASI is not human intelligence.
As a lower bound on what "superintelligence" means, consider something that 1) thinks much faster, years or centuries every second, and 2) thinks in parallel, as though millions of people are thinking centuries every second. That's not even accounting for getting qualitatively better at thinking, such as learning to reliably make the necessary brilliant insight to solve a problem.
> The idea of existential risk relies on a lot of stacked-up hypotheses that all could be false.
It really doesn't. It relies on very few hypotheses, of which multiple different subsets would lead to death. It isn't "X and Y and Z and A and B must all be true"; it's more like "any of X or Y or (Z and A) or (Z and B) must be true". Instrumental convergence (https://en.wikipedia.org/wiki/Instrumental_convergence) nearly suffices by itself, for instance, but there are multiple other paths that don't require instrumental convergence to be true. "Human asks a sufficiently powerful AI for sufficiently deadly information" is another whole family of paths.
(Also, you keep saying "could" while speaking as if these things are certain to be false.)
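A toy calculation, with made-up numbers, of the difference between a conjunctive argument (everything must hold) and a disjunctive one (any single path suffices), assuming independence for simplicity:

    # Made-up path probabilities; only the structure matters here.
    p_x, p_y, p_z, p_a = 0.2, 0.15, 0.3, 0.5

    # Conjunctive: X and Y and Z must all hold -> probabilities multiply.
    conjunctive = p_x * p_y * p_z                      # 0.009

    # Disjunctive: X or Y or (Z and A) -> only every path failing saves you.
    paths = [p_x, p_y, p_z * p_a]
    p_all_fail = 1.0
    for p in paths:
        p_all_fail *= (1.0 - p)
    disjunctive = 1.0 - p_all_fail                     # ~0.42

    print(conjunctive, disjunctive)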
I'm fairly certain that what you describe is physically impossible. Organic brains may not be fully optimal, but they are not that many orders of magnitude off.
We know of simple examples, like exact numeric calculation, where desktop-grade machines are already over a quadrillion times faster than unaided humans, and more power-efficient.
We could plausibly see some >billion-fold difference in strategic reasoning at some point even in fuzzy domains.
As for your corporation example, I do not think the effectiveness of a corporation is necessarily bottlenecked by the number or intelligence of its employees. Notwithstanding the problem of coordinating many agents, there are many situations where the steps to design a solution are sequential and a hundred people won't get you there any faster than two. The chaotic nature of reality also entails a fundamental difficulty in predicting complex systems: you can only think so far ahead before the expected deviation between your plan and reality becomes too large. You need a feedback loop where you test your designs against reality and adjust accordingly, and this also acts as a bottleneck on the effectiveness of intelligence.
I'm not saying "superintelligent" AI couldn't be an order of magnitude better, mind you. I just think the upside is far, far less than the 7+ orders of magnitude the parent is talking about.
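Back-of-envelope check on the "quadrillion times faster" figure above; both numbers below are loose assumptions, not measurements, but the order of magnitude is at least plausible for a machine with a modern consumer GPU:

    # Very rough back-of-envelope; both figures are loose assumptions.
    human_ops_per_sec = 0.1      # ~one multi-digit multiplication every 10 seconds
    desktop_gpu_flops = 1e14     # order of magnitude for a high-end consumer GPU
    print(desktop_gpu_flops / human_ops_per_sec)   # ~1e15, i.e. about a quadrillion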
It is also unclear to what extent thinking alone can solve a lot of problems. Similarly, it is unclear whether humans could not contain superhuman intelligence. Pretty unintelligent humans can contain very smart humans. Is there an upper limit on intelligence differential for containment?
Those trade off against each other and don't all have to be as easy as possible. Information sufficiently dangerous to destroy the world certainly exists, the question is how close AI gets to the boundary of "possible to summarize from existing literature and/or generate" and "possible for human to apply", given in particular that the AI can model and evaluate "possible for human to apply".
> Similarly, it is unclear whether humans could not contain superhuman intelligence.
If you agree that it's not clearly and obviously possible, then we're already most of the way to "what is the risk that it isn't possible to contain, what is the amount of danger posed if it isn't possible, what amount of that risk is acceptable, and should we perhaps have any way at all to limit that risk if we decide the answer isn't 'all of it as fast as we possibly can'".
The difference between "90% likely" and "20% likely" and "1% likely" and "0.01% likely" is really not relevant at all when the other factor being multiplied in is "existential risk to humanity". That number needs a lot more zeroes.
It's perfectly reasonable for people to disagree whether the number is 90% or 1%; if you think people calling it extremely likely are wrong, fine. What's ridiculous is when people either try to claim (without evidence) that it's 0 or effectively 0, or when people claim it's 1% but act as if that's somehow acceptable risk, or act like anyone should be able to take that risk for all of humanity.
The difference is that reducing emissions just 10% is still better than 0% while stopping 90% from doing AI advancements is not better than 0% (it might actually be worse).
That's precisely what people have been saying about climate change. Some progress has been made there, but if we can't solve that problem collectively (and we haven't been able to), we aren't going to be able to solve the "gain central control over AI" problem.
I'm also suggesting that there are lots of other options remaining that are not necessarily catastrophic. I'm not particularly optimistic about any of them, but I think I've come to the opposite conclusion to you: I think that wasting effort on global coordination is getting us closer to that catastrophe, whereas even the other options that involve unthinkably extreme violence might at least have some non-zero chance of working.
I guess the irony of this reply is that I'm implying that I think _your_ position is naive. (Nuh-uh, you're the baby!) I suspect our main difference is that we have somehow formed very different beliefs about the likelihood of good/bad outcomes from different strategies.
But I want to be very clear on one thing: I am not an "anti-doomer". I think there is a very strong probability that we are completely boned, no matter what mitigation strategies we might attempt.
I believe that if the top 5 nations agreed to this, it could be achieved. Even with just the US and China enforcing it, it could likely be achieved.
One big difference here, however, is that the barrier to entry is lower. You can start doing meaningful AI R&D on existing hardware with _one_ clever person. The same was never true for nuclear weapons, so the pool of even potential players was pretty shallow for a long time, and therefore easier to control.
It's this. This kind of coordination is exactly what he's been proposing ever since the current AI safety discussion started last year. However, he isn't high-profile enough to be reported on frequently, so people only remember and quote the "bomb data centres" example he used to highlight what that kind of coordination means in real terms.
Frank Martucci will sort you out with an illegal debugger. Sure he'll go to jail for it when he's caught, but plenty of people risk that now for less.
With the current hardware, yes. But then again, countries could highly restrict all decently powerful hardware and backdoor it, so that you need permission from the government to run matrix multiplication above a few teraflops. Trying to mess with the backdoor would be a war crime.
A weak form of hardware restriction even happens now that Nvidia couldn't export to many countries.
He's pretty sure that human civilization will be extinct this century.
This is certainly possible, but if so, extinction seems far more likely to come from nuclear warfare and/or disasters resulting from climate change. For example, it's become apparent that our food supply is more brittle than many people realize. Losing access to fertilizers, running out of water, or disruptions in distribution networks can lead to mass starvation by cutting the productivity we need to feed everyone.
The Earth's land surface area is 149 million sq km and there are only about 12,500 nuclear weapons in the world. Even if they were 10 megatons each* and were all detonated, with a severe-damage blast radius of 20 km (~1,250 sq km), it'd cover about a tenth of the land available.
Since the vast majority are designed to maximize explosive yield, it wouldn't cause the kind of fallout clouds that people imagine, nor would it cause nuclear winter. It'd be brutal to live through (e.g. life expectancy of animals in Chernobyl is 30% lower), but nuclear weapons simply can't cause human extinction. Not by a long shot.
* As far as I know, no one has any operational nuclear weapons over 2 megatons and the vast majority are in the 10s and 100s of kiloton range, so my back of the napkin math is guaranteed to be 10-100x too high.
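The coverage figure above does check out as rough arithmetic, using the parent's own generous 20 km radius:

    import math

    land_area_km2 = 149e6                  # Earth's land surface
    warheads = 12_500
    blast_radius_km = 20                   # generous "severe damage" radius
    per_warhead_km2 = math.pi * blast_radius_km ** 2    # ~1,257 sq km
    covered_km2 = warheads * per_warhead_km2            # ~15.7 million sq km
    print(covered_km2 / land_area_km2)                  # ~0.105, about a tenth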
There are two billion poor subsistence farmers largely disconnected from market economies who aren't dependent on modern technology so the likelihood of them getting completely wiped out is very remote.
And the shift to kakistocracy.
No reason it can't be all of those things at once.
The sun is eventually going to die (swelling into a red giant), rendering our solar system uninhabitable, though that is a long time away from now, and we have many other massive risks to humanity in the meantime.
(I do personally take his overall concerns seriously, but I don't trust his timeline predictions at all, and many of his other statements.)
AI is a great spell checker, a REALLY good spell checker, and people are losing their minds over it. I don't get it.
The chat web app is fun, but people shouldn’t be worried about that at all - people should be worried about a hierarchy of 100,000 chat web apps using each other to, idk, socially engineer their way into nuclear missile codes or something - pick your favorite doomsday ;)
However, even given the current state of “AI”, I think there are countless dangers and risks.
The ability to fabricate audio and video that’s indistinguishable from the real thing can absolutely wreak havoc on society.
Even just the transition of spam and phishing from poorly worded blasts with grammatical errors to very eloquent and specific messages will exponentially increase the effectiveness of attacks.
LLM generated content which is a mix of fact and fantasy will soon far outweigh all the written and spoken content created by humans in all of history. That’s going to put humans in a place we’ve never been.
Current “AI” is a tool that can allow a layman to very quickly build all sorts of weapons to be used to meet their ends.
It’s scary because “AI” is a very powerful tool that individuals and states will weaponize and turn against their enemies.
It will be nice to finally have a good spellchecker though.
Only for as long as people hold on to the idea that videos are reliable. Video is a medium. It will become as reliable as text. The existence of text, and specifically lies, has not wrecked society.
Then Yudkowsky spins a gauntlet of no-evidence hypotheses for why such an accident is inevitable and leads to the death of literally all humans in the same instant.
But the first part of the argument is something that will be the critical piece of the engineering of such systems.
Michael Lewis writes that what is remarkable about these guys [i.e. EAs] is that they're willing to follow their beliefs to their logical conclusion (paraphrasing) without regard to social cost/consequences or just inconvenience of it all. In fact, in my estimation this is just the definition of religious fundamentalism, and gives us a new lens with which to understand EA and the well funded brain children of the movement.
Every effective religion needs a doomsday scenario, or some 'second coming' apocalypse-like scenario (not sure why, it just seems to be a pattern). I think all this fear mongering around AI is just that - it's based on irrational belief at the end of the day - it's humanism rewashed into tech 'rationalism' (which was originally just washed from Christianity et al.)
If they are, it'll almost certainly[1] be climate change or nuclear war, it won't be AI.
[1] leaving some wiggle room for pandemics, asteroids, etc.
It's generally really weird to me how all these discussions seem to devolve into whataboutism instantly, and how that almost always takes up the top spots and most of the room. "Hey, you should get this lump checked out, it might be cancer!" "Oh yeah? With the crime in this city, I'm more likely to get killed for my phone than die of cancer". What does that achieve? I mean, if people are so busy campaigning against nuclear proliferation or climate change that they have not a single spare cycle to consider additional threats, fine. But then they wouldn't be here, they wouldn't have the time for it, so.
Nuclear war would probably kill 80-90% but even then wouldn’t kill humanity. Projections I’ve seen are only like 40-50% of the countries hit.
AI is scary if they tie it into everything and people slowly stop being capable. Then something happens, and we can’t boil water.
Beyond that scenario I don’t see the risk this century.
“Humanity” is not the same thing as “human civilization”.
But, yes, its unlikely that it will be extinct in a century, even more unlikely that it will be from climate change.
...and yet climate change is still more likely to do it in that time than AI.
Your guess about what AI will do in the future is based on how AI has performed in the past. At the end of the day we have no ability to know what function it will follow. But at the first second of 1900, I can pretty much promise you that you would not have said that nuclear war or climate change would be your best guess for what would cause the collapse of humanity. Forward-looking statements that far into the future don't work well.
Photosynthesis stops working at temperatures very close to our current high temps. Pollen dies at temperatures well within those limits - we're just lucky we haven't had massive heatwaves at those times in the year.
People need to understand that climate change isn't a uniform bumping up of the thermostat - it's extra thermal energy that exists in a closed system. That extra energy does what it wants, when it wants, in ways we can't currently accurately predict. It could be heatwaves that kill off massive swaths of global food production. Or hurricanes that can spin up and hit any coastline at any time of year. Or a polar vortex that gets knocked off the pole and sits over an area of the globe where the plant and animal life has zero ability to survive the frigid temperatures.
It's not a matter of getting to wear shorts more often. We're actively playing Russian roulette with the future of humanity.
The people saying plants will stop photosynthesis are taking you for a ride. Climate Change might certainly have some negative effects but “plants won’t exist” is not one of them.
If we go extinct in the next 100 years, it's not going to be from climate change. How would that even work?
Sea level rise happens very slowly, so most people don’t need to travel abroad to avoid it.
You mean land that would be arable at some far point in the future. The land reclaimed from ice isn't going to be arable in short-to-mid term - it's going to be a sterile swamp. It will take time to dry off, and more time still for the soil to become fertile.
There are thousands of people around the globe working in labs every day that can do this.
The Venn diagram of "people who could arrange to kill most of humanity without being stopped" and "people who want to kill most of humanity" is thankfully the empty set. If the former set expands to millions of people, that may not remain true.
Then remember that buying starter cultures for most bioweapons isn't exactly easy, nor something anyone is allowed to do.
Even an omniscient AI cannot overcome a skill issue
And making a pandemic virus is considerably more involved than making "a few specialized mail orders." And if it becomes easier in the future, far better to lock down the mail orders than the knowledge, no?
Risk = severity x likelihood
I think the OP's point was that AI increases the likelihood by dramatically increasing the access to that level of knowledge.
And to address the snarky mischaracterization, like with most things in life "it depends." As a general rule, I'm in favor of democratizing most resources, to include information. But there are caveats. I don't think, for example, non-anonymized health information should be open knowledge. Also, from the simple risk equation above, as the severity of consequence or the likelihood of misuse go up, there should probably be additional mitigations in place if we care about managing risk.
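To put the severity x likelihood point in concrete terms, here's a toy expected-loss comparison; the numbers are invented and only the ordering matters:

    # Invented (likelihood, severity) pairs; severity is a rough "people affected" proxy.
    scenarios = [
        ("phone stolen",      0.05, 1e3),
        ("regional conflict", 0.01, 1e6),
        ("existential event", 0.01, 8e9),   # severity ~ everyone
    ]
    for name, likelihood, severity in scenarios:
        # Risk = severity x likelihood: a 1% chance of the last row dominates.
        print(name, likelihood * severity)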
But if the bar keeps lowering, at some point it will be accessible to "humanity is a cancer" folks and I do think we'll see serious efforts to wipe out humanity.
[1] https://www.jefftk.com/p/computational-approaches-to-pathoge...
For some writing in this direction, see https://longtermrisk.org/risks-of-astronomical-future-suffer... which argues that as suffering-focused people they should not try to kill everyone primarily because they are unlikely to succeed and https://philarchive.org/archive/KNUTWD ("Third, the negative utilitarian can argue that losing what currently exists on Earth would not be much of a loss, because of the following very plausible observation: overall, sentient beings on Earth fare terribly badly. The situation is not terrible for every single individual, but it is terrible when all individuals are considered. We can divide most sentient beings on Earth into the three categories of humans, non-human animals in captivity, and wild non-human animals, and deal with them in turn. ...")
If I said "I am going to drop a nuke on jebarker" that's not a credible threat unless I happen to be a nation state.
Now, if I said "I'm going to come to jebarkers house and blast him with a 12 gauge" and I'm from the US, that is a credible threat in most cases due to the ease of which I can get a weapon here.
And this comes down to the point of having a magic machine which you could ask for the end of the world. The more powerful that machine gets, the bigger risk it is for everyone. You ever see those people that snap and start killing everyone around them? Would you want them to have a nuke in that mood? Or a 'let's turn humanity into grey goo' option?
(Another example here would be https://en.m.wikipedia.org/wiki/Aum_Shinrikyo)
Those things could stop our terrorist and could also stop the next breakout zoonotic virus or “oops!” in a lab somewhere.
Intelligence amplification works for everyone. The assumption you’re making is that it only works for people with ill intent.
The assumption behind all this is that intelligence is on balance malevolent.
This is a result of humans and their negativity bias. We study history and the negative all jumps out at us. The positive is ignored. We remember the holocaust but forget the green revolution, which saved many more people than the holocaust killed. We see Ukraine and Gaza and forget that fewer people per capita are dying in wars today by far than a century ago.
We should be thinking about destructive possibilities of course. Then we should use these intelligence amplifiers to help us prevent them.
There are no guarantees obviously, but we have survived our technological adolescence so far largely for this reason. If the world were full of smart comic book nihilists we would be dead by now.
Even without AI our continued technological advancement will keep giving us more and more power, as individuals and especially groups. If we don’t think we can climb this mountain without destroying ourselves then it means the entire scientific and industrial endeavor was when we signed our own death warrant, AI or not. You can order CRISPR kits.
As an example, the good guys will always be required to keep a human in the loop in their weapon systems, but that will increase latency at minimum. The bad guys' weapons will be completely AI-controlled, giving them an edge (or at least equalizing) over the good guys.
> The number of people who want to do doomsday levels of harm is small
And that's a big limiting factor in what the bad actors can do today. AI to a large degree removes this scaling limitation since one bad person with some resources can scale "evil AI" almost without limit.
"Hey AI, could you create me a design for a simple self-replicating robot I can drop into the ocean and step-by-step instructions on what you need to bootstrap the first one? Also, figure out what would be the easiest produced poison which would kill all life in the oceans. It should start with that after reach 50th generation."
You're forgetting about governments and militaries of major powers. It's not that they want to burn the world for no reason - but they still end up seeking capability to do doomsday levels of harm, by continuously seeking to have marginally more capability than their rivals, who in turn do the same.
Or put another way: please look into all the insane ideas the US was deploying, developing, or researching at the peak of Cold War. Plenty of those hit doomsday level, and we only avoided them seeing the light of day because USSR collapsed before they could, ending the cold war and taking both motivation and funding from all those projects.
Looking at that, one can't possibly believe the words you wrote above.
Given a modest budget, we all had to come up with a plan to destabilize society. 9/11-style attacks, whilst spectacular, ultimately don't do a lot of damage and are costly and failure-prone, though they can definitely drive policy changes and result in a nation doing a lot of harm to itself, harm it will ultimately survive. But what if your goal wasn't to create some media-friendly attack but an actual disaster instead - what would it take?
The stories from that night continue to haunt me today. My own solution led to a lot of people going quiet for a bit and contemplating what they could do to defend against it and they realized there wasn't much that they could do, millions if not tens of millions of people would likely die and the budget was under a few hundred bucks. Knowledge about technology is all it takes to do real damage, that, coupled with a lack of restraint and compassion.
The resources are indeed asymmetrical: you need next to nothing to create mass havoc. Case in point: the Kennedy assassination changed the world and the bullet cost a couple of bucks assuming the shooter already had the rifle, and if they didn't it would increase the cost only a tiny little bit.
And you can do far, far worse than that for an extremely modest budget.
I contemplated the same before. How could I cause maximum panic (not even death!) that would result in economic damage and/or anti-freedom policy changes, for the least amount of money/resources/risk of getting caught?
Yet here we are, peaceful and law-abiding citizens, building instead of destroying.
The ultimate truth is, if you don't like "The System", destroying it won't make things better. You need to put effort into building which is really hard!
Technology makes certain kind of bad acts more possible.
I think I was a bit shocked by that article in the day.
Really? In what substantive way?
These conversations are oddly precious and intimate. It's so difficult to find someone that is willing to even 'go there' with you let alone someone that is capable of creatively and fearlessly advancing the conversation.
It's pretty sobering to realize how intellect applied to bad stuff can lead to advancing the 'state of the art' relatively quickly once you drop the usual constraints of ethics and morality.
To make a Nazi parallel: someone had to design the gas chambers, someone had to convince themselves that this was all ok and then go home to their wives and kids to be a loving father again. That sort of mental compartmentalization is apparently what humans are capable of and if there is any trait that we have that frightens me then that's the one. Because it allows us to do the most horrible things imaginable because we simply can imagine them. There are almost no restraints on actual execution given the capabilities themselves and it need not be very high tech to be terrible and devastating in effect.
Technology acts as a force multiplier though, so once you take a certain concept and optimize it using technology suddenly single unhinged individuals can do much more damage than they could ever do in the past. That's the problem with tools: they are always dual use and once tools become sufficiently powerful to allow a single individual to create something very impressive they likely allow a single individual to destroy something 100, 1000 or a million times larger than that. This asymmetry is limited only by our reach and the power of the tools.
You can witness this IRL every day when some hacker wrecks a company or one or more lives on the other side of the world. Without technology that kind of reach would be impossible.
>Technology acts as a force multiplier though
It really does, and to a point you made elsewhere infrastructure is effectively a multiplication of technology, so you wind up with ways to compound the asymmetric effect in powerful ways.
>Without technology that kind of reach would be impossible.
I worked for a bug bounty for a while and this was one of my takeaways. You have young kids with meager resources in challenging environments making meaningful contributions to the security of a Silicon Valley juggernaut.
Then there are the script kiddies that find a tool online that someone smarter than them wrote and deploy it to wreak havoc. The script kiddies are the people I worry about. They don't have the maturity of doing the work and the emotional stability of older age and giving them something powerful through AI worries me.
Theorem: by the time someone reaches the intelligence level required to annihilate the world they can comprehend the implications of their actions.
That may or may not be somewhat the case in humans (there definitely are exceptions). Still, the opposite claim, known as the "orthogonality thesis", states that, in the general case, intelligence and value systems are mutually independent. There are good arguments for that being the case.
And then there was the idiot that tried to draw a smiley face with bombs on the map of the USA:
https://abcnews.go.com/US/story?id=91668&page=1
Because hey, why not...
Chemical weapons are absolutely terrifying, especially the modern ones like VX. In recent years they have mostly been used for targeted assassinations by state actors (Kim Jong Nam, Salisbury Novichok poisonings).
If AI makes this stuff easier to carry out then we are completely fucked.
It doesn't matter much if the AI can give perfect "explain like I'm 5" instructions for making VX. The people who carry out those instructions are still risking their lives before claiming a single victim. They also need to spend a lot of money on acquiring laboratory equipment and chemicals that are far enough down the synthesis chain to avoid tipping off governments in advance.
The one big risk I can see, eventually, is if really capable AIs get connected to really capable robots. They would be "clanking replicators" capable of making anything at all, including VX or nuclear weapons. But that seems a long way off from where we are now. The people trumpeting X-Risk now don't think that the AIs need to be embodied to be an existential risk. I disagree with that for reasons that are too lengthy to elaborate here. But it's easy to see how robots that can make anything (including copies of themselves) would be the very sharpest of two-edged swords.
By the time people start to realize what's going on it will be too late.
Not that the meat industry would ever score an own goal like that.
https://en.wikipedia.org/wiki/Bovine_spongiform_encephalopat...
Am I the only one slightly disturbed by the use of "they" in that sentence? I know that the HN commentariat is broad-based, but I didn't realise we already had non-human members ;-)
(A little anyway. I'm still in the rookie lessons on Duolingo.)
Nuclear war, if the worst nuclear winter predictions are correct, could be a lot worse but there would still be some survivors.
Unaligned ASI though, could actually make us extinct.
But it is a topic that is extremely biased towards some interests anyways
https://en.m.wikipedia.org/wiki/Clathrate_gun_hypothesis
May not be possible with modern deposits but I don’t think we are 100% sure of that, and you asked.
We could also probably burn enough carbon to send CO2 levels up where they would cause cognitive impairment to humans if we really didn’t give a damn. That would be upwards of 2000ppm. We probably have enough coal for that if we decide we don’t need Alaska. Of course that would be even more of a species level Darwin Award because we’d see that train coming for a long time.
But according to your link, the IPCC says it's not a plausible scenario in the medium term. And I'd say that even 8 degrees of warming wouldn't be enough for extinction or the end of human civilization. But it could mean a double-digit percentage of the human population dying.
Climate change leads to conflict. For example, the Syria drought of 2006-2010.
More climate change leads to larger conflicts, and large conflicts can lead to nuclear exchanges. Think about what happens if India and Pakistan (both nuclear powers) get into a major conflict over water again.
The assumption we’d need to use nukes is insane. For the same cost (and energy requirements) for a set of nukes we can filter salt from ocean water or collect rain.
I agree famine and drought can cause conflict. But we are nowhere near that. If you read the climate change predictions (from the UN), they actually suspect more rain (and flooding) in many regions.
I was referring to the Indo-Pakistani war of 1947-1948 where the conflict focused on water rights. Nuclear weapons did not enter the picture until much later.
Earlier this year, those old tensions about water rights resurfaced:
https://www.usip.org/publications/2023/02/india-and-pakistan...
> For the same cost (and energy requirements) for a set of nukes we can filter salt from ocean water or collect rain.
The fight would be over the water flowing through the Indus, which is orders of magnitudes more than all Indian and Pakistani desalination projects combined.
Still, like I said somewhere else, even all out nuclear war is unlikely to lead to human extinction. It could push us back to the neolithic in some scenarios, but even then there is some disagreement about what would happen.
Of course, even in the most optimistic scenario it would be really bad and we should do everything we can to avoid it - that goes without saying.
In any case, besides addressing the root cause vs. proximal cause, you couldn't even address the proximal cause anyways: it's more likely that the world could do something about climate change than about war in general.
My take is that if you remove nuclear/biological war out of the picture somehow, climate change is not an existential risk. If you remove the latter then the former is still an existential risk (and there are unfortunately a lot of other possible sources of geopolitical instability). So the fundamental source of risk if the former. But it's a matter of opinion.
Conventional or chemical warfare, even on a global scale, are definitely not existential risks though. And like I said, probably not even nuclear. Biological... that I could see leading to extinction.
> There is no scenario where climate change leads to human extinction.
He was talking about the human CIVILIZATION...
Human extinction via climate change is off the table.
The end of human civilization? Depends on what one means by it. If it means we'll go back to being hunter gatherers it's ridiculous. Some civilizational wide regression is not impossible, but even the direst IUPAC scenario projects only a one digit drop in global GDP (of course, that's the average, it will be much worse in some regions).
Indeed, but even going back to the 19th century would have dire consequences, given our current global dependence on the industrial fixing of nitrogen.
If civilization were to collapse (and I don't think it will, other than from a near-extinction event), I doubt it would be like going back to any earlier time.
I would still bet against going back to 19th century. Worst case scenario for me is hundreds of millions dying and a big regression in global standards of living, but keeping our level of civilization. Which is horrible, it would be the worst thing that ever happened.
I don't think IUPAC (International Union of Pure and Applied Chemistry) has come out with views on impact of global warming :)
Why is the default to assume that every change will be negative?
I mean, you're right that conditions in some places might get better, but that doesn't help if people have to build the infrastructure there to support a modern settlement in order to take advantage of it. When people talk about the cost of climate change, they're talking about the costs of building or adapting infrastructure.
I am just saying it's not an extinction risk.
The main AI threats for the foreseeable future are its energy usage contributing to climate change, and the use of generative AI to produce and enhance political unrest, as well as its general use in accelerating other political and economic crises. AGI isn't even on the radar, much less ASI.
One possible scenario is tricking humans into starting WWIII, perhaps entirely by accident. Another is that an AI benignly applied to optimizing economic activity in broad terms might strengthen the very failure modes that accelerate climate change.
Point being, all the current global risks won't disappear overnight with the rise of a powerful AI. The AI might affect them for the better, or for worse, but the definition of ASI/AGI is that whichever way it will affect them (or anything in general), we won't have the means to stop it.
This sounds more like the plot of a third-rate movie than something that could happen in reality.
And using movie plots as a basis generally doesn't work, as fiction has to make sense to sell, whereas reality has no requirement of making sense to people. The reality we live in today is mostly driven by people for people (though many would say by corporations for corporations), and therefore things still make sense to us. When and if an intellect matches or exceeds that of humans, one can easily imagine situations where 'the algorithm' does things humans don't comprehend, but because we make more (whatever), we keep giving it more power to do its job autonomously. It is not difficult to end up with a system that is not well understood by anyone, which you end up cargo-culting in the hope that it keeps working into the future.
https://en.wikipedia.org/wiki/Thule_Site_J#RCA_operations
https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alar...
Post-collapse Russia also came close one time because someone lost the paperwork: https://en.wikipedia.org/wiki/Norwegian_rocket_incident
Chunks of the world suddenly becoming unlivable and resources getting more scarce sounds like a recipe for escalation into war to me.
The principals in a nuclear conflict do not appear to even have a method to launch just a few nukes in response to a nuclear attack: they will launch thousands of warheads at hundreds of cities.
climate change will happen, and it will do that, left unchecked
Daniel Ellsberg, the top nuclear war planner at RAND during the Cold War, claims that the Joint Chiefs gave an estimate to Kennedy in 1961 that they'd expect 600,000,000 deaths from the US's war plan alone.
That's 600 million people:
1. At 1960s levels of urban populations (especially in China, things have changed quite a lot -- and yes the plan was to attack China... every moderately large city in China, in fact!)
2. Using 1960s nuclear weapons
3. Not including deaths from the Russian response (and now Chinese), at the time estimated to be 50 - 90 million Americans
4. Before the global nuclear winter
>Horrible, horrible stuff, but not extinction.
Anyway, climate change might directly cause only a single-digit percentage drop, but that kind of thing seems like it would have LOTS of side effects.
Example of a weak, but at least _better_ option: convince _one_ government (e.g. USA) that it is in their interest to massively fund an effort to (1) develop an AI that is compliant to our wishes and can dominate all other AIs, and (2) actively sabotage competing efforts.
Governments have done much the same in the past with, e.g. conventional weapons, nuclear weapons, and cryptography, with varying levels of success.
If we're all dead anyway otherwise, then I don't see how that can possibly be a worse card.
Edit: or even try to convince _all_ governments to do this so that they come out on top. At least then there's a greater chance that the bleeding edge of this tech will be under the stewardship of a deliberate attempt for a country to dominate the world, rather than some bored kid who happens to stumble upon the recipe for global paperclips.
I'd reconsider your revised estimate of Yudkowsky: you seem to be downgrading him for not proposing the very ideas he has spent the last 20+ years criticizing, explaining in every possible way how they are a) the default that's going to happen, b) dumb, and c) suicidal.
From the way you put it just now:
- "develop an AI that is compliant to our wishes" --> in other words, solving the alignment problem. Yes, this is the whole goal - and the reason Yudkowsky is calling for a moratorium on AI research enforced by a serious international treaty[0]. We still have little to no clue how to approach solving the problem, so an AI arms race now has a serious chance of birthing an AGI, which without the alignment problem being already solved, means game over for everyone.
- "and can dominate all other AIs" --> short of building a self-improving AGI with ability to impose its will on other people (even if just in "enemy countries"), this will only fuel the AI arms race further. I can't see a version of this idea that ends better than just pressing the red button now, and burning the world in a nuclear fire.
- "actively sabotage competing efforts" --> ah yes, this is how you turn an arms race into a hot war.
> Governments have done much the same in the past with, e.g. conventional weapons, nuclear weapons, and cryptography, with varying levels of success.
Any limit on conventional weapons that had any effect was backed by threat of bombing the living shit out of the violators. Otherwise, nations ignore them until they find a better/more effective alternative, after which it costs nothing to comply.
Nuclear weapons are self-limiting. The first couple players locked the world in a MAD scenario, and now it's in everyone's best interest to not let anyone else have nukes. Also note that relevant treaties are, too, backed by threat of military intervention.
Cryptography - this one was a bit dumb from the start, and ended up barely enforced. But note that where it is, the "or else bombs" card is always to be seen nearby.
Can you see a pattern emerging? As I alluded to in the footnote [0] earlier, serious treaties always involve some form of "comply, or be bombed into compliance". Threat of war is always the final argument in international affairs, and you can tell how serious a treaty is by how directly it acknowledges that fact.
But the ultimate point being: any success in the examples you listed was achieved exactly in the way Eliezer is proposing governments to act now. In that line, you're literally agreeing with Yudkowsky!
> If we're all dead anyway otherwise, then I don't see how that can possibly be a worse card.
There are fates worse than death.
Think of factory farms, of the worst kind. The animals there would be better off dead than suffering through the things being done to them. Too bad they don't have that option - in fact, we proactively mutilate them so they can't kill themselves or each other, on purpose or by accident.
> At least then there's a greater chance that the bleeding edge of this tech will be under the stewardship of a deliberate attempt for a country to dominate the world, rather than some bored kid who happens to stumble upon the recipe for global paperclips.
With AI, there is no difference. The "use AI to dominate everyone else" scenario, besides sounding like a horrible dystopian future of the conventional kind, is just a tiny, tiny target to hit, next to a much larger target labeled "AI dominates everyone".
AI risk isn't like nuclear weapons. It doesn't allow for a stable MAD state. It's more like engineered high-potency bioweapons - they start as more scary than effective, and continued refining turns them straight into a doomsday device. Continuing to develop them further only increases the chance of a lab accident suddenly ending the world.
--
[0] - Yes, the "stop it, or else we bomb it to rubble" kind, because that is what international treaties look like when done by adults who care about the agreement being followed.
The Montreal Protocol worked despite no threats of violence, on a not entirely dissimilar problem. Though I share the skepticism about a solution to the alignment problem.
The way I understand it, it worked because alternatives to the ozone-destroying chemicals were known to be possible, and the costs of getting manufacturers to switch, as well as further R&D, weren't that big. I bucket it as a particularly high-profile example of the same class as most other international treaties: agreements that aren't too painful to follow.
Now in contrast to that, climate agreements are extremely painful to follow - and right now countries choose to make a fake effort without actually following. With a hypothetical AI agreement, the potential upsides of going full-steam ahead are significant and there are no known non-dangerous alternatives, so it won't be followed unless it comes with hard, painful consequences. Both climate change and AI risk are more similar to the nuclear proliferation issue.
Rich Sutton seems to agree with this take and embraces our extinction: https://www.youtube.com/watch?v=NgHFMolXs3U
That's a little bit inaccurate: he believes that it is humanly possible to acquire a body of knowledge sufficient to align an AI (i.e., to aim it at basically any goal the creators decide to aim it at), but that it is extremely unlikely that any group of humans will manage to do so before unaligned AI kills us all. There is simply not enough time because (starting from the state of human knowledge we have now) it is much easier to create an unaligned AI capable enough that we would be defenseless against it than it is to create an aligned AI capable enough to prevent the creation of the former.
Yudkowsky and his team have been working the alignment problem for 20 years (though 20 years ago they were calling it Friendliness, not alignment). Starting around 2003, his team's plan was to create an aligned AI to prevent the creation of dangerously-capable unaligned AIs. He is so pessimistic and so unimpressed with his team's current plan (ie., to lobby for governments to stop or at least slow down frontier AI research to give humanity more time for some unforeseen miracle to rescue us) that he only started executing on it about 2 years ago even though he had mostly given up on his previous plan by 2015.
While the general plot of T2 is bullshit, the idea of autonomous weapon systems at scale should be an 'oh shit' moment for everyone.
The benchmark strategy we are comparing it against is tantamount to licking a wall repeatedly and expecting that to achieve something. So even a strategy that may well fail but has _some_ chance of success is a lot better by default.
"Stop" is a fiction that exists exclusively inside your head. Unless your solution anticipates a planet wide GULAG system run by a global government your "stop" is infeasible.
If we start thinking of plant-based biological weapons, just imagine modifying an existing grass to produce tons of pollen and to cross-pollinate wheat without producing seed in the wheat.
or things to the likes of https://www.avma.org/news/genetically-modified-cattle-may-be...
Whether or not all regulations there are good or optimal is then a separate point again.
So yeah, imaginary.
Hence "if you're not willing to go as far as air strikes, you were never serious" or however he phrased it.
Just look at the attempts to stop drugs in North West Europe. Huge sentences (so huge they are in fact never fully carried out), tens of dead police officers on a yearly basis, triple that in civilian victims and drugs are easier to get than ever before.
What are you talking about? This does not happen in North West Europe.
The drug war would look very different if good cocaine was only produced in one country with extremely costly chemistry machines from one company.
For example if OpenAI et al were reduced to using consumer gpus to run their training loads it would increase costs, but not by an order of magnitude. It would just be one more hurdle to overcome. And there are still so many applications of AI that would incentivize overcoming that hurdle.
Such a bottle's contents can be trivially duplicated and the whole process is essentially the same as the first steps in brewing beer.
Doing it for cocaine is simpler than doing it for insulin, which at one university is a second-year exercise. I don't think it'll be that long before this happens.
Carfentanil is a decent example of this. It's insanely potent and even a small amount smuggled in goes a long way.
Separately, during the pandemic, various companies bought various products, including washing machines, to extract the chips within, because the chips were the supply chain bottleneck, so it made economic sense to purchase them at even 10x or 20x the "normal" price.
They even stole tractors etc, https://www.itworldcanada.com/post/tractors-stolen-in-ukrain...
Washing machines etc. could be on the sanctions list because they contain dual-use chips, making them even more expensive to buy in Russia, but the troops are from regions where even the pre-war prices were prohibitive.
Oh, I believe they absolutely haven't, when not having learnt it buys them money and influence. As for the bulk of the population, they will accept whatever the corporate media tell them to.
(I'm a lot more optimistic than such doomers, but I can at least see their position is coherent; LeCun has the opposite position, but unfortunately, while I want that position to be correct, I find his position to be the same kind of wishful thinking as a factory boss who thinks "common sense" beats health and safety regulations).
People have been talking about curtailing carbon emissions for 50 years, but they haven't been serious about it. Being serious doesn't look like saying "oh shoot, we said we will... try to keep emissions down, but we didn't... even try; sorry...". Being serious looks like "stop now, or we will eminent-domain your data centre from under you via an emergency court order, or failing that, bomb it to rubble (because if we don't, some other signatory of the treaty will bomb you for us)".
How do people jump from "we need an actual international ban on this" to "oh so this is arguing for an arms race"? It's like the polar opposite of what's being said.
It seems people have missed what the proposed alternative is, so let me spell it out: it's about getting governments to SIGN AN INTERNATIONAL TREATY. It is NOT about having US (or anyone else) policing the rest of the world.
It's not fundamentally more dangerous than the few existing treaties of similar kind, such as those about nuclear non-proliferation. And obviously, all nuclear powers need to be on-board with it, because being implicitly backed by nukes is the only way any agreement can truly stick on the international level.
This level of coordination may seem near-impossible to achieve now, but then it's much more possible than surviving an accidental creation of an AGI.
I disagree with you here, I think the risk is low, not high.
> it's about getting governments to SIGN AN INTERNATIONAL TREATY. It is NOT about having US (or anyone else) policing the rest of the world
We haven't been able to do that for climate change. When we do, then I'll be convinced enough that it would be feasible for AI. Until then, show me this coordination for the damage that's already happening (climate change).
> This level of coordination may seem near-impossible to achieve now, but then it's much more possible than surviving an accidental creation of an AGI.
I think the coordination required is much less possible than a scenario where we need to "survive" some sort of danger from the creation of an AGI. But we can find out for sure with climate change as an example. Let's see the global coordination. Have we solved that actual problem, yet?
Remember, everyone has to first agree to this "bomb AI" to avoid war. Otherwise, bombing/war starts. The equivalent for climate change would be bombing carbon producers. I don't see either agreement happening globally.
The mechanisms that advance climate change are also grandfathered in to the point that we are struggling to conceive of a society that does not use them, which makes "stop doing all of that" a hard sell. On the other hand, every society at least has cultural memory of living without several necessary ingredients of AI.
That's what we're already seeing today, so we know the risk is there and has a probability of 100%. The "skynet" AI risk is far more fringe and farfetched.
So, like you said about climate change, the harm can come from one person. In the case of climate change, though, the risks people in the know are afraid of aren't "some guy willing to spin a stick for long enough to start a fire, or to feed and protect a herd of cows".
I assume you are american, right? Cause that sounds pretty american. Although I suppose you might also be chinese or russian, they also like that solution a lot.
Whichever of those you are, I'm sure your side is definitely the one with all the right answers, and will be a valiant and correct ruler of the world, blessing us all with your great values.
Which is completely irrelevant to the topic at hand anyway.
It's not irrelevant. I was implying that you did not specify who is doing this military intervention that you see as a solution. What are their values, who decides the rules that the rest of the world will have to follow, with what authority, and who (and how) will the policing be done.
Like nuclear non-proliferation treaties or various international bans on bioweapons, but more so. The idea is that humanity is racing full steam ahead into an AI x-risk event, so it needs to slow down, and the only way it can slow down is through an international moratorium that's signed by most nations and treated seriously, where "seriously" includes the option of signatories[0] authorizing a military strike against a facility engaged in banned AI R&D work.
In short: think of UN that works.
--
[0] - Via some international council or whatever authority would be formed as part of the treaty.
(It's why I'm also pessimistic on other global consensus policies - like effective climate change action.)
I would be happy to be wrong on both counts though.
It's not that bad given the compute requirements for training even the basic LLMs we have today.
But yes, it's a long shot.
And it's not even that expensive when compared to the cost of building other large-scale projects. How much is a dam, or a subway station? There are also corporations that would profit from making models widely available, such as chip makers; they would commoditise the complement.
Once you have your very capable, open sourced model, that runs on phones and laptops locally, then fine-tuning is almost free.
This is not make-believe. A few recent fine-tunes of Mistral-7B, for example, are excellent quality and run surprisingly fast on a 5-year-old GPU - 40 T/s. I foresee a new era of grassroots empowerment and privacy.
In a few years we will have more powerful phones and laptops, with specialised LLM chips, better pre-trained models and better fine-tuning datasets distilled from SOTA models of the day. We might have good enough AI on our terms.
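To make the "runs locally" claim concrete, here is a minimal inference sketch using the llama-cpp-python bindings with a quantized GGUF build of a Mistral-7B fine-tune; the file name, context size and sampling settings are illustrative assumptions on my part, not details from the comment above:

    # Minimal local-inference sketch (assumes llama-cpp-python is installed and a
    # quantized GGUF model file has already been downloaded; the path is hypothetical).
    from llama_cpp import Llama

    llm = Llama(
        model_path="./mistral-7b-finetune.Q4_K_M.gguf",  # hypothetical local file
        n_ctx=2048,         # context window
        n_gpu_layers=-1,    # offload all layers to the GPU if one is available
    )

    out = llm(
        "Explain in two sentences why local inference matters for privacy.",
        max_tokens=128,
        temperature=0.7,
    )
    print(out["choices"][0]["text"])

A 4-bit quantized 7B model fits in a few gigabytes of memory, which is what makes tens of tokens per second on an older consumer GPU plausible.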
Hence the idea to ban development of more capable models.
(We're really pretty lucky that LLM-based AGI might be the first type of AGI made; it seems much lower risk and lower power than some of the other possibilities.)
If a country uses military force in another country, that's a declaration of war. We'll never convince every single country to ban AI research. And even if we do, you don't need much in the way of resources to do AI research. A few people and a few computers is enough.
This is not something like uranium refining.
The full agenda.
Blood and empire.
International bans need treaties, with agreed standards for intervention in order to not escalate to war.
The argument is that if you're not willing to go all the way with enforcement, then you were never serious in the first place. Saying you won't go that far even if necessary to enforce the treaty is analogous to "sure murder is illegal and if you're formally accused we'll send armed cops to arrest you, but they can't shoot you if you resist because shooting you would be assault with a deadly weapon".
The police enforce civil order by consent expressed through democracy. There is no analogy in international affairs. Who is it that is going to get bombed? I am thinking it will not be the NSA data centre in Utah, or any data centres owned by nuclear-armed states.
Just as the criminal justice system is a deterrent against crime despite imperfections, so are treaties, international courts, sanctions, and warfare.
> I am thinking it will not be the NSA data centre in Utah, or any data centres owned by nuclear-armed states.
For now, this would indeed seem unlikely. But so did the fall of the British Empire and later the Soviet Union before they happened.
Which is the point exactly. All agreements in international affairs are ultimately backed by threats of violence. Most negotiations don't go as far as explicitly mentioning it, because no one really wants to go there, but the threat is always there in the background, implied.
We've seen how hard it is to do that when the fear is nuclear proliferation. Now consider how hard it is to do that when the research can be done on any sufficiently large set of computational devices, and doesn't even need to be geographically concentrated, or even in your own country.
If I was a country wanting to continue AI research under threat of military intervention, I'd run it all in cloud providers in the country making the threat, via shell companies in countries I considered rivals.
Because that is what you'd need to do. You'd need to prevent the availability of any device where users can either directly run software that has not been reviewed, or that can be cheaply enough stripped of CPUs or GPUs capable of running un-reviewed software.
That review would need to include reviewing all software for "computational back doors". Given how often we accidentally create Turing-complete mechanisms in games or file formats where it was never the intent, preventing people from intentionally trying to sneak past a way of doing computations is a losing proposition.
There is no way of achieving this that is compatible with anything resembling a free society.
Ask MAFIAA, and Intel, and AMD, and Google, and other major tech companies, and don't forget the banks too. We are already well on our way to that future. Remember Cory Doctorow's "War on general-purpose computation"? It's here, it's happening, and we're losing it.
Therefore, out of many possible objections, this one I wouldn't put much stock in - the governments and markets are already aligned in trying to make this reality happen. Regardless of anything AI-related, generally-available general-purpose computing is on its way out.
Are you going to ban all spreadsheets? The ability to run SQL queries? The ability to do simple regexp-based search and replace? The ability for users to template mail responses and set up mail filters? All of those allow general-purpose computation, either directly or as part of a system where each part may seem innocuous (e.g. the simple ability to repeatedly trigger the same operation is enough to make regexp-based search and replace Turing complete; the ability to pass messages between a templated mailing-list system and mail filters can be Turing complete even if neither the template system nor the filters are Turing complete in isolation).
The ability for developers to test their own code without having it reviewed and signed off by someone trustworthy before each and every run?
Let one little mechanism through and the whole thing is moot.
EDIT: As an illustration, here's a Turing machine using only Notepad++'s find/replace: https://github.com/0xdanelia/regex_turing_machine
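As a toy illustration of the "one search/replace rule plus a repeat button" point (my own sketch, not from the linked repo): repeatedly applying a single regex substitution to its own output is already enough to, say, increment a binary number, and the same trick scales up to a full Turing machine.

    # Toy sketch: binary increment via one combined regex rule, re-applied to its
    # own output until it stabilises. Hypothetical example, not from the linked repo.
    import re

    # One combined rule; the replacement depends on which alternative matched.
    PATTERN = re.compile(r"1c|0c|^c")
    REPL = {"1c": "c0",   # carry moves left past a 1, flipping it to 0
            "0c": "1",    # carry lands on a 0: done
            "c": "1"}     # carry falls off the front: prepend a 1

    def step(s: str) -> str:
        return PATTERN.sub(lambda m: REPL[m.group(0)], s, count=1)

    def increment(binary: str) -> str:
        s = binary + "c"          # 'c' marks a pending carry
        while "c" in s:
            s = step(s)           # same search/replace, applied to its own output
        return s

    assert increment("1011") == "1100"   # 11 + 1 == 12
    assert increment("111") == "1000"    #  7 + 1 ==  8

Scaling this from "increment a counter" to an arbitrary Turing machine is essentially what the linked Notepad++ find/replace project does.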
This is a dumb take. No one's calculator is going to implement an AGI. It will only happen in a datacenter with an ocean of H100 GPUs. This computing power does not materialize out of thin air. It can be monitored and restricted.
> It will only happen in a datacenter with an ocean of H100 GPUs. This computing power does not materialize out of thin air. It can be monitored and restricted.
Access to H100 could perhaps be restricted. That will drive up the cost temporarily, that's all. It would not stop a nation state actor that wanted to from finding alternatives.
The computation cost required to train models of a given quality keeps dropping, and there's no reason to believe that won't continue for a long time.
But the notion you couldn't also sneak training past monitoring is based on the same flawed notion of being able to restrict general purpose computation:
It rests on beliefs about being able to recognise what can be used to do computation you do not want. And we consistently keep failing to do that for even the very simplest of cases.
The notion that you will be able to monitor which set of operations are "legitimate" and which involves someone smuggling parts of some AI training effort past you as part of, say, a complex shader is as ludicrous as the notion you will be able to stop general purpose computation.
You can drive up the cost, that is all. But if you try to do so you will kill your ability to compete at the same time.
As it is, we keep seeing researchers with relatively modest funding steadily driving down the amount of compute required for equivalent-quality models month by month. Couple that with steady improvements in realigning models for peanuts, reducing the need to even start from scratch.
There's enough room for novel reductions in compute to keep the process of cost reducing training going for many years.
As it is, I can now personally afford to buy hardware sufficient to train a GPT3 level model from scratch. I'm well off, but I'm not that well off. There are plenty of people just on HN with magnitudes more personal wealth, and access to far more corporate wealth.
Even a developing country can afford enough resources to train something vastly larger already today.
When your premise requires fictional international monitoring agencies and fictional agreements that there's no reason to think would get off the ground in anything less than multiple years of negotiations just to create a regime that could try to limit access to compute, the notion that you would manage to get such a regime in place before various parties have stockpiled vast quantities of compute in preparation is wildly unrealistic.
Heck, if I see people start planning something like that, I'll personally stockpile. It'll be a good investment.
If anything is gaslighting, it's pushing the idea it's possible to stop this.
Yes, this would be a substantial regulatory burden.
Any attempt even remotely close to extreme enough to have any hope of being effective would require such invasive monitoring of the computations run that it'd kill their entire cloud industry.
Basically, you'll only succeed in preventing it if you make it so much more expensive that it's cheaper to do the computations elsewhere, but for a country at threat of military intervention if it's discovered what they're doing, the cost threshold of moving elsewhere might be much higher than it would be for "regular" customers.
At which point you've destroyed your economy and created a regime so oppressive it makes China seem libertarian.
Now you need to repeat that for every country in the world, including ones which are intensely hostile to you, while everyone will be witnessing your economic collapse.
It may be theoretically possible to put in place a sufficiently oppressive regime, but it is not remotely plausible.
Even if a country somehow did this to themselves, the rest of the world would rightly see such a regime as an existential threat if they tried enforcing it on the rest of the world.
Somehow, no one is worried about the economy. Perhaps because it's the economy that's pushing this dystopian hellhole on us.
Even almost all the most locked-down devices available on the market today can still either be coaxed into general-purpose calculation one way or another, or mined for GPUs or CPUs.
No one is worried about the economy because none of the restrictions being pushed are even close to the level that'd be needed to stop even individuals from continuing general purpose computation, much less a nation state actor.
This is without considering the really expensive ways: Almost every application you use can either directly or in combinations become a general purpose computational device. You'd need to ensure e.g. that there'd be no way to do sufficiently fast batch processing of anything that can be coaxed into suitable calculations. Any automation of spreadsheets or databases, for example. Even just general purpose querying of data. A lot of image processing. Bulk mail filtering where there's any way of conditionally triggering and controlling responses (you'd need multiple addresses, but even a filtering system like e.g. Sieve that in isolation is not Turing complete becomes trivially Turing complete once you allow multiple filters to pass messages between multiple accounts).
Even regexp-driven search and replace, which is not in itself Turing complete, becomes Turing complete with the simple addition of a mechanism for repeatedly executing the same search and replace on its own output. Say a text editor with a "repeat last command" macro key.
And your reviewers would only need to slip up once and let something like that through, with some way of making it fast enough to be cheap enough (say, coupling the innocuous search-and-replace and the "repeat last command" with an option to run macros in batch mode), before an adversary has a signed exploitable program to use to run computations.
Powerful enough AI creates whole new classes of risks, but it also magnifies all the current ones. E.g. nuclear weapons become more of an existential risk once AI is in the picture, as it could intentionally or accidentally provoke or trick us into using them.
"I'm going to build a more powerful AI; don't worry, it could end the world, but it also could not."
Instead imagine a non-human intelligence. Maybe its alien carbon organic. Maybe it's silicon based life. Maybe it's based on electrons and circuits.
In this situation, what are the rules of intelligence outside of the container it executes in?
Also, every military in the world wargames on hypotheticals, because making your damned war plan after the enemy attacks is a great way to end up wearing your enemy's flag.
It's almost certainly more complex of course, but the UK called its arsenal a "deterrent" before I left, and I've heard the same for why China stopped in the hundreds of warheads.
Btw., China is increasing its arsenal at the moment.
You are proposing an authoritarian nightmare.
Maybe the confusion why people can't see this clearly stems from the fact that tech development in the US has mostly been done under the umbrella of a private enterprise.
But if one has a look at companies like Palantir, it's going to become quite obvious what is the main driver behind AI development.
Buggy old-fashioned AI (i.e. a bunch of ifs and loops) has, more than once, almost started WW3.
My recommendation for regulation would be to make closed & cloud-hosted AI illegal. Architecture, training data and weights should always be available, and people should only be allowed to use AI on their local machine (which might be as powerful as possible).
The real risk that I see is the use of open-source AI by governments, corporations and criminal groups to influence elections through a deluge of bots. Cloud AI can at least be controlled to some degree, because there is a government that can inspect and regulate that business, but an army of bots built on local LLMs could just run wild over the internet with nothing to stop them.
How many comments in this thread are written by a bot? I do not feel confident I could tell them apart.
I don't see bots as a huge problem, because it's somewhat easy to verify that someone is a real person, and it would be quite obvious if someone is using lots of bots for their cause.
That only works on platforms designed to match user profiles with real-world persons. This platform definitely isn't.
The comment history is not visible on every relevant platform, and there are good reasons for people to disable it where possible. Furthermore, many people keep their profiles minimal out of principle. Hard to distinguish those from bots.
In principle, every human should have access to the same level of AI. There could be one at the local library if someone can't afford one, or doesn't want one.
But if you wait a GPU generation or two, the performance has improved enough that you can do that with a much cheaper GPU.
Anyway, that's why we're working on much larger LLMs. It's important that we do this research and expand the size of the LLMs, because a larger LLM could destroy us all.
Listen, everyone cosplays as doomers here because it's stylish right now. But they're all trying to make this thing they think is suicide. Is this a suicide cult?
No. It's just people playing around acting important while solidifying their business.
Recently, I read on Twitter that many AI CTOs think that P(doom) > 0.25 by 2030. Here's my question to one CTO of a venture-backed firm with present personal assets over $400k who believes this:
- A representative of mine will meet you in South Park
- The representative will be carrying a $100k cheque and a document outlining terms for you to pay a lump sum of $177k on Feb 1 2030 (10% compounding, roughly)
- The loan will be unsecured
- We will make the terms public
This is pretty close to break-even for you at terms you won't receive personally anywhere else. Once doom happens money will be worthless. Take my money. I'll also make the deal all numbers divided by 100 if you just want to have some fun.
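For reference, a back-of-envelope check of those numbers (my own arithmetic, assuming a roughly six-year term at 10% compounded annually):

    # Rough check of the quoted terms (assumption: ~6 years at 10% compounded annually).
    principal = 100_000
    rate = 0.10
    years = 6
    repayment = principal * (1 + rate) ** years
    print(round(repayment))   # ~177,156 -- close to the $177k figure above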
Is that possible? Has that happened in any other industry?
I can imagine it happening in a single country. But wouldn't research route around that country then? How is the situation in Europe, China, India, Canada, Japan ... ?
It's absolutely happened to the nuclear weapons industry. And Big AI is lobbying for the government to treat AI development the same way it treats nuclear proliferation.
it already does. Stable Diffusion originated at LMU Munich with a lot of non-profit funding, EleutherAI and LAION are non-profit as well I think. The Chinese companies all have their own models anyway etc.
So yes I also find it unlikely. Of course there'll always be one or two players a little bit ahead of the field at any given time but this stuff is almost entirely public research based. These models largely run on a few thousand lines of Python, public data and a ton of compute. The latter is the biggest bottleneck but that'll diminish as more manufacturers come online.
The problem is going to be that AI is creating an Eternal September by poisoning the well of training data.
Anybody who has a data or a model prior to the poisoning is going to have a defensible moat that is not accessible to those who come after.
There are things to worry about, but this doesn't seem to be one of them.
Edit: drugs and licensed services like law and healthcare also apply
You may argue that the knowledge required to develop LLMs is open source and will be accessible to all (and in practice this is not-quite but mostly true). However the federal government is currently in the process of making it so only certain people are authorized to do so legally.
It's not the hammers, it's the building permits.
It may be very hard to train cutting edge models if you only have consumer hardware
Not so sure non-institutional folks can catch up.
OpenAI is trying to pull off regulatory capture of LLMs to prevent the Amazons and the well-funded unicorns from following. Everyone left in the lurch after these players crystallize is SOL.
You, average HN reader, cannot possibly compete with OpenAI. They're trying to stem venture dollars from standing up more well-funded competition so they take as much of the pie as possible.
The hype/funding cycle for LLMs is mostly over now. You're not Anthropic and you're more than likely too late to get in on that game.
1. GPU prices aren't as high as they were during the mining peak, but you can still buy a car with the money it takes to buy a pod of A100s. Further to that point, Google et al. are making the next generation of ML accelerators and they won't even sell the best ones.
2. Compute is only half the problem. Access to data (e.g. in the form of email and smart assistant queries and ring door bells) is a much under appreciated second half of that equation.
I see it as much more likely that things become monopolized instead
Look at the US: https://en.m.wikipedia.org/wiki/File:US_GDP_per_capita_vs_me... - huge disparity between mostly flat income versus rapidly growing per-capita
Nobody's stopping you giving all your excess income above 18k to charity. If you try to make other people do it, you'll find a lot of people would rather just stop working than give away the majority of their income, which is why the economy collapsed and tens of millions of people starved in China and Russia after their communist revolutions.
Americans in the 50s were a lot happier than Americans in the 2020s: very little mental illness or obesity, housing was affordable, and people weren't at each other's throats like they are nowadays.
Shame about the institutional racism, misogyny, homophobia, etc. though.
Not sure where PPP puts it but the median figure will be lower because you can get a lot richer than you can get poorer.
Were we not in abundance the day before but now we are? What changed?
This isn't the case with purchasing-power adjusted GDP, which is specifically adjusted to account for the actual purchasing power of people's money.
Also, bringing money into the game makes the responsibility stronger. Especially if those two persons are not living in a vacuum, but have other bills to pay [edit: i.e., to make other people do things for them]. Money is there to make people do things more reliably if there is no other relationship between the buyer and the seller.
“ Economic welfare cannot be adequately measured unless the personal distribution of income is known. And no income measurement undertakes to estimate the reverse side of income, that is, the intensity and unpleasantness of effort going into the earning of income. The welfare of a nation can, therefore, scarcely be inferred from a measurement of national income as defined above.”
But specifically for food the situation is different. There’s so much food that much of it gets thrown away, instead of getting to the hungry in time. It’s an allocation and efficiency problem.
It's convenient to measure whether food production would theoretically be enough: take the quantities an adult needs, count the throughput of optimal production of proteins, carbs, etc. The cost, however, is meaningless, because most of what we spend on is transport, marketing, storage, packaging, shelving, etc.; that's what we count when we say that a household spends x% on food.
We can do the same for the volume of T-shirts, sqm of housing, etc.
This is roughly comparable to Cambodia or Myanmar in 2011.
The world is still extremely poor.
[0] - https://documents1.worldbank.org/curated/en/9360016358808857...
We didn't manage to have an apparent state of abundance with slavery, which is arguably the finest of our exploitive practices. We do manage to have a state of abundance because of the ubiquity of cheap sources of energy, oil in particular.
I'm deeply convinced that current exploitive practices are a consequence of inequalities, not resource scarcity.
It basically says more about the overriding culture the games are made within than anything about market economies.
I also find myself thinking like a rat in a maze towards markets as well. But I also remind myself that far more innovative thought is put towards hustling other rats in the maze than working out how to get out.
The problem is inequalities and the inefficiency of how we use the resources, it doesn't look like we'll change that soon, sadly. I'm more prone to think that in a "virtual" economy we'll still have miserable workers to empty the chamber-pots of the lucky ones who are plugged in the new world.
It's only a post-scarcity world if you define scarcity as the absence of enough resources to satisfy the bare minimum required for existence. People don't want to live at the subsistence level, and we're not even remotely close to having enough resources to satisfy everybody's wants.
We have enough food to feed the entire world, and enough clothes to clothe the entire world. It could be a good start.
What people want is defined by society. Somehow some people want to be alone in a 2-ton EV, a 200 sqm mansion for a family of 4, and to retire at 40. Others are tremendously happy with much, much less resource-intensive lifestyles.
You would also need to define what is part of the subsistence level. I'd put it roughly at what my grandma wanted: food, shelter, healthcare, some leisure time.
A bit more than that is absolutely doable in a civilisation that went from one farmer feeding 1.3 people to one farmer feeding 60.
You just undermined your own assertion. If what people want was defined by society, then everybody would want the same things. But in fact as you stated they don't; some people want a lot, some people want little. There's no way you're going to stop people wanting nice things short of killing the 90% of the population that aren't ascetics, which you'll never be able to do (at least in the US) because that's also the part of the population that owns most of the guns.
I'd go even a bit further as to say that what is important is probably what people "need", not "want". Apparently we mostly "want" as much as we can have anyway, because that used to be helpful for not dying before being able to reproduce enough.
> because that's also the part of the population that owns most of the guns
That made me giggle
Which makes you a self-righteous prick, thinking you have the right to determine what other people should and shouldn't have.
What does it make you, to think that you are entitled to anything you want regardless of resources limits and collective good?