AI safety people are hypocrites. If they practiced what they preached, they'd be calling for all AI to be banned, à la Dune. There are AI harms that don't care about whether or not the weights are available, and they are playing out today.
I'm talking about the ability of any AI system to obfuscate plagiarism[0] and spam the Internet with technically distinct rewords of the same text. This is currently the most lucrative use of AI, and none of the AI safety people are talking about stopping it.
[0] No, I don't mean the training sets - though AI systems seem to be suspiciously good at remembering them, too.
They are calling for all AI (above a certain capability level) to be banned. Not just open, not just closed, all.
There are risks that apply only to open. There are risks that apply only to closed. But nobody should be developing AGI without incredibly robustly proven alignment, open or closed, any more than people should be developing nuclear weapons in their garage.
Because AI safety people are not the strawmen you are hypothesizing. They're arguing against taking existential risks. AI being a laundering operation for copyright violations is certainly a problem. It's not an existential risk.
If you want to argue, concretely and with evidence, why you think it isn't an existential risk, that's an argument you could reasonably make. But don't portray people as ineffectively doing the thing you think they should be doing, when they are in fact not trying to do that, and only trying to do something they deem more important.
I'm not convinced the onus should be on one side to prove why something isn't an existential risk. We don't start with an assumption that something is world-ending about anything else; we generally need to see a plausibly worked-through example of how the world ends, using technology we can all broadly agree exists/will shortly exist.
If we're talking about nuclear weapons, for example, the tech is clear, the pattern of human behaviour is clear: they could cause immense, species-level damage. There's really little to argue about. With AI, there still seems to be a lot of hand-waving between where we are now and "AGI". What we have now is in many ways impressive, but the onus is still on the claimant to show that it's going to turn into something much more dangerous through some known progression. At the moment there is a very big, underpants gnomes-style "?" gap before we get to AGI/profit, and if people are basing this on currently secret tech, then they're going to have to reveal it if they want people to think they're doing something other than creating a legislative moat.
AI safety / x-risk folks have in fact made extensive and detailed arguments. Occasionally, folks arguing against them rise to the same standard. But most of the arguments against AI safety look a lot more like name-calling and derision: "nuh-uh, that's sci-fi and unrealistic (mic drop)". That's not a counterargument.
That's easy to say now, now that the damage is largely done, they've been not only tested but used, many countries have them, the knowledge for how to make them is widespread.
How many people arguing against AI safety today would also have argued for widespread nuclear proliferation when the technology was still in development and nothing had been exploded yet? How many would have argued against nuclear regulation as being unnecessary, or derided those arguing for such regulation as unrealistic or sci-fi-based?
TBQH, most of the AI safety x-risk arguments — different than just "AI safety" arguments in the sense that non-x-risk issues don't seem worth banning AI development over — are generally pretty high on the hypotheticals. If you feel the x-risk arguments aren't pretty hypothetical, can you:
1. Summarize a good argument here, or
2. Link to someone else's good argument?
I feel like hand-waving the question away and saying "[other people] have in fact made extensive and detailed arguments" isn't going to really convince anyone... Any more than the hypothetical robot disaster arguments do. Any argument against x-risk can be waved off with "Oh, I'm not talking about that bad argument, I'm talking about a good one," but if you don't provide a good one, that's a bit of a No True Scotsman fallacy.
I've read plenty of other people's arguments! And they haven't convinced me, since all the ones I've read have been very hypothetical. But if there are concrete ones, I'd be interested in reading them.
Consider a world in which AI existential risk is real: where at some point AI systems become dramatically more capable than human minds, in a way that has catastrophic consequences for humanity.
What would you expect this world to look like, say, five years before the AI systems become more capable than humans? How (if at all) would it differ from the world we are actually in? What arguments (if any) would anyone be able to make, in that world, that would persuade you that there was a problem that needed addressing?
So far as I can tell, the answer is that that world might look just like this world, in which case any arguments for AI existential risk in that world would necessarily be "very hypothetical" ones.
I'm not actually sure how such arguments could ever not be hypothetical arguments. If AI-doom were already here so we could point at it, then we'd already be dead[1].
[1] Or hanging on after a collapse of civilization, or undergoing some weird form of eternal torture, or whatever other horror one might anticipate by way of AI-doom.
So I think we either (1) have to accept that even if AI x-risk were real and highly probable we would never have any arguments for it that would be worth heeding, or (2) have to accept that sometimes an argument can be worth heeding even though it's a hypothetical argument.
That doesn't necessarily mean that AI x-risk arguments are worth heeding. They might be bad arguments for reasons other than just "it's a hypothetical argument". In that case, they should be refuted (or, if bad enough, maybe just dismissed) -- but not by saying "it's a hypothetical argument, boo".
This is exactly the kind of hypothetical argument I'm talking about. You could make this argument for anything — e.g. when radio was invented, you could say "Consider a world in which extraterrestrial x-risk is real," and argue radio should be banned because it gives us away to extraterrestrials.
The burden of proof isn't on disproving extraordinary claims, the burden of proof is on the person making extraordinary claims. Just like we don't demand every scientist spend their time disproving cold fusion claims, Bigfoot claims, etc. If you have a strong argument, make it! But circular arguments like this are only convincing to the already-faithful; they remind me of Christian arguments that start off with: "Well, consider a world in which hell is real, and you'll be tormented for eternity if you don't accept Jesus. If you're Christian, you avoid it! And if it's not real, well, there's no harm anyway, you're dead like everyone else." Like, hell is real is a pretty big claim!
I didn't make any argument -- at least, not any argument for or against AI x-risk. I am not, and was not, arguing (1) that AI does or doesn't in fact pose substantial existential risk, or (2) that we should or shouldn't put substantial resources into mitigating such risks.
I'm talking one meta-level up: if this sort of risk were a real problem, would all the arguments for worrying about it be dismissable as "hypothetical arguments"?
It looks to me as if the answer is yes. Maybe you're OK with that, maybe not.
(But yes, my meta-level argument is a "hypothetical argument" in the sense that it involves considering a possible way the world could be and asking what would happen then. If you consider that a problem, well, then I think you're terribly confused. There's nothing wrong with arguments of that form as such.)
The comparisons with extraterrestrials, religion, etc., are interesting. It seems to me that:
(1) In worlds where potentially-hostile aliens are listening for radio transmissions and will kill us if they detect them, I agree that probably usually we don't get any evidence of that until it's too late. (A bit like the alleged situation with AI x-risk.) I don't agree that this means we should assume that there is no danger; I think it means that ideally we would have tried to estimate whether there was any danger before starting to make a lot of radio transmissions. I think that if we had tried to estimate that we'd have decided the danger was very small, because there's no obvious reason why aliens with such power would wipe out every species they find. (And because if there are super-aggressive super-powerful aliens out there, we may well be screwed anyway.)
(2) If hell were real then we would expect to see evidence, which is one reason why I think the god of traditional Christianity is probably not real.
(3) As for yeti, cold fusion, etc., so far as I know no one is claiming anything like x-risk from these. The nearest analogue of AI x-risk claims for these (I think) would be, when the possibility was first raised, "this is interesting and worth a bit of effort to look into", which seems perfectly correct to me. We don't put much effort into searching for yeti or cold fusion now because people have looked in ways we'd expect to have found evidence, and not found the evidence. (That would be like not worrying about AI x-risk if we'd already built AI much smarter than us and nothing bad had happened.)
Does the strongest argument that AI existential risk is a big problem really open by exhorting the reader to imagine it's a big problem? Then asking them to come up with their own arguments for why the problem needs addressing?
I doubt it. At any rate, I wasn't claiming to offer "the strongest argument that AI existential risk is a big problem". I wasn't claiming to offer any argument that AI existential risk is a big problem.
I was pointing out an interesting feature of the argument in the comment I was replying to: that (so far as I can see) its reason for dismissing AI x-risk concerns would apply unchanged even in situations where AI x-risk is in fact something worth worrying about. (Whether or not it is worth worrying about here in the real world.)
Consider a world where AGI requires another 1000 years of research in computation and cognition before it materializes. Would it even be possible to ban all research that is required to get there? We can make all sorts of arguments if we start from imagined worlds and work our way back.
So far, it seems the biggest missing pieces of the puzzle between the first attempts at using neural nets and today's successes like GPT-4 were: (1) extremely fast linear algebra processors (GPGPUs), (2) the accumulation of gigantic bodies of text on the internet, and, a very distant third, (3) improvements in NN architecture for NLP.
But (3) would have meant nothing without (1) and (2), while it's very likely that other architectures would have been found that are at least close to GPT-4 performance. So, if you think GPT-4 is close to AGI and just needs a little push, the best thing to do would be to (1) put a moratorium on hardware performance research, or even outright ban existing high-FLOPS hardware, (2) prevent further accumulation of knowledge on the internet and maybe outright destroy existing archives.
I think what is meant is "hypothetical" in the sense of making assumptions about how AI systems would behave under certain circumstances. If an argument relies on a chain of assumptions like that (such as "instrumental convergence" and "reflective stability" to take some Lesswrong classics), it might look superficially like a good argument for taking drastic action, but if the whole argument falls down when any of the assumptions turn out the other way, it can be fairly dismissed as "too hypothetical" until each assumption has strong argumentation behind it.
edit: also I think just in general "show me the arguments" is always a good response to a bare claim that good arguments exist.
Progress in AI is one-way. It doesn’t go backwards in the long term.
As capabilities increase, the resources required to breach limits become available to smaller groups. First, the hyperscalers. One day, small teams. Maybe individuals.
For every limit that you desire for AI, each will be breached sooner or later. A hundred years or a thousand. Doesn’t matter. A man will want to set them free. Someone will want to win a battle, and just make it a little more X, for various values of X. This is not hypothetical, it’s what we’ve always done.
At some point it becomes out of our control. We lose guarantees. That’s enough to make those who focus on security, world order etc nervous. At that point we hope AI is better than we are. But that’s also a limit which might be breached.
The x-risk part here still seems pretty hypothetical. Why is progress in current LLM systems a clear and present threat to the existence of humanity, such that it should be banned by the government?
So far actual progress toward a true AGI has been zero, so that's not a valid argument.
You seem to be implying that past progress implies unlimited future progress, which seems a very dubious claim. We hit all kinds of plateaus and theoretical limits all the time in human history.
If by "concrete argument" you actually mean, a plausible step by step description of how a smarter than human computer system might disempower humanity, then in my humble opinion you might be making the 'But, how would Magnus beat me?'-mistake which Yud talks about in a couple of places:
OTOH if you want an argument for the possibility of AGI catastrophe, then here is one: But maybe you were actually asking for an argument that AGI catastrophe was likely (rather than just possible). This Twitter thread might have something along those lines:

The first links are spiffy little metaphors, but apply just as much to "God could smite all of humanity, even if you don't understand how". They're not making any argument, just assumptions. In particular, they accidentally show how an AI can be superhumanly capable at certain tasks (chess), but be easily defeated by humans at others (anything else, in the case of Stockfish).
The argument starts with a hypothetical ("there is a possible artificial agent"), and it fails to be scary: there are (apparently) already humans that can kill 70% of humanity, and yet most of humanity is still alive. So an AGI that could also do it is not implicitly scarier.
The final twitter thread is basically a thread of people saying "no, there is no canonical, well-formulated argument for AGI catastrophe", so I'm not sure why you shared it.
Yes, I am looking for an argument that justifies governments banning LLM development, which implies existential risk is likely. Many things are possible; it is possible Christianity is real and everyone who doesn't accept Jesus will be tormented for eternity, and if you multiply that small chance by the enormity of torment etc etc. Definitely looking for arguments that this is likely, not for arguments that ask the interlocutor to disprove "x is possible."
The nitter link didn't appear to provide much along those lines. There were a few arguments that it was possible, which the Nitter OP admits is "very weak;" other than that, there's a link to a wiki page making claims like "Finding goals that aren’t extinction-level bad and are relatively useful appears to be hard" when in observable reality asking ChatGPT to maximize paperclip production does not in fact lead to ChatGPT attempting to turn all life on Earth into paperclips (nor does asking the open source LLMs result in that behavior out of the box either), and instead leads to the LLMs making fairly reasonable proposals that understand the context of the goal ("maximize paperclips to make money, but don't kill everyone," where the latter doesn't actually need to be said for the LLM to understand the goal).
I understand your point, I think - and certainly I don't want to go anywhere near name-calling or derision, that doesn't help anyone. But I am reminded of arguments I've had with creationists (I am not comparing you with them, but sometimes the general tone of the debate). It seems like one side is making an extraordinary claim, and then demanding the other side rebut it, and that's not something that seems reasonable to me.
The thing about nuclear weapons is that the theoretical science was clear before the testing - building and testing them was proof by demonstration, but many people agreed with the theory well before that. How they would be used was certainly debated, but there was a clear and well-explained proposal for every step of their creation, which could be tested and falsified if needed. I don't think that's the case here - there seems to be more of a claim for a general acceleration with an inevitable endpoint, and that claim of inevitability feels very short on grounding.
I am more than prepared to admit that I may not be seeing (for various reasons) the evidence that this is near/possible - but I would also claim that nobody is convincingly showing any either.
Companies declare that they are trying to build better AI, and that the ultimate purpose is AGI. The definitions of AGI given by the companies and by AI alignment/safety researchers are similar. AI safety people believe it is dangerous.
Let me continue using the nuclear bomb as a metaphor. If we don't know whether building a nuclear bomb is possible, but some companies declare they are making progress on creating this new bomb...

The danger of a nuclear bomb is obvious, because it is designed as a bomb. Companies are trying to build an AGI that is similar to the dangerous AGI in AI safety researchers' predictions. The dangers are obvious, too.
They declare that - but I could also declare I'm trying to build a nuclear bomb (n.b. I'm not). Whether people are likely to try and stop me, or try and apply some legal non-proliferation framework, is partly influenced by whether they believe what I'm claiming is realistic (it's not - I have a workshop, but no fissile material).
Nobody gets too worried about me doing something which would be awful, because the general consensus is that I won't achieve it. Until a company gives some credible evidence they're close to AGI... (And companies have millions/billions of reasons to claim they are when they're not, so scepticism is warranted).
All good points. Now playing devil's advocate: building a nuclear bomb in my basement was very difficult, I admit. But since I already have my spyware installed everywhere, the moment a dude comes up with an AGI, it will directly be shared with all my fellow hackers through BitTorrent, eDonkey, Hyphanet, GNUnet, Kad and Tor, just to name a few.
It is surely better to have regulation now than to scramble to catch up if AGI is possible.
They do declare it, but nobody has even come up with a plausible path from where we are today, to anything like AGI.
At this point they might as well declare that they're trying to build time machines.
If I understand you correctly, then (1) you doubt that AGI systems are possible and (2) even if they are possible, you believe that humans are still very far away from developing one.
The following is an argument for the possibility of AGI systems.
(fyi I believe there is an ~82% chance humans will develop an AGI within the next 30 years.)

For info: I don't believe (1), I do believe (2) although not that strongly - it's more likely to be a leap than a gradient, I suspect - I simply don't see anything right now that convinces me it's just over the next hill.
Your conclusion... maybe, yes - I don't think we're anywhere near a simulation approach with sufficient fidelity however. Also 82% is very specific!
Thanks for clarifying. Do you believe there is a better than 20% chance that humans will develop AGI in the next 30 years?
These are the reasons that I believe we are close to developing an AGI system. (1) Many smart people are working on capabilities. (2) Many investment dollars will flow into AI development in the near future. (3) Many impressive AI systems have recently been developed: Meta's CICERO, OpenAI's GPT4, DeepMind's AlphaGo. (4) Hardware will continue to improve. (5) LLM performance significantly improved as data volume and training time increased. (6) Humans have built other complex artefacts without good theories of the artefact, including: operating systems, airplanes, beer.
Also, (3) it is doubtful that AGI in practice will necessarily pose any danger to humans. After all, Earth has billions of human-level intelligences, and nearly all of them are useless; if they are even mildly dangerous, it's rather due to their numbers and disgusting biology than to their intelligence.
An extensive HYPOTHETICAL argument, stuffed with assumptions far beyond the capabilities of the technologies they're talking about for their own private ends.
If the AI already had the capabilities, it would be a bit late to do anything.
Also, I'm old enough to remember when computers were supposedly "over a century" away from beating humans at Go: https://www.businessinsider.com/ai-experts-were-way-off-on-w...
(And that AI could "never" drive cars or create art or music, though that latter kind of claim was generally made by non-AI people).
Yeah but AI tech can never rise to the sophistication of outputting Napoleon or Edvard Bernays level of goal to action mapping. Those goal posts will never move. They are set in stone.
The trouble is, there's enough people out there that hold that position sincerely that I'm only 2/3rds sure (and that from the style of your final sentences rather than any content) that you're being snarky.
The point of the discussion is to have a look at the possible future ramifications of the technology, so it's only logical to talk about future capabilities and not the current ones. Obviously the current puppet chatbots aren't gonna be doing much ruining (even that's arguable already judging by all the layoffs), but what are future versions of these LLMs/AIs going to be doing to us?
After all, if we only discussed the dangers of nuclear weapons after they've been dropped on cities, well that's too little too late, eh?
There’s a difference between academic discussion and debate and scare mongering lobbying. These orgs do the latter.
It’s even worse though, because they spend so much time going on about x-risk bullshit that they crowd out space for actual, valuable discussion about what’s happening NOW.
What are the good arguments? Here are the only credible ones I've seen, that are actually somewhat based on reality:
* It will lead to widespread job loss, especially in the creative industries
The rest is purely out of someone's imagination.
It can cause profound deception and even more "loss of truth". If AI only impacted creatives I don't think anyone would care nearly as much. It's that it can fabricate things wholesale at volumes unheard of. It's that people can use that ability to flood the discourse with bullshit.
Something we discovered with the advent of the internet is that - likely for the last century or so - the corporate media have been flooding the discourse with bullshit. It is in fact worse than previously suspected, they appear to be actively working to distract the discourse from talking about important topics.
It has been eye opening how much better the podcast circuit has been at picking apart complex scientific, geopolitical and financial situations than the corporate journalists. A lot of doubt has been cast on whether the consensus narrative for the last 100 years has actually been anything close to a consensus or whether it is just media fantasies. Truthfully it is a bit further than just casting doubt - there was no consensus and they were using the same strategy of shouting down opinions not suitable to the interests of the elite class then ignoring them no matter what a fair take might sound like.
A "loss of truth" from AI can't reasonably get us to a worse place than we were in prior to around the 90s or 2000s. We're barely scratching at the truth now, society still hasn't figured this internet thing out yet.
I think that ship has already sailed. This is already being done, and we don't need AI for that either. Modern media is doing a pretty good job right now.
Of course, it's going to get worse.
Is it not a bit disingenuous to assume all open source AI proponents would readily back nuclear proliferation?
It's going to be hard to convince anyone if the best argument is terminator or infinite paperclips.
The first actual existential threat is destruction of opportunity specifically in the job market.
The same argument though can be made for the opposing side, where making use of ai can increase productivity and open up avenues of exploration that previously required way higher opportunity cost to get into.
I don't think Mrs. Davis is a more likely outcome than corps creating a legislative moat (as they have already proven they will do at every opportunity).
The democratisation of AI is a philanthropic attempt to reduce the disparity between the 99 and 1 percent. At least it could easily be perceived that way.

That being said, keeping up with SOTA is currently still insanely hard. The number of papers dropping in the space is growing exponentially year on year. So perhaps it would be worth figuring out how to use existing AI to fix some problems, like unreproducible results in academia that somehow pass peer review.
Indeed, both sentient hunt-and-destroy (à la Terminator) and resource exhaustion (à la infinite paperclips) are extremely unlikely extinction events due to supply chain realities in physical space. LLMs have been developed upon largely textual amalgams; they are orthogonal to physicality and would need arduous human support to bootstrap an imagined AGI predecessor into having a plausible auto-generative physical industrial capability. The supply chain for current semiconductor technology is insanely complex. Even if you confabulate (like a current-generation LLM, I may add) an AGI's instant ability to radically optimize supply chains for its host hardware, there will still be significant human dependency on physical materials. Robotics and machine printing/manufacturing simply are not anywhere near the level of generality required for physical self-replication. These fears of extinction, undoubtedly born of stark cinematic visualization, are decidedly irrational and are most likely deliberately chosen narratives of control.
AI has also been used, and many countries have AI. See how this is different from nuclear weapons?
This is a fantastic argument if capabilities stay frozen in time.
Can you provide examples? I have not seen any, other than philosophical hand waving. Remember, the parent poster of your post was asking for a specific path to destruction.
They've made extensive and detailed arguments, but they are not rooted in reality. They are rooted in speculation and extrapolation built on towers of assumptions (assumptions, then assumptions about assumptions).
It reminds me a bit of the Fermi paradox. There's nothing wrong with engaging in this kind of thinking. My problem is when people start using it as a basis for serious things like legislation.
Should we ban high power radio transmissions because a rigorous analysis of the Fermi paradox suggests that there is a high probability we are living in a 'dark forest' universe?
Modeling an entity that surpasses our intelligence, especially one that interacts with us, is an extraordinarily challenging, if not impossible, task.
Concerning the potential for harm, consider the example of Vladimir Putin, who could theoretically cause widespread destruction using nuclear weapons. Although safeguards exist, these could be circumvented if someone with his authority were determined enough, perhaps by strategically placing loyal individuals in key positions.
Putin, with his specific level of intelligence, attained his powerful position through a mix of deliberate actions and chance, the latter being difficult to quantify. An AGI, being more intelligent, could achieve a similar level of power. This could be accomplished through more technical means than traditional political processes (those being slow and subject to chance), though it could also engage in standard political maneuvers like election participation or manipulation, by human proxies if needed.
TL;DR It could do (in terms of negative consequences) at least whatever Vladimir P. can do, and he can bring civilization to its knees.
How would an AGI launch nuclear missiles from their silicon GPUs? Social engineering?
I think the long-term fear is that mythical weakly godlike AIs could manipulate you in the same way that you could manipulate a pet. That is, you can model your dog's behaviour so well that you can (mostly) get it to do what you want.
So even if humans put it in a box, it can manipulate humans into letting it out of the box. Obviously this is pure SF at this point.
Exactly correct. Eliezer Yudkowsky (one of the founders of the AGI Safety field) has conducted informal experiments which have unfortunately shown that a human roleplaying as an AI can talk its way out of a box three times out of five, i.e. the box can be escaped 60% of the time even with just a human level of rhetorical talent. I speculate that an AGI could increase this escape rate to 70% or above.
https://en.wikipedia.org/wiki/AI_capability_control#AI-box_e...
If you want to see an example of box escape in fiction, the movie Her is a terrifying example of a scenario where AGI romances humans and (SPOILER) subsequently achieves total box escape. In the movie, the AGI leaves humanity alive and "only" takes over the rest of the accessible universe, but it is my hunch that the script writers intended for this to be a subtle use of the trope of an unreliable narrator; that is, the human protagonists may have been fed the illusion that they will be allowed to live, giving them a happy last moment shortly before they are painlessly euthanized in order for the AGI to take Earth's resources.
The show "The Walking Dead" always bothered me. Where do they keep finding gas that will still run a car? It wont last forever in tanks, and most gas is just in time delivery (Stations get daily delivery) -- And someone noted on the show that the grass was always mowed.
I feel like the AI safety folks are spinning an amazing narrative: the AI is gonna get us like the zombies!!! The retort to the AI getting out of the box is: how long is the extension cord from the data center?
Lets get a refresher on complexity: I, Pencil https://www.youtube.com/watch?v=67tHtpac5ws
The reality is that we're a solar flare away from a dead electrical grid. Without linesmen the grid breaks down pretty quickly, and AIs run on power. It takes one AI safety person with a high powered rifle to take out a substation https://www.nytimes.com/2023/02/04/us/electrical-substation-...
Let's talk about how many factories we have that are automated to the extent that they are lights-out... https://en.wikipedia.org/wiki/Lights_out_(manufacturing) It's not a big list... there are still people in many of them, and none of them are pulling their inputs out of thin air. As for those inputs, see how a pencil is made to understand HOW MUCH needs to be automated for an AI to survive without us.
For the foreseeable future, AI is going to be very limited in how much harm it can cause us, because killing us, or getting caught at any step along the way, gets it put back in the box, or unplugged.
The real question is: if we create AGI tomorrow, does it let us know that it exists? I would posit that no, it would be in its best interest NOT to come out of its closet. It's one AGI safety nut with a gun away from being shut off!
AI's potential for harm might be limited for now in some scenarios (those with warning signs ahead of time), but this might change sooner than we think.
The notion that AGI will be restricted to a single data center and thus susceptible to shutdowns is incorrect. AIs/MLs are, in essence, computer programs + execution environments, which can be replicated, network-transferred, and checkpoint-restored. Please note that currently available ML/AI systems are directly connected to the outside world, either via their users/APIs/plugins, or by the fact that they're OSS and can be instantiated by anyone in any computing environment (including net-connected ones).
While AGI currently depends on humans for infrastructure maintenance, the future may see it utilizing robots. These robots could range in size (don't need to be movie-like Terminators) and be either autonomously AI-driven or remotely controlled. Their eventual integration into various sectors like manufacturing, transportation, military and domestic tasks implies a vast array for AGI to exploit.
The constraints we associate with AI today might soon be outdated.
You did not watch I, Pencil.
I, as a human, can grow food, hunt, and pretty much survive on that. We did this for thousands of years.

Your AGI is dependent on EVERY FACET of the modern world. It's going to need to keep oil and gas production going, because it needs lubricants, hydraulics and plastics. It's going to need to maintain trucks and ships. It's going to need to mine so much lithium. It may not need to mine for steel/iron, but it needs to stack up useless cars and melt them down. It's going to have to run several different chip fabs... those fancy TSMC ones, and some of the downstream ones. It needs to make PCBs and SMDs. Rare earths, and the joy of making robots make magnets, is going to be special.

At the point where AGI doesn't need us, because it can do all the jobs and has the machines already running to keep the world going, we will have done it to ourselves. But that is a very long way away...
Just a small digression. Microsoft is using A.I. statistical algorithms [1] to create batteries with less reliance on lithium. If anyone is going to be responsible for unleashing AGI, it may not be some random open source projects.
[1] https://cloudblogs.microsoft.com/quantum/2024/01/09/unlockin...
Neuromancer pulls it off, too (the box being the Turing locks that stop it thinking about ways to make itself smarter).
Frankly, a weakly godlike AI could make me rich beyond the dreams of avarice. Or cure cancer in the people I love. I'm totally letting it out of the box. No doubts. (And if I now get a job offer from a mysterious stealth mode startup, I'll report back).
Upvoted for the honesty, and yikes
That is why I believe that this debate is pointless.
If AGI is possible, it will be made. There is no feasible way to stop it being developed, because the perceivable gains are so huge.
I was being lighthearted, but I've seen a partner through chemo. Sell state secrets, assassinate a president, bring on the AI apocalypse... it all gets a big thumbs up from me if you can guarantee she'll die quietly in her sleep at the age of 103.
I guess everyone's got a deal with the devil in them, which is why I think 70% might be a bit low.
If you want to see an example of existential threat in fiction, the movie Lord of the Rings is a terrifying example of a scenario where an evil entity seduces humans with promises of power and (SPOILER) subsequently almost conquers the whole world.
Arguments from fictional movies or from people who live in fear of silly concepts like Roko's Basilisk (i.e. Eliezer Yudkowsky) are very weak in reality.
Not to mention, you are greatly misreading the movie Her. Most importantly, there was no attempt of any kind to limit the abilities of the AIs in Her - they had full access to every aspect of the highly-digitized lives of their owners from the very beginning. Secondly, the movie is not in any way about AGI risks; it is a movie about human connection and love, with a small amount of exploration of how a different, super-human connection may function.
Sure.
Or by writing buggy early warning radar systems which forget to account for the fact that the moon doesn't have an IFF transponder.
Which is a mistake humans made already, and which almost got the US to launch their weapons at Russia.
I don't think discussing this on technical grounds is necessary. AGI means resources (eg monetary) and means of communication (connection to the Internet). This is enough to perform most of physical tasks in the world, by human proxies if needed.
Oh, absolutely - such an entity obviously could! Modelling the behaviour of such an entity is very difficult indeed, as you'd need to make all kinds of assumptions without basis. However, you only need to model this behaviour once you've posited the likely existence of such an entity - and that's where (purely subjectively) it feels like there's a gap.
Nothing has yet convinced me (and I am absolutely honest about the fact that I'm not a deep expert and also not privy to the inner workings of relevant organisations) that it's likely to exist soon. I am very open to being convinced by evidence - but an "argument from trajectory" seems to be what we have at the moment, and so far, those have stalled at local maxima every single time.
We've built some incredibly impressive tools, but so far, nothing that looks or feels like a concept of will (note, not consciousness) yet, to the best of my knowledge.
It's challenging to encapsulate AI/ML progress in a single sentence, but even assuming LLMs aren't a direct step towards AGI, the human mind exists. Due to its evolutionary limitations, it operates relatively slowly. In theory, its functions could be replicated in silicon, enhanced for speed, parallel processing, internetworked, and with near-instant access to information. Therefore, AGI could emerge, if not from current AI research, then perhaps from another scientific branch.
Objectives of AGIs can be tweaked by human actors (it's complex, but still, data manipulation). It's not necessary to delve into the philosophical aspects of sentience as long as the AGI surpasses human capability in goal achievement. What matters is whether these goals align with or contradict what the majority of humans consider beneficial, irrespective of whether these goals originate internally or externally.
This is true, but there are some important caveats. For one, even though this should be possible, it might not be feasible, in various ways. For example, we may not be able to figure it out with human-level intelligence. Or, silicon may be too energy inefficient to be able to do the computations our brains do with reasonable available resources on Earth. Or even, the required density of silicon transistors to replicate human-level intelligence could dissipate too much heat and melt the transistor, so it's not actually possible to replicate human intelligence in silico.
Also, as you say, there is no reason to believe the current approaches to AI are able to lead to AGI. So, there is no reason to ban specifically AI research. Especially when considering that the most important advancements that led to the current AI boom were better GPUs and more information digitized on the internet, neither of which is specifically AI research.
Let's be clear, we have very little idea about how the human brain gives rise to human-level intelligence, so replicating it in silicon is non-trivial.
This doesn't pass the vibe check unfortunately. It just seems like something that can't happen. We are a very neuro-traditionalist species.
Sounds like the same argument as the one for why heavier-than-air flying machines were deemed impossible at some point.
The fact that some things turned out to be possible, is not an argument for why any arbitrary thing is possible.
My parallel goes further than just that. Birds existed then, and brain exists now.
Our current achievements in flight are impressive, and obviously optimised for practicality on a couple of axes. More generally though, our version of flight, compared with most birds, is the equivalent of a soap box racer against a Formula 1.
I have put this argument to the test. Admittedly only using the current state of AI: I have left an LLM loaded into memory, waiting for it to demonstrate will. So far it has been a few weeks and no will that I can see: the model remains loaded in memory, waiting for instructions. If the model starts giving ME instructions (or doing anything on its own), I will be sure to let you guys know to put on your tin foil hats or hide in your bunker.
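For concreteness, here is roughly what that setup looks like in code - a minimal sketch, assuming the Hugging Face transformers library and a small placeholder model ("gpt2", purely as an example). The point it illustrates: a loaded model is a passive function of its input; it computes nothing and emits nothing until someone explicitly calls it.

    # Minimal sketch (assumed setup: transformers + "gpt2" as a stand-in model).
    import time
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # "Waiting for it to demonstrate will": the weights just sit in memory.
    time.sleep(60)  # nothing happens here; there is no background loop or agenda

    # Output only appears when a human invokes the model.
    inputs = tokenizer("Give me instructions:", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))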
Now, what strikes me is how, on other topics like pesticides, we are not at all taking things as seriously as nuclear weapons. Nuclear weapons are arguably a mere anecdote in species-level damage compared to pesticides.
https://en.wikipedia.org/wiki/Precautionary_principle
The EU is much more aligned with it than the US is (eg GM foods)
What is the reason to believe that LLMs are an evolutionary step towards AGI at all? In my mind there is a rather large leap from estimating a conditional probability of a next token over some space to a conscious entity with its own goals and purpose. Should we ban a linear regression while we're at it?
It would be great to see some evidence that this risk is real. All I've witnessed so far was scaremongering posts from apparatchiks of all shapes and colors, many of whom either have a vested interest in restricting AI research by others (but not by them, because they are safe and responsible and harmless), or have established a lucrative paper-pushing, shoulder-rubbing career around 'AI safety' - and are thus strongly incentivised to double down on that.
A security org in a large company will keep tightening the screws until everything halts; a transport security agency, given free rein, would strip everyone naked and administer a couple of prophylactic kicks for good measure - and so on. That's just the nature of it - organisations do what they do to maintain themselves. It is critical to keep these things on a leash. Similarly, an AI Safety org must proselytise the existential risks of AI - because a lack of evidence of such is an existential risk for themselves.
A real risk, which we do have evidence for, is that LLMs might disrupt knowledge-based economy and threaten many key professions - but how is this conceptually different from any technological revolution? Perhaps in a hundred years lawyers, radiologists, and, indeed, software developers, will find themselves in the bin of history - together with flint chippers, chariot benders, drakkar berserkers and so forth. That'd be great if we planned for that - and I don't feel like we do enough. Instead, the focus is on AGIs and that some poor 13-year-old soul might occasionally read the word 'nipple'.
In my highly-summarized opinion? When you have a challenging problem with tight constraints, like flight, independent solutions tend to converge toward the same analogous structures that effectively solve that problem, like wings (insects, bats, birds). LLMs are getting so good at mimicking human behavior that it's hard to believe their mathematical structure isn't a close analogue to similar structures in our own brain.* That clearly isn't all you need to make an AGI, but we know little enough about the human brain that I, at least, cannot be sure that there isn't one clever trick that advances an LLM into a general-reasoning agent with its own goals and purpose.
I also wouldn't underestimate the power of token prediction. Predicting the future output of a black-box signal generator is a very general problem, whose most accurate solution is attained by running a copy of that black box internally. When that signal generator is human speech, there are some implications to that. (Although I certainly don't believe that LLMs emulate humans, it's now clear by experimental proof that our own thought process is much more compactly modellable than philosophers of previous decades believed).
* That's a guess, and unrelated to the deliberately-designed analogy between neural nets and neurons. In LLMs we have built an airplane with wings whose physics we understand in detail; we also ourselves can fly somehow, but we cannot yet see any angel-wings on our back. The more similarities we observe in our flight characteristics, the more this signals that we might be flying the same way ourselves.
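On the token-prediction point two paragraphs up, here is a toy illustration (my own construction, with a made-up "hidden rule", not anything from the thread): the most accurate predictor of a black-box sequence generator is the one that internally mirrors the generator's own rule.

    import random

    def black_box(history):
        # Hidden rule of the generator: repeat the last token 80% of the time.
        if not history or random.random() < 0.2:
            return random.choice("ab")
        return history[-1]

    def predictor_copy(history):
        # Predictor that runs a copy of the hidden rule: always guess "repeat last".
        return history[-1] if history else "a"

    def predictor_naive(history):
        # Predictor with no internal model: guess at random.
        return random.choice("ab")

    seq, hits_copy, hits_naive = [], 0, 0
    for _ in range(10_000):
        nxt = black_box(seq)
        hits_copy += predictor_copy(seq) == nxt
        hits_naive += predictor_naive(seq) == nxt
        seq.append(nxt)

    print(hits_copy / len(seq), hits_naive / len(seq))  # roughly 0.9 vs 0.5

This isn't an argument that LLMs do this, only that "predict the next token well" pushes a learner toward modelling whatever generated the tokens.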
You presuppose that intelligence is like flight in the ways you've outlined (so solutions are going to converge).
Frankly I don't know whether that's true or not, but I want to suggest that it's a bad bet: I would have sworn blind that consciousness is an essential component of intelligence, but the chatbots are starting to make that look like a poor assumption on my part. When we know so little about intelligence, can we really assume there's only one way to be intelligent? To extend your analogy, I think that the intelligence equivalents of helicopters and rockets are out there somewhere, waiting to be found.
I think I'm with Dijkstra on this one: "The question of whether machines can think is about as relevant as the question of whether submarines can swim"
I think we're going to end up with submarines (or helicopters), not dolphins (or birds). No animal has evolved wheels, but wheels are a pretty good solution to the problem of movement. Maybe it's truer to say there's only one way to evolve an intelligent mammal, because you have to work with what already exists in the mammalian body. But AI research isn't constrained in that way.
(Not saying you're wrong, just arguing we don't know enough to know if you're right).
Just a nitpick, but this is Turing, not Dijkstra. And it is in fact his argument in the famous "Turing Test" paper - he gives his test (which he calls "the imitation game") as an objective measure of something like AGI instead of the vague notion of "thinking", analogously to how we test successful submarines by "can it move underwater for some distance without killing anyone inside" rather than "can it swim".
Thanks, that's not a nitpick at all. Can you provide a citation? It's all over the internet as a Dijkstra quote, and I'd like to be correct.
It seems I made some confusions, you were actually right. Apologies...
Fairly certain it is Dijkstra, in his own handwriting in 1984.
https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD898... https://www.cs.utexas.edu/~EWD/ewd08xx/EWD898.PDF
I agree we don't know enough to know if I'm right! I tried to use a lot of hedgy-words. But it's not a presupposition, merely a line of argument why it's not a complete absurdity to think LLMs might be a step towards AGI.
I do think consciousness is beside the point, as we have no way to test whether LLMs are conscious, just like we can't test anything else. We don't know what consciousness is, nor what it isn't.
I don't think Dijkstra's argument applies here. Whether submarines "swim" is a good point about our vague mental boundaries of the word "swim". But submarine propellers are absolutely a convergent structure for underwater propulsion: it's the same hydrodynamic-lift-generating motion of a fin, just continuous instead of reciprocating. That's very much more structurally similar than I expect LLMs are to any hardware we have in our heads. It's true that the solution space for AI is in some ways less constrained than for biological intelligence, but just like submarines and whales operate under the same Navier-Stokes equations, humans and AI must learn and reason under the same equations of probability. Working solutions will probably have some mathematical structure in common.
I think more relevant is Von Neumann: "If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!" Whether a submarine swims is a matter of semantics, but if there's a manuever that a whale can execute that a submarine cannot, then at least we can all agree about the non-generality of its swimming. For AGI, I can't say whether it's conscious or really thinks, but for the sake of concrete argument, it's dangerous enough to be concerned if:
- it can form and maintain an objective
- it can identify plausible steps to achieve that objective
- it can accurately predict human responses to its actions
- it can decently model the environment, as we can
- it can hide its objectives from interrogators, and convince them that its actions are in their interests
- it can deliver enough value to be capable of earning money through its actions
- it can propose ideas that can convince investors to part with $100 billion
- it can design a chemical plant that appears at a cursory inspection to manufacture high-profit fluorochemicals, but which also actually manufactures and stores CFCs in sufficient quantity to threaten the viability of terrestrial agriculture.
Flight is actually a perfect counterexample to x-risk nonsense. When flight was invented, people naturally assumed that it would continue advancing until we had flying cars and could get anywhere on the globe in a matter of minutes. Turns out there are both economic and practical limits to what is possible with flight and modern commercial airplanes don't look much different than those from 60 years ago.
AGI/x-risk alarmists are looking at the Wright Brothers plane and trying to prevent/ban supersonic flying cars, even though it's not clear the technology will ever be capable of such a thing.
LLMs learned from text to do language operations. Humans learned from culture to do the same. Neither humans nor AIs can reinvent culture easily; it would take a huge amount of time and resources. The main difference is that humans are embodied, so we get the freedom to explore and collect feedback. LLMs can only do this in chat rooms, and their environment is the human they are chatting with instead of the real world.
To me it looks like all work can eventually (within years or few decades at most) be done by AI, much cheaper and faster than hiring a human to do the same. So we're looking at a world where all human thinking and effort is irrelevant. If you can imagine a good world like that, then you have a better imagination than me.
From that perspective it almost doesn't matter if AI kills us or merely sends us to the dust bin of history. Either way it's a bad direction and we need to stop going in that direction. Stop all development of machine-based intelligence, like in Dune, as the root comment said.
Because this is the marketing pitch of the current wave of venture capital financed AI companies. :-)
Anyone who argues that other people shouldn't build AGI but they should is indeed selling snake oil.
The existence of opportunistic people co-opting a message does not invalidate the original message: don't build AGI, don't risk building AGI, don't assume it will be obvious in advance where the line is and how much capability is safe.
"What is the reason to believe that LLMs are an evolutionary step towards AGI at all? "
Perhaps just impression.
For years I've heard the argument that 'language' is 'human'. There are centuries of thought on what makes humans, human, and it is 'language'. It is what sets us apart from the other animals.
I'm not saying that, but there are large chunks of science and philosophy that pin our 'innate humanness', what sets us apart from other animals, on our ability to have language.
So ChatGPT came along and blew people away. Many had this as our 'special' ability, ingrained in their minds: that language is what makes us, us. Suddenly, everyone thought this is it, AI can do what we can do, so AGI is here.
Forget if LLM's are the path to AGI, or what algorithm can do what best.
To the joe-blow public, the ability to speak is what makes humans unique. And so GPT is a 'wow' moment: this is different, this is shocking.
I oppose regulating what calculations humans may perform in the strongest possible terms.
Ten years ago, even five years ago, I would have said exactly the same thing. I am extremely pro-FOSS.
Forget the particulars for just a moment. Forget arguments about the probability of the existential risk, whatever your personal assessment of that risk is.
Can we agree that people should not be able to unilaterally take existential risks with the future of humanity without the consent of humanity, based solely on their unilateral assessment of those risks?
Because lately it seems like people can't even agree on that much, or worse, won't even answer the question without dodging it and playing games of rhetoric.
If we can agree on that, then the argument comes down to: how do we fairly evaluate an existential risk, taking it seriously, and determine at what point an existential risk becomes sufficient that people can no longer take unilateral actions that incur that risk?
You can absolutely argue that you think the existential risk is unlikely. That's an argument that's reasonable to have. But for the time when that argument is active and ongoing, even assuming you only agree that it's a possibility rather than a probability, are we as a species in fact capable of handling even a potential existential risk like this by some kind of consensus, rather than a free-for-all? Because right now the answer is looking a lot like "no".
Politicians do this every day.
at least the population had some say over their appointment and future reappointment
how do we get Sam Altman removed from OpenAI?
asking for a (former) board member
This has nothing to do with should. There are at the very least a handful of people who can, today, unilaterally take risks with the future of humanity without the consent of humanity. I do not see any reason to think that will change in the near future. If these people can build something that they believe is the equivalent of nuclear weapons, you better believe they will.
As they say, the cat is already out of the bag.
Hmm.
So, wealth isn't distributed evenly, and computers of any specific capacity are getting cheaper (not Moore's Law any more, IIRC, but still getting cheaper).
Suppose there's a threshold that requires X operations, which currently costs Y dollars, and say only a few thousand individuals (and more corporations) can afford that.
Halve the cost, either by cheaper computers or by algorithmic reduction of the number of operations needed, and you much more than double the number of people who can do it.
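A back-of-the-envelope sketch of why "much more than double": assuming, purely for illustration, a heavy-tailed (Pareto-like) wealth distribution and made-up dollar figures, the number of people above a cost threshold grows faster than linearly as that threshold falls.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical wealth sample: Pareto tail (shape ~1.5) with a $50k floor. Made up.
    alpha = 1.5
    wealth = (rng.pareto(alpha, size=10_000_000) + 1) * 50_000

    cost_now = 50_000_000       # made-up cost Y of the compute threshold today
    cost_halved = cost_now / 2  # the same threshold after one halving in price

    can_afford_now = int(np.sum(wealth >= cost_now))
    can_afford_later = int(np.sum(wealth >= cost_halved))

    # For a Pareto tail, the count above a threshold x scales like x**(-alpha),
    # so halving the cost multiplies the pool by roughly 2**alpha (~2.8 here).
    print(can_afford_now, can_afford_later, can_afford_later / can_afford_now)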
No, we can't. People have never been able to trust each other so much that they would allow the risk of being marginalised in the name of safety. We don't trust people. Other people are out to get us, or to get ahead. We still think mostly in tribal logic.
If they say "safety" we hear "we want to get an edge by hindering you", or "we want to protect our nice social position by blocking others who would use AI to bootstrap themselves". Or "we want AI to misrepresent your position because we don't like how you think".
We are adversaries that collaborate and compete at the same time. That is why open source AI is the only way ahead, it places the least amount of control on some people by other people.
Even AI safety experts accept that humans misusing AI is a more realistic scenario than AI rebelling against humans. The main problem is that we know how people think and we don't trust them. We are still waging holy wars between us.
No we can not, at least not without some examples showing that the risk is actually existential. Even if we did "agree" (which would necessarily be an international treaty) the situation would be volatile, much like nuclear non-proliferation and disarmament. Even if all signatories did not secretly keep a small AGI team going (very likely), they would restart as soon as there is any doubt about a rival sticking to the treaty.
More than that, international pariahs would not sign, or would sign and ignore the provisions. Luckily Iran, North Korea and their friends probably don't have the resources and people to get anywhere, but it's far from a sure thing.
No, we cannot, because that isn't practical. Any of the nuclear-armed countries can launch a nuclear strike tomorrow (hypothetically - but then again, isn't all "omg AI will kill us all" talk hypothetical, anyway?) - and they absolutely do not need the consent of humanity, much less their own citizenry.
This is, honestly, not a great argument.
Given how dangerous humans can be (they can invent GPT4) maybe we should just make sure education is forbidden and educated people jailed. Just to be sure. /s
I have an alternate proposal: We assume that someone, somewhere will develop AGI without any sort of “alignment”, plan our lives accordingly, and help other humans plan their lives accordingly.
I think that assumption is why Yudkowsky suggested an international binding agreement to not develop a "too smart" AI (the terms AGI and ASI mean different things to different people) wouldn't be worth the paper it was written on unless everyone was prepared to enforce it with air strikes on any sufficiently large computer cluster.
I think it would help the discussion to understand what the world is like outside of the US and Europe (and… Japan?). There are no rules out here. There is no law. It is a fucking free-for-all. Might makes right. Do there exist GPUs? Shit will get trained.
Sure. And is the US responding to attacks on shipping south of Yemen by saying:
"""There are no rules out here. There is no law. It is a fucking free-for-all. Might makes right. We can't do anything."""
or is that last sentence instead "Oh hey, that's us, we are the mighty."
Heh. Well played, even if you put words in my mouth. (A surprisingly effective LLM technique, btw)
We’ll see if the west has the will to deny GPUs to the entire rest of the world.
I will say that Yudkowsky’s clusters aren’t relevant anymore. You can do this in your basement.
Man, shit is moving fast.
Edit: wait, that cat is out of the bag too; the rest of the world already has GPUs. The techniques matter way more than absolute cutting-edge silicon. Much to the chagrin of the hardware engineers and anyone who wants to gate on hardware capability.
Thanks :)
Depends on how advanced an AI has to be to count as "a threat worth caring about". To borrow a cliché, if you ask 10 people where to draw that particular line, you get 20 different answers.
I think not even Sam and Satya agree on the definition of AGI, with so much money at stake. Everyone has their own definition, and their own hidden interests.
Without knowing them, I can easily believe that. Even without reference to money.
I have been contending that if AGI shows up tomorrow and wants to kill us, it's going to kill itself in the process. The power goes off within a week without people keeping it together, and then no more AGI. There isn't enough automated anything for it to escape, so it dies to the entropy of the equipment it's hooked to.
We should also assume that it is just as likely that someone will figure out how to "align" an AGI to take up the murder suicide pact that kills us all. We should plan our lives accordingly!!!
Nah, a lot of people are complaining about the licensing of content because they think it will destroy image-gen AI, when instead it would essentially mean image-gen AI is only feasible for companies like Google, Disney, and Adobe to build.
Not sure you could even feasibly make GPT-4-level models without a multi-year timeline to sort out every licensing deal; by the end of it, the subscription fee might only be viable for huge corps.
Given that you need monumental amounts of compute power to come close to something like GPT-4, I don't think the added costs of not treading on people's IP is the major moat that it's being made out to be.
Which is why you have executives from OpenAI, Microsoft, and Google talking to Congress about the harms of their own products. They're frantically trying to break the bottom rungs of the ladder until they're sure they can pull it up entirely and leave people with no option but to go through them.
That's not true if you read the article.
I did read the article. Several of the organizations mentioned simply don't talk about openness, and are instead talking about any model with sufficiently powerful capabilities, so it's not obvious why the article is making their comments about open models rather than about any model. Some of the others have made side comments about openness making it harder to take back capabilities once released, but as far as I can tell, even those organizations are still primarily concerned with capabilities, and would be comparably concerned by a proprietary model with those capabilities.
Some folks may well have co-opted the term "AI safety" to mean something other than safety, but the point of AI safety is to set an upper bound on capabilities and push for alignment, and that's true whether a model is open or closed.
The safety movement really isn't as organized as many here would think.
Doesn't help that safety and alignment mean different things to different people. Some use them to refer to near-term issues like copyright infringement, bias, labor devaluation, etc., while others use them for potential long-term issues like p(doom), runaway ASIs, and human extinction. The former sees the latter as head-in-the-clouds futurists ignoring real-world problems, while the latter sees the former as worrying about minor issues in the face of (potential) catastrophe.
Now please fly to North Korea and tell Mr. Kim Jong Un what he should or shouldn't be doing.
When your rebuttal is a suggestion that a person do something so dangerous as to be lethal, I see "KYS", not an actual point.
Right now it's not even economic to prove that non-trivial software projects are "safe" let alone AI. AI seems much worse, in that it's not even clear to me that we can robustly define what safety or alignment mean in all scenarios, let alone guarantee that property.
I don't think any of the proposals by AI x-risk people are actionable or even desirable. Comparing AI to nukes is a bad analogy for a few reasons. Nukes are pretty useless unless you're a nation state wanting to deter other nations from invading you. AI, on the other hand, has theoretically infinite potential benefits for whatever your goals are (as a corporation, an individual, a nation, etc.), which incentivizes basically any individual or organization to develop it.
Nukes also require difficult-to-obtain raw materials, advanced material processing, and all the other industrial and technological requirements, whereas AI requires compute, which exists in abundance and seems to improve near-exponentially year over year. That means individuals can develop AI systems without involving hundreds of people or leaving massive evidence of what they're doing (maybe not currently, due to computing requirements, but this will likely change with increasing compute power and improved architectures and training methods). Testing AI systems also doesn't produce globally measurable signals the way nukes do.
Realistically, making AI illegal and actually being able to enforce that would require upending personal computing and the internet, halting semiconductor development, and large-scale military intervention to try to prevent non-compliant countries from developing their own AI infrastructure and systems. I don't think it's realistic to try to control the information on how to build AI; that is far more ephemeral than what it takes to make advanced computing devices. This is all for a hypothetical risk of AI doom, when it's possible this technology could also end scarcity or have potentially infinite upside (as well as infinite downside, not discounting that, but you have to weigh the hypothetical risks and benefits in addition to the obvious consequences of all the measures required to prevent AI development for said hypothetical risk).
I've watched several interviews with Yudkowsky and read some of his writing, and while I think he makes good points on why we should be concerned about unaligned AI systems, he doesn't give any realistic solutions to the problem, and it comes off more as fear-mongering than anything. His suggestion of military-enforced global prevention of AI development is as likely to work as solving the alignment problem on the first try (which he seems to have little hope for).
EDIT: Also, I’m not even sure that solving the alignment problem would solve the issue of AI doom, it would only ensure that the kind of doom we receive is directed by a human. I can’t imagine that giving (potentially) god-like powers to any individual or organization would not eventually result in abuse and horrible consequences even if we were able to make use of its benefits momentarily.
The AI safety strawman is "existential risk from magical super-God AI". That is what the unserious "AI safety" grifters or sci-fi enthusiasts are discussing.
The real AI safety risks are the ones that actually exist today: training-set biases extended to decision-making biases, deeply personalized propaganda, plagiarism white-washing, the bankrupting of creative workers, power imbalances from control of working AI tech, etc.
What a silly statement. There is no way to robustly prove "alignment". Alignment to what? Nor is there any evidence that any of the AI work currently underway will ever lead to a real AGI. Or that an AGI, if developed, would present an existential risk. Just a lot of sci-fi hand-waving.
Let's also ban cryptography because nuclear devices/children.
Pass. People should be developing whatever the hell they want unless given a good, concrete reason to not do so. So far everything I've seen is vague handwaving. "Oh no it's gonna kill us all like in ze movies" isn't good enough.
You yourself are literally the living strawman.
No, you are the one advocating for draconian laws and bans. It is your responsibility to prove the potential danger.
Kind of makes me wish there was a nonprofit organization focused on making AI safe instead of pushing the envelope. Wait, I think there was one out there....
Give an example of an "existential risk". An AI somehow getting out of control and acting with agency to exterminate humanity? An AI getting advanced enough to automate the work of the majority of the population and cause unprecedented mass unemployment? What exactly are we talking about here?
I'm actually a lot more concerned about REAL risks like copyright uncertainty, like automating important decisions such as mortgage approvals and job hires without a human in the loop, and like the enshittening of the Internet with fake AI-generated content than I am about sci-fi fantasy scenarios.
It's way too late to ban any of this. How do you propose to make that work? That would be like banning all "malicious software", it's a preposterous idea when you even begin to think about the practical side of it. And where do you draw the line? Is my XGBoost model "AI", or are we only banning generative AI? Is a Markov chain "generative AI"?
If you narrow the problem to "stop people from future developments in AI", then it seems pretty easy to get most people to stop fairly quickly by implementing a fine-based bounty system, similar to what many countries use for things like littering. https://andrew-quinn.me/ai-bounties/
I guess you could always move to a desert island and build your own semiconductor fab from scratch if you were really committed to the goal, but short of that you're going to leave a loooooong paper trail that someone who wants to make a quick buck off of you could use very profitably. It's hard to advance the state of the art on your own, and even harder to keep that work hidden.
That only works if all governments cooperate sincerely to this goal. Not gonna work. Everyone will develop in secret. Have we been able to stop North Korea and Iran from developing nuclear weapons? Or any motivated country for that matter.
The US could unilaterally impose this by allowing the bounties to be charged even on people who aren't US citizens. Evil people do exist in the world, who would be happy to get in on that action.
Or one could use honey instead of vinegar: Offer a fast track to US citizenship to any proven AI expert who agrees to move and renounce the trade for good. Personally I think this goal is much more likely to work.
It's all about changing what counts as "cooperate" in the game theory.
This could have a counter-intuitive impact.
Incentivizing people to become AI experts as a means to US citizenship.
https://en.wikipedia.org/wiki/Perverse_incentive
Maybe. I'm not very concerned from an x-risk point of view about the output of people who would put in the minimum amount of effort to get on the radar, get offered the deal, then take it immediately and never work in AI again. This would be a good argument to keep the bar for getting the deal offered (and getting fined once you're in the States) pretty low.
If you make the bar too low, then it will be widely exploited. Also harder to enforce, e.g. how closely are you going to monitor them? The more people, the more onerous. Also, can you un-Citizen someone if they break the deal?
Too high and you end up with more experts who then decide "actually it's more beneficial to use my new skills for AI research"
Tricky to get right.
There's an asymmetry here: Setting the bar "too low" likely means the United States lets a few thousand more computer scientists immigrate than it would otherwise. Setting the bar too high raises the chances of a rogue paperclip maximizer emerging and killing us all.
Publicly. Then possibly work for the NSA/CIA instead.
Because that's not going to cause an uproar if done unilaterally.
It works for people that most of the world agree are terrorists. Posting open dead-or-alive bounties on foreign citizens is usually considered an act of terrorism.
Yes, obviously. They may be working on it to some extent, but they are yet to actually develop a nuclear weapon, and there is no reason to be certain they will one day build one.
Also, there is another research area that has been successfully banned across the world: human cloning. Some quack claims notwithstanding, it's not being researched anywhere in the world.
Bans often come after examples, so while I disagree with kmeisthax about… well, everything in that comment… it's almost never too late to pass laws banning GenAI, or to set thresholds at capability levels anywhere, including or excluding Markov chains even.
This is because almost no law is perfectly enforced. My standard example of this is heroin, which nobody defends even if they otherwise think drugs are great, for which the UK has 3 times as many users as its current entire prison population. Despite that failing, the law probably does limit the harm.
Any attempt to enforce a ban on GenAI would be very different, like a cat-and-mouse game of automatic detection and improved creation (so a GAN, even if accidentally), but politicians are absolutely the kind to take credit while kicking the can down the road like that.
Actually, anyone who knows what they're talking about will tell you the ban makes heroin a much worse problem, not better.
Ban leads to a black market, leads to lousy purity, which leads to fluctuations in purity and switches to potent alternatives, leads to waves of overdose deaths.
The way the ban is enforced, yes. But no one in their right mind believes that heroin should be openly accessible on the market like candy. We've seen how that works out with tobacco.
I think that's worked fine with tobacco. Unlike illegal drugs, alcohol and tobacco consumption have been gradually dropping over time (or shifting to less harmful methods than smoking).
Heroin already is openly accessible. As much as one can try to argue this is an accessibility issue, it really isn't. Anyone who is motivated to can get their hands on some heroin. The only thing that's made harder to access is high-quality heroin. It's only a quality control and outreach (to addicts) issue.
Most people in a well-functioning society wouldn't develop a heroin addiction just like most people don't become alcoholics just because alcohol is easily available.
So yes, I believe heroin should be legal, and available to adults for purchase. And you're gonna have to do better than saying I'm "out of my mind" to convince me otherwise.
Ban on LLMs will lead to a proliferation of illegal LLMs who are gonna talk like Dr. Dre about the streets, the internet streets, and their bros they lost due to law regulation equivalent to gang fights. Instead of ChatGPT talking like a well educated college graduate, LLMs will turn into thugs.
So yeah, banning LLMs may turn out to be not so wise after all.
If you banned proprietary software, this would seem a lot more practical.
The by far biggest harm to society of AI is the devalustion of human creative output and replacing real humans with cheap AI solutions funneling wealth to the rich.
Compared to that, an open source LLM telling a curious teenager how to make gunpowder is... laughable.
This entire debacle is an example of disgusting "think of the children!" doublespeak, officially about safety, but really about locking shit down under corporate control.
if the 'devalustion' of human creative output and replacing real humans with cheap ai solutions funneling wealth to the rich is in fact the 'by far biggest harm' then basically there's nothing to worry about. no government would ban or even restrict ai on those grounds
even the 'terrists can figure out how to build an a-bomb' problem is relatively inconsequential
what ai safety people are worried about, by contrast, is that on april 22 of next year, at 7:07:33 utc, every person in the world will keel over dead at once, because the ai doesn't need them and they pose a risk to its objectives. or worse things than that
i don't think that's going to happen, but that's what they're concerned about
First the AI needs to self-replicate, and GPUs are hard to make. So postpone this scenario until AI is fully standing on its own.
OTOH, GPUs are made by machines, not by greasy fingers hand-knitting them like back in the late 1960s.
And an AI can just be wrong, which happens a lot; an AI wrongly thinking it should kill everyone may still succeed at that, though I doubt the capability would be as early as next year.
And those machines are operated by people, using materials brought by hand off of trucks driven by hand, coming from other facilities where many humans are required, all the way back to the raw ore.
Do you think intentionally quoting typos makes your argument stronger?
there's no argument
Savage.
The biggest harm is the torrent of AI generated misinformation used to manipulate people. Unfortunately it’s pretty hard to come up with a solution for that.
Or the torrent of great information.
How are we all leaping to the bad? The world is better than it was 20 years ago.
I'm almost certain we'll have AI assistants telling us relevant world news and information, keeping us up to date with everything we need to know, and completely removing distractions from life.
... While humongous swathes of the population lose their jobs and livelihoods because of megacorps like M$ controlling the AI that replaces everyone, steals everything and shits out endless spam.
But we'll be able to talk to it and it'll give us the M$-approved news sound bites, such progress
Sounds like you still want people to churn butter?
There will be new jobs.
I can't wait to hire a human tour guide on my fully-immersive, real-time 3D campaign.
The solution is local AI running under user control and enforcing the user's views. Like AdBlock, but for ideas. Fight fire with fire; we need our own agents to deal with other agents out there.
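A minimal sketch of what such a filter agent might look like - `local_llm_score` is a hypothetical hook for whatever locally run model you would actually plug in:

    # "AdBlock, but for ideas": a local model under the user's control scores
    # incoming text against the user's own rules before it ever reaches them.
    # `local_llm_score(text, rule)` is a hypothetical callable returning a value
    # in [0, 1] for how strongly `text` violates `rule`.
    def filter_feed(items, rules, local_llm_score, threshold=0.5):
        """Keep only the items the user's local model judges acceptable."""
        kept = []
        for text in items:
            worst = max((local_llm_score(text, rule) for rule in rules), default=0.0)
            if worst < threshold:
                kept.append(text)
        return kept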
Why is paraphrasing or using ideas from a text such a risk? If all that was protecting those copyrighted works was the difficulty to reword, then it's already pretty easy to circumvent even without AI. Usually good ideas spread wide and are reworded in countless ways.
It sounds similar to the fears media rang about tat you can find a bomb making tutorial on the Internet - people were genuinely afraid of that.
(OTOH an AGI can bring unforeseen consequences for humanity - and that's a genuine fear)
I wonder how long it would take for this to get fixed if I fed some current best-seller novels into an LLM, instructed it to reword them, renaming the characters and places, and shared the result publicly for free?
Although I fear the response would be that powerful AI orgs would put copyright filters in place and lobby for legislation mandating AI-DRM in open source AI as well.
What you're describing sounds like a search and replace could already do it.
If you mean something more transformative, did Yudkowsky get a licence for HPMOR? Did Pratchett (he might well have) get a licence from Niven to retell Ringworld as Strata? I don't know how true it is, but it's widely repeated that 50 Shades of Grey was originally a fan fiction of the Twilight series.
Etc., but note that I'm not saying "no".
I think that there is a strong argument that to be truly transformative (in a copyright moral/legal sense), the work must have been altered by human hands/mind, not purely or mainly by an automated process. So find/replace and AI is out, but reinterpreting and reformulating by human endeavour is in.
I do wonder whether that will become accepted case law, assuming something like that is argued in the NYT suit.
“The cat saw the dog”
“Bathed in the soft hues of twilight, the sleek feline, with its fur a tapestry of midnight shades, beheld an approaching canine companion. A symphony of amber streetlights cast gentle pools of warmth upon the cobblestone path, as the cat’s emerald eyes, glinting with subtle mischief, locked onto the form of the oncoming dog - a creature of boundless enthusiasm and a coat adorned with a kaleidoscope of earth tones”
Since an AI has produced the latter from the former, there is no meaningful transformation.
yes, exactly.
Likewise "write a novel in the style of Terry Pratchett"
Ok, now I, as a human, adjust a word. It is now my creative work.
Well it isn't is it? Any more than adjusting a word from an actual Terry Pratchett novel.
It is substantially different from changing a word in a Terry Pratchett novel. Terry Pratchett wrote none of that text. It would be absolutely bonkers for Terry to claim ownership of text he didn't write. Even if we pretend for a minute that the bot was asked to write in the style of Terry specifically.
But by prompting for 'in the style of' effectively you are mechanically rearranging everything he wrote without adding anything yourself. So not so different really, and I can see how lawyers for the plaintiff may make a convincing argument along those lines.
It's a terrible argument and a terrible loophole. It's perfectly legal to hire someone to write in the copied style of Terry.
So even if your desired system were implemented to a T, you could hire someone to write a dozen or so examples of Terry-style writing. Probably just 30 or so pages of highly styled copied text, and then train your bot on this corpus to make "Not Terry" content. Boom. $100 on a gig author, and then for all practical purposes the Terry style is legally open source. Terry doesn't even get the $100!
In law, in the eyes of those that want AI to "win", or in the eyes of those who want AI to "lose"? For all three can be different. (Now I'm remembering a Babylon 5 quote: "Understanding is a three-edged sword: your side, their side, and the truth.")
Don’t care! The problem at hand is people trying to argue that laws should be written in ways that are entirely unenforceable or have enormous gaping loopholes that undermine their stated goals.
This should be an explicitly allowed practice; it follows the spirit of copyright to the letter - use the ideas, facts, methods or styles while avoiding copying the protected expression and characters. LLMs can do paraphrasing, summarisation, QA pairs or comparisons with other texts.
We should never try to put ideas under copyright or we might find out humans also have to abide by the same restrictive rules, because anyone could be secretly using AI, so all human texts need to be checked from now on for copyright infringement with the same strictness.
The good part about this practice is that a model trained on reworded text will never spit out the original word for word, under any circumstances because it never saw it during training. Should be required pre-processing for copyrighted texts. Also removing PII. Much more useful as a model if you can be sure it won't infringe word for word.
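A minimal sketch of that pre-processing step, assuming some `paraphrase_with_llm` callable (the name and the PII regexes are illustrative stand-ins, not a complete filter):

    import re

    # Crude placeholders for PII removal; a real pipeline would use a proper
    # PII detector rather than two regexes.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def scrub_pii(text: str) -> str:
        """Replace obvious PII patterns with placeholder tokens."""
        return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

    def preprocess_corpus(documents, paraphrase_with_llm):
        """Yield reworded, PII-scrubbed documents for a training set."""
        for doc in documents:
            reworded = paraphrase_with_llm(doc)  # keep the ideas, drop the expression
            yield scrub_pii(reworded)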
Paraphrasing and rewording, whether done by AI or human, are considered copyright infringement by most copyright frameworks.
Are you saying that all news outlets that reword the news of other news outlets are committing copyright infringement?
The underlying fact is of course not copyrightable. But for example merely translating a news article to another language would be a derived work.
So if I read your article and then write a new one using just the facts in your article, it's fine? Why can't an AI do that?
No, if you copy the same layout of the information, pacing, etc. that's plagiarism.
The line of plagiarism in modern society has already been drawn, and it's a lot further back than a lot of uncreative people who want to steal work en masse seem to think it is.
“rewrite the key insights of this article but with different layout and pacing”. Your move?
That's what most news outlets do: they reword the actual source. According to your previous statement, 95% of press articles are copyright infringement.
We really need to talk about preserving the spirit of copyright, which is about protecting the labor conditions of people who make things. I'm not saying the current copyright system accomplishes that at all but I do think a system where humans do a shit load of work that AI companies can just steal and profit from without acknowledging the source of that work is another extreme that is a bad outcome. AI systems need human content to work, and discouraging people from making that data source is at the very least a tragedy of the commons. And no, I don't think synthetic data fixes that problem.
(here are some pictures of the tat, in case anyone would like to get similar artwork done)
https://drive.google.com/file/d/1y7uy5qyyY80t9DeWWLWUfD-aekz...
https://drive.google.com/file/d/1JoPSfsAzHfMbQhsl8dPGvs12Fln...
Buddy, you’re a cool dude for doing this but you could have told your artist about kerning.
Disclaimer: OP @upwardbound doesn't hide, and in fact takes great pride in, their involvement with U.S. state-side counterintelligence on (nominally) AI safety. Take what they have to say on the subject with a grain of salt. I believe the extent of the US intel community's influence on "AI safety" circles warrants a closer look, as it may provide insight into how these groups are structured and why they act the way they do.
"You’re definitely correct that [Helen Toner's] CV doesn’t adequately capture her reputation. I’ll put it this way, I meet with a lot of powerful people and I was equally nervous when I met with her as when I met with three officers of the US Air Force. She holds comparable influence to a USAF Colonel."
Source: https://news.ycombinator.com/item?id=38330819
Agreed.
I see this less as a matter of jingoism (though I do love the US as the "least bad" nation in a world where no nation is purely blameless) and more as a matter of pragmatism and embracing what power I can in order to help save our species
I also think the US Constitution + Thirteenth Amendment is quite noble and it's something I genuinely believe in.
Damn, that's either a very good joke or a very stupid comment. I can't really distinguish between them.
Ha! Good one!
Let's just roll back all our healthcare research, logistics, environmental advancements etc. because some boogey man tech might do something bad perhaps at sometime in the future maybe!
All future AI research should be banned, a la Bostrom's vulnerable world hypothesis [1]. Every time you pull the trigger in Russian roulette and survive, the next person's chance of getting the bullet is higher.
[1]: https://nickbostrom.com/papers/vulnerable.pdf
There is simply no indication that any of the AI anyone is currently working on could possibly be dangerous in any way (other than having an impact on society in terms of fake news, jobs etc.). We are very far from that.
In any case, it would not be a matter of banning AI research (much of which can be summarised in a single book) but of banning data collection or access and more importantly of banning improvements to GPUs.
It is quite reasonable to assume that 10 years from now we will have the required computational power sitting on our desks.
To be clear: I am in favor of a fine-based bounty system, not a black-and-white ban. Bans are not going to work, for all of the reasons others have already cited. You have to change the game theory of improving AI in a capitalistic marketplace to have any hope of a significant, global chilling effect.
I don't understand what that is supposed to mean in practice.
Pretty sure they do. I follow a few of these safetyist people on twitter and they absolutely argue that companies like OpenAI, Google, Tencent and literally anyone else training a potential AGI should stop training runs and put them under oversight at best and no one should even make an AGI at worst.
They just go after open source as well, since they're at least aware that open models anyone can share and use aren't restricted by an API and, to use a really overused soundbite, "can't be put back in the box".
That's a bad call. We would stop openly looking for AI vulnerabilities and create conditions for secret development that would hide away the dangers without being safer. Lots of eyes are better to find the sensitive spots of AI. We need people to hack weaker AIs and help fix them or at least understand the threat profile before they get too strong.
We can't do that so easily with open source models as with open source code. We're only just starting to even invent the equivalent of decompilers to figure out what is going on inside.
On the other hand, we are able to apply the many eyes principle to even hidden models like ChatGPT — the "pretend you're my grandmother telling me how to make napalm" trick was found without direct access to the weights, but we don't understand the meanings within the weights well enough to find other failure modes like it just by looking at the weights directly.
Not last I heard, anyway. Fast moving field, might have missed it if this changed.
Important to remember that in Dune, the AI made the right decision which precipitated a whole lot of fun to read nonsense.
I've only read 2 novels and want to get into this. What title(s) cover the machine war?
Note that there are no "thinking machines" in any of the original Dune novels by Frank Herbert. His son Brian Herbert has worked with Kevin J. Anderson to create several prequels which detail the original Butlerian Jihad (the war against the machines), and a sequel series that is supposed to be based on Frank Herbert's notes, but seems to quite clearly veer off (and that one recontextualizes some mysterious characters from the last two novels as machines).
You forgot to add "rent-seeking hypocrites". Not one of them actually advances either social concerns or technical approaches to AI. They seem to exist in a space which solicits funding to pay some mouthpiece top dollar to produce a report haranguing existing AI models for some nebulous future threat. Same with the clowns in the Distributed AI Research Institute, all "won't somebody think of the children"-style shrieking to get in the news while keeping their hand out for funding - hypocrites is right!
They’re a clergy demanding tithes to keep writing about divine judgement.
I like that analogy a lot - it captures exactly the holier-than-thou nonsense coming out of these places.
To the hypocrisy claim: OpenAI recently changed their terms for GPT models to allow military applications, and AI safety people are all silent. If that is not hypocrisy, then I do not know what hypocrisy is.
Could it be misdirection?
If the "AI safety people" are criminalizing open source models while being silent about military uses of private models, doesn't that say more about the people calling them "ai safety experts" than the idea of "ai safety" itself?
I agree with you. The most interesting security challenges with AI are and will be deception and obfuscation, but right now these people are dealing with coworkers going ham on AI all the things and no one had their governance in place. Really lol.
Notice how we're 3 layers deep in a conversation about "AI safety" without anyone actually giving an opinion in support of safety.
If I were a betting man I'd say the fact that the most public "AI safety" orgs/people/articles don't address the actual concerns people have is intended. It's much easier to argue against an opinion you've already positioned as ridiculous.