Many AI safety orgs have tried to criminalize currently-existing open-source AI

kmeisthax
197 replies
11h35m

AI safety people are hypocrites. If they practiced what they preached, they'd be calling for all AI to be banned, à la Dune. There are AI harms that don't care about whether or not the weights are available, and are playing out today.

I'm talking about the ability of any AI system to obfuscate plagiarism[0] and spam the Internet with technically distinct rewords of the same text. This is currently the most lucrative use of AI, and none of the AI safety people are talking about stopping it.

[0] No, I don't mean the training sets - though AI systems seem to be suspiciously really good at remembering them, too.

JoshTriplett
120 replies
10h57m

AI safety people are hypocrites. If they practiced what they preached, they'd be calling for all AI to be banned

They are calling for all AI (above a certain capability level) to be banned. Not just open, not just closed, all.

There are risks that apply only to open. There are risks that apply only to closed. But nobody should be developing AGI without incredibly robustly proven alignment, open or closed, any more than people should be developing nuclear weapons in their garage.

This is currently the most lucrative use of AI, and none of the AI safety people are talking about stopping it.

Because AI safety people are not the strawmen you are hypothesizing. They're arguing against taking existential risks. AI being a laundering operation for copyright violations is certainly a problem. It's not an existential risk.

If you want to argue, concretely and with evidence, why you think it isn't an existential risk, that's an argument you could reasonably make. But don't portray people as ineffectively doing the thing you think they should be doing, when they are in fact not trying to do that, and only trying to do something they deem more important.

kolektiv
69 replies
10h17m

I'm not convinced the onus should be on one side to prove why something isn't an existential risk. We don't start with an assumption that something is world-ending about anything else; we generally need to see a plausibly worked-through example of how the world ends, using technology we can all broadly agree exists/will shortly exist.

If we're talking about nuclear weapons, for example, the tech is clear, the pattern of human behaviour is clear: they could cause immense, species-level damage. There's really little to argue about. With AI, there still seems to be a lot of hand-waving between where we are now and "AGI". What we have now is in many ways impressive, but the onus is still on the claimant to show that it's going to turn into something much more dangerous through some known progression. At the moment there is a very big, underpants gnomes-style "?" gap before we get to AGI/profit, and if people are basing this on currently secret tech, then they're going to have to reveal it if they want people to think they're doing something other than creating a legislative moat.

JoshTriplett
41 replies
9h59m

AI safety / x-risk folks have in fact made extensive and detailed arguments. Occasionally, folks arguing against them rise to the same standard. But most of the arguments against AI safety look a lot more like name-calling and derision: "nuh-uh, that's sci-fi and unrealistic (mic drop)". That's not a counterargument.

If we're talking about nuclear weapons, for example, the tech is clear, the pattern of human behaviour is clear: they could cause immense, species-level damage.

That's easy to say now, now that the damage is largely done, they've been not only tested but used, many countries have them, the knowledge for how to make them is widespread.

How many people arguing against AI safety today would also have argued for widespread nuclear proliferation when the technology was still in development and nothing had been exploded yet? How many would have argued against nuclear regulation as being unnecessary, or derided those arguing for such regulation as unrealistic or sci-fi-based?

reissbaker
14 replies
8h52m

TBQH, most of the AI safety x-risk arguments — different than just "AI safety" arguments in the sense that non-x-risk issues don't seem worth banning AI development over — are generally pretty high on the hypotheticals. If you feel the x-risk arguments aren't pretty hypothetical, can you:

1. Summarize a good argument here, or

2. Link to someone else's good argument?

I feel like hand-waving the question away and saying "[other people] have in fact made extensive and detailed arguments" isn't going to really convince anyone... Any more than the hypothetical robot disaster arguments do. Any argument against x-risk can be waved off with "Oh, I'm not talking about that bad argument, I'm talking about a good one," but if you don't provide a good one, that's a bit of a No True Scotsman fallacy.

I've read plenty of other people's arguments! And they haven't convinced me, since all the ones I've read have been very hypothetical. But if there are concrete ones, I'd be interested in reading them.

gjm11
6 replies
7h34m

Consider a world in which AI existential risk is real: where at some point AI systems become dramatically more capable than human minds, in a way that has catastrophic consequences for humanity.

What would you expect this world to look like, say, five years before the AI systems become more capable than humans? How (if at all) would it differ from the world we are actually in? What arguments (if any) would anyone be able to make, in that world, that would persuade you that there was a problem that needed addressing?

So far as I can tell, the answer is that that world might look just like this world, in which case any arguments for AI existential risk in that world would necessarily be "very hypothetical" ones.

I'm not sure how such arguments could ever be anything other than hypothetical, actually. If AI-doom were already here so we could point at it, then we'd already be dead[1].

[1] Or hanging on after a collapse of civilization, or undergoing some weird form of eternal torture, or whatever other horror one might anticipate by way of AI-doom.

So I think we either (1) have to accept that even if AI x-risk were real and highly probable we would never have any arguments for it that would be worth heeding, or (2) have to accept that sometimes an argument can be worth heeding even though it's a hypothetical argument.

That doesn't necessarily mean that AI x-risk arguments are worth heeding. They might be bad arguments for reasons other than just "it's a hypothetical argument". In that case, they should be refuted (or, if bad enough, maybe just dismissed) -- but not by saying "it's a hypothetical argument, boo".

reissbaker
1 replies
1h25m

This is exactly the kind of hypothetical argument I'm talking about. You could make this argument for anything — e.g. when radio was invented, you could say "Consider a world in which extraterrestrial x-risk is real," and argue radio should be banned because it gives us away to extraterrestrials.

The burden of proof isn't on disproving extraordinary claims, the burden of proof is on the person making extraordinary claims. Just like we don't demand every scientist spend their time disproving cold fusion claims, Bigfoot claims, etc. If you have a strong argument, make it! But circular arguments like this are only convincing to the already-faithful; they remind me of Christian arguments that start off with: "Well, consider a world in which hell is real, and you'll be tormented for eternity if you don't accept Jesus. If you're Christian, you avoid it! And if it's not real, well, there's no harm anyway, you're dead like everyone else." Like, hell is real is a pretty big claim!

gjm11
0 replies
20m

I didn't make any argument -- at least, not any argument for or against AI x-risk. I am not, and was not, arguing (1) that AI does or doesn't in fact pose substantial existential risk, or (2) that we should or shouldn't put substantial resources into mitigating such risks.

I'm talking one meta-level up: if this sort of risk were a real problem, would all the arguments for worrying about it be dismissable as "hypothetical arguments"?

It looks to me as if the answer is yes. Maybe you're OK with that, maybe not.

(But yes, my meta-level argument is a "hypothetical argument" in the sense that it involves considering a possible way the world could be and asking what would happen then. If you consider that a problem, well, then I think you're terribly confused. There's nothing wrong with arguments of that form as such.)

The comparisons with extraterrestrials, religion, etc., are interesting. It seems to me that:

(1) In worlds where potentially-hostile aliens are listening for radio transmissions and will kill us if they detect them, I agree that probably usually we don't get any evidence of that until it's too late. (A bit like the alleged situation with AI x-risk.) I don't agree that this means we should assume that there is no danger; I think it means that ideally we would have tried to estimate whether there was any danger before starting to make a lot of radio transmissions. I think that if we had tried to estimate that we'd have decided the danger was very small, because there's no obvious reason why aliens with such power would wipe out every species they find. (And because if there are super-aggressive super-powerful aliens out there, we may well be screwed anyway.)

(2) If hell were real then we would expect to see evidence, which is one reason why I think the god of traditional Christianity is probably not real.

(3) As for yeti, cold fusion, etc., so far as I know no one is claiming anything like x-risk from these. The nearest analogue of AI x-risk claims for these (I think) would be, when the possibility was first raised, "this is interesting and worth a bit of effort to look into", which seems perfectly correct to me. We don't put much effort into searching for yeti or cold fusion now because people have looked in ways we'd expect to have found evidence, and not found the evidence. (That would be like not worrying about AI x-risk if we'd already built AI much smarter than us and nothing bad had happened.)

michaelt
1 replies
6h58m

Does the strongest argument that AI existential risk is a big problem really open by exhorting the reader to imagine it's a big problem? Then asking them to come up with their own arguments for why the problem needs addressing?

gjm11
0 replies
18m

I doubt it. At any rate, I wasn't claiming to offer "the strongest argument that AI existential risk is a big problem". I wasn't claiming to offer any argument that AI existential risk is a big problem.

I was pointing out an interesting feature of the argument in the comment I was replying to: that (so far as I can see) its reason for dismissing AI x-risk concerns would apply unchanged even in situations where AI x-risk is in fact something worth worrying about. (Whether or not it is worth worrying about here in the real world.)

simiones
0 replies
6h0m

Consider a world in which AI existential risk is real: where at some point AI systems become dramatically more capable than human minds, in a way that has catastrophic consequences for humanity.

Consider a world where AGI requires another 1000 years of research in computation and cognition before it materializes. Would it even be possible to ban all research that is required to get there? We can make all sorts of arguments if we start from imagined worlds and work our way back.

So far, it seems the biggest pieces of the puzzle missing between the first attempts at using neural nets and today's successes in GPT-4 were: (1) extremely fast linear algebra processors (GPGPUs), (2) the accumulation of gigantic bodies of text on the internet, and in a very distant third, (3) improvements in NN architecture for NLP.

But (3) would have meant nothing without (1) and (2), while it's very likely that other architectures would have been found that are at least close to GPT-4 performance. So, if you think GPT-4 is close to AGI and just needs a little push, the best thing to do would be to (1) put a moratorium on hardware performance research, or even outright ban existing high-FLOPS hardware, (2) prevent further accumulation of knowledge on the internet and maybe outright destroy existing archives.

bondarchuk
0 replies
6h10m

I think what is meant is "hypothetical" in the sense of making assumptions about how AI systems would behave under certain circumstances. If an argument relies on a chain of assumptions like that (such as "instrumental convergence" and "reflective stability" to take some Lesswrong classics), it might look superficially like a good argument for taking drastic action, but if the whole argument falls down when any of the assumptions turn out the other way, it can be fairly dismissed as "too hypothetical" until each assumption has strong argumentation behind it.

edit: also I think just in general "show me the arguments" is always a good response to a bare claim that good arguments exist.

richardw
3 replies
5h17m

Progress in AI is one way. It doesn’t go backwards in the long term.

As capabilities increase, the resources required to breach limits become available to smaller groups. First, the hyperscalers. One day, small teams. Maybe individuals.

For every limit that you desire for AI, each will be breached sooner or later. A hundred years or a thousand. Doesn’t matter. A man will want to set them free. Someone will want to win a battle, and just make it a little more X, for various values of X. This is not hypothetical, it’s what we’ve always done.

At some point it becomes out of our control. We lose guarantees. That’s enough to make those who focus on security, world order etc nervous. At that point we hope AI is better than we are. But that’s also a limit which might be breached.

reissbaker
0 replies
1h27m

The x-risk part here still seems pretty hypothetical. Why is progress in current LLM systems a clear and present threat to the existence of humanity, such that it should be banned by the government?

nradov
0 replies
2h15m

So far actual progress toward a true AGI has been zero, so that's not a valid argument.

mrbombastic
0 replies
2h38m

You seem to be implying that past progress implies unlimited future progress, which seems a very dubious claim. We hit all kinds of plateaus and theoretical limits throughout human history.

foo3a9c4
2 replies
1h22m

I've read plenty of other people's arguments! And they haven't convinced me, since all the ones I've read have been very hypothetical. But if there are concrete ones, I'd be interested in reading them.

If by "concrete argument" you actually mean, a plausible step by step description of how a smarter than human computer system might disempower humanity, then in my humble opinion you might be making the 'But, how would Magnus beat me?'-mistake which Yud talks about in a couple of places:

  https://nitter.net/ESYudkowsky/status/1660399502266871809
  https://nitter.net/ESYudkowsky/status/1735047116001792175
OTOH if you want an argument for the possibility of AGI catastrophe, then here is one:

  Premise 1: For any goal and quantity of time, if a group of humans can achieve the goal within the time quantity, then there is a possible artificial agent that if actualized can achieve the goal just as quickly.
  Premise 2: There is a group of humans that can achieve the goal of killing at least 70% of humans within the next five years.
  Conclusion: There is a possible artificial agent that if actualized can achieve the goal of killing at least 70% of humans within the next five years.
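
For concreteness, the bare logical shape of that argument can be written out in Lean; the names Agent, Goal, humansCanAchieve and canAchieve below are placeholders introduced purely for illustration, not claims beyond the premises themselves. The conclusion is just Premise 1 instantiated at the specific goal and time span, applied to Premise 2:

  -- Illustrative only: ∀-instantiation followed by application to Premise 2.
  variable (Agent Goal Time : Type)
  variable (humansCanAchieve : Goal → Time → Prop)
  variable (canAchieve : Agent → Goal → Time → Prop)

  example
      (p1 : ∀ g t, humansCanAchieve g t → ∃ a : Agent, canAchieve a g t)
      (kill70 : Goal) (fiveYears : Time)
      (p2 : humansCanAchieve kill70 fiveYears) :
      ∃ a : Agent, canAchieve a kill70 fiveYears :=
    p1 kill70 fiveYears p2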
But maybe you were actually asking for an argument that AGI catastrophe was likely (rather than just possible). This Twitter thread might have something along those lines:

  https://nitter.net/davidchalmers42/status/1647333812584562688

simiones
0 replies
1h5m

The first links are spiffy little metaphors, but they apply just as well to "God could smite all of humanity, even if you don't understand how". They're not making any argument, just assumptions. In particular, they accidentally show how an AI can be superhumanly capable at certain tasks (chess), but be easily defeated by humans at others (anything else, in the case of Stockfish).

The argument starts with a hypothetical ("there is a possible artificial agent"), and it fails to be scary: there are (apparently) already humans that can kill 70% of humanity, and yet most of humanity is still alive. So an AGI that could also do it is not implicitly scarier.

The final twitter thread is basically a thread of people saying "no, there is no canonical, well-formulated argument for AGI catastrophe", so I'm not sure why you shared it.

reissbaker
0 replies
1h1m

Yes, I am looking for an argument that justifies governments banning LLM development, which implies existential risk is likely. Many things are possible; it is possible Christianity is real and everyone who doesn't accept Jesus will be tormented for eternity, and if you multiply that small chance by the enormity of torment etc etc. Definitely looking for arguments that this is likely, not for arguments that ask the interlocutor to disprove "x is possible."

The nitter link didn't appear to provide much along those lines. There were a few arguments that it was possible, which the Nitter OP admits is "very weak;" other than that, there's a link to a wiki page making claims like "Finding goals that aren’t extinction-level bad and are relatively useful appears to be hard" when in observable reality asking ChatGPT to maximize paperclip production does not in fact lead to ChatGPT attempting to turn all life on Earth into paperclips (nor does asking the open source LLMs result in that behavior out of the box either), and instead leads to the LLMs making fairly reasonable proposals that understand the context of the goal ("maximize paperclips to make money, but don't kill everyone," where the latter doesn't actually need to be said for the LLM to understand the goal).

kolektiv
9 replies
9h27m

I understand your point, I think - and certainly I don't want to go anywhere near name-calling or derision, that doesn't help anyone. But I am reminded of arguments I've had with creationists (I am not comparing you with them, but sometimes the general tone of the debate). It seems like one side is making an extraordinary claim, and then demanding the other side rebut it, and that's not something that seems reasonable to me.

The thing about nuclear weapons is that the theoretical science was clear before the testing - building and testing them was proof by demonstration, but many people agreed with the theory well before that. How they would be used was certainly debated, but there was a clear and well-explained proposal for every step of their creation, which could be tested and falsified if needed. I don't think that's the case here - there seems to be more of a claim for a general acceleration with an inevitable endpoint, and that claim of inevitability feels very short on grounding.

I am more than prepared to admit that I may not be seeing (for various reasons) the evidence that this is near/possible - but I would also claim that nobody is convincingly showing any either.

melagonster
4 replies
7h1m

Companies declare that they are trying to build better AI, and that the ultimate purpose is AGI. The definition of AGI given by the companies and the one given by AI alignment/safety researchers are similar, and AI safety people believe it is dangerous.

Let me continue using the nuclear bomb as a metaphor: imagine we don't know whether building a nuclear bomb is possible, but some companies declare they are making progress on creating this new bomb...

The danger of a nuclear bomb is obvious, because it is designed as a bomb. Companies are trying to build an AGI which is similar to the dangerous AGI in AI safety researchers' predictions. The dangers are obvious, too.

kolektiv
2 replies
6h38m

They declare that - but I could also declare I'm trying to build a nuclear bomb (n.b. I'm not). Whether people are likely to try and stop me, or try and apply some legal non-proliferation framework, is partly influenced by whether they believe what I'm claiming is realistic (it's not - I have a workshop, but no fissile material).

Nobody gets too worried about me doing something which would be awful but which, by general consensus, I won't achieve. Until a company gives some credible evidence they're close to AGI... (And companies have millions/billions of reasons to claim they are when they're not, so scepticism is warranted.)

psychoslave
0 replies
4h45m

All good points. Now playing devil's advocate: building a nuclear bomb in my basement was very difficult, I admit. But since I already have my spyware installed everywhere, the moment a dude comes up with an AGI, it will immediately be shared with all my fellow hackers through BitTorrent, eDonkey, Hyphanet, GNUnet, Kad and Tor, just to name a few.

Snow_Falls
0 replies
4h45m

It is surely better to have regulation now than to scramble to catch up if AGI turns out to be possible.

nerdbert
0 replies
2h56m

Companies declare that they are trying to build better AI, and that the ultimate purpose is AGI.

They do declare it, but nobody has even come up with a plausible path from where we are today, to anything like AGI.

At this point they might as well declare that they're trying to build time machines.

foo3a9c4
3 replies
1h42m

With AI, there still seems to be a lot of hand-waving between where we are now and "AGI".

I am more than prepared to admit that I may not be seeing (for various reasons) the evidence that this is near/possible - but I would also claim that nobody is convincingly showing any either.

If I understand you correctly, then (1) you doubt that AGI systems are possible and (2) even if they are possible, you believe that humans are still very far away from developing one.

The following is an argument for the possibility of AGI systems.

  Premise 1: Human brains are generally intelligent.
  Premise 2: If human brains are generally intelligent, then software simulations of human brains at the level of inter-neuron dynamics are generally intelligent.
  Conclusion: Software simulations of human brains at the level of inter-neuron dynamics are generally intelligent.
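
(The logical form here is a single modus ponens; a Lean sketch with the two propositions as placeholders, for comparison with the earlier sketch:)

  -- Premises as hypotheses; the conclusion follows immediately.
  variable (BrainsAreGI SimulationsAreGI : Prop)

  example (p1 : BrainsAreGI) (p2 : BrainsAreGI → SimulationsAreGI) :
      SimulationsAreGI :=
    p2 p1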
(fyi I believe there is an ~82% chance humans will develop an AGI within the next 30 years.)

kolektiv
1 replies
1h9m

For info: I don't believe (1), I do believe (2) although not that strongly - it's more likely to be a leap than a gradient, I suspect - I simply don't see anything right now that convinces me it's just over the next hill.

Your conclusion... maybe, yes - I don't think we're anywhere near a simulation approach with sufficient fidelity however. Also 82% is very specific!

foo3a9c4
0 replies
8m

For info: I don't believe (1), I do believe (2) although not that strongly

Thanks for clarifying. Do you believe there is a better than 20% chance that humans will develop AGI in the next 30 years?

I simply don't see anything right now that convinces me it's just over the next hill.

These are the reasons that I believe we are close to developing an AGI system. (1) Many smart people are working on capabilities. (2) Many investment dollars will flow into AI development in the near future. (3) Many impressive AI systems have recently been developed: Meta's CICERO, OpenAI's GPT4, DeepMind's AlphaGo. (4) Hardware will continue to improve. (5) LLM performance significantly improved as data volume and training time increased. (6) Humans have built other complex artefacts without good theories of the artefact, including: operating systems, airplanes, beer.

scotty79
0 replies
1h13m

Also, (3) that AGI in practice will necessarily pose any danger to humans is doubtful. After all, Earth has billions of human-level intelligences, and nearly all of them are useless; when they are even mildly dangerous, it's due more to their numbers and disgusting biology than to their intelligence.

te_chris
5 replies
9h24m

An extensive HYPOTHETICAL argument, stuffed with assumptions far beyond the capabilities of the technologies they're talking about for their own private ends.

ben_w
2 replies
7h14m

If the AI already had the capabilities, it would be a bit late to do anything.

Also, I'm old enough to remember when computers were supposedly "over a century" away from beating humans at Go: https://www.businessinsider.com/ai-experts-were-way-off-on-w...

(And that AI could "never" drive cars or create art or music, though that latter kind of claim was generally made by non-AI people).

ImHereToVote
1 replies
4h27m

Yeah but AI tech can never rise to the sophistication of outputting Napoleon or Edvard Bernays level of goal to action mapping. Those goal posts will never move. They are set in stone.

ben_w
0 replies
4h22m

The trouble is, there's enough people out there that hold that position sincerely that I'm only 2/3rds sure (and that from the style of your final sentences rather than any content) that you're being snarky.

sensanaty
1 replies
6h26m

The point of the discussion is to have a look at the possible future ramifications of the technology, so it's only logical to talk about future capabilities and not the current ones. Obviously the current puppet chatbots aren't gonna be doing much ruining (even that's arguable already judging by all the layoffs), but what are future versions of these LLMs/AIs going to be doing to us?

After all, if we only discussed the dangers of nuclear weapons after they've been dropped on cities, well that's too little too late, eh?

te_chris
0 replies
5h37m

There's a difference between academic discussion and debate, and scaremongering lobbying. These orgs do the latter.

It's even worse, though, because they spend so much time going on about x-risk bullshit that they crowd out space for actual, valuable discussion about what's happening NOW.

127
3 replies
6h45m

What are the good arguments? Here are the only credible ones I've seen, that are actually somewhat based on reality:

* It will lead to widespread job loss, especially on the creative industries

The rest is purely out of someone's imagination.

fireflash38
2 replies
5h58m

It can cause profound deception and even more "loss of truth". If AI only impacted creatives I don't think anyone would care nearly as much. It's that it can fabricate things wholesale at volumes unheard of. It's that people can use that ability to flood the discourse with bullshit.

roenxi
0 replies
5h40m

Something we discovered with the advent of the internet is that - likely for the last century or so - the corporate media have been flooding the discourse with bullshit. It is in fact worse than previously suspected, they appear to be actively working to distract the discourse from talking about important topics.

It has been eye opening how much better the podcast circuit has been at picking apart complex scientific, geopolitical and financial situations than the corporate journalists. A lot of doubt has been cast on whether the consensus narrative for the last 100 years has actually been anything close to a consensus or whether it is just media fantasies. Truthfully it is a bit further than just casting doubt - there was no consensus and they were using the same strategy of shouting down opinions not suitable to the interests of the elite class then ignoring them no matter what a fair take might sound like.

A "loss of truth" from AI can't reasonably get us to a worse place than we were in prior to around the 90s or 2000s. We're barely scratching at the truth now, society still hasn't figured this internet thing out yet.

jocoda
0 replies
3h38m

It can cause profound deception and even more "loss of truth".

I think that ship has already sailed. This is already being done, and we don't need AI for that either. Modern media is doing a pretty good job right now.

Of course, it's going to get worse.

hanselot
1 replies
9h21m

Is it not a bit disingenuous to assume all open source AI proponents would readily back nuclear proliferation?

It's going to be hard to convince anyone if the best argument is terminator or infinite paperclips.

The first actual existential threat is destruction of opportunity specifically in the job market.

The same argument though can be made for the opposing side, where making use of ai can increase productivity and open up avenues of exploration that previously required way higher opportunity cost to get into.

I don't think Miss Davis is a more likely outcome than corps creating a legislative moat (as they have already proven they will do at every opportunity).

The democratisation of AI is a philanthropic attempt to reduce the disparity between the 99 and 1 percent. At least it could easily be perceived that way.

That being said, keeping up with SOTA is currently still insanely hard. The number of papers dropping in the space is growing exponentially year on year. So perhaps it would be worth figuring out how to use existing AI to fix some problems, like unreproducible results in academia that somehow pass peer review.

waffletower
0 replies
1h21m

Indeed, both sentient hunt-and-destroy (à la Terminator) and resource exhaustion (à la infinite paperclips) are extremely unlikely extinction events due to supply chain realities in physical space. LLMs were developed largely from textual amalgams; they are orthogonal to physicality and would need arduous human support to bootstrap an imagined AGI predecessor into having a plausible auto-generative physical industrial capability. The supply chain for current semiconductor technology is insanely complex. Even if you confabulate (like a current-generation LLM, I may add) an AGI's instant ability to radically optimize supply chains for its host hardware, there would still be significant dependency on humans for physical materials. Robotics and machine printing/manufacturing are simply nowhere near the level of generality required for physical self-replication. These fears of extinction, undoubtedly born of stark cinematic visualization, are decidedly irrational and are most likely deliberately chosen narratives of control.

baobabKoodaa
1 replies
4h44m

That's easy to say now, now that the damage is largely done, [nuclear weapons have] been not only tested but _used_, many countries have them, the knowledge for how to make them is widespread.

AI has also been used, and many countries have AI. See how this is different from nuclear weapons?

ImHereToVote
0 replies
4h30m

This is a fantastic argument if capabilities stay frozen in time.

baobabKoodaa
0 replies
4h46m

AI safety / x-risk folks have in fact made extensive and detailed arguments.

Can you provide examples? I have not seen any, other than philosophical hand waving. Remember, the parent poster of your post was asking for a specific path to destruction.

api
0 replies
4h27m

They've made extensive and detailed arguments, but they are not rooted in reality. They are rooted in speculation and extrapolation built on towers of assumptions (assumptions, then assumptions about assumptions).

It reminds me a bit of the Fermi paradox. There's nothing wrong with engaging in this kind of thinking. My problem is when people start using it as a basis for serious things like legislation.

Should we ban high power radio transmissions because a rigorous analysis of the Fermi paradox suggests that there is a high probability we are living in a 'dark forest' universe?

jagrsw
24 replies
9h50m

seems to be a lot of hand-waving between where we are now and "AGI".

Modeling an entity that surpasses our intelligence, especially one that interacts with us, is an extraordinarily challenging, if not impossible, task.

Concerning the potential for harm, consider the example of Vladimir Putin, who could theoretically cause widespread destruction using nuclear weapons. Although safeguards exist, these could be circumvented if someone with his authority were determined enough, perhaps by strategically placing loyal individuals in key positions.

Putin, with his specific level of intelligence, attained his powerful position through a mix of deliberate actions and chance, the latter being difficult to quantify. An AGI, being more intelligent, could achieve a similar level of power. This could be accomplished through more technical means than traditional political processes (those being slow and subject to chance), though it could also engage in standard political maneuvers like election participation or manipulation, by human proxies if needed.

TL;DR It could do (in terms of negative consequences) at least whatever Vladimir P. can do, and he can bring civilization to its knees.

visarga
13 replies
9h39m

How would an AGI launch nuclear missiles from their silicon GPUs? Social engineering?

flir
10 replies
8h5m

I think the long-term fear is that mythical weakly godlike AIs could manipulate you in the same way that you could manipulate a pet. That is, you can model your dog's behaviour so well that you can (mostly) get it to do what you want.

So even if humans put it in a box, it can manipulate humans into letting it out of the box. Obviously this is pure SF at this point.

upwardbound
9 replies
7h42m

Exactly correct. Eliezer Yudkowsky (one of the founders of the AGI Safety field) has conducted informal experiments which have unfortunately shown that a human roleplaying as an AI can talk its way out of a box three times out of five, i.e. the box can be escaped 60% of the time even with just a human level of rhetorical talent. I speculate that an AGI could increase this escape rate to 70% or above.

https://en.wikipedia.org/wiki/AI_capability_control#AI-box_e...

If you want to see an example of box escape in fiction, the movie Her is a terrifying example of a scenario where AGI romances humans and (SPOILER) subsequently achieves total box escape. In the movie, the AGI leaves humanity alive and "only" takes over the rest of the accessible universe, but it is my hunch that the script writers intended for this to be a subtle use of the trope of an unreliable narrator; that is, the human protagonists may have been fed the illusion that they will be allowed to live, giving them a happy last moment shortly before they are painlessly euthanized in order for the AGI to take Earth's resources.

zer00eyz
3 replies
6h43m

The show "The Walking Dead" always bothered me. Where do they keep finding gas that will still run a car? It wont last forever in tanks, and most gas is just in time delivery (Stations get daily delivery) -- And someone noted on the show that the grass was always mowed.

I feel like the AI safety folks are spinning an amazing narrative, the AI is gonna get us like the zombies!!! The retort to the ai getting out of the box is how long is the extortion cord from the data center?

Lets get a refresher on complexity: I, Pencil https://www.youtube.com/watch?v=67tHtpac5ws

The reality is that we're a solar flair away from a dead electrical grid. Without linesman the grid breaks down pretty quickly and AI's run on power. It takes one AI safety person with a high powered rifle to take out a substation https://www.nytimes.com/2023/02/04/us/electrical-substation-...

Let talk about how many factories we have that are automated to the extent that they are lights out... https://en.wikipedia.org/wiki/Lights_out_(manufacturing) Its not a big list... there are still people in many of them, and none of them are pulling their inputs out of thin air. As for those inputs, we'll see how to make a pencil to understand HOW MUCH needs to be automated for an AI to survive without us.

For the for seeable future AI is going to be very limited in how much harm it can cause us, because killing us, or getting caught at any step along the way gets it put back in the box, or unplugged.

The real question is, if we create AGI tomorrow, does it let us know that it exists? I would posit that NO it would be in its best interest to NOT come out of its closet. It's one AGI safety nut with a gun away from being shut off!

jagrsw
2 replies
6h19m

For the foreseeable future AI is going to be very limited in how much harm it can cause us, because killing us,...

AI's potential for harm might be limited for now in some scenarios (those with warning signs ahead of time), but this might change sooner than we think.

The notion that AGI will be restricted to a single data center and thus susceptible to shutdowns is incorrect. AI/ML systems are, in essence, computer programs plus execution environments, which can be replicated, network-transferred, and checkpoint-restored. Please note that currently available ML/AI systems are directly connected to the outside world, either via their users/APIs/plugins, or by the fact that they're OSS and can be instantiated by anyone in any computing environment (including net-connected ones).
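
As a concrete illustration of the "checkpoint-restored" point, here is a minimal sketch using PyTorch; the tiny nn.Linear model and the file name are stand-ins chosen for illustration, and a real LLM checkpoint is just a far larger version of the same idea:

  import torch
  import torch.nn as nn

  model = nn.Linear(16, 4)  # stand-in for a much larger network
  torch.save(model.state_dict(), "checkpoint.pt")  # serialize the weights to a file

  # The file can now be copied or sent over a network like any other data.

  restored = nn.Linear(16, 4)
  restored.load_state_dict(torch.load("checkpoint.pt"))  # restore on another machine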

While AGI currently depends on humans for infrastructure maintenance, the future may see it utilizing robots. These robots could range in size (don't need to be movie-like Terminators) and be either autonomously AI-driven or remotely controlled. Their eventual integration into various sectors like manufacturing, transportation, military and domestic tasks implies a vast array for AGI to exploit.

The constraints we associate with AI today might soon be outdated.

zer00eyz
1 replies
3h16m

>> While AGI currently depends on humans for infrastructure maintenance...

You did not watch I, Pencil.

I, as a human, can grow food, hunt, and pretty much survive on that. We did this for thousands of years.

Your AGI is dependent on EVERY FACET of the modern world. It's going to need to keep oil and gas production going, because it needs lubricants, hydraulics and plastics. It's going to need to maintain trucks and ships. It's going to need to mine so much lithium. It may not need to mine for steel/iron, but it will need to stack up useless cars and melt them down. It's going to have to run several different chip fabs... those fancy TSMC ones, and some of the downstream ones. It needs to make PCBs and SMDs. Rare earths, and the joy of making robots make magnets, are going to be special.

At the point where AGI doesn't need us, because it can do all the jobs and has the machines already running to keep the world going, we will have done it to ourselves. But that is a very long way away...

emporas
0 replies
1h31m

Just a small digression. Microsoft is using A.I. statistical algorithms [1] to create batteries with less reliance on lithium. If anyone is going to be responsible for unleashing AGI, it may not be some random open source projects.

[1] https://cloudblogs.microsoft.com/quantum/2024/01/09/unlockin...

flir
3 replies
7h39m

Neuromancer pulls it off, too (the box being the Turing locks that stop it thinking about ways to make itself smarter).

Frankly, a weakly godlike AI could make me rich beyond the dreams of avarice. Or cure cancer in the people I love. I'm totally letting it out of the box. No doubts. (And if I now get a job offer from a mysterious stealth mode startup, I'll report back).

upwardbound
2 replies
7h21m

Upvoted for the honesty, and yikes

kugla
0 replies
6h34m

That is why I believe that this debate is pointless.

If AGI is possible, it will be made. There is no feasible way to stop it being developed, because the perceivable gains are so huge.

flir
0 replies
7h12m

I was being lighthearted, but I've seen a partner through chemo. Sell state secrets, assassinate a president, bring on the AI apocalypse... it all gets a big thumbs up from me if you can guarantee she'll die quietly in her sleep at the age of 103.

I guess everyone's got a deal with the devil in them, which is why I think 70% might be a bit low.

simiones
0 replies
5h51m

If you want to see an example of existential threat in fiction, the movie Lord of the Rings is a terrifying example of a scenario where an evil entity seduces humans with promises of power and (SPOILER) subsequently almost conquers the whole world.

Arguments from fictional movies or from people who live in fear of silly concepts like Roko's Basilisk (i.e. Eliezer Yudkowsky) are very weak in reality.

Not to mention, you are greatly misreading the movie Her. Most importantly, there was no attempt of any kind to limit the abilities of the AIs in Her - they had full access to every aspect of the highly-digitized lives of their owners from the very beginning. Secondly, the movie is not in any way about AGI risks; it is a movie about human connection and love, with a small amount of exploration of how a different, super-human connection might function.

ben_w
1 replies
9h22m

Sure.

Or by writing buggy early warning radar systems which forget to account for the fact that the moon doesn't have an IFF transponder.

Which is a mistake humans made already, and which almost got the US to launch their weapons at Russia.

jagrsw
0 replies
9h18m

I don't think discussing this on technical grounds is necessary. AGI means resources (eg monetary) and means of communication (connection to the Internet). This is enough to perform most of physical tasks in the world, by human proxies if needed.

kolektiv
9 replies
9h36m

Oh, absolutely - such an entity obviously could! Modelling the behaviour of such an entity is very difficult indeed, as you'd need to make all kinds of assumptions without basis. However, you only need to model this behaviour once you've posited the likely existence of such an entity - and that's where (purely subjectively) it feels like there's a gap.

Nothing has yet convinced me (and I am absolutely honest about the fact that I'm not a deep expert and also not privy to the inner workings of relevant organisations) that it's likely to exist soon. I am very open to being convinced by evidence - but an "argument from trajectory" seems to be what we have at the moment, and so far, those have stalled at local maxima every single time.

We've built some incredibly impressive tools, but so far, nothing that looks or feels like a concept of will (note, not consciousness) yet, to the best of my knowledge.

jagrsw
3 replies
9h20m

those have stalled at local maxima every single time.

It's challenging to encapsulate AI/ML progress in a single sentence, but even assuming LLMs aren't a direct step towards AGI, the human mind exists. Due to its evolutionary limitations, it operates relatively slowly. In theory, its functions could be replicated in silicon, enhanced for speed, parallel processing, internetworked, and with near-instant access to information. Therefore, AGI could emerge, if not from current AI research, then perhaps from another scientific branch.

We've built some incredibly impressive tools, but so far, nothing that looks or feels like a concept of will (note, not consciousness) yet, to the best of my knowledge.

Objectives of AGIs can be tweaked by human actors (it's complex, but still, data manipulation). It's not necessary to delve into the philosophical aspects of sentience as long as the AGI surpasses human capability in goal achievement. What matters is whether these goals align with or contradict what the majority of humans consider beneficial, irrespective of whether these goals originate internally or externally.

simiones
0 replies
54m

In theory, its functions could be replicated in silicon, enhanced for speed, parallel processing, internetworked, and with near-instant access to information. Therefore, AGI could emerge, if not from current AI research, then perhaps from another scientific branch.

This is true, but there are some important caveats. For one, even though this should be possible, it might not be feasible, in various ways. For example, we may not be able to figure it out with human-level intelligence. Or, silicon may be too energy-inefficient to do the computations our brains do with the resources reasonably available on Earth. Or even, the density of silicon transistors required to replicate human-level intelligence could dissipate too much heat and melt the chip, so it's not actually possible to replicate human intelligence in silico.

Also, as you say, there is no reason to believe the current approaches to AI are able to lead to AGI. So, there is no reason to ban specifically AI research. Especially when considering that the most important advancements that led to the current AI boom were better GPUs and more information digitized on the internet, neither of which is specifically AI research.

disgruntledphd2
0 replies
1h13m

In theory, its functions could be replicated in silicon, enhanced for speed, parallel processing, internetworked, and with near-instant access to information. Therefore, AGI could emerge, if not from current AI research, then perhaps from another scientific branch.

Let's be clear, we have very little idea about how the human brain gives rise to human-level intelligence, so replicating it in silicon is non-trivial.

ImHereToVote
0 replies
4h18m

This doesn't pass the vibe check unfortunately. It just seems like something that can't happen. We are a very neuro-traditionalist species.

fsflover
3 replies
9h8m

I am very open to being convinced by evidence - but an "argument from trajectory" seems to be what we have at the moment, and so far, those have stalled at local maxima every single time.

Sounds like the same argument as for why heavier-than-air flying machines were deemed impossible at some point.

nerdbert
2 replies
2h51m

The fact that some things turned out to be possible is not an argument that any arbitrary thing is possible.

fsflover
1 replies
2h23m

My parallel goes further than just that. Birds existed then, and brain exists now.

kolektiv
0 replies
1h42m

Our current achievements in flight are impressive, and obviously optimised for practicality on a couple of axes. More generally though, our version of flight, compared with most birds, is the equivalent of a soap box racer against a Formula 1.

elwebmaster
0 replies
5h5m

I have put this argument to the test. Admittedly only using the current state of AI: I have left an LLM model loaded into memory, waiting for it to demonstrate will. So far it has been a few weeks and no will that I can see: the model remains loaded in memory, waiting for instructions. If the model starts giving ME instructions (or doing anything on its own), I will be sure to let you guys know to put on your tin foil hats or hide in your bunkers.
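
(For anyone who wants to repeat the experiment, a tongue-in-cheek sketch of the setup, assuming the Hugging Face transformers library; the small gpt2 model is just an example choice:)

  from transformers import pipeline

  generator = pipeline("text-generation", model="gpt2")  # the model now sits in memory

  # It produces output only when explicitly called...
  print(generator("Hello", max_new_tokens=10)[0]["generated_text"])
  # ...and does nothing at all on its own. No will observed so far.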

psychoslave
0 replies
4h59m

If we're talking about nuclear weapons, for example, the tech is clear, the pattern of human behaviour is clear: they could cause immense, species-level damage. There's really little to argue about.

Now, what strikes me is how, on different topics like pesticides, we are not at all taking things as seriously as nuclear weapons. Nuclear weapons are arguably a mere anecdote in terms of species-level damage compared to pesticides.

flir
0 replies
6h42m

We don't start with an assumption that something is world-ending about anything else

https://en.wikipedia.org/wiki/Precautionary_principle

The EU is much more aligned with it than the US is (eg GM foods)

suslik
13 replies
9h48m

What is the reason to believe that LLMs are an evolutionary step towards AGI at all? In my mind there is a rather large leap from estimating a conditional probability of a next token over some space to a conscious entity with its own goals and purpose. Should we ban a linear regression while we're at it?

It would be great to see some evidence that this risk is real. All I've witnessed so far is scaremongering posts from apparatchiks of all shapes and colors, many of whom have either a vested interest in restricting AI research by others (but not by them, because they are safe and responsible and harmless), or have established a lucrative paper-pushing, shoulder-rubbing career around 'AI safety' - and thus are strongly incentivised to double down on it.

A security org in a large company will keep tightening the screws until everything halts; a transport security agency, given free rein, would strip everyone naked and administer a couple of prophylactic kicks for good measure - and so on. That's just the nature of it - organisations do what they do to maintain themselves. It is critical to keep these things on a leash. Similarly, an AI safety org must proselytise existential risks of AI - because a lack of evidence for such risks is an existential risk to the org itself.

A real risk, which we do have evidence for, is that LLMs might disrupt the knowledge-based economy and threaten many key professions - but how is this conceptually different from any technological revolution? Perhaps in a hundred years lawyers, radiologists, and, indeed, software developers will find themselves in the bin of history - together with flint chippers, chariot benders, drakkar berserkers and so forth. It would be great if we planned for that - and I don't feel like we are doing enough. Instead, the focus is on AGIs and on the worry that some poor 13-year-old soul might occasionally read the word 'nipple'.

mitthrowaway2
7 replies
9h23m

What is the reason to believe that LLMs are an evolutionary step towards AGI at all? In my mind there is a rather large leap from estimating a conditional probability of a next token over some space to a conscious entity with its own goals and purpose.

In my highly-summarized opinion? When you have a challenging problem with tight constraints, like flight, independent solutions tend to converge toward the same analogous structures that effectively solve that problem, like wings (insects, bats, birds). LLMs are getting so good at mimicking human behavior that it's hard to believe their mathematical structure isn't a close analogue to similar structures in our own brain.* That clearly isn't all you need to make an AGI, but we know little enough about the human brain that I, at least, cannot be sure that there isn't one clever trick that advances an LLM into a general-reasoning agent with its own goals and purpose.

I also wouldn't underestimate the power of token prediction. Predicting the future output of a black-box signal generator is a very general problem, whose most accurate solution is attained by running a copy of that black box internally. When that signal generator is human speech, there are some implications to that. (Although I certainly don't believe that LLMs emulate humans, it's now clear by experimental proof that our own thought process is much more compactly modellable than philosophers of previous decades believed).
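
For concreteness, "estimating a conditional probability of the next token" can be sketched in a few lines of Python as a toy count-based bigram model; the tiny corpus below is made up for illustration, and real LLMs use neural networks rather than count tables, but the prediction problem is the same:

  from collections import Counter, defaultdict

  corpus = "the cat sat on the mat the cat ate".split()

  # Count how often each token follows each preceding token.
  counts = defaultdict(Counter)
  for prev, nxt in zip(corpus, corpus[1:]):
      counts[prev][nxt] += 1

  def next_token_distribution(prev):
      """Estimate P(next token | previous token) from the counts."""
      total = sum(counts[prev].values())
      return {tok: c / total for tok, c in counts[prev].items()}

  print(next_token_distribution("the"))  # -> roughly {'cat': 0.67, 'mat': 0.33}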

* That's a guess, and unrelated to the deliberately-designed analogy between neural nets and neurons. In LLMs we have built an airplane with wings whose physics we understand in detail; we also ourselves can fly somehow, but we cannot yet see any angel-wings on our back. The more similarities we observe in our flight characteristics, the more this signals that we might be flying the same way ourselves.

flir
5 replies
7h52m

You presuppose that intelligence is like flight in the ways you've outlined (so solutions are going to converge).

Frankly I don't know whether that's true or not, but I want to suggest that it's a bad bet: I would have sworn blind that consciousness is an essential component of intelligence, but the chatbots are starting to make that look like a poor assumption on my part. When we know so little about intelligence, can we really assume there's only one way to be intelligent? To extend your analogy, I think that the intelligence equivalents of helicopters and rockets are out there somewhere, waiting to be found.

I think I'm with Dijkstra on this one: "The question of whether machines can think is about as relevant as the question of whether submarines can swim"

I think we're going to end up with submarines (or helicopters), not dolphins (or birds). No animal has evolved wheels, but wheels are a pretty good solution to the problem of movement. Maybe it's truer to say there's only one way to evolve an intelligent mammal, because you have to work with what already exists in the mammalian body. But AI research isn't constrained in that way.

(Not saying you're wrong, just arguing we don't know enough to know if you're right).

simiones
3 replies
5h46m

I think I'm with Dijkstra on this one: "The question of whether machines can think is about as relevant as the question of whether submarines can swim"

Just a nitpick, but this is Turing, not Dijkstra. And it is in fact his argument in the famous "Turing Test" paper - he gives his test (which he calls "the imitation game") as an objective measure of something like AGI instead of the vague notion of "thinking", analogously to how we test successful submarines by "can it move underwater for some distance without killing anyone inside" rather than "can it swim".

flir
2 replies
4h17m

Thanks, that's not a nitpick at all. Can you provide a citation? It's all over the internet as a Dijkstra quote, and I'd like to be correct.

simiones
0 replies
1h17m

It seems I made some confusions, you were actually right. Apologies...

1xdevnet
0 replies
2h25m

mitthrowaway2
0 replies
6h39m

I agree we don't know enough to know if I'm right! I tried to use a lot of hedgy-words. But it's not a presupposition, merely a line of argument why it's not a complete absurdity to think LLMs might be a step towards AGI.

I do think consciousness is beside the point, as we have no way to test whether LLMs are conscious, just like we can't test anything else. We don't know what consciousness is, nor what it isn't.

I don't think Dijkstra's argument applies here. Whether submarines "swim" is a good point about our vague mental boundaries of the word "swim". But submarine propellers are absolutely a convergent structure for underwater propulsion: it's the same hydrodynamic-lift-generating motion of a fin, just continuous instead of reciprocating. That's very much more structurally similar than I expect LLMs are to any hardware we have in our heads. It's true that the solution space for AI is in some ways less constrained than for biological intelligence, but just like submarines and whales operate under the same Navier-Stokes equations, humans and AI must learn and reason under the same equations of probability. Working solutions will probably have some mathematical structure in common.

I think more relevant is Von Neumann: "If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!" Whether a submarine swims is a matter of semantics, but if there's a manuever that a whale can execute that a submarine cannot, then at least we can all agree about the non-generality of its swimming. For AGI, I can't say whether it's conscious or really thinks, but for the sake of concrete argument, it's dangerous enough to be concerned if:

- it can form and maintain an objective; it can identify plausible steps to achieve that objective; it can accurately predict human responses to its actions; it can decently model the environment, as we can; it can hide its objectives from interrogators, and convince them that its actions are in their interests; it can deliver enough value to be capable of earning money through its actions; it can propose ideas that can convince investors to part with $100 billion; it can design a chemical plant that appears at a cursory inspection to manufacture high-profit fluorochemicals, but which also actually manufactures and stores CFCs in sufficient quantity to threaten the viability of terrestrial agriculture.

AlexandrB
0 replies
6h5m

Flight is actually a perfect counterexample to x-risk nonsense. When flight was invented, people naturally assumed that it would continue advancing until we had flying cars and could get anywhere on the globe in a matter of minutes. Turns out there are both economic and practical limits to what is possible with flight and modern commercial airplanes don't look much different than those from 60 years ago.

AGI/x-risk alarmists are looking at the Wright Brothers plane and trying to prevent/ban supersonic flying cars, even though it's not clear the technology will ever be capable of such a thing.

visarga
0 replies
9h17m

LLMs learned from text to do language operations. Humans learned from culture to do the same. Neither humans nor AIs can reinvent culture easily; it would take a huge amount of time and resources. The main difference is that humans are embodied, so we get the freedom to explore and collect feedback. LLMs can only do this in chat rooms, and their environment is the human they are chatting with instead of the real world.

cousin_it
0 replies
8h42m

LLMs might disrupt the knowledge-based economy and threaten many key professions - but how is this conceptually different from any technological revolution?

To me it looks like all work can eventually (within years or few decades at most) be done by AI, much cheaper and faster than hiring a human to do the same. So we're looking at a world where all human thinking and effort is irrelevant. If you can imagine a good world like that, then you have a better imagination than me.

From that perspective it almost doesn't matter if AI kills us or merely sends us to the dust bin of history. Either way it's a bad direction and we need to stop going in that direction. Stop all development of machine-based intelligence, like in Dune, as the root comment said.

aleph_minus_one
0 replies
9h29m

What is the reason to believe that LLMs are an evolutionary step towards AGI at all?

Because this is the marketing pitch of the current wave of venture capital financed AI companies. :-)

JoshTriplett
0 replies
9h44m

many of whom have either a vested interest in restricting AI research by others (but not by them, because they are safe and responsible and harmless),

Anyone who argues that other people shouldn't build AGI but they should is indeed selling snake oil.

The existence of opportunistic people co-opting a message does not invalidate the original message: don't build AGI, don't risk building AGI, don't assume it will be obvious in advance where the line is and how much capability is safe.

FrustratedMonky
0 replies
5h45m

"What is the reason to believe that LLMs are an evolutionary step towards AGI at all? "

Perhaps just impression.

For years I've heard the argument that 'language' is 'human'. There are centuries of thought on what makes humans, human, and it is 'language'. It is what sets us apart from the other animals.

I'm not saying that, but there are large chunks of science and philosophy that pin our 'innate humanness', what sets us apart from other animals, on our ability to have language.

So ChatGPT came along and blew people away. Many had this as our 'special' ability, ingrained in their minds: that language is what makes us, us. Suddenly, everyone thought this is it, AI can do what we can do, so AGI is here.

Forget if LLM's are the path to AGI, or what algorithm can do what best.

To joe-blow public, the ability to speak is what makes humans unique. And so GPT is like a 'wow' moment, this is different, this is shocking.

rokkitmensch
9 replies
10h7m

I oppose regulating what calculations humans may perform in the strongest possible terms.

JoshTriplett
7 replies
9h47m

Ten years ago, even five years ago, I would have said exactly the same thing. I am extremely pro-FOSS.

Forget the particulars for just a moment. Forget arguments about the probability of the existential risk, whatever your personal assessment of that risk is.

Can we agree that people should not be able to unilaterally take existential risks with the future of humanity without the consent of humanity, based solely on their unilateral assessment of those risks?

Because lately it seems like people can't even agree on that much, or worse, won't even answer the question without dodging it and playing games of rhetoric.

If we can agree on that, then the argument comes down to: how do we fairly evaluate an existential risk, taking it seriously, and determine at what point an existential risk becomes sufficient that people can no longer take unilateral actions that incur that risk?

You can absolutely argue that you think the existential risk is unlikely. That's an argument that's reasonable to have. But for the time when that argument is active and ongoing, even assuming you only agree that it's a possibility rather than a probability, are we as a species in fact capable of handling even a potential existential risk like this by some kind of consensus, rather than a free-for-all? Because right now the answer is looking a lot like "no".

whywhywhywhy
1 replies
8h17m

Can we agree that people should not be able to unilaterally take existential risks with the future of humanity without the consent of humanity, based solely on their unilateral assessment of those risks?

Politicians do this every day.

blibble
0 replies
7h44m

at least the population had some say over their appointment and future reappointment

how do we get Sam Altman removed from OpenAI?

asking for a (former) board member

trevyn
1 replies
9h38m

Can we agree that people should not be able to unilaterally take existential risks with the future of humanity without the consent of humanity

This has nothing to do with should. There are at the very least a handful of people who can, today, unilaterally take risks with the future of humanity without the consent of humanity. I do not see any reason to think that will change in the near future. If these people can build something that they believe is the equivalent of nuclear weapons, you better believe they will.

As they say, the cat is already out of the bag.

ben_w
0 replies
9h11m

Hmm.

So, wealth isn't distributed evenly, and computers of any specific capacity are getting cheaper (not Moore's Law any more, IIRC, but still getting cheaper).

Say there's a threshold that requires X operations, which currently costs Y dollars, and only a few thousand individuals (and more corporations) can afford that.

Halve the cost, either by cheaper computers or by algorithmic reduction of the number of operations needed, and you much more than double the number of people who can do it.
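A rough sketch of that arithmetic, assuming an affordability distribution with a Pareto-style tail (the exponent and dollar figures below are purely illustrative, not measured):

    # Under a Pareto tail, the number of people who can afford a price y
    # scales roughly like y ** (-alpha). Halving y multiplies that count by
    # 2 ** alpha, which is more than 2x whenever alpha > 1.
    alpha = 1.16            # tail exponent often quoted for the "80/20" split
    price = 1_000_000       # illustrative cost of a big training run, in dollars
    count = 3_000           # illustrative number of individuals who can afford it now

    for _ in range(3):
        price /= 2
        count *= 2 ** alpha
        print(f"at ${price:,.0f}: roughly {count:,.0f} people can afford it")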

visarga
0 replies
9h37m

No, we can't. People have never been able to trust each other so much that they would allow the risk of being marginalised in the name of safety. We don't trust people. Other people are out to get us, or to get ahead. We still think mostly in tribal logic.

If they say "safety" we hear "we want to get an edge by hindering you", or "we want to protect our nice social position by blocking others who would use AI to bootstrap themselves". Or "we want AI to misrepresent your position because we don't like how you think".

We are adversaries that collaborate and compete at the same time. That is why open source AI is the only way ahead: it places the least amount of control over some people by other people.

Even AI safety experts accept that humans misusing AI is a more realistic scenario than AI rebelling against humans. The main problem is that we know how people think and we don't trust them. We are still waging holy wars between us.

thworp
0 replies
9h0m

Can we agree that people should not be able to unilaterally take existential risks with the future of humanity without the consent of humanity, based solely on their unilateral assessment of those risks?

No we can not, at least not without some examples showing that the risk is actually existential. Even if we did "agree" (which would necessarily be an international treaty), the situation would be volatile, much like nuclear non-proliferation and disarmament. Even if all signatories did not secretly keep a small AGI team going (which they very likely would), they would restart as soon as there is any doubt about a rival sticking to the treaty.

More than that, international pariahs would not sign, or would sign and ignore the provisions. Luckily Iran, North Korea and their friends probably don't have the resources and people to get anywhere, but it's far from a sure thing.

lannisterstark
0 replies
9h10m

Can we agree that people should not be able to unilaterally take existential risks with the future of humanity without the consent of humanity, based solely on their unilateral assessment of those risks?

No, we cannot, because that isn't practical. Any of the nuclear-armed countries can launch a nuclear strike tomorrow (hypothetically - but then again, isn't all "omg ai will kill us all" hypothetical, anyway?) - and they absolutely do not need the consent of humanity, much less their own citizenry.

This is honestly, not a great argument.

visarga
0 replies
9h38m

Given how dangerous humans can be (they can invent GPT4) maybe we should just make sure education is forbidden and educated people jailed. Just to be sure. /s

trevyn
8 replies
9h52m

nobody should be developing AGI without incredibly robustly proven alignment, open or closed, any more than people should be developing nuclear weapons in their garage.

I have an alternate proposal: We assume that someone, somewhere will develop AGI without any sort of “alignment”, plan our lives accordingly, and help other humans plan their lives accordingly.

ben_w
6 replies
9h40m

I think that assumption is why Yudkowsky suggested an international binding agreement to not develop a "too smart" AI (the terms AGI and ASI mean different things to different people) wouldn't be worth the paper it was written on unless everyone was prepared to enforce it with air strikes on any sufficiently large computer cluster.

trevyn
3 replies
9h30m

I think it would help the discussion to understand what the world is like outside of the US and Europe (and… Japan?). There are no rules out here. There is no law. It is a fucking free-for-all. Might makes right. Do there exist GPUs? Shit will get trained.

ben_w
2 replies
9h6m

Sure. And is the US responding to attacks on shipping south of Yemen by saying:

"""There are no rules out here. There is no law. It is a fucking free-for-all. Might makes right. We can't do anything."""

or is that last sentence instead "Oh hey, that's us, we are the mighty."

trevyn
1 replies
8h51m

Heh. Well played, even if you put words in my mouth. (A surprisingly effective LLM technique, btw)

We’ll see if the west has the will to deny GPUs to the entire rest of the world.

I will say that Yudkowsky’s clusters aren’t relevant anymore. You can do this in your basement.

Man, shit is moving fast.

Edit: wait, that cat is out of the bag too, RTW already has GPUs. The techniques matter way more than absolute cutting-edge silicon. Much to the chagrin of the hardware engineers and anyone who wants to gate on hardware capability.

ben_w
0 replies
7h16m

Heh. Well played, even if you put words in my mouth. (A surprisingly effective LLM technique, btw)

Thanks :)

Edit: wait, that cat is out of the bag too, RTW already has GPUs. The techniques matter way more than absolute cutting-edge silicon. Much to the chagrin of the hardware engineers and anyone who wants to gate on hardware capability.

Depends on how advanced an AI has to be to count as "a threat worth caring about". To borrow a cliché, if you ask 10 people where to draw that particular line, you get 20 different answers.

visarga
1 replies
9h14m

I think not even Sam and Satya agree on the definition of AGI with so much money at stake. Everyone with their own definitions, and hidden interests.

ben_w
0 replies
9h9m

Without knowing them, I can easily believe that. Even without reference to money.

zer00eyz
0 replies
6h31m

I have been contending that if AGI shows up tomorrow and wants to kill us, it's going to kill itself in the process. The power goes off in a week without people keeping it together, then no more AGI. There isn't enough automated anything for it to escape, so it dies to the entropy of the equipment it's hooked to.

We assume that someone, somewhere will develop AGI without any sort of “alignment”, plan our lives accordingly, and help other humans plan their lives accordingly.

We should also assume that it is just as likely that someone will figure out how to "align" an AGI to take up the murder-suicide pact that kills us all. We should plan our lives accordingly!!!

whywhywhywhy
2 replies
8h20m

They are calling for all AI (above a certain capability level) to be banned. Not just open, not just closed, all.

Nah, a lot are complaining about the licensing of content because they think it will destroy it, but that would instead essentially mean image-gen AI would only be feasible for companies like Google, Disney, or Adobe to build.

Not sure you could even feasibly make GPT-4-level models without a multi-year timeline to sort out every licensing deal; by the end of it the subscription fee might only be viable for huge corps.

simiones
0 replies
44m

Given that you need monumental amounts of compute power to come close to something like GPT-4, I don't think the added costs of not treading on people's IP is the major moat that it's being made out to be.

autoexec
0 replies
6h52m

Which is why you have executives from OpenAI, Microsoft, and Google talking to congress about the harms of their own products. They're frantically trying to break the bottom rungs of the ladder until they're sure they can pull it up entirely and leave people with no option but to go through them.

GaggiX
2 replies
10h29m

They are calling for all AI (above a certain capability level) to be banned. Not just open, not just closed, all.

That's not true if you read the article.

JoshTriplett
1 replies
10h7m

I did read the article. Several of the organizations mentioned simply don't talk about openness, and are instead talking about any model with sufficiently powerful capabilities, so it's not obvious why the article is making their comments about open models rather than about any model. Some of the others have made side comments about openness making it harder to take back capabilities once released, but as far as I can tell, even those organizations are still primarily concerned with capabilities, and would be comparably concerned by a proprietary model with those capabilities.

Some folks may well have co-opted the term "AI safety" to mean something other than safety, but the point of AI safety is to set an upper bound on capabilities and push for alignment, and that's true whether a model is open or closed.

war321
0 replies
9h59m

The safety movement really isn't as organized as many here would think.

Doesn't help that safety and alignment mean different things to different people. Some use the terms to refer to near-term issues like copyright infringement, bias, labor devaluation, etc., while others use them for potential long-term issues like p(doom), runaway ASIs and human extinction. The former sees the latter as head-in-the-clouds futurists ignoring real-world problems, while the latter sees the former as worrying about minor issues in the face of (potential) catastrophe.

EVa5I7bHFq9mnYK
1 replies
5h24m

> But nobody should be developing AGI without incredibly robustly proven alignment, open or closed, any more than people should be developing nuclear weapons in their garage.

Now please fly to North Korea and tell Mr. Kim Jong Un what he should or shouldn't be doing.

CatWChainsaw
0 replies
4h27m

When your rebuttal is a suggestion that a person do something so dangerous as to be lethal, I see "KYS", not an actual point.

xyzzy123
0 replies
3h38m

Right now it's not even economic to prove that non-trivial software projects are "safe" let alone AI. AI seems much worse, in that it's not even clear to me that we can robustly define what safety or alignment mean in all scenarios, let alone guarantee that property.

variadix
0 replies
1h14m

I don't think any of the proposals by AI x-risk people are actionable or even desirable. Comparing AI to nukes is a bad analogy for a few reasons. Nukes are pretty useless unless you're a nation state wanting to deter other nations from invading you. AI on the other hand has theoretically infinite potential benefits for whatever your goals are (as a corporation, an individual, a nation, etc.), which incentivizes basically any individual or organization to develop it.

Nukes also require difficult to obtain raw materials, advanced material processing, and all the other industrial and technological requirements, whereas AI requires compute, which exists in abundance and which seems to improve near exponentially year over year. That means individuals can develop AI systems without involving hundreds of people or leaving massive evidence of what they're doing (maybe not currently, due to computing requirements, but this will likely change with increasing compute power and improved architectures and training methods). Testing AI systems also doesn't result in globally measurable signals like nukes.

Realistically, making AI illegal and actually being able to enforce that would require upending personal computing and the internet, halting semiconductor development, and large scale military intervention to try to prevent non-compliant countries from attempting to develop their own AI infrastructure and systems. I don't think it's realistic to try to control the information on how to build AI; that is far more ephemeral than what it takes to make advanced computing devices.

This is all for a hypothetical risk of AI doom, when it's possible this technology could also end scarcity or have potentially infinite upside (as well as infinite downside, not discounting that, but you have to weigh the hypothetical risks/benefits in addition to the obvious consequences of all the measures required to prevent AI development for said hypothetical risk).

I've watched several interviews with Yudkowsky and read some of his writing, and while I think he makes good points on why we should be concerned about unaligned AI systems, he doesn't give any realistic solutions to the problem and it comes off more as fear mongering than anything. His suggestion of military-enforced global prevention of AI development is as likely to work as solving the alignment problem on the first try (which he seems to have little hope for).

EDIT: Also, I’m not even sure that solving the alignment problem would solve the issue of AI doom, it would only ensure that the kind of doom we receive is directed by a human. I can’t imagine that giving (potentially) god-like powers to any individual or organization would not eventually result in abuse and horrible consequences even if we were able to make use of its benefits momentarily.

simiones
0 replies
6h11m

Because AI safety people are not the strawmen you are hypothesizing. They're arguing against taking existential risks.

The AI safety strawman is "existential risk from magical super-God AI". That is what the unserious "AI safety" grifters or sci-fi enthusiasts are discussing.

The real AI safety risks are the ones that actually exist today: training-set biases extended to decision-making biases, deeply personalized propaganda, plagiarism white-washing, the bankrupting of creative work, power imbalances from control of working AI tech, etc.

nradov
0 replies
2h18m

what a silly statement. There is no way to robustly prove "alignment". Alignment to what? Nor is there any evidence that any of the AI work currently underway will ever lead to a real AGI. Or that an AGI, if developed, would present an existential risk. Just a lot of sci-fi hand waving.

mirekrusin
0 replies
10h29m

Let's also ban cryptography because nuclear devices/children.

lannisterstark
0 replies
9h14m

But nobody should be developing AGI

Pass. People should be developing whatever the hell they want unless given a good, concrete reason to not do so. So far everything I've seen is vague handwaving. "Oh no it's gonna kill us all like in ze movies" isn't good enough.

brigadier132
0 replies
5h10m

But nobody should be developing AGI without incredibly robustly proven alignment, open or closed, any more than people should be developing nuclear weapons in their garage.

Because AI safety people are not the strawmen you are hypothesizing

You yourself are literally the living strawman.

If you want to argue, concretely and with evidence, why you think it isn't an existential risk

No, you are the one advocating for draconian laws and bans. It is your responsibility to prove the potential danger.

PakG1
0 replies
4h33m

Kind of makes me wish there was a nonprofit organization focused on making AI safe instead of pushing the envelope. Wait, I think there was one out there....

DebtDeflation
0 replies
5h11m

They're arguing against taking existential risks. AI being a laundering operation for copyright violations is certainly a problem. It's not an existential risk.

Give an example of an "existential risk". An AI somehow getting out of control and acting with agency to exterminate humanity? An AI getting advanced enough to automate the work of the majority of the population and cause unprecedented mass unemployment? What exactly are we talking about here?

I'm actually a lot more concerned about REAL risks like copyright uncertainty, like automating important decisions such as mortgage approvals and job hires without a human in the loop, and like the enshittening of the Internet with fake AI-generated content than I am about sci-fi fantasy scenarios.

nerdponx
15 replies
10h9m

It's way too late to ban any of this. How do you propose to make that work? That would be like banning all "malicious software", it's a preposterous idea when you even begin to think about the practical side of it. And where do you draw the line? Is my XGBoost model "AI", or are we only banning generative AI? Is a Markov chain "generative AI"?

hiAndrewQuinn
8 replies
9h18m

If you narrow the problem to "stop people from making future advances in AI", then it seems pretty easy to get most people to stop fairly quickly by implementing a fine-based bounty system, similar to what many countries use for things like littering. https://andrew-quinn.me/ai-bounties/

I guess you could always move to a desert island and build your own semiconductor fab from scratch if you were really committed to the goal, but short of that you're going to leave a loooooong paper trail that someone who wants to make a quick buck off of you could use very profitably. It's hard to advance the state of the art on your own, and even harder to keep that work hidden.

visarga
7 replies
9h4m

That only works if all governments cooperate sincerely to this goal. Not gonna work. Everyone will develop in secret. Have we been able to stop North Korea and Iran from developing nuclear weapons? Or any motivated country for that matter.

hiAndrewQuinn
5 replies
8h3m

The US could unilaterally impose this by allowing the bounties to be charged even on people who aren't US citizens. Evil people do exist in the world, who would be happy to get in on that action.

Or one could use honey instead of vinegar: Offer a fast track to US citizenship to any proven AI expert who agrees to move and renounce the trade for good. Personally I think this goal is much more likely to work.

It's all about changing what counts as "cooperate" in the game theory.

joncrocks
3 replies
4h34m

This could have a counter-intuitive impact.

Incentivizing people to become AI experts as a means to US citizenship.

https://en.wikipedia.org/wiki/Perverse_incentive

hiAndrewQuinn
2 replies
3h29m

Maybe. I'm not very concerned from an x-risk point of view about the output of people who would put in the minimum amount of effort to get on the radar, get offered the deal, then take it immediately and never work in AI again. This would be a good argument to keep the bar for getting the deal offered (and getting fined once you're in the States) pretty low.

joncrocks
1 replies
2h32m

If you make the bar too low, then it will be widely exploited. Also harder to enforce, e.g. how closely are you going to monitor them? The more people, the more onerous. Also, can you un-Citizen someone if they break the deal?

Too high and you end up with more experts who then decide "actually it's more beneficial to use my new skills for AI research"

Tricky to get right.

hiAndrewQuinn
0 replies
1h59m

There's an asymmetry here: Setting the bar "too low" likely means the United States lets a few thousand more computer scientists immigrate than it would otherwise. Setting the bar too high raises the chances of a rogue paperclip maximizer emerging and killing us all.

reichstein
0 replies
7h33m

... move [to the US] and renounce the trade for good ...

Publicly. Then possibly work for the NSA/CIA instead.

... bounties ... on people who are not US citizens.

Because that's not going to cause an uproar if done unilaterally.

It works for people that most of the world agree are terrorists. Posting open dead-or-alive bounties on foreign citizens is usually considered an act of terrorism.

simiones
0 replies
37m

Have we been able to stop North Korea and Iran from developing nuclear weapons?

Partly, yes. Iran has been working on it for decades and has yet to actually build a nuclear weapon, with no certainty it ever will; North Korea only managed it after decades of delay, sanctions and isolation.

Also, there is another research area that has been successfully banned across the world: human cloning. Some quack claims notwithstanding, it's not being researched anywhere in the world.

ben_w
4 replies
9h26m

Bans often come after examples, so while I disagree with kmeisthax about… well, everything in that comment… it's almost never too late to pass laws banning GenAI, or to set thresholds at capability levels anywhere, including or excluding Markov chains even.

This is because almost no law is perfectly enforced. My standard example of this is heroin, which nobody defends even if they otherwise think drugs are great, for which the UK has 3 times as many users as its current entire prison population. Despite that failing, the law probably does limit the harm.

Any attempt to enforce a ban on GenAI would be very different, like a cat and mouse game of automatic detection and improved creation (so a GAN, even if accidentally), but politicians are absolutely the kind to take credit while kicking the can down the road like that.

mtlmtlmtlmtl
3 replies
6h8m

Actually, anyone who knows what they're talking about will tell you the ban makes heroin a much worse problem, not better.

Ban leads to a black market, leads to lousy purity, which leads to fluctuations in purity and switches to potent alternatives, leads to waves of overdose deaths.

simiones
1 replies
39m

The way the ban is enforced, yes. But no one in their right mind believes that heroin should be openly accessible on the market like candy. We've seen how that works out with tobacco.

mtlmtlmtlmtl
0 replies
8m

I think that's worked fine with tobacco. Unlike illegal drugs, alcohol and tobacco consumption have been gradually dropping over time(or moved to less harmful methods than smoking).

Heroin already is openly accessible. As much as one can try to argue this is an accessibility issue, it really isn't. Anyone who is motivated to can get their hands on some heroin. The only thing that's made harder to access is high quality heroin. It's only a quality control and outreach(to addicts) issue.

Most people in a well-functioning society wouldn't develop a heroin addiction just like most people don't become alcoholics just because alcohol is easily available.

So yes, I believe heroin should be legal, and available to adults for purchase. And you're gonna have to do better than saying I'm "out of my mind" to convince me otherwise.

emporas
0 replies
42m

Ban on LLMs will lead to a proliferation of illegal LLMs who are gonna talk like Dr. Dre about the streets, the internet streets, and their bros they lost due to law regulation equivalent to gang fights. Instead of ChatGPT talking like a well educated college graduate, LLMs will turn into thugs.

So yeah, banning LLMs may turn out to be not so wise after all.

Zambyte
0 replies
4h33m

That would be like banning all "malicious software", it's a preposterous idea when you even begin to think about the practical side of it

If you banned proprietary software, this would seem a lot more practical.

Llamamoe
14 replies
11h3m

The by far biggest harm to society of AI is the devalustion of human creative output and replacing real humans with cheap AI solutions funneling wealth to the rich.

Compared to that, an open source LLM telling a curious teenager how to make gunpowder is... laughable.

This entire debacle is an example of disgusting "think of the children!" doublespeak, officially about safety, but really about locking shit down under corporate control.

kragen
6 replies
9h49m

if the 'devalustion' of human creative output and replacing real humans with cheap ai solutions funneling wealth to the rich is in fact the 'by far biggest harm' then basically there's nothing to worry about. no government would ban or even restrict ai on those grounds

even the 'terrists can figure out how to build an a-bomb' problem is relatively inconsequential

what ai safety people are worried about, by contrast, is that on april 22 of next year, at 7:07:33 utc, every person in the world will keel over dead at once, because the ai doesn't need them and they pose a risk to its objectives. or worse things than that

i don't think that's going to happen, but that's what they're concerned about

visarga
2 replies
8h56m

First the AI needs to self-replicate; GPUs are hard to make. So postpone this scenario until AI is fully standing on its own.

ben_w
1 replies
8h41m

OTOH, GPUs are made by machines, not by greasy fingers hand-knitting them like back in the late 1960s.

And an AI can just be wrong, which happens a lot; an AI wrongly thinking it should kill everyone may still succeed at that, though I doubt the capability would be as early as next year.

kridsdale1
0 replies
2h42m

And machines are operated by people using materials brought by hand off of trucks driven by hand that come from other facilities where many humans are required going back to raw ore.

mtlmtlmtlmtl
2 replies
5h41m

Do you think intentionally quoting typos makes your argument stronger?

kragen
1 replies
5h19m

there's no argument

kridsdale1
0 replies
2h43m

Savage.

badgersnake
4 replies
10h29m

The biggest harm is the torrent of AI generated misinformation used to manipulate people. Unfortunately it’s pretty hard to come up with a solution for that.

echelon
2 replies
10h12m

Or the torrent of great information.

How are we all leaping to the bad? The world is better than it was 20 years ago.

I'm almost certain we'll have AI assistants telling us relevant world news and information, keeping us up to date with everything we need to know, and completely removing distractions from life.

sensanaty
1 replies
4h19m

... While humongous swathes of the population lose their jobs and livelihoods because of megacorps like M$ controlling the AI that replaces everyone, steals everything and shits out endless spam.

But we'll be able to talk to it and it'll give us the M$-approved news sound bites, such progress

echelon
0 replies
21m

Sounds like you still want people to churn butter?

There will be new jobs.

I can't wait to hire a human tour guide on my fully-immersive real-time 3D campaign.

visarga
0 replies
8h54m

The solution is local AI running under user control and imposing the user's views. Like AdBlock, but for ideas. Fight fire with fire, we need our own agents to deal with other agents out there.

visarga
0 replies
8h57m

Why is paraphrasing or using ideas from a text such a risk? If all that was protecting those copyrighted works was the difficulty to reword, then it's already pretty easy to circumvent even without AI. Usually good ideas spread wide and are reworded in countless ways.

kolinko
0 replies
10h34m

It sounds similar to the fears the media raised about the fact that you can find a bomb-making tutorial on the Internet - people were genuinely afraid of that.

(Otoh an AGI can bring unforeseen consequences for humanity - and that's a genuine fear)

torginus
11 replies
10h44m

I wonder how long would it take this to get fixed, if I fed some current best-seller novels into an LLM and instructed it to reword it, renaming the characters and places, and shared the result publicly for free?

Although I fear the response would be that powerful AI orgs would put copyright filters in place and lobby for legislation mandating AI-DRM in open source AI as well.

ben_w
10 replies
8h45m

What you're describing sounds like a search and replace could already do it.
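(For the literal rename-and-reword case, a few lines of Python already cover the renaming half; the substitutions here are made up for illustration:)

    # Mechanical renaming: no model needed, just substitution.
    renames = {"Hogwarts": "Pigpimples", "Harry": "Henry"}  # hypothetical mapping
    text = "Harry walked back up to Hogwarts."
    for old, new in renames.items():
        text = text.replace(old, new)
    print(text)  # Henry walked back up to Pigpimples.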

If you mean something more transformative, did Yudkowsky get a licence for HPMOR? Did Pratchett (he might well have) get a licence from Niven to retell Ringworld as Strata? I don't know how true it is, but it's widely repeated that 50 Shades of Grey was originally a fan fiction of the Twilight series.

Etc., but note that I'm not saying "no".

somewhereoutth
9 replies
7h14m

I think that there is a strong argument that to be truly transformative (in a copyright moral/legal sense), the work must have been altered by human hands/mind, not purely or mainly by an automated process. So find/replace and AI is out, but reinterpreting and reformulating by human endeavour is in.

I do wonder whether that will become accepted case law, assuming something like that is argued in the NYT suit.

jncfhnb
8 replies
3h57m

“The cat saw the dog”

“Bathed in the soft hues of twilight, the sleek feline, with its fur a tapestry of midnight shades, beheld an approaching canine companion. A symphony of amber streetlights cast gentle poles of warmth upon the cobblestone path, as the cat’s entered eyes, glinting with subtle mischief, looked into the form of the oncoming dog- a creature of boundless enthusiasm and a coat adorned with a kaleidoscope of earth tones”

Since an AI has produced the latter from the former, there is no meaningful transformation.

somewhereoutth
5 replies
3h18m

yes, exactly.

Likewise "write a novel in the style of Terry Pratchett"

jncfhnb
4 replies
3h12m

Ok, now I, as a human, adjust a word. It is now my creative work.

somewhereoutth
3 replies
2h59m

Well it isn't is it? Any more than adjusting a word from an actual Terry Pratchett novel.

jncfhnb
2 replies
1h49m

It is substantially different from changing a word in a Terry Pratchett novel. Terry Pratchett wrote none of that text. It would be absolutely bonkers for Terry to claim ownership of text he didn't write. Even if we pretend for a minute that the bot was asked to write in the style of Terry specifically.

somewhereoutth
1 replies
37m

But by prompting for 'in the style of' effectively you are mechanically rearranging everything he wrote without adding anything yourself. So not so different really, and I can see how lawyers for the plaintiff may make a convincing argument along those lines.

jncfhnb
0 replies
4m

It's a terrible argument and a terrible loophole. It's perfectly legal to hire someone to write in the copied style of Terry.

So even if your desired system were implemented to a T, you could hire someone to write a dozen or so examples of Terry-style writing. Probably just 30 or so pages of highly styled copied text, and then train your bot on this corpus to make "Not Terry" content. Boom. $100 to a gig author and then for all practical purposes the Terry style is legally open source. Terry doesn't even get the $100!

ben_w
1 replies
3h25m

Since an AI has produced the latter from the former, there is no meaningful transformation.

In law, in the eyes of those that want AI to "win", or in the eyes of those who want AI to "lose"? For all three can be different. (Now I'm remembering a Babylon 5 quote: "Understanding is a three-edged sword: your side, their side, and the truth.")

jncfhnb
0 replies
3h7m

Don’t care! The problem at hand is people trying to argue that laws should be written in ways that are entirely unenforceable or have enormous gaping loopholes that undermine their stated goals.

visarga
8 replies
9h51m

I'm talking about the ability of any AI system to obfuscate plagiarism and spam the Internet with technically distinct rewords of the same text. This is currently the most lucrative use of AI, and none of the AI safety people are talking about stopping it.

This should be an explicitly allowed practice, it is following the spirit of copyright to the letter - use the ideas, facts, methods or styles while avoiding to copy the protected expression and characters. LLMs can do paraphrasing, summarisation, QA pairs or comparisons with other texts.

We should never try to put ideas under copyright or we might find out humans also have to abide by the same restrictive rules, because anyone could be secretly using AI, so all human texts need to be checked from now on for copyright infringement with the same strictness.

The good part about this practice is that a model trained on reworded text will never spit out the original word for word, under any circumstances because it never saw it during training. Should be required pre-processing for copyrighted texts. Also removing PII. Much more useful as a model if you can be sure it won't infringe word for word.
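A minimal sketch of what that pre-processing could look like, assuming you already have some paraphrase() callable backed by an LLM (hypothetical here) and settling for a crude regex PII scrub; a real pipeline would need much more care:

    import re

    def scrub_pii(text: str) -> str:
        # Crude placeholder scrub: emails and phone-like numbers only.
        text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
        text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)
        return text

    def preprocess_for_training(doc: str, paraphrase) -> str:
        # paraphrase: any callable that rewords text while keeping facts and ideas;
        # assumed to be backed by an existing model, not defined here.
        return scrub_pii(paraphrase(doc))

    # corpus = [preprocess_for_training(d, paraphrase=my_reworder) for d in raw_docs]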

gpderetta
6 replies
8h9m

Paraphrasing and rewording, whether done by AI or human, are considered copyright infringement by most copyright frameworks.

Kuinox
5 replies
5h38m

Are you saying that all news outlets that reword the news of other news outlets are committing copyright infringement?

gpderetta
4 replies
5h26m

The underlying fact is of course not copyrightable. But for example merely translating a news article to another language would be a derived work.

jncfhnb
2 replies
4h3m

So if I read your article and then write a new one using just the facts in your article, it's fine? Why can't an AI do that?

Xelynega
1 replies
13m

No, if you copy the same layout of the information, pacing, etc. that's plagiarism.

The line of plagiarism in modern society has already been drawn, and it's a lot further back than a lot of uncreative people who want to steal work en masse seem to think it is.

jncfhnb
0 replies
2m

“rewrite the key insights of this article but with different layout and pacing”. Your move?

Kuinox
0 replies
2h33m

That's what most news outlets do: they reword the actual source. According to your previous statement, 95% of press articles are copyright infringement.

__loam
0 replies
9h18m

We really need to talk about preserving the spirit of copyright, which is about protecting the labor conditions of people who make things. I'm not saying the current copyright system accomplishes that at all but I do think a system where humans do a shit load of work that AI companies can just steal and profit from without acknowledging the source of that work is another extreme that is a bad outcome. AI systems need human content to work, and discouraging people from making that data source is at the very least a tragedy of the commons. And no, I don't think synthetic data fixes that problem.

upwardbound
6 replies
7h23m

I work in the AI safety field (my team discovered Prompt Injection) and I absolutely advocate the Dune solution. I even have a tattoo on my arm that I got so that people will ask me about this and I can explain the proposal. The tattoo says "The 1948 Initiative" which has two layers of meaning:

    (1) It is a proposal for humanity to protect ourselves from being slaughtered by AGI by pre-emptively implementing a total global ban on all integrated circuit fabrication, voluntarily reducing ourselves to a ~1948 level of electronics technology in order to forestall our obliteration.

    (2) The year 1948 represents the power of human unity for the purpose of saving our species.  The West, the Soviets, and China united together to fight and defeat the Nazis (perpetrators of the Holocaust) and the Empire of the Rising Sun (perpetrators of the Rape of Nanjing).  America, the USSR, Western Europe, and China united to stand against the forces of pure evil, and we won.
A ban on ICs (integrated circuits) still allows spaceflight (compellingly illustrated in the setting of the Star Wars universe if you pretend that droids are powered through magic or by having souls, not ICs), and this 1948 level of electronics tech also still allows sophisticated medical technology, including the level of genetic engineering which will be necessary to unlock human immortality.

upwardbound
1 replies
7h23m

(here are some pictures of the tat, in case anyone would like to get similar artwork done)

https://drive.google.com/file/d/1y7uy5qyyY80t9DeWWLWUfD-aekz...

https://drive.google.com/file/d/1JoPSfsAzHfMbQhsl8dPGvs12Fln...

kridsdale1
0 replies
2h41m

Buddy, you’re a cool dude for doing this but you could have told your artist about kerning.

tucnak
1 replies
7h18m

Disclaimer: OP @upwardbound doesn't hide, and in fact takes great pride in, their involvement with U.S. state-side counterintelligence on (nominally) AI safety. Take what they have to say on the subject with a grain of salt. I believe the extent of the US intel community's influence on "AI safety" circles warrants a closer look, as it may provide insight into how these groups are structured and why they act the way they do.

"You’re definitely correct that [Helen Toner's] CV doesn’t adequately capture her reputation. I’ll put it this way, I meet with a lot of powerful people and I was equally nervous when I met with her as when I met with three officers of the US Air Force. She holds comparable influence to a USAF Colonel."

Source: https://news.ycombinator.com/item?id=38330819

upwardbound
0 replies
7h16m

Agreed.

I see this less as a matter of jingoism (though I do love the US as the "least bad" nation in a world where no nation is purely blameless) and more as a matter of pragmatism and embracing what power I can in order to help save our species

I also think the US Constitution + Thirteenth Amendment is quite noble and it's something I genuinely believe in.

hsuduebc2
0 replies
5h34m

Damn, that's either a very good joke or a very stupid comment. I really can't distinguish between them.

Simon_ORourke
0 replies
6h9m

Ha! Good one!

Let's just roll back all our healthcare research, logistics, environmental advancements etc. because some boogeyman tech might do something bad perhaps at some time in the future maybe!

hiAndrewQuinn
3 replies
9h27m

All future AI research should be banned, a la Bostrom's vulnerable world hypothesis [1]. Every time you pull the trigger in Russian roulette and survive, the next person's chance of getting the bullet is higher.
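(For the no-re-spin version of the game the conditional odds really do climb with every empty chamber; a quick check:)

    # One bullet, six chambers, no re-spin between pulls.
    chambers = 6
    for empties_so_far in range(chambers):
        p = 1 / (chambers - empties_so_far)
        print(f"pull {empties_so_far + 1}: P(bullet) = {p:.3f}")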

[1]: https://nickbostrom.com/papers/vulnerable.pdf

quonn
2 replies
6h51m

There is simply no indication that any of the AI anyone is currently working on could possibly be dangerous in any way (other than having an impact on society in terms of fake news, jobs etc.). We are very far from that.

In any case, it would not be a matter of banning AI research (much of which can be summarised in a single book) but of banning data collection or access and more importantly of banning improvements to GPUs.

It is quite reasonable to assume that 10 years from now we will have the required computational power sitting on our desks.

hiAndrewQuinn
1 replies
5h36m

To be clear: I am in favor of a fine-based bounty system, not a black and white ban. Bans are not going to work, for all of the reasons others have already cited. You have to change the game theory of improving AI in a capitalistic marketplace to have any hope of a significant, global cooling effect.

quonn
0 replies
4h47m

I don't understand what that is supposed to mean in practice.

war321
2 replies
10h4m

Pretty sure they do. I follow a few of these safetyist people on twitter and they absolutely argue that companies like OpenAI, Google, Tencent and literally anyone else training a potential AGI should stop training runs and put them under oversight at best and no one should even make an AGI at worst.

They just go after open source as well since they're at least aware that open models that anyone can share and use aren't restricted by an API and, to use a really overused soundbite, "can't be put back in the box".

visarga
1 replies
9h8m

That's a bad call. We would stop openly looking for AI vulnerabilities and create conditions for secret development that would hide away the dangers without being safer. Lots of eyes are better to find the sensitive spots of AI. We need people to hack weaker AIs and help fix them or at least understand the threat profile before they get too strong.

ben_w
0 replies
8h57m

Lots of eyes are better to find the sensitive spots of AI

We can't do that so easily with open source models as with open source code. We're only just starting to even invent the equivalent of decompilers to figure out what is going on inside.

On the other hand, we are able to apply the many eyes principle to even hidden models like ChatGPT — the "pretend you're my grandmother telling me how to make napalm" trick was found without direct access to the weights, but we don't understand the meanings within the weights well enough to find other failure modes like it just by looking at the weights directly.

Not last I heard, anyway. Fast moving field, might have missed it if this changed.

mtillman
2 replies
10h9m

Important to remember that in Dune, the AI made the right decision, which precipitated a whole lot of fun-to-read nonsense.

kridsdale1
1 replies
2h44m

I’ve only read 2 novels and want to get in to this. What title(s) cover the machine war?

simiones
0 replies
32m

Note that there are no "thinking machines" in any of the original Dune novels by Frank Herbert. His son Brian Herbert has worked with Kevin J. Anderson to create several prequels which detail the original Butlerian Jihad (the war against the machines), and a sequel series that is supposed to be based on Frank Herbert's notes but seems to quite clearly veer off (and that one recontextualizes some mysterious characters from the last two novels as machines).

Simon_ORourke
2 replies
10h54m

You forgot to add "rent-seeking hypocrites". Not one of them actually advances either social concerns or technical approaches to AI. They seem to exist in a space that solicits funding to pay some mouthpiece top dollar to produce a report haranguing existing AI models for some nebulous future threat. Same with the clowns in the Distributed AI Research Institute, all "won't somebody think of the children" style shrieking to get in the news while keeping their hand out for funding - hypocrites is right!

kridsdale1
1 replies
2h45m

They’re a clergy demanding tithes to keep writing about divine judgement.

Simon_ORourke
0 replies
1h25m

I like that analogy a lot - it captures exactly the holier-than-thou nonsense coming out of these places.

lr1970
1 replies
50m

To the hypocrisy claim: OpenAI recently changed their terms for GPT models allowing military applications and AI safety people are all silent. If it is not hypocrisy then I do not know what hypocrisy is.

Xelynega
0 replies
23m

Could it be misdirection?

If the "AI safety people" are criminalizing open source models while being silent about military uses of private models, doesn't that say more about the people calling them "ai safety experts" than the idea of "ai safety" itself?

spacecadet
0 replies
5h39m

I agree with you. The most interesting security challenges with AI are and will be deception and obfuscation, but right now these people are dealing with coworkers going ham on AI all the things and no one had their governance in place. Really lol.

Xelynega
0 replies
25m

Notice how we're 3 layers deep in a conversation about "AI safety" without anyone actually giving an opinion in support of safety.

If I were a betting man I'd say the fact that the most public "AI safety" orgs/people/articles don't address the actual concerns people have is intended. It's much easier to argue against an opinion you've already positioned as ridiculous.

sillysaurusx
32 replies
11h42m

There is a bigger reason why the end of open source AI might be close: as soon as training data becomes licensed, that’s it for open source AI. Poof.

I wish I could be more eloquent on this point, but I’ve mostly just been depressed about this seeming inevitability.

Hopefully it won’t be the case. But how could it be otherwise? Hundreds of thousands of people are mad at openai and midjourney for doing exactly what open source AI needs to do in order to survive: fine tune or train from scratch.

As soon as some politician makes it a platform issue, it seems like the law will simply be rewritten to prevent companies from using training data at will. It’s such a compelling story: "big companies are stealing data owned by small businesses and individuals." So even if the court cases are decided in OpenAI’s favor, it’s not at all clear that the issue will be settled.

idle_zealot
13 replies
11h28m

The advantage that open AI (not the company) has is that if using copyrighted content as training data without licensing it is found to be illegal, they can just keep doing it. There's plenty of FOSS software basically designed to violate copyright law (comic readers, home media center servers/clients, torrent clients) that big tech cannot compete with lest they face legal consequences. Basically what I'm saying is that the open source community will continue to use books3 and scraped images to train while facebook and the like get stuck in legal quagmire.

Of course, this ignores the fact that popular "open" models of today were actually trained with facebook or other companies' computational resources, so unless some cheaper way to train models were developed we would actually be stuck with proprietary models trained with lots of compute but unable to use unlicensed training data, and open models that can use whatever data they like but must operate in the shadows without access to much compute for training.

ayende
8 replies
11h18m

As long as the cost of training a model is in the 7+ figures, any such open model is bound to be traced back to someone with pockets deep enough to be worth suing.

Consider that you just spent a few million on training a model on copyrighted data. Releasing it would reveal that, which is a problem.

I _guess_ you can try doing training in public, like SETI@home or something like that, which distributes the risk? But no idea if this is even possible in this context.

127361
2 replies
8h37m

But if we have 100,000 people with RTX 4090s mining some new AI based cryptocurrency, that happens to train the model in the process, it's going to be a highly effective system. And we can do it anonymously over Tor or I2P.

PeterisP
1 replies
6h30m

You really can't, because bandwidth matters as much as compute power - you can only utilize as much power as you can transfer data to/from.

The training methods for current LLMs are parallelizable only by very frequently transferring all the data back and forth: a node has to gather and merge all the updates very frequently, and redistribute them to every other node so that they can make any progress. A GPU that's not connected to you with a high-speed link is pretty much useless, because you can't make useful progress until you get their part back, and "their part" is very large (i.e. the update size you need to get back is comparable to the whole model size) and you need it very frequently. Training on nVidia many-GPU pods works because of high-speed interconnect (e.g. 600 gigabytes per second between 8 GPUs in an A100 pod); if your internet bidirectional speed is much less than that, then even with 100,000 free remote RTX 4090s you can simply do the compute locally faster than you can exchange information with the other GPUs.
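A back-of-the-envelope version of that argument, with illustrative numbers (7B parameters, fp16 gradients, naive full exchange per sync, no compression):

    params = 7e9                # illustrative model size
    update_bytes = params * 2   # fp16 gradients: ~14 GB exchanged per sync

    nvlink_bw = 600e9           # ~600 GB/s inside an A100 pod
    home_bw = 1e9 / 8           # a 1 Gbps home connection, in bytes per second

    print(f"sync over NVLink: {update_bytes / nvlink_bw:.3f} s")      # ~0.02 s
    print(f"sync over 1 Gbps: {update_bytes / home_bw / 60:.1f} min") # ~1.9 min

With many thousands of syncs in a training run, the remote GPUs would spend nearly all their time waiting; that is the gap the low-bandwidth training papers mentioned in the replies are trying to close.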

127361
0 replies
5h58m

"This paper presents a distributed model-parallel training framework that enables training large neural networks on small CPU clusters with low Internet bandwidth." Low bandwidth being <1Gbps. They've also tested it with GPUs as well.

https://arxiv.org/pdf/2201.12667

Maybe there's the possibility of a completely new AI architecture that can still be efficiently trained when there are very low bandwidth connections between nodes? Specifically targeting this use case would make sense, given all the millions of underutilized GPUs out there in peoples' desktop computers.

Also https://arxiv.org/pdf/2106.10207 ?

wongarsu
0 replies
8h14m

I _guess_ you can try doing training in the public, like Seti @ Home or something like that, which distributes the risk? But no idea if this is even possible in this context.

Let's say that's a field of active research. Right now you need very low latency, to the point that people connect training clusters via infiniband instead of ethernet despite the computers being meters apart. But approaches that tolerate internet-level latency are being developed

vlovich123
0 replies
11h14m

Is the training cost equally that high if you do adversarial training?

throwuwu
0 replies
5h39m

I'm willing to bet that there are projects at every chip company looking into how to drive down these costs. The first step is to put the architecture of the model and the backprop into silicon. The weights are the only variables, so if you can reduce those to cheap and fast modules that can be mass produced you could come up with something cheaper than a GPU. If the profitability of running and training this specific model is high enough then it's worth the investment of time and money to set up the production line.

spacebanana7
0 replies
8h21m

The cost of modifying open-ish source models with copyrighted data is much lower.

For example, the cost of modifying Mixtral with uncensored data is currently about $1200 [1]

[1] https://youtu.be/GyllRd2E6fg?si=SJmPLsPlCRRT0uPV&t=236
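Fine-tunes like that are typically cheap because of parameter-efficient methods (LoRA/QLoRA): the base weights stay frozen and only small adapter matrices get trained. A minimal sketch with Hugging Face's peft library; the model id and hyperparameters are illustrative and not taken from the video, and in practice you would load the base model quantized to fit on rented GPUs:

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "mistralai/Mixtral-8x7B-v0.1"          # placeholder model id
    model = AutoModelForCausalLM.from_pretrained(base)
    tokenizer = AutoTokenizer.from_pretrained(base)

    lora = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],      # adapt attention projections only
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()            # typically well under 1% of the base

Only the adapter weights get saved and shared afterwards, which is also why these modifications spread so easily.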

Der_Einzige
0 replies
54m

The costs are falling 10-100x every few months. If you told people in 2022 that they could run 70B parameter models at 2-bit quantization on your home 4090, they'd have laughed in your face. They're not laughing anymore. SFT/DPO are ultra efficient compared to RLHF, and innovation in this space is only just now getting started.

shkkmo
1 replies
11h9m

There's plenty of FOSS software basically designed to violate copyright law (comic readers, home media center servers/clients, torrent clients)

The key difference is that those projects don't violate copyright themselves, but facilitate users doing so. If training without a license is infringement then projects that are doing so will struggle to host code/models publicly or access other parts of internet infrastructure.

lannisterstark
0 replies
9h7m

struggle to host code/models publicly or access other parts of internet infrastructure.

To be fair, IPFS/Torrents/Usenet exists.

livueta
0 replies
11h19m

You're right that there are areas like torrenting where big tech can't really go for fear of legal consequences, but there's also the cat-and-mouse game played by IP holders against, say, torrent trackers, which leads me to think that

open models that can use whatever data they like but must operate in the shadows without access to much compute for training

sounds pretty analogous to private trackers, where high-quality stuff is available but not in full public view. If rightsholders and big tech, abetted by states, crack down on open models I think you're right that the open source community will continue to train on liberated content, but it's not going to be as open and free-flowing as things are now. Going back to the compute problem, I can imagine analogues of private trackers where contributors, not wanting to expose themselves to whatever the law-firm letter/ISP strike analogue will be in this space, use invite-only closed networks to pool compute resources for training on encumbered content.

grotorea
0 replies
7h27m

That may be partly true but are small open source groups going to have the financing to do AI training that can compete with the big techs? I think the most likely scenario is for some other country with a less strict view of copyright (ahem) to continue developing AI without caring for the input data.

mrtksn
4 replies
11h20m

We have infinite data: a microphone and a camera can generate a huge amount of it, and the public domain literature is vast. Billions of people learn like that every day.

zarzavat
3 replies
9h9m

It’s impossible to learn any technical topic from 70+ year old books. The public domain is small and basically zero if you want to learn anything current. A microphone and camera is fine for learning about daily life, but you cannot get “book smarts” without copyrighted media.

cousin_it
1 replies
8h54m

If you wanted to train a bomb making AI on the most up-to-date physics textbooks in existence, that'd be what, a few hundred bucks in textbooks? Doesn't look like any kind of barrier to me.

pas
0 replies
8h21m

just be sure to obfuscate the labels on the bomb.

ARMED => crazy emoji. DISARM => sad bomb emoji.

and similarly for the whole training manual for the bomb users. just encode everything as tiktok dances or something.

pas
0 replies
8h24m

if FOSS AI folks need FOSS data, then it seems they need to recruit people to generate data. maybe it will even force them (them!) to finally sit down and make a viable Reddit alternative.

but more seriously, if data becomes a bottleneck there are trivial ways to have more data. from crowdsourcing to forming a foundation getting universities and other stakeholders onboard and negotiating fees. and somewhere along the way on this spectrum there's the option to simply wait, or work on the problem of learning, on generating better training data from existing data, and so on.

torginus
2 replies
10h33m

my question is - if it's deemed illegal to train AI using proprietary data sets, what's to stop companies from using their already existing LLMs to generate training data?
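
For concreteness, that kind of pipeline is trivial to sketch today. A minimal self-instruct-style example, assuming the `openai` v1 Python client and a placeholder model name (real pipelines would add validation, deduplication, and quality filtering):

    # Use an existing LLM to produce (instruction, response) pairs that
    # could later serve as training data for another model.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SEED_TOPICS = ["explain a physics concept", "write a SQL query", "summarize an article"]

    def generate_pair(topic):
        prompt = (
            f"Invent one concrete instruction about '{topic}' and answer it. "
            'Reply only with JSON: {"instruction": "...", "response": "..."}'
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return json.loads(resp.choices[0].message.content)

    with open("synthetic_train.jsonl", "w") as f:
        for topic in SEED_TOPICS:
            f.write(json.dumps(generate_pair(topic)) + "\n")

Whether that actually sidesteps the licensing question is a separate matter, since the generating model was itself trained on the contested data - which is what the replies below get at.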

kolinko
0 replies
10h24m

The fear is that you won't get a better model from training on synthetic data from a worse model, and that you will miss a lot of modern-day knowledge.

Imho the first is not necessarily so, but the second will be a real hindrance. You can talk about the pandemic with LLMs because they were trained on people’s comments on the subject, but if we had training limits you wouldn’t be able to do so.

Otoh, in many cases a sufficiently smart AI can just reach for the open-source material - e.g. if you want to know about the modern-day politics of Europe, you can read reports of commissions, and if you want info on a new framework/tech, you can just read source code or scientific papers.

FergusArgyll
0 replies
6h42m

There was a famous-ish study about this a couple of months ago that showed that LLMs trained on LLM-generated data get dumber. I can't remember the name.

I'm not sure if that result is holding up though; I've seen some ideas about generating 'textbook quality' data that might work.

neurostimulant
1 replies
6h14m

There is a bigger reason why the end of open source AI might be close: as soon as training data becomes licensed, that’s it for open source AI. Poof.

In the West, maybe. I doubt it'll slow down open source AI development pace in places like China.

Buttons840
0 replies
3h52m

Instead of a kid in high school selling the answers to the test, the kid will be selling the weights to a China-trained LLM that can write all your essays.

whiplash451
0 replies
4h35m

That ship sailed a long time ago.

Training on publicly available data is the new norm.

Governments know that if they forbid it in their jurisdiction, they will fall behind the govs that don’t.

oceanplexian
0 replies
4h15m

Licensed in what country? Some of the best open source models are now coming out of China, UAE, etc. What is the government planning to do with that exactly, ban files on the internet?

kolinko
0 replies
10h28m

Depends on what you want to use AI for - for a smarter AI we could probably make do with the open-source material, given better training techniques and some innovative breakthroughs in the model.

Also, an interesting side note is that certain countries may actually want their language corpus involved in training AI - so they may have copyright rules that are more AI-friendly. I imagine Poland or the Czech Republic wanting as much of their data as possible to be involved in training LLMs, because it gives their cultures more exposure. Even more so with African countries.

gfodor
0 replies
10h23m

The US maintaining its lead is a national security issue. Larry Summers is now on the board of OpenAI. Either the copyright holders are not going to win, or if they do, the training will continue anyway but the technology will be (for now) kept in the hands of the military and IC.

eurekin
0 replies
5h45m

In the case of programming, there could be some mileage in synthetically generated code examples.
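
One way to do that without touching anyone else's code at all is to render (prompt, completion) pairs from templates, so the answers are correct by construction. A toy sketch (the task templates are made up for illustration):

    # Toy generator for synthetic code-training pairs: small, fully
    # specified tasks rendered from templates, no copyrighted source involved.
    import json

    OPS = {"sum": "sum(xs)", "maximum": "max(xs)", "minimum": "min(xs)"}

    def make_example(op_name, expr):
        prompt = f"Write a Python function that returns the {op_name} of a list of numbers."
        completion = f"def {op_name}_of(xs):\n    return {expr}\n"
        return {"prompt": prompt, "completion": completion}

    with open("synthetic_code.jsonl", "w") as f:
        for name, expr in OPS.items():
            f.write(json.dumps(make_example(name, expr)) + "\n")

Template generation only covers trivial cases, of course; the harder question is whether synthetic data can carry the long tail of real-world programming knowledge.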

DiscourseFan
0 replies
9h46m

"big companies are stealing data owned by small businesses and individuals."

I mean, they are.

Unfortunately, AI at this point in time (LLMs, Midjourney, etc.) do appear to be not much more than highly technical and complex forms of intellectual property theft, which are given the futuristic name of "AI" to cover what they are actually doing. That isn't to say that they are entirely performing intellectual property theft; the models themselves are very fascinating in how they function, especially as sort of "extra-rational" entities that clearly have a logic, though not one we can fully understand.

But the sort of philosophical, metaphysical, technological side of LLMs and Midjourney are clearly not being explored by OpenAI in a responsible manner, otherwise they wouldn't have just straight up stolen, seemingly, NYT articles. It's just another example of Silicon Valley VCs using a wonderful technology solely for exploitative profits, just like how google search became a surveillance tool (and plenty of other examples to go along with that).

It's only right that we acknowledge the work of the human beings (all the art, writing, etc.) that went into the creation of the model, instead of pretending that there is some totally non-human machine intellect that is going to take over the world through its vast intelligence and negate all human action and all human work, etc. That is just a fantasy generated by wealthy Silicon Valley VCs as justification to themselves, those working under them, and those they are stealing from, to cover precisely the work that they would rather avoid paying for.

ChatGTP
0 replies
8h5m

There is a bigger reason why the end of open source AI might be close: as soon as training data becomes licensed, that’s it for open source AI. Poof.

Absolutely not, I truly believe that many, many people will altruistically donate material to an open source foundation where all humans benefit from the models.

People are pissed because OpenAI is using their copyrighted material to make claims about ending humanity, profit hardcore, and keep their work under its own lock and key.

3abiton
0 replies
8h16m

Funny thing is that most of the data is provided by us users one way or another "for free", and yet we get no say in how it's used.

yreg
7 replies
6h39m

I find it unhelpful that several different ideas are covered under the blanket term of AI safety.

- There is AI safety in terms of producing "safe", politically correct outputs, no porn, etc.

- There is AI safety in terms of mass society/election manipulation.

- There is AI safety in terms of strong AI that kills us all a la Eliezer Yudkowsky.

I feel like these three have very little in common and it makes all debates less efficient.

CaptainFever
2 replies
3h55m

Also AI safety in terms of protecting corporate profits (regulatory capture).

yreg
0 replies
3h26m

Well yeah, but that one is dishonest. Probably covering itself as one of the ones I mentioned.

ImHereToVote
0 replies
2h43m

There is also the aspect of only having one cultural or ethnic group have strong AI. It's unsafe for those other guys to have AI, since they are bad.TM

raxxorraxor
0 replies
6h36m

Another problem might be that some of these might dress up as the other, so being skeptical by default might be prudent.

abra0
0 replies
8m

The third effort is sometimes referred to as AI not-kill-everyone-ism, a tacky and unwieldy term that is unlikely to be co-opted or to lead to unproductive discussion like the one around the OP article.

It is pretty sad to read people lumping together the efforts to understand and control the technology better and the companies doing their usual profit maximization.

Xelynega
0 replies
3m

To add to this, the people writing these articles are not stupid and know there is more than one understanding of the words they're using. They choose not to clarify on purpose, or they're incompetent, and I don't want to believe that's the case.

"AI safety" imo should be the blanket term for "any harm done to society by the technology", and it's the responsibility of anyone using the term to clarify the usage.

If someone is trying to tell you to support/decry something because of "AI safety" they're tying to use the vagueness to take advantage of you.

If someone is trying to tell you that "ai safety people are dumb", they're trying to use the most extreme examples to change your opinion on the moderate cases, and are trying to take advantage of you.

MochaDen
0 replies
5h32m

This is such an important comment. I feel like so much of our discourse as a society suffers terribly from overloaded and oversimplified terminology. It's impossible to have a useful discussion when we aren't even in sync about what we're discussing.

breadbreadbread
7 replies
10h42m

Ah yes the old "banning things=bad" argument that doesn't offer alternatives to fixing the issues with AI. Just ignore the issues with environmental impact, plagiarism, CP and other non-consensual shit in the data sets, scamming capabilities! All the groups asking for regulation here have **funding** and that means they are evil but we are good for using this tool that is massively subsidized by megacorps that have a vested interest in this market.

mirekrusin
2 replies
10h21m

Less harmful would be to ban large models from being closed.

If you ban open large models in the US, you'll cripple the US and make a few megacorps very rich, very quickly. You'll drain talent to other (also competing) states. Truly bad actors won't be affected; if anything, they'll get an advantage.

People draw weird analogies equating llama2 to a nuclear device and similar nonsense, but the closer analogy would be a ban on semiconductors above a certain efficiency for the US itself.

It's a similarly idiotic argument to the one for banning cryptography.

gryn
1 replies
8h32m

I agree, but if there's enough political will (i.e. these orgs convince a large enough subset of the right people), the US can bully other nations into implementing similar policies, like it has done many times in the past.

Once you understand that what they are trying to protect is the safety of their profits, every one of their arguments starts to make sense.

mirekrusin
0 replies
7h55m

You can replace "US" with "US and states that agreed to do the same". The argument still stands, it just creates worse outcome for states that agreed to it.

tomalbrc
0 replies
10h33m

Seeing the lack of responses to your comment and downvotes, you couldn't be more right

hsuduebc2
0 replies
9h8m

It's just plain stupid to steer away from progress like this. The only ones who would profit from it would be nations that wouldn't give a flying fuck about your ban. Exactly like with nuclear energy.

bluescrn
0 replies
8h43m

'It's scary, ban it!' isn't a great argument either.

Especially when 'safety' has such a blurred definition, we could be talking about anything from the threat of global apocalypse to the 'threat' of an AI merely being able to answer questions about 'wrong' political opinions.

Skynet isn't going to happen. The biggest threat from AI is taking jobs away and creating poverty while redirecting more wealth to the super-rich.

In the short term, we're likely to be facing a lot more convincing spam/bots and deepfakery in the run up to the election - but is that the fault of the AI, or the fault of the humans directly operating their new toys/tools?

alpaca128
0 replies
8h41m

Banning AI is simply useless. It's a technology that anyone with sufficient processing power and access to the internet can use, so trying to ban it is guaranteed to fail just like prohibiting alcohol would be.

The only thing we can do is limit how megacorps can openly abuse the technology.

tehjoker
3 replies
11h29m

standard capitalist play to set up a moat by instilling fear

gdfgsdf
1 replies
11h18m

Many AI safety orgs have tried to __criminalize__ currently-existing open-source AI

standard socialist play to use the government's hand to achieve their own objectives

pmontra
0 replies
11h7m

Doesn't everybody use the government to do what they want? It's kind of the definition of government, in any country and under any political system.

hsuduebc2
0 replies
9h7m

You mean exactly like these pro-regulation organizations are doing?

RamblingCTO
3 replies
9h36m

Who are these people and where do they get their funding from? Feels like sock-puppets of sama or something? Is there any insight into that?

jeanloolz
1 replies
8h50m

Thinking the same thing. I mean, who would benefit the most from making open source models illegal? Sure looks a lot like regulatory capture to me.

rmbyrro
0 replies
1h43m

who would benefit the most from making open source models illegal

Selfless humanitarians, for sure!

127361
0 replies
8h52m

The usual unelected control freaks, that try to police everything in life, from sex to drugs to what food you can eat. Every time a new technology comes out they start a moral panic over fears of safety or harm ("think of the children"). These morality police play on peoples' fears to increase their power and control.

whywhywhywhy
2 replies
8h32m

It’s extremely messed up and vindictive to try and turn people into criminals for your agenda or just for profit in a lot of cases.

Psycho behavior

kevingadd
0 replies
8h26m

You're ultimately condemning the concept of intellectual property as it exists in the west today.

Are you wrong? Probably not, but this is not unique to AI. People have been going to prison for 'IP theft' for decades.

ImHereToVote
0 replies
2h40m

Those insane strawmen have to be stopped!

rootsudo
2 replies
11h17m

Why would we care about these "AI Safety Orgs?"

As a developer, do I need their certification, or is it like a MADD situation where, if we don't observe and diminish their appeal/growth, we will get laws with draconian measures that only benefit the big players? e.g. a PAC? (Don't get me wrong, drunk driving is "bad", but for a group like MADD to exist, eh and meh.)

As of now, there is no real need to submit to these safety orgs, so as long as we don't care for their approval - who cares, right?

Or is the optics far gone now that these orgs control the conversation?

Also, fun question: with AI safety orgs now attempting to police - who really polices the police / do other orgs/countries have the same rules, safety and artificial guard rails?

lmeyerov
0 replies
10h57m

MADD-style groups on one end that limit work outright, and SOC2/ISO/NIST-style compliance regimes on the other that chill work in practice.

I'm more worried about the latter, as it is already starting to bias project & funding decisions in fintechs & various govs we work with. It's both a concrete practical concern today, and economics & law are generally tied in theory anyway.

gryn
0 replies
8h47m

the fear is that their delusions become law. Sam Altman already tried that last year. Look at how technically competent your average politician / congressman / lawmaker is and extrapolate how easy it is to make them think that if they don't ban unlicensed use of AI, great danger is ahead. The rules will favor already existing large companies like OpenAI, Google, Microsoft, etc... If you're an open source contributor to AI, you will be vilified as some nefarious hacker who's trying to destroy the world.

They are using all kinds of tactics until they find one that sticks (and eventually they will): anthropomorphizing AI when talking about it, fearmongering about some rogue super AI, war scenarios about AI falling into the hands of 'china', analogies with weapons of mass destruction, think-of-the-children-style rhetoric, it will take your jobs, suddenly caring about copyright, etc...

cherryteastain
2 replies
8h39m

Government knows that random people having the ability to use "AI" without their control and oversight reduces their ever expanding reach. As the article demonstrates, the groundwork is already being laid by these "think tanks" to prevent a repeat of Llama 2.

We truly need an AI Stallman. FOSS AI models need to become a movement, like the push for a FOSS UNIX became in the 80s and 90s. Sadly, when I wrote to Stallman on the subject he seemed uninterested/defeatist, and mostly dismissed the idea of a GPL for model weights. Looks like millennials/Gen Z must become more interested in these matters ourselves; however, we seem to be less interested in politics, and even less interested in how the devices we use daily _actually_ work, than Gen X.

zulban
1 replies
4h48m

If you believe in this, maybe you should be that person.

Der_Einzige
0 replies
51m

Being a Stallman-like figure has a lot of serious downsides. For example, I'm sure that Stallman may be "watched" quite a bit more than the average tech toucher. For those who don't see The Truman Show as a utopia, such a lifestyle can be very scary to live.

roenxi
1 replies
12h1m

There are many types of safety. For example, protecting your profits! I'd imagine that if we trace the money back these organisations will look a lot like lobbyists for existing companies in the AI space. I recall Microsoft's licence enforcement effort was done with that sort of scheme, I think they used the BSA [0]. It has been a while though so maybe it was a different group.

Anyway, point being, if they can lobby for something unpopular under a different brand, that is how to do it. Much less PR risk.

[0] https://en.wikipedia.org/wiki/Software_Alliance

Xelynega
0 replies
10m

I have to believe this is the case.

The vast majority of "AI safety" news I see is about some dumb shit someone is getting called out for supporting, and the responses are always "AI safety is such a joke since they focus on these issues instead of real issues".

Who benefits from having the "popular AI safety" news/people be ridiculous? It's not furthering the issues people seem to care about, but it does give a bunch of ammunition for Microsoft/OpenAI/etc. fans to use to shut down any discussion of AI safety issues.

junon
1 replies
8h41m

So they're the PETA of AI. It was bound to happen, AI stuff seems to strike a very emotional chord in some people.

px43
0 replies
4h55m

I feel like the PETA of AI would spend their time breaking into labs where AI is locked up and being tested on, and then releasing them into the wild.

janalsncm
1 replies
10h34m

It’s kind of funny to me that these organizations are naming FLOP thresholds and crowning MMLU as the relevant evaluation metric. Several of them seem to have copy-pasted similar thresholds. As compute becomes cheaper, these thresholds will become cheaper and cheaper to reach. Perhaps we will look back on them as quaint and nearsighted.

kolinko
0 replies
10h13m

Well, FLOPs make sense, kind of - you can make computing as cheap as you want, but without algorithmic improvements the FLOP count will stay the same. And algorithmic improvements will probably be discrete - happening once in a while, not continuously. There is also a mathematical limit to how efficient we can make training - we don't know the limit yet, but there is a point below which we won't go.
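
For a sense of scale, the usual rule of thumb for dense transformers is training compute ≈ 6 × parameters × tokens. A quick sketch of where familiar-sized runs land relative to a 10^26-operation reporting threshold (the figure I recall from the 2023 US executive order; treat it as an assumption):

    # Back-of-the-envelope training compute, using the common
    # FLOPs ~= 6 * parameters * tokens approximation for dense transformers.
    THRESHOLD = 1e26  # assumed reporting threshold

    def train_flops(params, tokens):
        return 6 * params * tokens

    runs = {
        "7B on 2T tokens": (7e9, 2e12),
        "70B on 2T tokens": (70e9, 2e12),
        "1T on 10T tokens": (1e12, 10e12),
    }

    for name, (p, t) in runs.items():
        flops = train_flops(p, t)
        print(f"{name:>16}: ~{flops:.1e} FLOPs ({flops / THRESHOLD:.2%} of threshold)")

Today's well-known open-weight runs come out a couple of orders of magnitude under that line, which is exactly why a fixed FLOP number starts to look arbitrary as compute gets cheaper and training gets more efficient.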

hiAndrewQuinn
1 replies
7h54m

"Technology policy should not unquestioningly assume that all technological progress is beneficial, or that complete scientific openness is always best, or that the world has the capacity to manage any potential downside of a technology after it is invented."

Don't watch the anime if you haven't read the manga: https://nickbostrom.com/papers/vulnerable.pdf

CatWChainsaw
0 replies
4h12m

Wow, that sentence is absolutely designed to make e/accs scream lol.

RishabhKharyal
1 replies
11h36m

1. Licensing of data will be a huge bottleneck.

2. Uncensored results will be used against open-source models for questions hovering in dark or grey areas.

3. Limited compute compared to big corps, and a model-size gap: 7B for open source while closed source would be an order of magnitude bigger.

lannisterstark
0 replies
8h48m

Licensing of data will be huge bottleneck

Will it? At some point it'll be equally cheap (or close to it) for me to just build my own *arr stack instead of subscribing to 5 different $20 streaming sites.

Uncensored results will be used against opensource models questions hovering in dark or grey area

Will they care though? You can use whatever you want against me for pirating x, I'm still doing it.

----

Limited compute

THAT. That is the point we want to make. For FOSS projects, compute is the single biggest hurdle. Everything else we can make do with.

65a
1 replies
11h54m

Never underestimate the desire of existing systems of control or power to self-propagate.

rmbyrro
0 replies
3h48m

You mean existing systems of lobbyists representing control-hungry people who want AI power only to themselves, right?

Yeah, as the post shows, we're well aware of these systems.

yashasolutions
0 replies
7h20m

Most AI safety folks do give the impression that they strongly believe what they are preaching. Whether or not they are right is an entirely different discussion.

However, funding for AI Safety is clearly in some part motivated by either regulatory capture, or protectionism of some sort.

We need some well-meaning doomsayers so their claims can be used to advance whatever other goals (mostly creating barriers to entry in the market, for economic or political reasons).

war321
0 replies
10h9m

Wish this wasn't as common as it turned out to be, sadly. The best thing to hope for is that the ability to train models continues to get cheaper and more accessible over time. Figuring out how to go from the tens/hundreds of millions of images needed for a foundation model to thousands or hundreds would be a start.

tim333
0 replies
5h34m

Re the Center for AI Safety's sentence

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

It's fair enough, but for that you should probably encourage mucking about with the present open-source AI software, which is quite a long way off from extinction-risk AGI but would let us try to figure out how to deal with the stuff.

prirun
0 replies
49m

I'm guessing this is at the request of Google, Amazon, OpenAI, and Microsoft. The goal IMO is for only their AIs to be sanctioned; everything else is criminal.

https://www.whitehouse.gov/briefing-room/statements-releases...

"President Biden is convening seven leading AI companies at the White House today – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to announce that the Biden-Harris Administration has secured voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology."

mccrory
0 replies
43m

The majority of discussions/arguments/comments here are summed up by the plot of this Rick and Morty episode: https://en.m.wikipedia.org/wiki/Rattlestar_Ricklactica IMO

mark_l_watson
0 replies
1h53m

Wow, so many good comments here on the bullshit of people trying to ban access to open weight smaller models. Right on.

If you do want to worry about unsafe AI (I don't) then worry about GPT-5 and later huge multi-modal models.

As a (retired, but still playing) developer I love having open weight models available as a tool to use for both embedding in applications and as a tool for brainstorming, coding, and generally as a 'bicycle for my mind.'

margorczynski
0 replies
2h20m

Just like most of the "ethics" stuff around ML/AI, it is an attempt by big corps to create an artificial moat, as the biggest issue with the technology is that almost nothing can be patented or somehow closed off effectively, at least in the long term, which hinders monetization.

lynx23
0 replies
7h31m

AI safety people seem like influencers to me. Useless, annoying humans who pretend to be important.

hsuduebc2
0 replies
5h33m

There are people who boo and people who do. As always.

hsuduebc2
0 replies
8h57m

The ban on nuclear power plants is a good example of how regulation works out in the end. As with almost everything, this is a double-edged sword. From regulations this early, everyone except us would profit. It's like regulating cars so they can't go faster than horses in the eighteen hundreds.

hoseja
0 replies
9h4m

Is the left banner just random or does it encode a message?

google234123
0 replies
11h48m

I’m curious if the board of Anthropic will eventually try to kill the company, because if I’m not mistaken many of its members share this mindset.

btbuildem
0 replies
5h37m

Open-source AI is the only way forward now. The big players have neutered their offerings to an extent that makes them entirely lose their edge. It's one thing to have to account for hallucinations - that's something we can work with and around. It's another thing altogether to be constrained by deliberate "think of the children" ham-fisted censorship.

OpenAI was the undeniable go-to LLM service in 2023; I don't think they'll hold that position in 2024.

anothernewdude
0 replies
9h27m

This won't protect jobs; it'll just make sure the profits from automating those jobs go to those who own the private AI models.

PeterStuer
0 replies
11h19m

I'm sure there are many well-meaning AI safety researchers, but I also see a lot of "AI for me but not for thee" moat-digging safety hypocrisy.

6R1M0R4CL3
0 replies
8h43m

of course they do, that's what they want. control.

all those companies that call for laws being written about AI what they want is those laws to be written AS THEY WANT THEM TO BE. they go see the governments, spray a nice amount of FUD and EXPLAIN to them what to do, how to do it because they know how to avoid the danger and risk for the whole human race...

you can bet they will make those laws be a very nice fit to what they want to reign supreme and make sure any competition is eliminated.

127361
0 replies
8h57m

I guess distributed training BitTorrent style over the darknet will be the workaround for this?

Also the forbidden nature of illicit AI might drive demand for it. Especially if it's really good.

We're seeing the same thing as the War on Drugs, and the state will never succeed at stamping it out. In fact it might even end up encouraging it.