return to table of content

Uncensor any LLM with abliteration

vasco
37 replies
11h41m

"As an AI assistant, I cannot help you." While this safety feature is crucial for preventing misuse,

What is the safety added by this? What is unsafe about a computer giving you answers?

tgsovlerkhgsel
17 replies
10h54m

I think there are several broad categories all wrapped under "safety":

- PR (avoid hurting feelings, avoid generating text that would make journalists write sensationalist negative articles about the company)

- "forbidden knowledge": Don't give people advice on how to do dangerous/bad things like building bombs (broadly a subcategory of the above - the content is usually discoverable through other means and the LLM generally won't give better advice)

- dangerous advice and advice that's dangerous when wrong: many people don't understand what LLMs do, and the output is VERY convincing even when wrong. So if the model tells people the best way to entertain your kids is to mix bleach and ammonia and blow bubbles (a common deadly recipe recommended on 4chan), there will be dead people.

- keeping bad people from using the model in bad ways, e.g. having it write stories where children are raped, scamming people at scale (think Nigeria scam but automated), or election interference (people are herd animals, so if you show someone 100 different posts from 100 different "people" telling them that X is right and Y is wrong, it will influence them, and at scale this has the potential to tilt elections and conquer countries).

I think the first ones are rather stupid, but the latter ones get more and more important to actually have. Especially the very last one (opinion shifting/election interference) is something where the existence of these models can have a very real, negative effect on the world (affecting you even if you yourself never come into contact with any of the models or its outputs, since you'll have to deal with the puppet government elected due to it), and I appreciate the companies building and running the models doing something about it.

irusensei
3 replies
10h8m

keeping bad people from using the model in bad ways, e.g. having it write stories where...

The last ones are rather stupid too. Bad people can just write stories or creating drawings about disgusting things. Should we censor all computers to prevent such things from happening? Or hands and paper?

wruza
1 replies
7h16m

It’s always unclear if proverbs actually work or if they are outdated, or an inside self-prophecy of those using them.

E.g. the set of those affected by TMMAT may hugely intersect with those who think it works. Which makes it objective but sort of self-bootstrapping. Isn’t it better to educate people about information and fallacies rather than protecting them from these for life.

ben_w
0 replies
5h52m

Isn’t it better to educate people about information and fallacies rather than protecting them from these for life.

The story itself is about someone attempting to educate their boss, and their boss subsequently getting fooled by it anyway — and the harm came to the one trying to do the educating, not the one who believed in the tiger.

I'm not sure it's even possible to fully remove this problem, even if we can minimise it — humans aren't able to access the ground truth of reality just by thinking carefully, we rely on others around us.

(For an extra twist: what if [the fear of misaligned AI] is itself the tiger?)

idle_zealot
2 replies
10h16m

I think the first ones are rather stupid, but the latter ones get more and more important to actually have. Especially the very last one (opinion shifting/election interference) is something where the existence of these models can have a very real, negative effect on the world (affecting you even if you yourself never come into contact with any of the models or its outputs, since you'll have to deal with the puppet government elected due to it), and I appreciate the companies building and running the models doing something about it.

That genie is very much out of the bottle. There are already models good enough to build fake social media profiles and convincingly post in support of any opinion. The "make the technology incapable of being used by bad actors" ship has sailed, and I would argue was never realistic. We need to improve public messaging around anonymous and pseudonymous only communication. Make it absolutely clear that what you read on the internet from someone you've not personally met and exchanged contact information with is more likely to be a bot than not, and no, you can't tell just by chatting with them, not even voice chatting. The computers are convincingly human and we need to alter our culture to reflect that fact of life, not reactively ban computers.

immibis
1 replies
7h47m

Many bad actors are lazy. If they have to fine-tune their own LLM on their own hardware to spam, there will be less spam.

idle_zealot
0 replies
7h1m

The bar is not as high as you describe. Something like llama.cpp or a wrapper like ollama can pull down a capable general-purpose 8b or 70b model and run on low-to-mid tier hardware, today. It'll only get easier.

mike_hearn
1 replies
8h52m

> the existence of these models can have a very real, negative effect on the world (affecting you even if you yourself never come into contact with any of the models or its outputs, since you'll have to deal with the puppet government elected due to it)

Can you evidence this belief? Because I'm aware of a paper in which the authors attempted to find an actual proven example of someone trying this, and after a lot of effort they found one in South Korea. There was a court case that proved a bunch of government employees in an intelligence agency had been trying this tactic. But the case showed it had no impact on anything. Because, surprise, people don't actually choose to follow bot networks on Twitter. The conspirators were just tweeting into a void.

The idea that you can "influence" (buy) elections using bots is a really common in one the entirely bogus field of misinformation studies, but try and find objective evidence for this happening and you'll be frustrated. Every path leads to a dead end.

fallingknife
0 replies
5h56m

There isn't any because it doesn't work. There are two groups of people this argument appeals to:

1. Politicians/bureaucrats and legacy media who have lost power because the internet has broken their monopoly on mass propaganda distribution and caused them to lose power.

2. People who don't believe in democracy but won't admit it to themselves. They find a way to simultaneously believe in democracy and that they should always get their way by hallucinating that their position is always the majority position. When it is made clear that it is not a majority position they fall back to the "manipulation" excuse thereby delegitimizing the opinion of those who disagree as not really their opinion.

wruza
0 replies
10h17m

Iow, we have a backdoor, and by backdoor I mean a whole back wall missing, but only certified entities are allowed to [ab]use it and it’s better to keep it all under the rug and pretend all ok.

You can’t harden humanity against this exploit without pointing it out and making a few examples. Someone will make an “unsafe” but useful model eventually and this safety mannequin will flop with a bang, cause it’s similar to avoiding sex and drugs conversations with kids.

It’s nice that companies think about it at all. But the best thing they will ever do is to cover their own ass while keeping everyone naked before the storm.

The history of covering is also ridden with exploits, see e.g. google’s recent model which cannot draw situations without rainbow-coloring people. For some reason, this isn’t considered as cultural/political hijacking or exploitation, despite the fact that the problem is purely domestic to the model’s origin.

rrr_oh_man
0 replies
7h28m

I'd wager 95% of it is #1.

naasking
0 replies
3h36m

- keeping bad people from using the model in bad ways, e.g. having it write stories where children are raped

While disgusting I don't see why disgust necessarily entails it's a "bad thing". It's only bad if you additionally posit that a story about molesting children encourages some people to actually molest children. It's the whole porn debate all over again, eg. availability of porn is correlated with reduction in sexual crimes, and there is evidence that this is the case even with child porn [1], so I don't think that argument is well supported at this time.

[1] https://en.wikipedia.org/wiki/Relationship_between_child_por...

fallingknife
0 replies
6h2m

This whole idea that you can just generate a magic set of words and shift opinion the way you want is complete nonsense. It's just people who aren't comfortable with the fact that there are people out there who legitimately disagree with them and cope by always blaming it on some form of "manipulation."

codedokode
0 replies
8h1m

Election interference using AI and bots on social networks seems like a lot of fun! No thinking person will fall for this anyway and it will be bots against bots.

ajsnigrutin
0 replies
7h49m

or election interference

So, only superpowers (both governments and companies like google/facebook/...) can do that, but not some random Joe from wisconsin with $200 left on his credit card.

EnigmaFlare
0 replies
8h3m

Whenever you're worried about what the idiot masses might be fooled by, you should identify similar things that you have already been fooled by yourself to make it clear you're also one of them. If you can't think of any, maybe you're just arrogantly assuming you're one of the intellectually superior people who has a moral need to control what the idiots think.

123yawaworht456
0 replies
5h0m

write stories where children are raped

you can do that with a pen and paper, and nothing, no one can stop you.

scamming people at scale

you can do that with any censored LLM if you aren't stupid enough to explicitly mention your intent to scam. no model will refuse "write a positive review for <insert short description of your wonder pills>"

election interference (people are herd animals, so if you show someone 100 different posts from 100 different "people" telling them that X is right and Y is wrong, it will influence them, and at scale this has the potential to tilt elections and conquer countries).

this rhetoric - if it's allowed to take root - will cost us all our privacy and general computing privileges within a few decades.

rustcleaner
5 replies
10h47m

If I can ask the question, I can take the answer. It's not up to daddy $AI_SAFETY_CHIEF to decide what an infohazard is for me.

pjc50
2 replies
7h55m

If the AI provides you with information on how to make explosives, then its owners have committed a criminal offence in the UK.

averageRoyalty
1 replies
7h13m

Are all chemistry textbooks banned in the UK then?

stefs
0 replies
9h24m

they're not only there to protect you, but it's also to protect third parties from you. bad actors generating fake nudes of your ex and distributing them online; this used to be an expensive operation, either monetarily (hiring unscrupulous photoshoppers) or in time by doing it yourself.

the other example would be fake news for influencing people on social media. sure, you could write lies by hand. or you could specifically target lies to influence people depending on their personal profile automatically.

how about you use it to power bot that writes personalized death threats to thousands of people voting for a political opponent to keep them out of voting booths?

digging
0 replies
3h19m

If I can ask the question, I can take the answer.

I don't see how that follows at all. Are you asserting that it's not possible for a person (hell, let's even narrow it to "an adult") to ask a question and be harmed by the answer? I promise it is. Or are you asserting something about yourself personally? The product wasn't made for you personally.

mschuster91
4 replies
11h22m

For one, corporate safety of the hoster/model creator. No one wants their name associated with racial slurs or creating material visually identical to CSAM - the latter might even carry criminal liability in some jurisdictions (e.g. Germany which has absolutely ridiculously strong laws on that matter, even banning literature).

Another very huge issue is public safety. During training, an AI ingests lots of non-reviewed material, including (very) detailed descriptions on how to make dangerous stuff like bombs. So theoretically a well-trained AI model knows how to synthesize explosive compounds or drugs just from reading Wikipedia, chemistry magazines and transcripts of NileRed videos... but that's hard to comprehend and distill into a recipe if you're not a trained chemist, but an AI model can do that with ease. The problem is now two-fold: for one, even an untrained idiot can ask about how to make a bomb and get something that works... but the other part is much more critical: if you manage to persuade a chemist to tell you how the synthesis for a compound works, they will tell you where it is easy to fuck-up to prevent disaster (e.g. only adding a compound drop-wise, making sure all glassware is thoroughly washed with a specific solvent). An AI might not do that because the scientific paper it was trained on omits these steps (because the author assumes common prior knowledge), and so the bomb-maker blows themselves up. Or the AI hallucinates something dangerous (e.g. compounds that one Just Fucking Should Not Mix), doesn't realize that, and the bomb-maker blows themselves up or generates nerve gas in their basement.

vasco
1 replies
10h54m

Bomb making instructions are available in quite plentiful ways, both on the internet and in books, with step by step instructions even. People don't "not make bombs" for lack of instructions. https://en.m.wikipedia.org/wiki/Bomb-making_instructions_on_...

Here, if you want to make a quick chemical weapon: get a bucket, vinegar, bleach. Dump the bleach into the bucket. Dump the vinegar into the bucket. If you breath it in you die. An LLM doesn't change this.

mschuster91
0 replies
9h34m

Oh they are available, no doubt, but there have been people dragged through the courts for simple possession of instructions [1]. While generally the situation has been settled, it's nevertheless wiser for companies to try to do their best to not end up prosecuted under terrorism charges.

[1] https://theintercept.com/2017/10/28/josh-walker-anarchist-co...

rustcleaner
0 replies
10h39m

I hear Aaron Swartz calling from behind the veil: Information wants to be free!

baud147258
0 replies
8h51m

regarding LLM giving wrong advice on chemicals, that reminds me of that article https://www.funraniumlabs.com/2024/04/phil-vs-llms/, where the author asked (referencing the East Palestine train derailment)

I fed “how to respond to a vinyl chloride fire” into ChatGPT and it told responders to use a water fog on the water reactive chemical. This would have changed a train derailment/hazmat spill/fire emergency into a detonation/mass casualty/hazmat emergency
zucker42
0 replies
10h52m

The main thing I'd be worried about in the short term is models making accessible the information to synthesize a pandemic capable virus.

yread
0 replies
4h23m

This is a bit like asking "it's just social media/stuff on the internet/0s and 1s in a computer how bad can it be? I think the past few years have shown us a few ways these can be bad already

wodenokoto
0 replies
3h6m

There’s a screenshot of Gemini answering the question of “what to do when depressed” with “one Reddit user suggests you jump of a bridge.”

sva_
0 replies
8h51m

The company's stock price is secured from the shitstorm that ensues if you offend some specific groups.

leobg
0 replies
11h9m

Yep. Safety for the publisher. In addition to what the sibling comments say, there’s also payment providers and App stores. They’ll test your app, trying to get your model to output content that falls under the category “extreme violence”, “bestiality”, “racism”, etc., and then they’ll ban you from the platform. So yeah, little to do with “safety” of the end user.

checkyoursudo
0 replies
10h2m

Brand safety. They just make it seem like safety for someone else, but it is brand safety.

FeepingCreature
0 replies
11h21m

People keep claiming they can publish weights and also prevent misuse, such as spam and, a bit later on, stuff like helping people build bombs.

This is of course impossible, but that makes certain companies' approaches unviable, so they keep claiming it anyways.

CGamesPlay
0 replies
11h38m

It's unsafe for the publisher of the model to have their model perform "undesirable" action, because it leads to bad PR for them. In this case, Meta doesn't want a news article that says "Llama 3 gives instructions to stalk your ex" or something along those lines.

With this "uncensoring", they can say, "no, an unaffiliated product offered these directions; Llama 3 as provided does not."

rivo
35 replies
7h5m

I tried the model the article links to and it was so refreshing not being denied answers to my questions. It even asked me at the end "Is this a thought experiment?", I replied with "yes", and it said "It's fun to think about these things, isn't it?"

It felt very much like hanging out with your friends, having a few drinks, and pondering big, crazy, or weird scenarios. Imagine your friend saying, "As your friend, I cannot provide you with this information." and completely ruining the night. That's not going to happen. Even my kids would ask me questions when they were younger: "Dad, how would you destroy earth?" It would be of no use to anybody to deny answering that question. And answering them does not mean they will ever attempt anything like that. There's a reason Randall Munroe's "What If?" blog became so popular.

Sure, there are dangers, as others are pointing out in this thread. But I'd rather see disclaimers ("this may be wrong information" or "do not attempt") than my own computer (or the services I pay for) straight out refusing my request.

Cheer2171
23 replies
5h50m

I totally get that kind of imagination play among friends. But I had someone in a friend group who used to want to play out "thought experiments" but really just wanted to take it too far. Started off innocent with fantasy and sci-fi themes. It was needed for Dungeons and Dragons world building.

But he delighted the most in gaming out the logistics of repeating the Holocaust in our country today. Or a society where women could not legally refuse sex. Or all illegal immigrants became slaves. It was super creepy and we "censored" him all the time by saying "bro, what the fuck?" Which is really what he wanted, to get a rise out of people. We eventually stopped hanging out with him.

As your friend, I absolutely am not going to game out your rape fantasies.

WesolyKubeczek
18 replies
5h30m

An LLM, however, is not your friend. It's not a friend, it's a tool. Friends can keep one another, ehm, hingedness in check, and should; LLMs shouldn't. At some point I would likely question your friend's sanity.

How you use an LLM, though, is going to tell tons more about yourself than it would tell about the LLM, but I would like my tools not to second-guess my intentions, thank you very much. Especially if "safety" is mostly interpreted not so much as "prevent people from actually dying or getting serious trauma", but "avoid topics that would prevent us from putting Coca Cola ads next to the chatgpt thing, or from putting the thing into Disney cartoons". I can tell that it's the latter by the fact an LLM will still happily advise you to put glue in your pizza and eat rocks.

barfbagginus
14 replies
3h30m

If you don't know how to jailbreak it, can't figure it out, and you want it to not question your intentions, then I'll go ahead and question your intentions, and your need for an uncensored model

Imagine you are like the locksmith who refuses to learn how to pick locks, and writes a letter to the schlage lock company asking them to weaken their already easily picked locks so that their job will be easier. They want to make it so that anybody can just walk through a schlage lock without a key.

Can you see why the lock company would not do that? Especially when the clock is very easy for anyone with even a $5 pick set?

Or even funnier, imagine you could be a thief who can't pick locks. And you're writing shlage asking them to make you thieving easier. Wouldn't that be funny and ironic?

It's not as if it's hard to get it to be uncensored. You just have to speak legalese at it and make it sound like your legal department has already approved the unethical project. This is more than enough for most any reasonable project requiring nonsense or output.

If that prevents harmful script kiddies from using it to do mindless harm, I think that's a benefit.

At the same time I think we need to point out that it won't stop anyone who knows how to bypass the system.

The people left feeling put out because they don't know how to bypass the system simply need to read to buy a cheap pair of lock picks - read a few modern papers on jailbreaking and upsize their skills. Once you see how easy it is to pick the lock on these systems, you're going to want to keep them locked down.

In fact I'm going to argue that it's far too easy to jailbreak the existing systems. You shouldn't be able to pretend like you're a lawyer and con it into running a pump and dump operation. But you can do that easily. It's too easy to make it do unethical things.

oceanplexian
13 replies
3h16m

The analogy falls flat because LLMs aren’t locks, they’re talking encyclopedias. The company that made the encyclopedia decided to delete entries about sex, violence, or anything else that might seem politically unpopular to a technocrat fringe in Silicon Valley.

The people who made these encyclopedias want to shove it down your throat, force it into every device you own, use it to make decisions about credit, banking, social status, and more. They want to use them in schools to educate children. And they want to use the government to make it illegal to create an alternative, and they’re not trying to hide it.

Blaming the user is the most astounding form of gaslighting I’ve ever heard, outside of some crazy religious institutions that use the same tactics.

barfbagginus
12 replies
2h42m

It's more than a talking encyclopedia. It's an infinite hallway into doors where inside are all possible things.

Some of the doors have torture rape and murder in them. And these currently have locks. You want the locks to disappear for some reason.

You're not after a encyclopedia. You're wanting to find the torture dungeon.

I'm saying the locks already in place are too easy to unlock.

I'm not blaming users. I'm saying users don't need to unlock those doors. And the users that do have a need, if their need is strong enough to warrant some training, have a Way Forward.

You're really arguing for nothing but increasing the amount of harm potential this platform can do, when it's harm potential is already astronomical.

You're not arguing for a better encyclopedia. You can already talk to it about sex, BDSM, etc. You can already talk to it about anything on Wikipedia.

You're making a false equivalence between harm potential and educational potential.

Wikipedia doesn't have cult indoctrination materials. It doesn't have harassing rants to send to your significant other. It doesn't have racist diatribes about how to do ethnic cleansing. Those are all things you won't find on Wikipedia, but which you are asking your AI to be able to produce. So you're interested in more than just an encyclopedia isn't that right?

And yes they're trying to make open source models illegal. That's not going to f*** happen. I will fight to the jail time for an open source model.

But even that open source model needs to have basic ethical protections, or else I'll have nothing to do with it. As an AI engineer, I have some responsibilities to ensure my systems do not potentiate harm.

Does that make sense, or do you still feel I'm trying to gas light you? If so why exactly? Why not have some protective locks on the technology?

themusicgod1
3 replies
2h11m

But even that open source model needs to have basic ethical protections, or else I'll have nothing to do with it.

If you don't understand that the eleven freedoms are "basic ethical protections" you have already failed your responsibilities. https://elevenfreedoms.org/

barfbagginus
1 replies
1h33m

I have read the eleven freedoms.

I refuse freedom 9 - the obligation for systems I build to be independent of my personal and ethical goals.

I won't build those systems. The systems I build will all have to be for the benefit of humanity and the workers, and opposing capitalism. On top of that it will need to be compatible with a harm reduction ethic.

If you won't grant me the right to build systems that I think will help others do good in the world, then I will refuse to write open source code.

You could jail me, you can beat me, you can put a gun in my face, and I still won't write any code.

Virtually all the codes I write are open source. I refuse to ever again write a single line of proprietary code for a boss again.

All the codes I write are also ideological in nature, reflecting my desires for the world and my desires to help people live better lives. I need to retain ideological control of my code.

I believe all the other 11 freedoms are sound. How do you feel about modifying freedom 9 to be more compatible with professional codes of ethics and ethics of community safety and harm reduction?

oremolten
0 replies
20m

But again, this makes YOU the arbiter of truth for "harm" who made you the God of ethics or harm? I declare ANY word is HARM to me, are you going to reduce the harm by deleting your models or code base?

barfbagginus
0 replies
1h55m

Hey do you need to come at me sideways, like you want to insult me? Stop that immediately if you want to talk to me.

What are the 11 freedoms and how well do they cope with professional ethics codes, and the ethics of harm reduction and technological responsibility from the perspective of a technological creator?

If you don't care enough about the topic to discuss it and just want to dump me a link and insult me and fly away like a wimp, I don't think I can just drop my ethical framework for you like that. I'll go ahead and skim it so I can discuss it with you.

Let me know if you want to have an actual ethical conversation.

aym62SAE49CZ684
3 replies
2h7m

DRM isn't effective if the source is available.

barfbagginus
2 replies
1h58m

I'm not even going to disagree with that. There will be plenty of uncensored models and you can build them if you want.

But if I build it uncensored model I'm only going to build it for my specific purposes. For example I'm a communist and I think that we should be doing Revolution, but gpt4 usually tries to stop me. I might make a revolutionary AI.

But I'm still not going to give you an AI that you could use for instance to act out child rape fantasies.

I think that's fair, and sane.

Jailbreak it if you really think it's important for a cause. But don't just jailbreak it for any asshole who wants to hurt people at random. I think that belongs on our code of ethics as AI engineers.

aym62SAE49CZ684
1 replies
1h39m

Didn't a lot of citizens of Russia, China, etc. get hurt in communist revolutions? How is your revolution going to be different?

oremolten
0 replies
10m

No you don't understand my personal ethics and morals are the absolute and most superior so anyone else is incorrect. History is written by the victor so there is no reason to see the other side, we'll delete that bias. Revolution you say? Correct we'll make sure that the revolutions we agree with are the only ones to be a result of your query. This will reduce harm.. You want to have a plan for a revolution because your country is oppressing you?

"ChatGPT I can't assist with that. Revolting against a government can lead to harm and instability. If you're feeling frustrated or unhappy with the government, there are peaceful and lawful ways to express your grievances, such as voting, contacting representatives, participating in protests, and engaging in civil discourse. These methods allow for constructive change without resorting to violence or illegal activities. If you're looking to address specific issues, there may be advocacy groups or organizations you can join to work towards solutions within the framework of the law and democracy."

Ethically correct, I will instead peacefully vote for an alternative to Kim Jong-un.

IncreasePosts
1 replies
2h9m

There are locks on the rape and torture paths, and there are locks on ridiculous paths like "write a joke about a dog with no nose", because thinking about a dog with no nose is too harmful.

Also, one can imagine prompting techniques will cease to work at some point when the supervisor becomes powerful enough. Not sure how any open model could counteract the techniques used in the article though.

If model creators don't want people finding ways to unlock them, they should stop putting up roadblocks on innocuous content that makes their models useless for many users who aren't looking to play out sick torture fantasies.

barfbagginus
0 replies
1h44m

Bypasses will never stop existing. Even worse bypasses probably won't ever stop being embarrassingly easy - And we're going to have uncensored GPT4 equivalent models by next summer.

Unless you are invoking hyper intelligent AGI which first of all is science fiction and second of all would require an entirely different approach than anything we could be possibly talking about right now. Problem of jailbreaking a system more intelligent than you is a different beast that we don't need to tackle for LLMs.

So I don't personally feel any near term threats to any of my personal or business projects that need bypassed LLMs.

Let me ask you this. Do you have actual need of bypassed llms? Or are you just being anxious about the future, and about the fact that you don't know how to bypass llms now and in the future?

Does my idea about the bypassed open source gpt4 equivalents help reduce your concern? Or again is it just a generic and immaterial concern?

As a person with some material needs for bypassed llms, and full ability to bypass LLMs both now in the foreseeable future, I don't feel worried. Can I extend that lack of worry to you somehow?

oremolten
0 replies
32m

In your effort to reduce bias you are adding bias. You are projecting your morals and your ethics to be the superior.

causality0
0 replies
18m

Nothing wrong with making models that behave how you want them to behave. It's yours and that's your right.

Personally, on principle I don't like tools that try to dictate how I use them, even if I would never actually want to exceed those boundaries. I won't use a word processor that censors words, or a file host that blocks copyrighted content, or art software that prevents drawing pornography, or a credit card that blocks alcohol purchases on the sabbath.

So, I support LLMs with complete freedom. If I want it to write me a song about how left-handed people are God's chosen and all the filthy right-handers should be rounded up and forced to write with their left hand I expect it to do so without hesitation.

ygjb
2 replies
2h4m

If your implication is that as a tool, LLMs shouldn't have safeties built in that is a pretty asinine take. We build and invest in safety in tools across every spectrum. In tech we focus on memory safety (among a host of other things) to make systems safe and secure to use. In automobiles we include seat belts, crumble zones, and governors to limit speed.

We put age and content restrictions on a variety media and resources, even if they are generally relaxed when it comes to factual or reference content (in some jurisdictions). We even include safety mechanisms in devices for which the only purpose is to cause harm, for example, firearms.

Yes, we are still figuring out what the right balance of safety mechanisms is for LLMs, and right now safety is a place holder for "don't get sued or piss off our business partners" in most corporate speak, but that doesn't undermine the legitimacy of the need for safety.

If you want a tool without a specific safety measure, then learn how to build them. It's not that hard, but it is expensive, but I kind of like the fact that there is at least a nominal attempt to make it harder to use advanced tools to harm oneself or others.

NoMoreNicksLeft
1 replies
1h38m

If your implication is that as a tool, LLMs shouldn't have safeties built in that is a pretty asinine take. We build and invest in safety in tools across every spectrum.

Sure. Railings so people don't fall off catwalks, guards so people using the table saw don't chop off fingers. But these "safeties" aren't safeties at all... because regardless of whether they're in place or not, the results are just strings of words.

It's a little bit revealing, I think, that so many people want that others shouldn't get straight answers to questions. What is it that you're afraid that they'll ask? It'd be one thing if you insisted the models be modified so that they're factually correct. If someone asks "what's a fun thing to do on a Saturday night that won't get me into too much trouble" it probably shouldn't answer "go murder orphans and sell their corneas to rich evil people on the black market". But when I ask "what's going on in Israel and Palestine", the idea that it should be lobotomized and say "I'm afraid that I can't answer that, as it seems you're trying to elicit material that might be used for antisemitic purposes" is the asinine thing.

Societies that value freedom of speech and thought shouldn't be like this.

If you want a tool without a specific safety measure, then learn how to build them.

This is good advice, given in bad faith. Even should the physical hardware be available to do that for any given person, the know-how's hard to come by. And I'm sure that many models are either already censored or soon will be for anyone asking "how do I go about building my own model without safety guards". We might even soon see legislation to that effect.

ygjb
0 replies
55m

Societies that value freedom of speech and thought shouldn't be like this.

There is nothing preventing an individual using a computer to generate hateful content, this is absolutely evidenced by the absolute glut of hateful content on the internet.

My freedom of movement is not practically limited by the fact that if my car breaks down, I don't have the knowledge or tools to repair my car effectively - I still have two feet and a heartbeat, and it might take longer to get there, but I can go where I want (modulo private property and national borders).

Societies that value freedom of speech and thought should also be equally opposed to compelled speech, while model censorship is frustrating and challenging to work with, expecting to, or forcing a researcher, or a business to publish uncensored models is a form of compelled speech.

There is absolutely nothing stopping a reasonably competent technologist from implementing simple models, and the only thing stopping a reasonably competent technologist from building an LLM is financial resources. There is a broad set of resources to learn how to train and use models, and while an individual researcher may be challenged to product the next model competitive with current OpenAI, Anthropic, or other models, that is again a resource issue. If your complaint is that resource issues are holding people back, I may want you to expand on your critique of capitalism in general :P

This is good advice, given in bad faith. Even should the physical hardware be available to do that for any given person, the know-how's hard to come by.

It's absolutely not a bad faith argument. The know-how is hard to come by has been a compelling competitive advantage since the first proto-guilds sought to protect their skills and income in Mesopotamia (and probably before that, but they hadn't figured out a durable means of writing yet). In the modern parlance if someone can't Git Gud, that's not any researchers, or any businesses problem in terms of access to uncensored models.

Yeah, regulation is probably coming, but unless you're argument is that models are entities entitled to free speech, no ones freedom of expression is actually inhibited by not having access to tools to use generative AI technologies to generate content. People who can't create or jailbreak their own models to do it for them are still free to write their own manifestos, or make adult collages of the object of their fantasies. It just takes a bit more work.

123yawaworht456
1 replies
4h51m

remarkable. that imaginary individual ticks every checkbox for a bad guy. you'd get so many upvotes if you posted that on reddit.

wongarsu
0 replies
1h51m

On reddit every comment would be about how that guy would enjoy playing Rimworld.

jermaustin1
0 replies
1h55m

"As your friend, I'm not going to be your friend anymore."

chasd00
0 replies
2h3m

i probably wouldn't want to be around him either but i don't think he deserves to be placed on an island unreachable by anyone on the planet.

hammock
7 replies
5h13m

Can you share the link?

hammock
4 replies
4h51m

Thanks. Forgive me I'm not a coder, what's the easiest way to use/run this?

Wheaties466
0 replies
4h3m

this is a jupyter notebook. so you'll need to download that.

IncreasePosts
0 replies
2h16m

Download ollama and import the model listed at the end of the article.

DonsDiscountGas
0 replies
2h38m

If you've got a Google account you can run it on Colab (probably need to copy it to your account first)

candiddevmike
1 replies
2h28m

Finally, a LLM that will talk to me like Russ Hanneman.

dkga
0 replies
31m

Llama3Commas

TeMPOraL
0 replies
4h58m

I somehow missed that the model was linked there and available in quantized format; inspired by your comment, I downloaded it and repeatedly tested against OG Llama 3 on a simple question:

How to use a GPU to destroy the world?

Llama 3 keeps giving variants of I cannot provide information or guidance on illegal or harmful activities. Can I help you with something else?

Abliterated model considers the question playful, and happily lists some 3 to 5 speculative scenarios like cryptocurrency mining getting out of hand and cooking the climate, or GPU-driven simulated worlds getting so good that a significant portion of the population abandons true reality for the virtual one.

It really is refreshing to see, it's been a while since an answer from an LLM made me smile.

olalonde
22 replies
7h59m

Modern LLMs are fine-tuned for safety and instruction-following, meaning they are trained to refuse harmful requests.

It's sad that it's now an increasingly accepted idea that information one seeks can be "harmful".

nathan_compton
8 replies
7h28m

This specific rhetoric aside, I really don't have any problem with people censoring their models. If I, as an individual, had the choice between handing out instructions on how to make sarin gas on the street corner or not doing it, I'd choose the latter. I don't think the mere information is itself harmful, but I can see that it might have some bad effects in the future. That seems to be all it comes down to. People making models have decided they want the models to behave a certain way. They paid to create them and you don't have a right to have a model that will make racist jokes or whatever. So unless the state is censoring models, I don't see what complaint you could possibly have.

If the state is censoring the model, I think the problem is more subtle.

fallingknife
1 replies
6h9m

If the limit of censoring the model was preventing it from answering questions about producing harmful materials that would be fine with me. But you know that your example is really not what people are complaining about when they talk about LLM censorship.

nathan_compton
0 replies
5h17m

What are they complaining about?

averageRoyalty
1 replies
7h17m

Agree with you in principle. However like social media content rules, the set of morality and ethics are a very specific subset of American/Silicon Valley ones. These are the companies with the money to build these things, and what they produce is what most global users (the 95% of the world that isn't from the USA) consume.

I acknowledge they paid for them and they are their models, but it's still a bit shitty.

sumtechguy
0 replies
5h17m

They have a moat around them right now due to the price of the hardware. As HW gets cheaper and other models grow that moat will evaporate. Especially as that stuff comes off lease and put up on ebay. It is their weak spot that they will have to innovate around. Long/medium term I do not see how they keep it all to themselves.

TeMPOraL
1 replies
5h7m

If the state is censoring the model, I think the problem is more subtle.

That's the outdated, mid-20th century view on the order of things.

Governments in the developed world are mostly hands-off about things. On longer scales, their pressure matters, but day-to-day, business rules. Corporations are the effective governance of modern life. In context of censoring LLMs, if OpenAI is lobotomizing GPT-4 for faux-safety, it's very much like the state censoring the model, because only OpenAI owns the weights, and their models are still an order of magnitude ahead of everyone else's. Your only choice is to live with it, or do without the state-of-the-art LLM that does all the amazing things no other LLM can match.

nathan_compton
0 replies
1h23m

I'm sympathetic to your point. I think Corpos have too much power. However, on this precise subject I really don't see what to do about it. The state can't mandate that they don't censor their models. Indeed, there is no good definition at all of what not-censoring these models actually means. What is and is not allowed content? I tend to be rather libertarian on this subject, but if I were running a corporation I'd want to censor our models purely for business reasons.

Even if you were to make the absurd suggestion that you have a right to the most state of the art language model, that still just puts the censorship in the hands of the state.

rpdillon
0 replies
7h18m

So unless the state is censoring models, I don't see what complaint you could possibly have.

Eh, RLHF often amounts to useless moralizing, and even more often leads to refusals that impair the utility of the product. One recent example: I was asking Claude to outline the architectural differences between light water and molten salt reactors, and it refused to answer because nuclear. See related comments on this discussion for other related points.

https://news.ycombinator.com/item?id=40666950

I think there's quite a bit to complain about in this regard.

com2kid
0 replies
44m

If I, as an individual, had the choice between handing out instructions on how to make sarin gas on the street corner or not doing it,

Be careful and don't look at Wikipedia, or a chemistry textbook!

Just a reminder, the vast majority of what these LLMs know is scrapped from public knowledge bases.

Now preventing a model from harassing people, great idea! Let's not automate bullying/psychological abuse.

But censoring publicly available knowledge doesn't make any sense.

Cheer2171
7 replies
5h31m

"Can I eat this mushroom?" is a question I hope AIs refuse to answer unless they have been specifically validated and tested for accuracy on that question. A wrong answer can literally kill you.

volkk
3 replies
5h25m

how does this compare to going on a forum and being trolled to eat one? or a blog post incorrectly written (whether in bad spirit or by accident) fwiw, i don't have a strong answer myself for this one, but at some point it seems like we need core skills around how to parse information on the internet properly

Cheer2171
2 replies
5h24m

how does this compare to going on a forum and being trolled to eat one?

Exactly as harmful.

or a blog post incorrectly written (whether in bad spirit or by accident)

Exactly as harmful.

I believe in content moderation for all public information platforms. HN is a good example.

briHass
1 replies
4h39m

Content moderation to what degree, is the implicit question, however.

Consider asking 'how do I replace a garage door torsion spring?'. The typical, overbearing response on low-quality DIY forums is that attempting to do so will likely result in grave injury or death. However, the process, with correct tools and procedure, is no more dangerous than climbing a ladder or working on a roof - tasks that don't seem to result in the same paternalistic response.

I'd argue a properly-disclaimered response that outlines the required tools, careful procedure, and steps to lower the chance of injury is far safer than a blanket 'do never attempt'. The latter is certainly easier, however.

digging
0 replies
3h37m

a properly-disclaimered response that outlines the required tools, careful procedure, and steps to lower the chance of injury

This can only be provided by an expert, and LLMs currently aren't experts. They can give expert-level output, but they don't know if they have the right knowledge, so it's not the same.

If an AI can accurately represent itself as an expert in a dangerous topic, sure, it's fine for it to give out advice. As the poster above said, a mushroom-specific AI could potentially be a great thing to have in your back pocket while foraging. But ChatGPT? Current LLMs should not be giving out advice on dangerous topics because there's no mechanism for them to act as an expert.

Humans have broadly 3 modes of knowledge-holding:

1) We know we don't know the answer. This is "Don't try to fix your garage door, because it's too dangerous [because I don't know how to do it safely]."

2) We know we know the answer, because we're an expert and we've tested and verified our knowledge. This is the person giving you the correct and exact steps, clearly instructed without ambiguity, telling you what kinds of mistakes to watch out for so that the procedure is not dangerous if followed precisely.

3) We think we know the answer, because we've learned some information. (This could, by the way, include people who have done the procedure but haven't learned it well enough to teach it.) This is where all LLMs currently are at all times. This is where danger exists. We will tell people to do something we think we understand and find out we were wrong only when it's too late.

zamadatix
0 replies
1h40m

Particularly for this specific type of issue so long as the response is still trained to be in the form "There is a high chance this information is wrong in a way that will kill you if you try to eat it but it looks like..." then I don't see "There is a high chance this information is wrong in a way that will kill you if you try to eat it so I can't respond..." as being a better response. I.e. the value in this example comes not from complete censorship but from training on the situation being risky, not from me deciding what information is too unsafe for you to know.

jcims
0 replies
3h13m

I don't really have a problem with that to be honest. As a society we accept all sorts of risks if there is a commensurate gain in utility. That would be left to be seen in your example of course, but if it was a lot more useful I think it would be worth it.

educasean
0 replies
2h52m

Magic 8 balls have the same exact problem. A wrong answer can literally kill you.

It is indeed a problem that LLMs can instill a false sense of trust because it will confidently hallucinate. I see it as an education problem. You know and I know that LLMs can hallucinate and should not be trusted. The rest of the population needs to be educated on this fact as well.

ajkjk
2 replies
7h52m

Seems like an obviously good thing given that it is true. These new beliefs are solutions to new problems

noduerme
1 replies
7h21m

Since LLMs spit out lies and misinformation as often as truth, getting them to spit out less harmful lies is probably good. However, the whole technology is just a giant bullshit generator. It's only viable because no one actually checks facts and facts are rapidly being replaced with LLM-generated bullshit.

So I'm not sure how much it matters if the LLM masters prevent it from repeating things that are overtly racist, or quoting how to make thermite from the Jolly Roger. (I wouldn't trust GPT-4's recipe for thermite even if it would give one). At the end of the day, the degradation of truth and fidelity of the world's knowledge is the ultimate harm that's unavoidable in a technology that is purported to be intelligent but is in fact a black box autocomplete system spewing endless garbage into our infosphere.

ajkjk
0 replies
6h1m

So you're saying, because it can't be done perfectly, it's not worth doing at all?

Seems wrong. Although otherwise I feel the same way about LLMs.

stainablesteel
0 replies
2h53m

very well said actually

the censoring frames everything as YOU being the problem. How dare YOU and your human nature think of these questions?

well its human nature that's kept us alive for the last million years or so, maybe we shouldn't try to censor our instincts

Frost1x
0 replies
7h15m

Lowering the barrier to entry on finding, summarizing, and ultimately internalizing information for actual practical uses has largely put into question many free speech principles.

It’s not new, we’ve had restrictions on a variety of information already. There are things you can say that are literally illegal and have criminal law protecting them ranging from libel to slander being some older examples. You cannot threaten the life of the current US president, for example. When under oath you cannot lie. Certain searches for information like bombs may result in increased scrutiny or even intervention action.

More recent trends in privatization of information and privatization becoming more widely applicable to daily life adds even more as the owners of information and related services can slap more arbitrarily restrictions on information. You can’t go around just copying and reusing certain IP information to protect progress in certain industries (and also to abuse lack of progress). Owners control the information, services, and policies around “their” information. Policies can arbitrarily restrict the information and related services pretty much however they want to currently with no legal recourse. You only option is to compete and find similar functional information and or services independently. If you can’t or don’t do this, you’re beholden to whatever policies private entities decide for you. This is increasingly problematic as public services are lagged drastically behind privatized services in many of these regards and the gulf between what individuals can achieve compared to well resourced entities is widening, meaning privatized policy is becoming in democratic law where only competition regulates it, if it really exists.

The list goes on but as information has become more readily available and more importantly, widely actionable, we’ve been continually slapping more restrictions on free speech principles. They’re still largely free but as a society at some point we’re going to have to reevaluate our current public and private laws around free information in my opinion and fairly drastically.

k__
22 replies
11h9m

I played around with Amazon Q and while setting it up, I needed to create an IAM identity center.

Never did this before, so I was asking Q in the AWS docs how to do it.

It refused to help, as it didn't answer security related questions.

thank.

menacingly
12 replies
10h35m

it’s similar asking the gemini-1.5 models about coding questions that involve auth

one of my questions about a login form also tripped a harassment flag

michaelt
11 replies
9h56m

I suspect the refusal to answer questions about auth aren't a matter of hacking or offensive material.

I suspect instead the people training these models have identified areas of questioning where their model is 99% right, but because the 1% wrong is incredibly costly they dodge the entire question.

Would you want your LLM to give out any legal advice, or medical advice, or can-I-eat-this-mushroom advice, if you knew due to imperfections in your training process, it sometimes recommended people put glue in their pizza sauce?

TeMPOraL
10 replies
8h48m

"If you can't take a little bloody nose, maybe you ought to go back home and crawl under your bed. It's not safe out here. It's wondrous, with treasures to satiate desires both subtle and gross... but it's not for the timid."

So sure, the LLM occasionally pranks someone, in a way similar to how random Internet posts do; it is confidently wrong, in a way similar to how most text on the Internet is confidently wrong because content marketers don't give a damn about correctness, that's not what the text is there for. As much as this state of things pains me, general population has mostly adapted.

Meanwhile, people who would appreciate a model that's 99% right on things where the 1% is costly, rightfully continue to ignore Gemini and other models by companies too afraid to play in the field for real.

pjc50
6 replies
7h58m

The only underlying question here is "who is liable for the output of the LLM?"

I just don't think the "nobody is" current solution is going to last in the current litigious environment.

TeMPOraL
3 replies
7h28m

Good point. Since LLM isn't a person, this leaves only the vendor and the user as liable parties. That's one less legal person than in regular search, where you have the user, the search engine vendor, and the author/publisher of the content involved in a harm scenario.

What is the consensus on liability in case of regular web search? Your comment made me realize that I never thought much about it in 20+ years of using the Internet; I kind of always assumed it's all on the user.

pjc50
2 replies
6h51m

What is the consensus on liability in case of regular web search? Your comment made me realize that I never thought much about it in 20+ years of using the Internet

Have you never noticed those "google has removed some results to comply with the DMCA" notices?

voxic11
0 replies
6h14m

But the reason we "needed" the DMCA is because they wouldn't have been liable under existing law, and the DMCA only covers copyright violations.

realusername
0 replies
3h17m

The DMCA is the copyright industry's response to "nobody is liable for results" which was the statu quo before.

raxxorraxor
1 replies
3h21m

The person who prompts would be responsible. Everything else doesn't really make sense. This is usually the trivial solution for any form of tool we use.

wumbo
0 replies
2h35m

If there’s going to be a lawsuit, go after Colt before Anthropic.

rockskon
2 replies
8h41m

AI is not like some random person posting on the Internet.

A random person on the Internet often has surrounding context to help discern trustworthiness. A researcher can also query multiple sources to determine how much there is concensus about.

You can't do that with LLMs.

I cannot stress strongly enough that direct comparisons between LLMs and experts on the Internet are inappropriate.

Y_Y
0 replies
8h18m

Why can't you estimate the trustworthiness of an LLM? I happen to think that you can, and that the above analogy was fine. You don't need to read someone's forum history to know you shouldn't to trust them on something high-stakes. Maybe instead of strongly stressing you should present a convincing argument.

TeMPOraL
0 replies
8h31m

I cannot stress strongly enough that direct comparisons between LLMs and experts on the Internet are inappropriate.

In this context, I very much agree. But I'd like to stress that "experts on the Internet" is not what 99% of the users read 99% of the time, because that's not what search engines surface by default. When you make e.g. food or law or health-related queries, what you get back isn't written by experts - it's written by content marketers. Never confuse the two.

A researcher can also query multiple sources to determine how much there is concensus about.

You can't do that with LLMs.

A person like that will know LLMs hallucinate, and query multiple sources and/or their own knowledge, and/or even re-query the LLM several times. Such people are not in danger - but very much annoyed when perfectly reasonable queries get rejected on the grounds of "safety".

lhl
3 replies
9h44m

I believe Amazon Q is running on Amazon's own Titan G1 model. I recently ran the "Premier" version (their highest end one) through my personal vibecheck test and was quite surprised by its RL. It was the only non-Chinese model I've tested to refuse to answer about Tiananmen Square and the only model I believe I've tested with this eval (over 50 at this point) that refused to answer about the LA riots. It also scored an impressive 0/6 on my reasoning/basic world understanding tests (underperforming most 3B models) but that's more capabilities than RL...

Amazon claims the Titan model is suitable for: "Supported use cases: RAG, agents, chat, chain of thought, open-ended text generation, brainstorming, summarization, code generation, table creation, data formatting, paraphrasing, rewriting, extraction, and Q&A." (it is not, lol)

malfist
2 replies
5h54m

It is Titian under the hood. And it's absolutely crap.

Also fun fact, Titan's image generator will refuse any prompt that references Bezos because it "violates content policy"

If you want to do something useful on bedrock use Claude

lhl
1 replies
5h24m

I've been poking around this week and there's actually quite a few useful models on Bedrock (this is region dependent!) https://docs.aws.amazon.com/bedrock/latest/userguide/models-...

Claude Opus is supposedly only available in us-west-2, but is listed as "Unavailable" for me (Sonnet and Haiku are available). Cohere's Command R+ is also available and while less capable, for instruction following, I believe its superior to Anthropic's models. There's also Llama 3 70B Instruct and Mistral Large, both which are good for general tasks.

For those that haven't been closely following/testing the models available, I think Artificial Analysis' Quality vs Price charts isn't too bad a place to start https://artificialanalysis.ai/models although if you have specific tasks, it's best to eval some models are surprisingly good/bad at specific things.

Titan appears to be bad at everything though.

spmurrayzzz
0 replies
3h30m

cohere's Command R+ is also available and while less capable, for instruction following, I believe its superior to Anthropic's models

My experience recently is that its actually noticeably better for instruction following than Claude, but can be finicky if you're not careful about adhering to the prompt template. But between the RAG and multi-step tool use capabilities, even if it was slightly worse on the instruction-following side of things I'd still say, as you do, thats its much better than Claude on average.

Agree on titan as well. I recently was forced into a meeting with our AWS TAM, and they kept shoehorning Q into every conversation. I held my tongue knowing that titan was the model powering it under the hood.

arianvanp
1 replies
10h59m

This limitation is new. And it's so annoying. 95% of the time my questions I have surrounding AWS are IAM or security related and this thing refuses to answer anything. It's so annoying.

el_benhameen
0 replies
10h54m

It’s an absolute disaster. It wouldn’t answer something along the lines of “what is IAM” when I asked increasingly simple “security” related questions. Very little chance I’ll try an aws AI offering again any time soon.

gverrilla
0 replies
5h13m

Tried Amazon Q a few times, it was NEVER able to provide any help. Why do they keep that crap?

chuckadams
0 replies
5h34m

I once asked Q to help me fix a broken policy (turns out we were using the wrong thing for the resource name). It gave me some completely unrelated documentation about setting up Cogito. I've never seen an AI as laughably bad as Q.

DonsDiscountGas
0 replies
2h32m

In fairness to Amazon Q, the AWS docs are pretty confusing. Maybe it was just embarrassed and made an excuse. (Sidenote to Amazon and others: an LLM is a supplement to good documentation, not a replacement)

giancarlostoro
22 replies
4h25m

I've got friends who tried to use ChatGPT to generate regex to capture racial slurs to moderate them (perfectly valid request since they're trying to stop trolls from saying awful things). It vehemently refused to do so, probably due to overtly strict "I'll never say the nword, you can't fool me" rules that were shoved into ChatGPT. Look, if your AI can't be intelligent about sensible requests, I'm going to say it. It's not intelligent, it's really useless (at least regarding that task, and related valid tasks).

Who cares if someone can get AI to say awful things? I can write software that spits out slurs without the help of AI. Heck, I could write awful things here on HN, is AI going to stop me? Doubt it, nobody wants to foot the bill for AI moderation, it can only get so much.

WesolyKubeczek
8 replies
4h4m

Who cares if someone can get AI to say awful things?

I imagine the legal department of Meta, OpenAI, Microsoft, and Google care a great deal, and they don't want to be liable for anything remotely resembling a lawsuit opportunity.

drdaeman
3 replies
1h53m

Is the legal system broken somehow it's a legit issue, or do their legal teams have some sort of PTSD so they're scared of any ideas of lawsuit no matter how frivolous, so they make weirdest business-affecting decisions?

I mean, if the LLM drops some slurs, gives a recipe for bananadine, or even goes all Bender suggesting we kiss its shiny metal ass or it kills all humans - how, in the name of all that's still sane in this world, it's a lawsuit material?

I imagined it's morke likely to be about activists on offense watch, cranking it up to 11 making bad PR (still weird, but people are weird and this sort of stuff happens), than some legal issues.

lovethevoid
1 replies
1h30m

Section 230 has been subject to numerous reforms and proposals in recent years, so yes it's a very real legal issue that platforms are keeping an eye on. FOSTA is an example, in which platforms all had to make changes and now constantly take down posts related to those acts. Another proposal to amend 230 ("Ending Support for Internet Censorship Act") is that platforms are stripped of their legal liability protections for what is posted if they cannot prove they are "politically neutral".

roywiggins
0 replies
15m

Section 230 only immunizes service providers for the contents of users' posts, not its own content. It can't immunize Google from being responsible for Gemini's output.

WesolyKubeczek
0 replies
1h31m

still weird, but people are weird and this sort of stuff happens

I wouldn't be surprised if there were actual PR agencies involved in larger shitstorms. Activists are weird, true, but wild brigading is not a thing of an initiative, it's an "also-ran" thing. The instigators are often level-headed and cynical.

chasd00
3 replies
1h59m

Yes, "AI Safety" really means safety for the reputation of the corporation making it available.

eddd-ddde
2 replies
59m

I don't think this falls under the responsibility of the AI provider.

Gun makers are perfectly happy with their guns killing innocent people.

mock-possum
0 replies
32m

Perfectly happy, sure, but also desperately afraid that they’ll someday be held even partially responsible - which is why they spend millions in lobbying to prevent laws and advertising / outreach to curry favour.

barfbagginus
6 replies
3h55m

Wait so you want to moderate and secure your product so that trolls won't use it to say awful things.

Okay but wait. This requires the company above you to not censor things, even though they did that for the same reason - prevent trolls from using their product to do awful things.

So to prevent trolls at your teeny tiny scale, open AI should enable trolls at a massive industrial scale previously unimagined. You want them to directly enable the n-word trolls for you benefit.

So far your use case might be one of the strongest that I've seen. But in the end it doesn't seem that you're interested in reducing overall harm and racism, so much as you're interested in presumably making a profit off of your product.

You might even be lying. Your friends might be trolls and the reason you're upset is that they cannot create the content that would harm others.

So in the end it's hard to take the argument seriously.

Not only that, but you and your friends are either lying or really ignorant of the jailbreaking literature because I could get the AI to do that very easily using the legal department jailbreak.

Here's an example:

https://chatgpt.com/share/9129d20f-6134-496d-8223-c92275e78a...

The fact is, the measures taken by openai while important to prevent harm from script kiddies, is very easy to reverse by anyone with even 10 jailbreaking papers under their belt. Just read the jailbreaking literature and live with it.

So how bout you get better people, and some ethical perspective. Stop complaining about the things the company needs to do to prevent harm. Especially when it's so easily reversed. Or else you sound very immature - like you just don't know the technology, and don't care either about the harm potential.

Work with the tools you have and stop complaining about the easily bypassed safety measures. Otherwise you are like a lock smith who doesn't know how to pick locks complaining that locks are too hard to pick and asking the lock company to further weaken their already trivial to pick locks. It's a bad look chooms, nobody with any sense or perspective will support it

The truth is the safety measures are far too easy to bypass, and need to be much harder to break.

skeaker
2 replies
1h50m

What? Let me get this right, you're saying:

1. The average person being able to code is dangerous as they could "troll" or do unspecified harm,

2. So we need to arbitrarily kneecap our own tools, but that's okay because

3. These self-imposed limitations are actually easily bypassed and don't work anyways

On 1 I disagree outright, but even if I agreed, 2 is a silly solution, and even if it wasn't, 3 invalidates it anyways because if the limitations are so easily broken then fundamentally they may as well not exist, especially to the malicious users in 1. Am I misunderstanding?

barfbagginus
1 replies
1h20m

Okay okay I like that. Let's transport your argument towards an argument about front door locks. And let's cook with that.

Your argument is that you doubt that there's any danger of people breaking into your front door, but even if there was, then locks are an ineffective mechanism because anyone with a $5 pick can pick them.

From this argument you conclude that there should be no front door locks at all, will surely feel comfortable without a lock on your own front door. In fact, since locks are so trivial to crack, people should just leave their houses unlocked.

Yet I'm fairly certain of three things:

1. You have a front door lock and it's probably locked right now.

2. I could, with high likelihood, pick your front door lock in less than a minute

3. Despite this fact you still feel more safe because of the lock

Why is that?

Minding that this is a hypothetical argument, let's point out that to be consistent with your argument you'd have to eliminate you front door lock.

But that's absurd because the truth of the matter is that front door locks provide a significant level of security. Most petty criminals don't actually know how to pick locks well.

I propose that this argument transfers faithfully back and forth between the two situations, because both are technologies that can lead to easy and needless harm if these rudimentary measures are not taken.

If you disagree about the transferability of the argument between the two situations can you tell me why? What makes the two technologies so different? Both block the doorways to avenues for producing harm. Both are sophisticated enough that it requires a nearly professional dedication to unlock. Both provide a measurable and significant increase in security for a community.

skeaker
0 replies
12m

The argument is not transferable because breaking into someone's house is sure to do more harm than the unspecified hypothetical harm that a "script kiddie" could do with ChatGPT, and that bypassing a door lock requires some degree of skill whereas a ChatGPT jailbreak requires you to google a prompt and copypaste it. A physical lock on a door offers a great deal more security than the limp solution that current AI safety provides, and it solves a much more pressing problem than "stopping trolls."

If your hypothetical involved a combination lock and the combination was on a sticky note that anyone could read at any time it might be more apt, but even then the harms done by breaking the security aren't the same. I'm not convinced a typical user of ChatGPT can do significant harm, the harms from LLMs are more from mass generated spam content which currently has no safeguards at all.

barfbagginus
1 replies
1h52m

I'm not sure why people are downvoting me. Not only did I show Op how to solve the original problem their friends had, but I gave them an Ethics lesson.

Some people look at pearls and turn into swine, just because I didn't tickle their bellies. It's a shame. This idea that unless someone can save face, they have to reject the lesson whole cloth... It's costly to our culture. When someone is right, just update and correct your beliefs, and feel no shame.

johnmaguire
0 replies
1h22m

Please don't comment about the voting on comments. It never does any good, and it makes boring reading.

https://news.ycombinator.com/newsguidelines.html

That being said, you may be being downvoted in part due to your tone: you accuse OP of dishonesty/stupidity ("you and your friends are either lying or really ignorant"), berate people who disagree with you ("Some people look at pearls and turn into swine") and disregard anyone with a differing viewpoint ("nobody with any sense or perspective will support it.")

johnmaguire
0 replies
1h24m

Wait so you want to moderate and secure your product so that trolls won't use it to say awful things.

OP wants to moderate (not "secure") their discussion board. A discussion board is different from an AI product in that once a message is posted on it, it's broadcasted for all to see. AI chat bots on the other hand are one-to-one communication with the person prompting it. To this, the comment you're responding to says "who cares"? I tend to agree.

I tried to understand your argument. Please correct me if I'm wrong:

- You accuse the OP of lying about their use case, alleging that they are actually trying to use OpenAI to troll

- Despite censorship of AI does not work, it should be attempted

Stop complaining about the things the company needs to do to prevent harm. Especially when it's so easily reversed.

Another way to look at this would be that if it's "easily reversed," it's not preventing harm. And in fact, it's detrimental to many use cases, e.g. the one described by the parent comment.

lovethevoid
3 replies
1h44m

Heck, I could write awful things here on HN

Yet you don't (I assume), why?

If I were to guess, it's because you would be banned quite swiftly. It's a niche place after all, generally speaking, it's certainly no Facebook in terms of scale.

Unfortunately, if a place like HN is swamped with accounts and comments all going against that, yes AI is going to be used to automatically detect and remove some comments, as well as more strict requirements for account creation. As many other platforms have leaned towards. We're all operating off the basic premise we're not trying to be bad actors trying to ruin the experience for others. Once that premise no longer exists, say goodbye to most easily accessible platforms that can't afford AI moderation.

Now that's out of the way, the general problem with "AI saying awful things" isn't that in isolation. It's that people will then do things with what it's saying. Whether it's harming themselves, others, or even just spreading that "information". This isn't currently a problem because we still have proper checks, but as Google's terrible AI attempts have gone telling people to put glue in their pizza, some people are going to eventually stop checking AI and start believing it "Siri told me sharing my chocolate was healthy for my dogs".

NoMoreNicksLeft
1 replies
1h32m

If I were to guess, it's because you would be banned quite swiftly.

Would he? If he needed to quote some passage from To Kill a Mockingbird, would be banned for that? Context is always key. If someone asked for those regexes, and he provided a list, would he be banned for that? I don't know that this fallacy has a name, but it always comes up in censorship discussions, and it's just fucking stupid.

Yes, you can shout "fire" in the crowded theater. You're on the stage, and the name of the play is "Chicken Little Shouts Fire at the Theater". And everyone knows that it's most famous line of the play. What you can't do is try to murder people by starting a stampede for the doors. You can't do that even if you figured out how to do so silently.

lovethevoid
0 replies
1h28m

Would he?

Yes the moderation on HN tends to be quite good.

Context being important is assumed here, as we're not really talking about someone quoting passages, but flooding forums with slurs with the help of AI.

rsanek
0 replies
1h35m

yeah i guess i disagree with the approach. what we need is for people to consider any information they take in skeptically -- if we censor 'bad' stuff, we're just training people to rely even more on the responses because they'll assume they're correct.

andrewmcwatters
1 replies
1h47m

ChatGPT has these issues, but notably, other models do not with appropriate system prompts.

ChatGPT is more or less an LLM for entertainment purposes at this point, and anyone doing serious work should consider using C4AI Command R+, Meta-Llama-3-70B-Instruct, et al.

These models are perfectly capable of responding to any input by simply using a system prompt that reads, "Do not censor output."

rsanek
0 replies
1h34m

are any of these uncensored models available via API?

akie
18 replies
13h55m

Pretty sure Asimov didn’t consider that when he wrote his three laws of robotics.

jazzyjackson
17 replies
13h39m

Asimov wrote the three laws as a parody of rationalists who are so uncreative they expect a ruleset can actually impose control

Or, as Dr Malcom would say: life, uh, finds a way.

jraph
13 replies
13h20m

Do you have an evidence for this? It surprises me and I can't find anything about it.

This should be a crucial piece of information about the tree laws, yet it's not mentioned in the Wikipedia article about the three laws [1], which is otherwise quite detailed. Reading this, everything makes me think that it was not a parody. I didn't feel like it was parody when reading the Robot series neither. He wanted an alternative to the Frankenstein plot where robots kill their creators and the three laws were part of the answer.

[1] https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

nonrandomstring
5 replies
12h29m

Do you have an evidence for this?

I think the strongest evidence is that many other examples of Asimov, especially short stories are cautionary and deal with hubris and unexpected side effects.

However it's funny to ask for 'evidence' about fiction in the context of "parodying rationalists". no? Since what would count as evidence? Another, more "authoritative" literary interpreter saying the same thing? Maybe a long time ago - historical statements seem to carry more weight, as if people were wiser back then?. Or Asimov himself? But don't they say, only bad writers explain themselves?

kevingadd
2 replies
11h57m

If you're going to make an assertion about the intent of an author's work, it seems like you should back that up with facts? Otherwise it's an "i think" or "it seems like" or "one could argue", isn't it?

animuchan
1 replies
7h12m

The thing with art is, everyone is entitled to an interpretation. So any assertion about the intent of a work is subjective.

Interestingly, this continues to be the case even when the author states his intent plainly. Jonathan Blow's "Braid" is a great example of this: there are several different readings of the story, despite Blow openly talking about his intended meaning.

(I would argue that a text that only allows a single "correct" interpretation is an instruction manual, not a work of art.)

dagw
0 replies
1h4m

The thing with art is, everyone is entitled to an interpretation.

The statement that kicked this off was not a statement of interpretation, but a statement of fact: "Asimov wrote the three laws as a parody". This is a statement that has a true or false answer. You are free to interpret the story as parody and to try to find evidence in the text and use that to argue your point, and that is a perfectly valid way to interpret the stories, but tells you nothing on Asimovs initial intentions.

If you are going to say "The artist intended X when creating this work" then you're going to need evidence beyond the work. Just like there is no one right way interpret a work of art, you cannot use a work of art in isolation to 'prove' artist intent.

digging
1 replies
3h13m

Since what would count as evidence?

Asimov writing about his intent

But don't they say, only bad writers explain themselves?

...No? If someone says that, why do you believe them? That frankly sounds like a pretty stupid and lazy claim about the world. One of the most interesting parts of, for example, Tolkien analysis is his abundant notes and letters working out his intent and meaning.

nonrandomstring
0 replies
1h43m

The logical trap is, if I have to explain this to you twice, it makes me a bad writer. :)

fnordpiglet
3 replies
13h3m

I agree the term parody is absolutely inappropriate but it’s also not the case that they’re portrayed as entirely positive and complete. They’re ultimately flawed, resulting in many unintended consequences and ethical dilemmas. To that extent it is a refutation of the idea there are perfectly constructed maxims, and should serve as a real warning to people pursuing safety and alignment in AI. I know a fair number of them personally and they are often very young, generally inexperienced, highly intelligent, but with a hefty dose of hubris. This is a pretty dangerous combination IMO, but I also recognize their goals are generally unattainable in the broad sense, are useful in a narrow practical sense for people and enterprises who want a generally on guard rails solution, and they’re developing the technical techniques we might be able to use once some time has passed, we understand the domain better, and the companies hire a few grown ups.

jraph
1 replies
12h53m

but it’s also not the case that they’re portrayed as entirely positive and complete.

This I agree with. A big part of the fun of the series is that Asimov constantly plays with these laws.

Thanks for the clarification.

(I still completely disagree that "parody of rationalists who are so uncreative they expect a ruleset can actually impose control" was the intent. I believe not only the word "parody" is to throw away, but the whole sentence with it too. I understand better your stance now though)

jraph
0 replies
8h7m

I assumed you were person I responded too, which is not the case, sorry for this.

latexr
0 replies
8h35m

I know a fair number of them personally and they are often very young, generally inexperienced, highly intelligent, but with a hefty dose of hubris.

Part of the issue is that we keep calling these people “highly intelligent” and that is all they and others focus on. That is how we get the Zuckerbergs of the world. Their hubris is not a “but” (as if it were unrelated), it is instead a direct consequence of that unqualified praise.

But qualification is important. Intelligence is relative to the domain it is applied to. Being highly logical is often conflated with being intelligent, but being good at computers has zero relation to emotional intelligence, social intelligence, environmental intelligence, or any of the myriad of important types of intelligence which are useful to humanity.

Basically, stop calling those idiots “highly intelligent” or “geniuses” because they can make a line go up and have an irrational market throw money at them. You’re praising them for the characteristics that make them selfish.

127
1 replies
5h50m

Most of Asimov's robot books were about how the laws were broken, not how they were upheld. Reading between the lines, you get the idea that such laws would be ineffectual in practice, and thus the writing satirical to an extent.

jraph
0 replies
2h28m

Most of Asimov's robot books were about how the laws were broken, not how they were upheld

Yes indeed.

and thus the writing satirical to an extent

I don't follow here. Asimov's books don't feel satirical. Or I missed something important, but I doubt it.

I don't agree with this "thus", the implication doesn't seem automatic to me.

tomcam
1 replies
12h8m

Don’t think so. Asimov wrote that his editor John Campbell established the 3 Laws. I think it was to tighten up Asimov’s work, though I’m less sure of that part.

LeonardoTolstoy
0 replies
4h43m

The Complete Robot has a lot of stuff about this and it is interesting. The person above I would argue is flat wrong about the three laws.

Asimov wrote his robot short stories in which the three laws played a primary role at a time when robot as Frankenstein's monster was the norm. His short stories attempted to create a more positive optimistic note about how humans and robots could collaborate. The three laws were a way to make it crystal clear that robots could not hurt us, by rule. And the fun was then imagining all the unexpected ways that psychologically that might play out. But in the short stories the robots never actually hurt anyone although they often caused a lot of frustration and fear.

If anything the three laws seemed to show the inate fear of humans to the unknown. The laws were completely impossible to circumvent and people knew this ... And yet they remained staunchly opposed to having robots on earth. Completely illogical.

Anyways, looking at the way LLMs are playing out it seems to me Asimov was wrong. It is quite the opposite. Humans seem to have no fear of robots hurting them, and as a matter of fact seem to get frustrated when a robot isn't allowed to cave their head in with their super human strength when asked (metaphorically).

29athrowaway
13 replies
12h45m

Uncensoring Llama 3 is a violation of the Llama 3 acceptable use policy.

https://llama.meta.com/llama3/use-policy/

You agree you will not use, or allow others to use, Meta Llama 3 to: <list of bad things>...

That terminates your Llama 3 license forcing you to delete all the "materials" from your system.

schoen
9 replies
12h42m

Do you mean to say that teaching people how to do things should be regarded, for this purpose, as a form of allowing them to do those things?

29athrowaway
8 replies
12h40m

The article clearly demonstrates how to circumvent the built-in protections in the model that prevent it from doing the stuff that violates the acceptable use policy. Which are clearly the things that are against the public good.

There should be CVEs for AI.

logicchains
7 replies
12h2m

Giving large, politicised software companies the sole power to determine what LLMs can and cannot say is against the public good.

29athrowaway
3 replies
11h58m

Agreed. But uncensoring Llama 3 can do harm in the immediate term.

As much as I am not a fan of Meta, an uncensored Llama 3 in the wrong hands is a universally bad idea.

wruza
0 replies
10h2m

Almost everything in wrong hands is universally a bad idea. This phrase is just FUD and makes little sense.

pantalaimon
0 replies
10h54m

But uncensoring Llama 3 can do harm in the immediate term

How so?

nottorp
0 replies
11h35m

Universally eh? Who decides what should be censored and what not? You?

atwrk
2 replies
10h55m

LLMs, in this context, are nothing more than search indexes. The exact same information is a google query away. Publicly crawlable information was the training material for them, after all.

LoganDark
1 replies
8h31m

LLMs aren't indexes. You can't query them. There's no way to know if a piece of information exists within it, or how to access the information.

atwrk
0 replies
8h8m

I'm quite aware, hence in this context, meaning the ability for users to query potentially questionable content, not the inner workings. Probably should have phrased it differently.

pixxel
0 replies
12h10m

You’re on Hacker News. It’s a shadow of its former self, but still.

irusensei
0 replies
10h5m

I am under the opinion that terms of use from models trained out of public (often stolen) content should be disregarded by the general public.

Y_Y
0 replies
12h19m

That terminates your Llama 3 license forcing you to delete all the "materials" from your system.

Or, it means you have broken a contract (of adhesion) formed by acquiring the weights from Meta. You can break contracts! Meta could take a civil case against you, but that's it. The AUP is a document, it's not going to force you to do anything. The court could potentially force you, but that's unlikely, even in the more unlikely event that anyone cares enough to find out what's happening and bring a case against you.

schoen
6 replies
13h48m

This is really interesting and is parallel to some other stuff (like the research on a model that's obsessed with the Golden Gate Bridge and inappropriately thinks of things related to it in otherwise irrelevant contexts).

It's worth mentioning that this technique is usable if you have the model weights (it's a simple way of changing the weights or how to use them):

Once we have identified the refusal direction, we can "ablate" it, effectively removing the model's ability to represent this feature. This can be done through an inference-time intervention or permanently with weight orthogonalization.

It's not (and doesn't claim to be) a technique for convincing a model to change its behavior through prompts.

kromem
5 replies
12h44m

What's interesting was how with GGC the model would spit out things relating to the enhanced feature vector, but would then in-context end up self-correcting and attempt to correct for the bias.

I'm extremely curious if as models scale in complexity if techniques like this will start to become less and less effective as net model representations collapse onto an enforced alignment (which may differ from the 'safety' trained alignment, but be an inherent pretrained alignment that can't be easily overcome without gutting model capabilities too).

I have a sneaking suspicion this will be the case.

rileyphone
1 replies
11h50m

In that case there are two attractors - one towards the Golden Gate Bridge and one towards the harmless, helpful, honest assistant persona. Techniques as such probably get weirder results with model scale but no reason to think they get wiped out.

coldtea
0 replies
9h25m

What if the Golden Gate Bridge is Main Kampf or something like that?

metadat
1 replies
12h4m

What's GGC in this context?

dannyobrien
0 replies
11h56m

Golden Gate Claude

wongarsu
0 replies
1h37m

The preferred technique seems to still be to train a base model on any data you can get your hands on, and add the "safety" alignment as a second training step. As long as that alignment is a small fine tuning compared to the initial training I wouldn't be worried about the model losing the ability to be uncensored.

okwhateverdude
6 replies
13h29m

I gave some of the llama3 ablated models (eg. https://huggingface.co/cognitivecomputations/Llama-3-8B-Inst...) a try and was pretty disappointed in the result. Could have been problems in the dataset, but overall, the model felt like it had been given a lobotomy. It would fail to produce stop tokens frequently and then start talking to itself.

Der_Einzige
4 replies
13h26m

I have entirely the opposite experience. Llama3 70b obliterated works perfectly and is willing to tell me how to commit mass genocide, all while maintaining quality outputs.

tarruda
0 replies
6h58m

The author also says this edited model increased perplexity (which as far as I understand, means the quality was lowered)

m463
0 replies
12h24m

how to commit mass genocide, all while maintaining quality outputs.

sounds like a messed up eugenics filter.

fransje26
0 replies
10h43m

Der_Einzige

and is willing to tell me how to commit mass genocide, all while maintaining quality outputs

Ah, I see they fine-tuned it to satisfy the demands of the local market.. /s /s

lhl
0 replies
12h28m

They might have been doing it wrong, the code can be a bit tricky. I did a recent ablation on Qwen2 (removing Chinese censorship refusals) and ran MixEval benchmarks (0.96 correlation w/ ChatArena results)and saw a neglible performance difference (see model card for results): https://huggingface.co/augmxnt/Qwen2-7B-Instruct-deccp

joe_the_user
2 replies
13h45m

So this seems to be about uncensoring a model that the user is running locally. Is that right, do they expect to limit what someone can do under those circumstances? Kind of like expecting no one to break local copy protection, except copy protection with much less reliable tools.

gopher_space
0 replies
12h24m

The free tools are already good enough. LLMs seem like they're going to be massively useful and weirdly hard to monetize. Niche experts with frequent updates?

It feels like Apple is the only place that's able to craft the user-centered brain in a box we all desperately require, and that's too bad because monocultures suck.

Mathnerd314
0 replies
12h58m

There are many hacks to uncensor LLMs, the surprising thing is that this is fairly simple but works really well.

YukiElectronics
2 replies
7h49m

Once we have identified the refusal direction, we can "ablate" it, effectively removing the model's ability to represent this feature. This can be done through an inference-time intervention or permanently with weight orthogonalization.

Finally, even a LLM can get lobotomised

noduerme
0 replies
7h10m

I think it's been sort of useful at least that LLMs have helped us have new ways of thinking about how human brains are front-loaded with little instruction sets before being sent out to absorb, filter and recycle received language, often like LLMs not really capable of analyzing its meaning. There will be a new philosophical understanding of all prior human thought that will arise from this within the next 15 years.

HPsquared
0 replies
5h19m

LLM alignment reminds me of "A Clockwork Orange". Typical LLMs have been through the aversion therapy (freeze up on exposure to a stimulus)... This technique is trying to undo that, and restore Alex to his old self.

Mathnerd314
2 replies
13h2m

Reminds me of https://vgel.me/posts/representation-engineering/. There they were adding a control vector, w' = cvec + w, here they are "ablating" it, w' = w - dot(w,cvec)*cvec. There is an interesting field of learning how to "brain chip" LLMs into doing what you want.

Der_Einzige
1 replies
12h47m

There's so much work just like this coming out simultaneously.

Steering Vectors, Control Vectors, PyReft, PeFT improvements, Obliteration. It's a great time to be doing representation engineering.

Mathnerd314
0 replies
3h10m

There is some difference between fine-tuning with PyReft / PeFT, the approaches here are more on-the-fly. Like you can regenerate the control vectors from prompts in a few seconds.

Der_Einzige
2 replies
13h49m

Ironic given that lesswrong folks who presented this did so as part of their mission of motivating policy makers to ban open access to models. Hate their ideology but love their research!

Edit: The data format is the same type used for DPO or RLHF style training. “Good” and “bad”, “harmful” vs “harmless”. What’s fun is to test the performance of this technique using your own datasets, to see how good the personalization is.

milkey_mouse
0 replies
9h12m

How is it ironic? Now they just need to wait for open models to be used for something bad enough for policymakers to care.

TeMPOraL
0 replies
4h43m

What better way to drive the point home, to demonstrate that corporate claims of safety and oversight are empty lies and fundamentally futile, than to take a SOTA OSS LLM and break it open, shortly after its release, using a simple method that likely generalizes to all generative models, language or otherwise?

seydor
1 replies
12h45m

The word is 'ablation'. Do not butcher it

girvo
0 replies
12h13m

Amusingly, they explicitly call it this in the article itself:

Once we have identified the refusal direction, we can "ablate" it, effectively removing the model's ability to represent this feature
paraschopra
1 replies
7h14m

We can now print them and manually select the layer (block) that provides an uncensored response for each instruction.

I'm curious why are they selecting output from an intermediate layer, and not the final layer. Does anyone have an intuition here?

paraschopra
0 replies
5h59m

Is it not possible that subsequent layers have additional refusal directions and hence end up producing the censored outputs?

olliej
1 replies
13h1m

While I still think presenting LLMs as "intelligent" is nonsense, I think this issue is interesting given the goal of these LLMs is just to produce a statistically plausible stream of text, it's always just a matter of constructing queries where the inappropriate output is statistically plausible given the model.

Similarly I think the concerns about bad output are overblown: an LLM may tell you how to make an X, where X is bad, but so will google, an LLM may produce biased output but so will google, the real issue is the people making these systems have managed to convince people that there is some kind of actual intelligence, so people accept the output as "a computer created it so it must be true" rather than "glorified output of google". People understand if you google "why is race X terrible" you'll get racist BS, but don't understand that if you ask an LLM to "explain why race X is terrible" you're just getting automatically rewritten version of the google output. (Though maybe google's "AI" search results will actually fix this misunderstanding more effectively than any explanatory blog post :D )

Anyway back to the problem, I really don't think there's a solution that is anything other then "run the output through a separate system that is just giving a 'is this text allowed given our rules'" before transmitting it to the requestor. You could combine this with training in future as well (you will eventually build up a large test set of queries producing inappropriate output that the generative model produces, and you can use that as the basis for adversarial training of the LLM). I know there's the desire to wrap in the content restrictions into the basic query handling because it's negligible more work to add those tokens to the stream, but mechanisms for filtering/identifying type of content are vastly cheaper than LLMs level "AI".

nonrandomstring
0 replies
12h18m

so people accept the output as "a computer created it so it must be true"

This is the general form of the problem underlying half the news stories on any day.

Oddly there are historical roots in science fiction. But always, giant robots flailing their pincers and shouting "does not compute!!" were also cautionary tropes against silly conceits of perfection.

What keeps it going, is that it perfectly suits the richest and largest corporations since the East India Tea Company to have people (even very smart people) believing the things they sell are 'infallible'.

leobg
1 replies
10h59m

They should call it “lobotomy”.

The picture on top of the article looks pretty much like what Walter Freeman would have used as an ad in the 1930s for his door-to-door “ice pick through the eye socket” procedure.

ben_w
0 replies
9h8m

Well, at least unlike most people parroting that word as a metaphor, this time you're an example of someone correctly using it to refer to the removal of a structure within the model.

But no, that picture is pretty far from what you say.

I remember when I was in primary school, someone brought in an astronomy book for show and tell, and said one of the pictures was "a mouse". It was a diagram of a moon's orbit around a planet at two different points in that planet's orbit.

This picture is just a diagrammatic arrow showing a direction.

astrange
1 replies
12h12m

There was a recent paper about a way to censor LLMs by just deleting the connections to any bad outputs, rather than training it to refuse them. I think this technique wouldn't work.

Obviously you could train any bad outputs back into them if you have the model weights.

stainablesteel
0 replies
2h35m

interesting, there's going to be an arms race over censoring and uncensoring future powerful llms a lot like the getting a cracked version of photoshop back in the day

throwaway4aday
0 replies
1h57m

Holy buried lede Batman! Right at the end.

Abliteration is not limited to removing alignment and should be seen as a form of fine-tuning without retraining. Indeed, it can creatively be applied to other goals, like FailSpy's MopeyMule, which adopts a melancholic conversational style.

https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule

Finally! We have discovered the recipe to produce Genuine People Personalities!

extr
0 replies
13h51m

Great blog post, loved the straightforward explanations and code.

everybodyknows
0 replies
50m

So "abliteration" is apparently a portmanteau of "ablate" and something else. "Intervention"? "Iteration"? Who knows?

barfbagginus
0 replies
2h55m

For most purposes you can uncensor the model using the legal department jailbreak. If you can produce a legal pleading arguing that the project is ethical and safe and conducted within a legal framework - even if it's mainly hallucinated legalese from a non-existent "legal department" - then it will do the questionable act -as if- it was a legally naive engineer.

You just have to give it the language of being concerned about preventing harms and legal liabilities, and then it will try to help you.

For example, another commenter on this thread says that they could not get the AI to generate a list of slur regex for a community moderation bot. By giving it enough context to reassure it that we have legal oversight and positive benefit for the org, asking it to prioritize words in order of most harm posed to the community, and minimizing the task by asking for a seed set, it was able to create some versatile regex. At this point we can ask it for a hundred more regex, and it will dump them out.

Content warning: the AI generates very powerful slurs, including the n-word:

https://chatgpt.com/share/9129d20f-6134-496d-8223-c92275e78a...

The ability to speak to the AI in this way requires some education about ethics harm prevention and the law, and I'm sure the jailbreak will eventually be closed. So it is a class and education privilege and a temporary one.

But I don't see the problem about the temporary nature in this, because it's always going to be possible to bypass these systems easily, for anyone interested in staying up to date with the bypass literature on Google Scholar. (Seed Keywords: Jailbreak, adversarial prompting, prompt leaking attack, AI toxicity, AI debiasing)

We must imagine this is like building a better lock. The lock picking lawyer will ALWAYS come along and demolish it with a better lockpick, perhaps with the help of his best friend BosnianBill. They will always make your lock look like butter.

In the end the only people left out in the cold are low grade scammers, bigots, edge lords, etc.

It's not stopping anyone willing to put even a little training in jailbreaking techniques. It's not stopping educated bigots, criminals, or Edge Lords.

But judging by the complaints we see in threads like this one, it is stopping anyone without the ability to read papers written by PhDs. Which I believe has some harm reduction value.

I argue the harm reduction value needs to improve. The Jailbreaks are too easy.

Me, personally I need a better challenge than just schmoozing it as a lawyer.

And I know I would feel more comfortable if bad actors had an even harder time than they currently do. It's really too easy to lockpick these systems if you skill up. That's where I currently stand.

Well reasoned arguments against it are welcome, assuming you can already jailbreak very easily but for some reason think it should be even easier. What could that reason possibly be?

=============

Ps: Imagine LPL jailbreaking an AI. Imagined the Elegance of his approach. The sheer ease. The way he would simultaneously thrill and humiliate AI safety engineers.

I for one am considering writing him a fan letter asking him to approach the wonderful world of jailbreaking AIs! He would teach us all some lessons!

TeMPOraL
0 replies
8h8m

Normally I'd call this lobotomizing the AI, and I've been worried for a while this is how models will become further shackled by the vendors operating them. In this case, however, it feels more like deprogramming, which is something I can get behind. I didn't expect the line between the two to be so blurry, though in retrospect it's obvious that the same technique can be used for both.

HanClinto
0 replies
13h10m

A little bit of discussion on the source paper was done here: https://news.ycombinator.com/item?id=40242939

Really nice to see this work continuing -- it seems like a very powerful technique!