
Statement from Scarlett Johansson on the OpenAI "Sky" voice

anon373839
394 replies
19h29m

Well, that statement lays out a damning timeline:

- OpenAI approached Scarlett last fall, and she refused.

- Two days before the GPT-4o launch, they contacted her agent and asked that she reconsider. (Two days! This means they already had everything they needed to ship the product with Scarlett’s cloned voice.)

- Not receiving a response, OpenAI demos the product anyway, with Sam tweeting “her” in reference to Scarlett’s film.

- When Scarlett’s counsel asked for an explanation of how the “Sky” voice was created, OpenAI yanked the voice from their product line.

Perhaps Sam’s next tweet should read “red-handed”.

nickthegreek
262 replies
19h17m

This statement from Scarlett really changed my perspective. I used and loved the Sky voice, and I did feel it sounded a little like her, but moreover it was the best of their voice offerings. I was mad when they removed it. But now I’m mad it was ever there to begin with. This timeline makes it clear that this wasn’t a coincidence, and maybe not even a hiring of an impressionist (which is where things get a little more wishy-washy for me).

windexh8er
102 replies
15h44m

The thing about the situation is that Altman is willing to lie and steal a celebrity's voice for use in ChatGPT. What he did, the timeline, everything - is sleazy if, in fact, that's the story.

The really concerning part here is that Altman is, and wants to be, a large part of AI regulation [0]. Quite the public contradiction.

[0] https://www.businessinsider.com/sam-altman-openai-artificial...

ocodo
29 replies
14h59m

Altman has proven time and again that he is little more than a huckster wrt technology, and in business he is a stone cold shark.

Conman plain and simple.

lawn
15 replies
13h41m

You'd think that Worldcoin would be enough proof of what he is but I guess people missed that memo.

ben_w
12 replies
12h18m

Much as I dislike crypto, that's more of "having no sense of other people's privacy" (and hubris) than general scamminess.

It's a Musk-error not an SBF-error. (Of course, I do realise many will say all three are the same, but I think it's worth separating the types of mistakes everyone makes, because everyone makes mistakes, and only two of these three also did useful things).

lawn
7 replies
11h56m

It's not just about privacy either.

Worldcoin is centrally controlled, making it a classic "scam coin". Decentralization is the _only_ unique thing about cryptocurrencies; when you abandon decentralization, all that's left is general scamminess.

(Yes, there's nuance to decentralization too but that's not what's going on with Worldcoin.)

ben_w
6 replies
9h40m

True decentralisation is part of the problem with cryptocurrencies and why they can't work the way the advocates want them to.

Decentralisation allows trust-less assurance that money is sent; it's just that this isn't useful, because the goods or services for which the money is transferred still need either trust or a centralised system that can undo the transaction when fraud happens.

That's where smart contracts come in, which I also think are a terrible idea, but do at least deserve a "you tried!" badge, because they're as dumb as saying "I will write bug-free code" rather than as dumb as "let's build a Dyson swarm to mine exactly the same amount of cryptocurrency as we would have if we did nothing".

lawn
5 replies
8h23m

Decentralisation allows trust-less assurance that money is sent

That is indeed something it does.

But it also gives you the assurance that a single entity can't print unlimited money out of thin air, which is the case with a centrally controlled currency like Worldcoin.

They can just shrug their shoulders and claim that all that money is for the poor and gullible Africans that had their eyeballs scanned.

ben_w
4 replies
8h2m

But it also gives you the assurance that a single entity can't print unlimited money out of thin air, which is the case with a centrally controlled currency like Worldcoin.

Sure, but the inability to do that when needed is also a bad thing.

Also, single world currencies are (currently) a bad thing, because when your bit of the world needs to devalue its currency is generally different to when mine needs to do that.

But this is why economics is its own specialty, and not something that software nerds should jump into as if our experience with numbers counts for much :D

bavell
3 replies
4h4m

Sure, but the inability to do that when needed is also a bad thing.

When and why would BTC or ETH need to print unlimited money and devalue themselves?

ben_w
2 replies
3h56m

Wrong framing, currencies don't have agency. You should be asking when would you need your currency to be devalued, regardless of what it's called or made from.

And the answer to that is all the reasons governments do just that, except for the times where the government is being particularly stupid and doing hyperinflation.

bavell
1 replies
3h51m

Not a very convincing answer at all.

ben_w
0 replies
3h50m

What would a convincing answer look like?

pwdisswordfishc
2 replies
12h5m

that's more of "having no sense of other people's privacy"

Sufficiently advanced incompetence is indistinguishable from malice.

ben_w
1 replies
9h55m

It's not particularly advanced, it's the same thing that means the supermajority of websites have opted for "click here to consent to our 1200 partners processing everything you do on our website" rather than "why do we need 1200 partners anyway?"

It's still bad, don't get me wrong, it's just something I can distinguish.

Intralexical
0 replies
32m

If it fools billions of people and does significant damage to the lives of people, then it's plenty advanced to me, even if it happens through a more simple or savant-like process than something that looks obviously deliberate.

I don't think the cookies thing is a good example. That's passive incompetence, to avoid the work of changing their business models. Altman actively does more work to erode people's rights.

It's still bad, don't get me wrong, it's just something I can distinguish.

Can you? Plausible deniability is one of the first things in any malicious actor's playbook. "I meant well…" If there's no way to know, then you can only assess the pattern of behavior.

But realistically, nobody sapient accidentally spends multiple years building elaborate systems for laundering other people's IP, privacy, and likeness, and accidentally continues when they are made aware of the harms and explicitly asked multiple times to stop…

JoRyGu
0 replies
12h41m

Because of course he's got a crypto grift going. Shocking.

Intralexical
0 replies
13m

I think people tend to assume our own values and experiences have some degree of being universal.

So scammers see other scammers, and they just think there's nothing wrong with it.

While normal people who act in good faith see scammers, and instinctively think that there must be a good reason for it, even (or especially!) if it looks sketchy.

I think this happens a lot. Not just with Altman, though that is a prominent currently ongoing example.

Protecting yourself from dark triad type personalities means you need to be able to understand a worldview and system of values and axioms that is completely different from yours, which is… difficult. …There's always that impulse to assume good faith and rationalize the behavior based on your own values.

wraptile
9 replies
14h31m

Not going to lie, he had me. He appeared very genuine and fair in almost all the media he appeared in, like podcasts, but many of his actions are just so hard to justify.

svachalek
3 replies
14h8m

He has a certain charm and seeming sincerity when he talks. But the more I see of him, the more disturbing I find him -- he combines the Mark Zuckerberg stare with the Elizabeth Holmes vocal fry.

jesterson
1 replies
12h18m

So do all psychopaths, don't they?

johnnyanmac
0 replies
12h7m

CEOs have been found in studies to have a disproportionately higher rate of psychopathy, so there's a little correlation. You don't get to the top of a company in this kind of society without some inherent charm (assuming you aren't simply inheriting billions from a previous generation).

polotics
0 replies
11h32m

Do you have a link to a video of Altman's voice shifting from controlled deep to nasal? The videos of Elizabeth Holmes not being able to keep up with the faked deep tone are textbook-worthy...

kristiandupont
3 replies
12h48m

I have exactly the same feeling as I think you do. When you reach the levels of success he has, there will always be people screaming that you are incompetent, evil, and every other negative adjective under the sun. And he genuinely seemed to care about doing the right thing. But this is just so lacking in basic morals that I have to conclude I was wrong, at least to an extent.

wraptile
1 replies
11h30m

I feel that this is a classic tale of success getting to you. It almost feels like it's impossible to be successful at this level and remain true. At least, I haven't seen it yet.

Intralexical
0 replies
30m

Or "success" itself acts as a filter selecting for those who are ruthless enough to do amoral and immoral things.

gdilla
0 replies
2h30m

He just, to his credit, understands his public persona has to be the non-douchy-tech-bro and the media will eat it up. Much like a politician. He doesn't want to be like Elon or Travis K in public (though he probably agrees with them more than his public persona would imply).

JohnFen
0 replies
1h37m

This is why it's a mistake to go by "vibes" of a person when they're speaking to an audience. Pay attention to what they do, not what they say.

mlindner
0 replies
10h46m

I'm glad more people are thinking this. It's amazing that he somehow got his way back into OpenAI. I said as much, that he shouldn't go back to OpenAI, and got downvoted universally both here and on Reddit.

m000
0 replies
9h7m

Aspiring technofeudalist.

firebirdn99
0 replies
4h14m

Almost every tech CEO is like that. Could list many examples. It's an effect of capitalism.

vasilipupkin
18 replies
15h15m

If this account is true, Sam Altman is a deeply unethical human being. Given that he doesn't bring any technical know-how to the building of AGI, I just don't see the reason to have such a person in charge here. The new board should act.

ornornor
5 replies
14h40m

He has “The Vision”… It’s the modern entrepreneurship trope that lowly engineers won’t achieve anything if they weren’t rallied by a demi-god who has “The Vision” and makes it all happen.

azinman2
2 replies
14h33m

Probably not wrong. Lots and lots of examples of that being true.

safety1st
1 replies
11h27m

There is something to it. Someone has to identify the intersection between what the engineering can do and what the market actually wants, then articulate that to a broad enough audience. Engineers constantly undervalue this very fuzzy and very human centric part of the work.

I don't think the issue is that Vision doesn't matter. I think the issue is Sam doesn't have it. Gates and Jobs had clear, well-defined visions for how the PC was going to change the world; they rallied engineering talent around them and turned those visions into reality, and that's how their billions and those lasting empires were born. Maybe someone like Elon Musk is a contemporary example. I just don't see anything like that from SamA. We see him in the media, talking a lot about AI, rubbing shoulders with power brokers, being cutthroat, but where's the vision of a better future? And if he comes up with one, does he really understand the engineering well enough to ground it in reality?

azinman2
0 replies
3h17m

I don’t know enough about him or his vision. He doesn’t seem as clear as, say, Jobs was in the past. But I do look at all the amazing things OpenAI has done in a short period of time, and at the fact that the employees overwhelmingly backed him during the whole board chaos. He has also fundraised a lot of money for the company. It appears he’s doing more right than wrong, and OpenAI pulled everyone else’s pants down.

parpfish
1 replies
13h23m

I roll my eyes when somebody says that they’re “the idea person” or that they have “the vision”.

I’d wager that most senior+ engineers or product people also have equally compelling “the vision”s.

The difference is that they need to do actual work all day so they don’t get to sit around pontificating.

JohnFen
0 replies
1h33m

Ideas are a dime a dozen. The value of "idea men" isn't their ideas, it's their ability to rally people around them. It's the exact same skill that con men use for nefarious purposes.

silver_silver
2 replies
13h37m

It shouldn’t be forgotten that his sister has publicly accused him and his brother of sexually abusing her as a child.

lrvick
0 replies
10h32m

"Some commenters on Hacker News claim that a post regarding Annie's claims that Sam sexually assaulted her at age 4 has been being repeatedly removed."

Whelp. Let us see if this one sticks.

serial_dev
1 replies
11h52m

He must be bringing something to the table as they tried to get rid of him and failed spectacularly. Business is not only about technical know how.

surfingdino
0 replies
11h30m

Microsoft. They are protecting their investment.

jcranmer
1 replies
14h4m

I mean, there have already been some yellow flags with Altman. He founded Worldcoin, whose plan is to airdrop free money in exchange for retinal scans. And the board of OpenAI fired him for (if I've got this right) lying to the board about conversations he'd had with individual board members.

JohnFen
0 replies
1h32m

WorldCoin is how I first heard of him, and it's what made me think he was a bad actor. I think of it as a red flag, not yellow.

imjonse
1 replies
14h1m

He rubs elbows with very powerful people including CEOs, heads of state and sheiks. They probably want 'one of them' in charge of the company that has the best chances of getting close to AGI. So it's not his technical chops and not even 'vision' in the Jobs sense that keeps him there.

dontupvoteme
0 replies
6h41m

Are they really the ones with the best chance now though?

They're basically owned by Microsoft, they're bleeding technical/ethics talent and credibility, and most importantly Microsoft Research itself is no slouch (especially post-DeepMind poaching) - things like Phi are breaking ground on planets that OpenAI hasn't even touched.

At this point I'm thinking they're destined to become nothing but a premium marketing brand for Microsoft's technology.

latexr
0 replies
5h55m

if this account is true, Sam Altman is a deeply unethical human being.

This isn’t even close to the most unethical thing he has done. This is peanuts compared to the Worldcoin scam.

https://news.ycombinator.com/item?id=40427454

insane_dreamer
0 replies
13h58m

I thought we had already established this when the previous board tried to oust him for failing to stick to OpenAI’s charter. This is just further confirmation.

The new board should act

You mean like the last board tried? Besides the board was picked to be on Altman’s side. The independent members were forced out.

gdilla
0 replies
2h29m

Because so many people ran cover for him, from Paul Graham to the who's who of Silicon Valley.

startupsfail
9 replies
15h35m

Most likely it was an unforced error; there's been a lot of chaos with cofounders and the board revolt, so it's easy to lose track of something really minor.

Like some intern’s idea to train the voice on their favorite movie.

And then they’ve decided that this is acceptable risk/reward and not a big liability, so worth it.

This could be a well-planned opening move of a regulation gambit. But unlikely.

mbreese
3 replies
15h13m

This is an unforced error, but it isn’t minor. It’s quite large and public.

The general public doesn’t understand the details and nuances of training an LLM, the various data sources required, and how to get them.

But the public does understand stealing someone’s voice. If you want to keep the public on your side, it’s best to not train a voice with a celebrity who hasn’t agreed to it.

surfingdino
2 replies
11h47m

I had a conversation with someone responsible for introducing LLMs into a process that involves personal information. That person rejected my concern about one person's data appearing in the report on another person. He told me that it will be possible to train the AI to avoid that. The rest of the conversation convinced me that AI is seen as magic that can do anything. It seems to me that we are seeing a split between those who don't understand it and fear it, and those who don't understand it but want to align themselves with it. The latter are the ones I fear the most.

kombookcha
1 replies
10h2m

The "AI is magic and we should simply believe" is even being actively promoted because all these VC hucksters need it.

Any criticism of AI is being met with "but if we all just hype AI harder, it will get so good that your criticisms won't matter" or flat out denied. You've got tech that's deeply flawed with no obvious way to get unflawed, and the current AI 'leaders' run companies with no clear way to turn a profit other than being relentlessly hyped on proposed future growth.

It's becoming an extremely apparent bubble.

surfingdino
0 replies
9h52m

On the plus side, lots of cheap Nvidia cards heading for eBay once it bursts.

windexh8er
0 replies
15h27m

I don't think this makes any sense, at all, quite honestly. Why would an "intern" be training one of ChatGPT's voices for a major release?

If in fact, that was the case, then OpenAI is not aligned with the statement they just put out about having utmost focus on rigor and careful considerations, in particular this line: "We know we can't imagine every possible future scenario. So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities." [0]

[0] https://x.com/gdb/status/1791869138132218351

mmastrac
0 replies
15h32m

It makes a lot more sense that he was caught red-handed, likely hiring a similar voice actress and not realizing how strong identity protections are for celebs.

kergonath
0 replies
13h2m

Like some intern’s idea to train the voice on their favorite movie.

Ah, the famous rogue engineer.

The thing is, even if it were the case, this intern would have been supervised by someone, who themselves would have been managed by someone, all the way to the top. The moment Altman makes a demo using it, he owns the problem. Such a public fuckup is embarrassing.

And then they’ve decided that this is acceptable risk/reward and not a big liability, so worth it.

You mean, they were reckless and tried to wing it? Yes, that’s exactly what’s wrong with them.

This could be a well-planned opening move of a regulation gambit. But unlikely.

LOL. ROFL, even. This was a gambit all right. They just expected her to cave and not ask questions. Altman has a common thing with Musk: he does not play 3D chess.

Cheer2171
0 replies
15h17m

easy to loose track of something really minor. Like some intern’s idea

Yes, because we all know the high profile launch for a major new product is entirely run by the interns. Stop being an apologist.

Always42
0 replies
15h20m

At first I thought there may be a /s coming...

latexr
8 replies
6h6m

The thing about the situation is that Altman is willing to lie and steal a celebrity's voice for use in ChatGPT.

He lies and steals much more than that. He’s the scammer behind Worldcoin.

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

Altman is, and wants to be, a large part of AI regulation. Quite the public contradiction.

That’s as much of a contradiction as a thief wanting to be a large part of lock regulation. What better way to ensure your sleazy plans benefit you, and preferably only you but not the competition, than being an active participant in the inevitable regulation while it’s being written?

ben_w
7 replies
5h48m

That’s as much of a contradiction as a thief wanting to be a large part of lock regulation.

Based on what I see in the videos from The Lockpicking Lawyer, that would be a massive improvement.

Now, the NSA and crypto standards, that would have worked as a metaphor for your point.

(I don't think it's correct, but that's an independent claim, and I am not only willing to discover that I'm wrong about their sincerity, I think everyone writing that legislation should actively assume the worst while they do so).

latexr
2 replies
5h24m

Based on what I see in the videos from The Lockpicking Lawyer

The Lockpicking Lawyer is not a thief, so I don’t get your desire to incorrectly nitpick. Especially when you clearly understood the point.

ben_w
1 replies
5h9m

You noticed your confusion but still went on the offensive, huh. Ah well.

"A is demonstrating a proof of B" does not require "A is a clause in B".

A being TLPL, B being that the entire lock industry is bad, so bad that anyone with experience would be a massive improvement, for example a thief.

latexr
0 replies
4h53m

I’m not confused, and my reply was not aggressive. I don’t think it will be a good use of time to continue this conversation, because discussions should get more substantive as they go on, and this was an irrelevant tangent that I have no desire to get sucked into.

Other people have commented to further explain the point in other words. I recommend you read those, perhaps it’ll make you understand.

https://news.ycombinator.com/item?id=40428005

https://news.ycombinator.com/item?id=40428280

maeln
1 replies
5h21m

> That’s as much of a contradiction as a thief wanting to be a large part of lock regulation.

Based on what I see in the videos from The Lockpicking Lawyer, that would be a massive improvement.

A thief is not a lock picker and they don't have the same incentive. A thief in a position to dictate lock regulation would try to have a legal backdoor on every lock in the world. One that only he has the master key for. Something something NSA & cryptography :)

ben_w
0 replies
3h52m

Indeed, but locks are so bad (as demonstrated by LPL) that even a thief would make them better.

Something something NSA & cryptography :)

Indeed, as I said :)

lawn
1 replies
4h57m

Based on what I see in the videos from The Lockpicking Lawyer, that would be a massive improvement.

If you've watched his videos then surely you should know that lockpicking isn't even on the radar for thieves as there are much easier and faster methods such as breaking the door or breaking a window.

ben_w
0 replies
3h50m

Surely that just makes the thief/locks metaphor even worse? Or have I missed something?

dvhh
8 replies
15h39m

Some people might see some parallel with SBF, and see how Altman would try to regulate competition without impeding OpenAI's progress.

viking123
5 replies
15h4m

I always mix up those two in my head and have to think which one is which

ocodo
4 replies
14h58m

One is in jail; it should be two.

garbthetill
3 replies
13h1m

I don't like Sam, but he moves way smarter than people like SBF or Elizabeth Holmes. He actually has a product close to the reported specs, albeit still far from the ultimate goal of AGI.

I don't see why he should be in jail.

Findecanor
1 replies
10h49m

If his sister's words about sexually abusing her are true, he should be in jail.

bryanrasmussen
0 replies
10h7m

No, in that case he should have been in the juvenile incarceration system, unless the argument is that he should have been charged as an adult, or that juvenile abusers should always be charged and sentenced as adults, or that juvenile sex offenders who were not charged as juveniles should be charged as adults.

Which one?

on edit: this being based on American legal system, you may come from a legal system with different rules.

choppaface
0 replies
12h42m

Should be in jail for Worldcoin, which has robbed people of their biological identity. I guess you could literally delete Worldcoin and in theory make people whole, but that company treats humans like vegetables that have no rights.

numpad0
0 replies
14h40m

Maybe they were the rogue AGI escapes we found along the way

choppaface
0 replies
12h45m

sama gets to farm out much of the lobbying to Microsoft’s already very powerful team, which spends a mere $10M, but that money gets magnified by MS’s government and DoD contracts. That’s a huge safety net for him; he gets to steal and lie (as demonstrated with Scarlett) and yet the MS lobbying machine will continue unfazed.

https://www.opensecrets.org/federal-lobbying/clients/summary...

beefnugs
7 replies
12h55m

the whole technology is based on fucking over artists, who didn't expect this exact thing?

surfingdino
6 replies
11h54m

It's not just the artists; anything you do in the digital realm, and anything that can be digitised, is fair game. In the UK, NHS GP practices refuse to register you to see a doctor even when it's urgent, and tell you to use a third-party app to book an appointment. You have to use your phone to take photos of the affected area and provide personal info. I fully expect that data to be fed into some AI and sold without my knowledge, and without a process for removal of the data should the company go bust. It is preying on the vulnerable when they need help.

4ndrewl
2 replies
11h11m

Important to note that "The NHS" is not a single entity; the GP practice is likely a private entity owned in partnership by the doctors. There are a number of reasons why individual practices can refuse to register you.

Take your point about LLMs though.

surfingdino
1 replies
11h6m

I went to see my GP and the lady at reception told me they no longer book visits at the reception; I had to use the app. Here's the privacy policy: https://support.patientaccess.com/privacy-policy They reserve the right to pass your data to third-party contractors and to use it for marketing purposes. There is the obligatory clause regarding the right to be forgotten, but the AI companies claim it is impossible to implement.

4ndrewl
0 replies
10h54m

I didn't read that as reserving the right - it looks like a standard DPIA that is opt-in and limited.

However, GP practices are essentially privatised - so you do have the right to register at another practice.

KineticLensman
1 replies
11h29m

Last time I booked a blood test it was via the official NHS app, not a third party.

jjgreen
0 replies
9h46m

App? What's an app?

It's a thing you put on your phone

I don't have a phone

Well, we can't register you

You don't accept people who don't have phones? Could I have that in writing please, ..., oh, your signature on that please ...

trustno2
5 replies
12h18m

Altman wants to be a part of AI regulation in the same way Bankman Fried wanted to be a part of cryptocurrency regulation.

gds44
2 replies
11h48m

What's really interesting about our timeline is that when you look at the history of market capture in Big Oil, Telco, Pharma, Real Estate, Banks, Tobacco, etc., all the lobbying, bribing, and competition-killing used to be done behind the scenes within elite circles.

The public hardly heard from or saw the management of these firms in the media until shit hit the fan.

Today it feels like management is in the media every 3 hours trying to capture the attention of prospective customers, investors, employees, etc., or they lose out to whoever is out there capturing more attention.

So false and contradictory signalling is easy to see. Hopefully out of all this chaos we get a better class of leaders, not a better class of panderers.

hoseja
1 replies
10h13m

So great to have twitter so the narcissistic psychopaths can't resist revealing themselves for clout.

ToucanLoucan
0 replies
2h17m

I mean, to whatever extent it matters. All these outrageously rich morons still have tons of economic and social clout. They still have pages upon pages of fans foaming at the mouth for the opportunity to harass people asking basic questions. They still carry undue influence in our society and in our industry no matter how many times they are "outed."

What does being outed even mean anymore? It's just free advertising from all the outlets that feel they can derive revenue off your name being in their headlines. Nothing happens to them. SBF and Holmes being the notable exceptions, but that's because they stole from rich people.

dqft
0 replies
4h11m

(AI)tman tries to be, (Bank)man fried to be. Who is letting Kojima name all these villains?

askl
0 replies
9h31m

I always had trouble telling apart those two Sams. Turns out they're the same person.

akudha
3 replies
9h51m

What is so special about her voice? They could’ve found a college student with a sweet voice and offered to pay her tuition in exchange for using her voice, no? Or a voice actor?

Why be cartoonishly stupid, and a cartoonish arsehole, and steal a celebrity's voice? Did he think Scarlett wouldn't find out? Or object?

I don’t understand these rich people. Is it their hobby to be a dick to as many people as they can, for no reason other than their amusement? Just plain weirdos

meat_machine
1 replies
9h43m

Scarlett voiced Samantha, an AI in the movie "Her"

Considering the movie's 11 years old, it's surprisingly on-point with depictions of AI/human interactions, relations, and societal acceptance. It does get a bit speculative and imaginative at the end though...

But I imagine that movie did/does spark the imagination of many people, and I guess Sam just couldn't let it go.

mike_hearn
0 replies
8h24m

It's not just that. Originally the AI voice in Her was played by someone else, but Spike Jonze felt strongly that the movie wasn't working and recast the part with Johansson. The movie immediately worked much better and became a sleeper hit. Johansson just has a much better fitting voice and higher skill in voice acting for this kind of role, to the extent that it may have been a make-or-break choice for the movie. It isn't a surprise that, after having created the exact tech from the movie, OpenAI wanted it to have the same success that Jonze had with his character.

It's funny that just seven days ago I was speculating that they deliberately picked someone whose voice is very close to Scarlett's and was told right here on HN, by someone who works in AI, that the Sky voice doesn't sound anything like Scarlett and it is just a generic female voice:

https://news.ycombinator.com/item?id=40343950#40345807

Apparently .... not.

sage76
0 replies
8h16m

Is it their hobby to be a dick to as many people as they can, for no reason other than their amusement? Just plain weirdos

They seem to love "testing" how much they can bully someone.

I remember a few experiences where someone responded by being an even bigger dick, and they disappeared fast.

xinayder
1 replies
9h42m

The thing about the situation is that Altman is willing to lie and steal a celebrity's voice for use in ChatGPT. What he did, the timeline, everything - is sleazy if, in fact, that's the story.

Correction: the thing about this whole situation with OpenAI is that they are willing to steal everything for use in ChatGPT. They trained their model on copyrighted data, and for some reason they won't delete the millions of protected works they used to train the AI model.

JohnFen
0 replies
1h27m

Using other people's data for training without their permission is the "original sin" of LLMs [1]. That will, at best, be a shadow over the entire field for an extremely long time.

[1] Just to head off people saying that such a use is not a copyright violation -- I'm not saying it is. I'm just saying that it's extremely sketchy and, in my view, ethically unsupportable.

sirsinsalot
0 replies
9h19m

Wow, Altman in the replies there:

Cool story bro.

Except I could never have predicted the part where you resigned on the spot :)

Other than that, child's play for me.

Thanks for the help. I mean, thanks for your service as CEO.
choppaface
0 replies
12h54m

Altman doesn’t want to be part of regulation. sama wants to be the next tk. he wants to be above regulation, and he wants to spend Microsoft’s money getting there.

E.g. flying Congress to Lake Como for an off-the-record "discussion": https://freebeacon.com/politics/how-the-aspen-institute-help...

Intralexical
0 replies
4m

"Not consistently candid", the last board said.

Like many people who try to oppose psychopaths though, they don't seem to be around much anymore.

crimsoneer
97 replies
19h9m

But it's clearly not her voice, right? The version that's been on the app for a year just isn't. Like, it's clearly intended to be slightly reminiscent of her, but it's also very clearly not. Are we seriously saying we can't make voices that are similar to celebrities, when not using their actual voice?

ncallaway
60 replies
18h50m

Are we seriously saying we can't make voices that are similar to celebrities, when not using their actual voice?

They clearly thought it was close enough that they asked for permission, twice. And got two no’s. Going forward with it at that point was super fucked up.

It’s very bad to not ask permission when you should. It’s far worse to ask for permission and then ignore the response.

Totally ethically bankrupt.

avarun
18 replies
18h28m

They clearly thought it was close enough that they asked for permission, twice.

You seem to be misunderstanding the situation here. They wanted ScarJo to voice their voice assistant, and she refused twice. They also independently created a voice assistant which sounds very similar to her. That doesn't mean they thought they had to ask permission for the similar voice assistant.

voltaireodactyl
9 replies
18h1m

You seem to be misunderstanding the legalities at work here: reaching out to her multiple times beforehand, along with tweets intended to underline the similarity to her work on Her, demonstrates intention. If they didn’t think they needed permission, why ask for permission multiple times and then yank it when she noticed?

Answer: because they knew they needed permission after working so hard to associate with Her, and they hoped, in traditional tech fashion, that if they moved fast and broke things enough, everyone would have to reshape around OA's wants rather than around the preexisting rights of the humans involved.

KHRZ
4 replies
17h23m

You could also ask: If Scarlett has a legal case already, why does she want legislation passed?

staticman2
0 replies
2h19m

Before Roe v. Wade was overturned, you might have asked: if abortion is legal, why do abortion rights advocates want legislation passed?

The answer is without legislation you are far more subject to whether a judge feels like changing the law.

ncallaway
0 replies
15h41m

Because bringing a legal case under the current justice system and legislative framework, one that requires discovery and a trial, would probably cost hundreds of thousands to millions of dollars.

Maybe (maybe!) it’s worth it for someone like Johansson to take on the cost of that to vindicate her rights—but it’s certainly not the case for most people.

If your rights can only be defended from massive corporations by bringing lawsuits that cost hundreds of thousands to millions of dollars, then only the wealthy will have those rights.

So maybe she wants new legislative frameworks around these kind of issues to allow people to realistically enforce these rights that nominally exist.

For an example of updating a legislative framework to allow more easily vindicating existing rights, look up "anti-SLAPP legislation", which many states have passed to make it easier for the defendant of a meritless lawsuit seeking to chill speech to have the lawsuit dismissed. Anti-SLAPP legislation does almost nothing to change the actual rights that a defendant has to speak, but it makes it much more practical for a defendant to actually exercise those rights.

So, the assumption that a call for updated legislation implies that no legal protection currently exists is just a bad assumption that does not apply in this situation.

minimaxir
0 replies
17h18m

To prevent it from happening again, with more legal authority than a legal precedent.

bradchris
0 replies
15h4m

She has a personal net worth of >$100m. She’s also married to a successful actor in his own right.

Her voice alone didn’t get her there — she did. That’s why celebrities are so protective about how their likeness is used: their personal brand is their asset.

There’s established legal precedent on exactly this—even in the case they didn’t train on her likeness, if it can reasonably be suspected by an unknowing observer that she personally has lent her voice to this, she has a strong case. Even OpenAI knew this, or they would not have asked in the first place.

parineum
1 replies
15h31m

If they didn’t think they needed permission, why ask for permission multiple times and then yank it when she noticed?

Many things that are legal are of questionable ethics. Asking permission could easily just be an effort for them to get better samples of her voice. Pulling the voice after debuting it is 100% a PR response. If there's a law that was broken, pulling the voice doesn't unbreak it.

voltaireodactyl
0 replies
52m

They are trying to wriggle out of providing insight into how that voice was derived at all (like Google with the 100%-of-damages check). It would really suck for OpenAI if, for example, Altman had at some point emailed his team to ensure the soundalike was "as indistinguishable from Scarlett's performance in Her as possible."

Public figures own their likeness and control its use. Not to mention that in this case OA is playing chicken with studios as well. Not a great time to do so, given their stated hopes of supplanting 99% of existing Hollywood creatives.

munksbeer
1 replies
3h24m

If they didn’t think they needed permission, why ask for permission multiple times and then yank it when she noticed?

One very easy explanation is that they trained Sky using another voice (this is the claim, and there's no reason to doubt it's true), wanting to replicate the style of the voice in "Her", but would have preferred to use SJ's real voice for the PR impact that could have had.

Yanking it could also easily be a pre-emptive response to avoid further PR drama.

You will obviously decide you don't believe those explanations, but to many of us they're quite plausible; in fact, I'd even suggest likely.

(And none of this precludes Sam Altman and OpenAI being dodgy anyway)

voltaireodactyl
0 replies
56m

I actually believe that's quite plausible. The trouble is, by requesting permission in the first place, they demonstrated intent, which is legally significant. I think a lot of your confusion comes from applying pure logic to a legal issue. They are not the same thing, and the latter relies heavily on existing precedent, of which you may, it seems, be unaware.

chromakode
4 replies
18h1m

So, what would they have done if she accepted? Claimed that the existing training of the Sky voice was voiced by her?

sangnoir
0 replies
16h50m

Claimed that the existing training of the Sky voice was voiced by her?

That claim could very well be true. The letter requested information on how the voice was trained - OpenAI may not want that can of worms opened lest other celebrities start paying closer attention to the other voices.

og_kalu
0 replies
17h44m

Voice cloning could be as simple as a few seconds of audio in the context window since GPT-4o is a speech to speech transformer. They wouldn't need to claim anything, just switch samples. They haven't launched the new voice mode yet, just demos.
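To make that concrete, here is a purely schematic sketch of the idea (my own toy illustration; the delimiter tokens and token names are invented and have nothing to do with OpenAI's actual format): if the voice lives in the context rather than the weights, swapping voices is just swapping the reference sample in the prompt.

```python
# Toy illustration: in a speech-to-speech transformer, a voice need not be
# baked into the model weights. A short reference clip is tokenized and
# placed in the context window; changing the voice means only changing
# which clip's tokens are prepended.
def build_prompt(reference_audio_tokens, text_tokens):
    # "<ref>"/"</ref>" are invented delimiters, for illustration only.
    return ["<ref>"] + reference_audio_tokens + ["</ref>"] + text_tokens

sky_prompt = build_prompt(["a17", "a03", "a42"], ["hello", "world"])
other_voice = build_prompt(["b09", "b88", "b51"], ["hello", "world"])
# Same text, different voice: only the reference tokens differ.
```

Under that (hypothetical) scheme, no retraining is needed to change voices, which is the point being made about the demos.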

consf
0 replies
4h37m

That would be suspicious too

blackoil
0 replies
15h4m

Maybe they have a second version trained on her voice.

ncallaway
1 replies
15h56m

You seem to be misunderstanding the situation here. They wanted ScarJo to voice their voice assistant, and she refused twice. They also independently created a voice assistant which sounds very similar to her.

And promoted it using a tweet naming the movie that Johansson performed in, for the role that prompted them to ask her in the first place.

You have to be almost deliberately naive to not see that they were attempting to use her vocal likeness in this situation. There's a reason they immediately walked it back after the situation was revealed.

Neither a judge, nor a jury, would be so willingly naive.

munksbeer
0 replies
3h22m

This is a genuine question. If it turns out they trained Sky on someone else's voice to similarly replicate the style of the voice in "Her", would you be ok with that? If it was proven that the voice was just similar to SJ's, would that be ok?

My view is, of course it is ok. SJ doesn't own the right to a particular style of voice.

tomrod
0 replies
18h5m

And... No. That is what OpenAI will assert, and good discovery by Scar Jo reps may prove or disprove.

ants_everywhere
16 replies
18h2m

Yes, totally ethically bankrupt. But what bewilders me is that they yanked it as soon as they heard from their lawyers. I would have thought that if they made the decision to go ahead despite getting two "no"s, that they at least had a legal position they thought was defensible and worth defending.

But it kind of looks like they released it knowing they couldn't defend it in court which must seem pretty bonkers to investors.

ethbr1
10 replies
17h34m

I would have thought that if they made the decision to go ahead despite getting two "no"s, that they at least had a legal position they thought was defensible and worth defending.

They likely have a legal position which is defensible.

They're much more worried that they don't have a PR position which is defensible.

What's the point of winning the (legal) battle if you lose the war (of public opinion)?

Given that the rest of their product is built on apathy to copyright, they're actively being sued by creators, and the general public is anxious about GenAI taking human jobs...

... this isn't a great moment for OpenAI to initiate a long legal battle, against a female movie actress / celebrity, in which they're arguing how her likeness isn't actually controlled by her.

Talk about optics!

(And I'd expect they quietly care much more about their continued ability to push creative output through their copyright launderer, than get into a battle over likeness)

XorNot
7 replies
14h24m

How is the PR position not defensible? One of the worst things you can generally do is admit fault, particularly if you have a complete defense.

Buckle in, go to court, and double-down on the fact that the public's opinion of actors is pretty damn fickle at the best of times - particularly if what you released was in fact based on someone you signed a valid contract with who just sounds similar.

Of course, this is all dependent on actually having a complete defense - you absolutely would not want to find Scarlett Johansson voice samples in file folders associated with the Sky model if it went to court.

ethbr1
6 replies
13h58m

In what world does a majority of the public cheer for OpenAI "stealing"* an actress's voice?

People who hate Hollywood? Most of that crowd hates tech even more.

* Because it would take the first news cycle to be branded as that

XorNot
5 replies
11h54m

It is wild to me that on HackerNews of all places, you'd think people don't love an underdog story.

Which is what this would be in the not-stupid version of events: they hired a voice actress for the rights to create the voice, she was paid, and then she was basically told by the courts "actually you're unhireable because you sound too much like an already rich and famous person".

The issue of course is that OpenAIs reactions so far don't seem to indicate that they're actually confident they can prove this or that this is the case. Coz if this is actually the case, they're going about handling this in the dumbest possible way.

ml-anon
2 replies
11h26m

It’s wild to me that there are people who think that OpenAI is the underdog. An $80bn Microsoft vassal, what a plucky upstart.

You realise that there are multiple employees including the CEO publicly drawing direct comparisons to the movie Her after having tried and failed twice to hire the actress who starred in the movie? There is no non idiotic reading of this.

XorNot
1 replies
10h31m

You're reading my statements as defending OpenAI. Put on your "I'm the PR department hat" and figure out what you'd do if you were OpenAI given various permutations of the possible facts here.

That's what I'm discussing.

Edit: which is to say, I think Sam Altman may have been a god damn idiot about this, but it's also wild anyone thought that ScarJo or anyone in Hollywood would agree - AI is currently the hot button issue there and you'd find yourself the much more local target of their ire.

ml-anon
0 replies
9h6m

Then why bother mentioning an "underdog story" at all?

Who is the underdog in this situation? In your comment it seems like you're framing OpenAI as the underdog (or perceived underdog) which is just bonkers.

Hacker News isn't a hivemind, and there are those of us who work in GenAI who are firmly on the side of the creatives and, gasp, even rights holders.

jubalfh
0 replies
7h20m

an obnoxious sleazy millionaire backed by microsoft is by no means “an underdog”

Sebb767
0 replies
8h39m

they hired a voice actress for the rights to create the voice, she was paid, and then is basically told by the courts "actually you're unhireable because you sound too much like an already rich and famous person".

There are quite a few issues here: First, this assumes they actually hired a voice-alike person, which is not confirmed. Second, they are not an underdog (the voice actress might be, but she's most likely pretty unaffected by this drama). Finally, they were clearly aiming to impersonate ScarJo (as confirmed by them asking for permission and by sama's tweet), so this is quite a different issue than "accidentally" hiring someone who "just happens to" sound like ScarJo.

justinclift
1 replies
15h40m

They likely have a legal position which is defensible.

Doesn't sound like they have that either.

bonton89
0 replies
7m

Copilot still tells me I've committed a content policy violation if I ask it to generate an image "in Tim Burton's style". Tim Burton has been openly critical of generative AI.

foobarian
3 replies
16h29m

But it kind of looks like they released it knowing they couldn't defend it in court which must seem pretty bonkers to investors.

That actually seems like there may have been a few people involved, one of whom is a cowboy PM who said "fuck it, ship it" to make the demo. And then damage control came in later. Possibly the PM didn't even know about the asks for permission?

kuboble
1 replies
13h25m

a cowboy PM who said fuck it, ship it to make the demo.

Given the timeline it sounds like the PM was told "just go ahead with it, I'll get the permission".

unraveller
0 replies
3h29m

Ken Segall has a similar Steve Jobs story: he emails Jobs that the Apple legal team has just thrown a spanner in the works days before Ken's agency is set to launch Apple's big ad campaign, and asks what he should do.

Jobs responds minutes later... "Fuck the lawyers."

anytime5704
0 replies
14h41m

The whole company behaves like rogue cowboys.

If a PM there didn’t say “fuck it ship it even without her permission” they’d probably be replaced with someone who would.

I expect the cost of any potential legal action/settlement was happily accepted in order to put on an impressive announcement.

emsign
0 replies
13h43m

It looks really unprofessional at a minimum, if not a bit arrogant, which is actually more concerning, as it hints at a deeper disrespect for artists and celebrities.

mensetmanusman
15 replies
17h52m

Effective altruism would posit that it is worth one voice theft to help speed the rate of life saving ai technology in the hands of everyone.

ncallaway
12 replies
16h35m

Effective Altruists are just shitty utilitarians that never take into account all the myriad ways that unmoderated utilitarianism has horrific failure modes.

Their hubris will walk them right into federal prison for fraud if they’re not careful.

If Effective Altruists want to speed the adoption of AI with the general public, they’d do well to avoid talking about it, lest the general public make a connection between EA and AI

I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

parineum
4 replies
15h36m

This is like attributing the crimes of a few fundamentalists to an entire religion.

ncallaway
2 replies
15h29m

I don’t think so. I’ve narrowed my comments specifically to Effective Altruists who are making utilitarian trade-offs to justify known moral wrongs.

I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

Frankly, if you’re going to make an “ends justify the means” moral argument, you need to do a lot of work to address how those arguments have gone horrifically wrong in the past, and why the moral framework you’re using isn’t susceptible to those issues. I haven’t seen much of that from Effective Altruists.

I was responding to someone who was specifically saying an EA might argue why it’s acceptable to commit a moral wrong, because the ends justify it.

So, again, if someone is using EA to decide how to direct their charitable donations, volunteer their time, or otherwise decide between mora goods, I have no problem with it. That specifically wasn’t context I was responding to.

parineum
1 replies
3h48m

I don’t think so. I’ve narrowed my comments specifically to Effective Altruists who are making utilitarian trade-offs to justify known moral wrongs.

Did you?

Effective Altruists are just shitty utilitarians that never take into account all the myriad ways that unmoderated utilitarianism has horrific failure modes.
ncallaway
0 replies
1h19m

Sure, I should’ve said I tried to or I intended to:

You can see another comment here, where I acknowledge I communicate badly, since I’ve had to clarify multiple times what I was intending: https://news.ycombinator.com/item?id=40424566

This is the paragraph that was intended to narrow what I was talking about:

I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

That said, I definitely should’ve said “those Effective Altruists” in the first paragraph to more clearly communicate my intent.

ocodo
0 replies
14h46m

Effective Altruists are the fundamentalists though. So no, it's not.

comp_throw7
2 replies
13h59m

When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

Extremely reasonable position, and I'm glad that every time some idiot brings it up in the EA forum comments section they get overwhelmingly downvoted, because most EAs aren't idiots in that particular way.

I have no idea what the rest of your comment is talking about; EAs that have opinions about AI largely think that we should be slowing it down rather than speeding it up.

ncallaway
0 replies
13h49m

In some sense I see a direct line between the EA argument being presented here, and the SBF consequentialist argument where he talks about being willing to flip a coin if it had a 50% chance to destroy the world and a 50% chance to make the world more than twice as good.

I did try to cabin my arguments to Effective Altrusts that are making ends justify the means arguments. I really don’t have a problem with people that are attempting to use EA to decide between multiple good outcomes.

I’m definitely not engaged enough with the Effective Altrusits to know where the plurality of thought lies, so I was trying to respond in the context of this argument being put forward on behalf of Effective Altruists.

The only part I’d say applies to all EA, is the brand taint that SBF has done in the public perception.

emsign
0 replies
13h30m

The speed doesn't really matter if their end goal is morally wrong. A slower speed might help them avoid overshooting and getting backlash, or it gives artists and the public more time to fight back against EA, but it doesn't hide their ill intentions.

0xDEAFBEAD
1 replies
13h17m

Effective Altruists are just shitty utilitarians that never take into account all the myriad ways that unmoderated utilitarianism has horrific failure modes.

There's a fair amount of EA discussion of utilitarianism's problems. Here's EA founder Toby Ord on utilitarianism and why he ultimately doesn't endorse it:

https://forum.effectivealtruism.org/posts/YrXZ3pRvFuH8SJaay/...

If Effective Altruists want to speed the adoption of AI with the general public, they’d do well to avoid talking about it, lest the general public make a connection between EA and AI

Very few in the EA community want to speed AI adoption. It's far more common to think that current AI companies are being reckless, and we need some sort of AI pause so we can do more research and ensure that AI systems are reliably beneficial.

When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

The all-time most upvoted post on the EA Forum condemns SBF: https://forum.effectivealtruism.org/allPosts?sortedBy=top&ti...

ncallaway
0 replies
12h47m

I’ve had to explain myself a few times on this, so clearly I communicated badly.

I probably should have said _those_ Effective Altruists are shitty utilitarians. I was attempting—and since I’ve had to clarify a few times clearly failed—to take aim at the effective altruists that would make the utilitarian trade off that the commenter mentioned.

In fact, there’s a paragraph from the Toby Ord blog post that I wholeheartedly endorse and I think rebuts the exact claim that was put forward that I was responding to.

Don’t act without integrity. When something immensely important is at stake and others are dragging their feet, people feel licensed to do whatever it takes to succeed. We must never give in to such temptation. A single person acting without integrity could stain the whole cause and damage everything we hope to achieve.

So, my words were too broad. I don’t actually mean all effective altruists are shitty utilitarians. But the ones that would make the arguments I was responding to are.

I think Ord is a really smart guy, and has worked hard to put some awesome ideas out into the world. I think many others (and again, certainly not all) have interpreted and run with it as a framework for shitty utilitarianism.

Intralexical
0 replies
16m

The central contention of Effective Altruism, at least in practice if not in principle, seems to be that the value of thinking, feeling persons can be and should be reduced to numbers and objects that you can do calculations on.

Maybe there's a way to do that right. I suppose like any other philosophy, it ends up reflecting the personalities and intentions of the individuals which are attracted to and end up adopting it. Are they actually motivated by identifying with and wanting to help other people most effectively? Or are they just incentivized to try to get rid of pesky deontological and virtue-based constraints like empathy and universal rights?

Intralexical
0 replies
28m

Plus, describing this as "speed the rate of life saving ai technology in the hands of everyone" is… A Reach.

ehnto
1 replies
16h47m

It didn't require voice theft, they could have easily found a volunteer or paid for someone else.

consf
0 replies
4h38m

Agree, and it'd be more authentic

gibbitz
4 replies
17h17m

Are we surprised by this bankruptcy? As neat as AI is, it is only a thing because the corporate class sees it as a way to cut costs by replacing people with it. The whole concept is bankrupt.

ncallaway
0 replies
16h50m

I don’t think any said anything about being surprised by it?

emsign
0 replies
13h36m

The problem is they really believe that eventually we either won't be able to tell the difference between a human and an AI model, or that we won't care. Don't they understand the meaning of art?

ecjhdnc2025
0 replies
16h56m

100% this.

It’s shocking to me how people cannot see this.

The only surprise here is that they didn’t think she’d push back. That is what completes the multilayered cosmic and dramatic irony of this whole vignette. Honestly feels like Shakespeare or Arthur Miller might have written it.

Nasrudith
0 replies
3h57m

That is wrong on several levels. First off, it ignores the massive role of researchers. Should we start de-automating processes to employ more people at the cost of worsening margins?

nicce
2 replies
18h42m

And they could have totally gotten away with it by never mentioning Scarlett's name. But of course, that is not what they wanted.

Edit: to clarify, since it is not an exactly identical voice, or even that close, they could plausibly deny it, and we would never have known what their intention was.

But in this case, they have clearly created the voice to represent Scarlett's voice to demonstrate the capabilities of their product in order to get marketing power.

visarga
1 replies
14h51m

since it is not exactly identical voice, or even not that close, they can plausibly deny it

When studios approach an actress A and she refuses, then another actress B takes the role, is that infringing on A's rights? Or should they just scrap the movie?

Maybe if they replicated a scene from A's movies, or there was a striking likeness between the voices... but not generally.

nicce
0 replies
11h38m

When studios approach an actress A and she refuses, then another actress B takes the role, is that infringing on A's rights? Or should they just scrap the movie?

The equivalent scenario here would have been for them to approach no one at all.

gedy
16 replies
18h58m

Normally I'd agree if this were some vague "artist style", but this was clearly an attempt to duplicate a living person, a media celebrity no less.

threatofrain
10 replies
18h53m

Is this different from the various videos of the Harry Potter actors doing comedic high fashion ads? Because those were very well received.

https://www.youtube.com/watch?v=ipuqLy87-3A

nicklecompte
4 replies
18h40m

I think anti-deepfake legislation needs to consider fair use, especially when it comes to parody or other commentary on public figures. OpenAI's actions do not qualify as fair use.

throwway120385
3 replies
17h54m

The problem with that idea is that I can hide behind it while making videos of famous politicians doing really morally questionable things and distributing them on YouTube. The reason Fair Use works with regular parodies in my opinion is that everyone can tell that it is obviously fake. For example, Saturday Night Live routinely makes joking parody videos of elected officials doing things we think might be consistent with their character. And in those cases it's obvious that it's being portrayed by an actor and therefore a parody. If you use someone's likeness directly I think that it must never be fair use or we will quickly end up in a world where no video can be trusted.

nicklecompte
1 replies
3h9m

Let me start by saying I despise generative AI and I think most AI companies are basically crooked.

I thought about your comment for a while, and I agree that there is a fine line between "realistic parody" and "intentional deception" that makes deepfake AI almost impossible to defend. In particular I agree with your distinction:

- In matters involving human actors, human-created animations, etc, there should be great deference to the human impersonators, particularly when it involves notable public figures. One major difference is that, since it's virtually impossible for humans to precisely impersonate or draw one another, there is an element of caricature and artistic choice with highly "realistic" impersonations.

- AI should be held to a higher standard because it involves almost no human expression, and it can easily create mathematically-perfect impersonations which are engineered to fool people. The point of my comment is that fair use is a thin sliver of what you can do with the tech, but it shouldn't be stamped out entirely.

I am really thinking of, say, the Joe Rogan / Donald Trump comedic deepfakes. It might be fine under American constitutional law to say that those things must be made so that AI Rogan / AI Trump always refer to each other in those ways, to make it very clear to listeners. It is a distinctly non-libertarian solution, but it could be "necessary and proper" because of the threat to our social and political knowledge. But as a general principle, those comedic deepfakes are works of human political expression, aided by a fairly simple computer program that any CS graduate can understand, assuming they earned their degree honestly and are willing to do some math. It is constitutionally icky (legal term) to go after those people too harshly.

throwway120385
0 replies
2h27m

I think that as long as a clear "mark of parody" is maintained such that a reasonable person could distinguish between the two, AI parodies are probably fine. The murkiness in my mind is expressly in the situation in the first episode of Black Mirror, where nobody could distinguish between the AI-generated video and the prime minister actually performing the act. Clearly that is not a parody, even if some people might find the situation humorous. But if we're not careful we give people making fake videos cover to hide behind fair use for parody.

I think you and I have the same concerns about balancing damage to the societal fabric against protecting honest speech.

cjbgkagh
0 replies
17h4m

I’m guessing you’re referring to people still thinking Sarah Palin said she could see Russia from her house; that was from an SNL skit and an amazing impression by Tina Fey. I agree: people have a hard time separating reality from obvious parody, so how could we expect them to make the distinction with intentional imitation? Society must draw a clear line that it is not ok to do this.

tsimionescu
0 replies
18h5m

That has a much better chance of falling under fair use (parody, non-commercial) if the actors ever tried to sue.

There is a major difference between parodying someone by imitating them while clearly and almost explicitly being an imitation; and deceptively imitating someone to suggest they are associated with your product in a serious manner.

jprete
0 replies
18h36m

Those are parodies and not meant at any point for you to believe the actual Harry Potter actors were involved.

jacobolus
0 replies
18h37m

One is a company with a nearly $100 billion valuation using someone's likeness for their own commercial purposes in a large-scale consumer product, which consumers would plausibly interpret as a paid endorsement, while the other seems to be an amateur hobbyist nobody has ever heard of making a parody demo as an art project, in a way that makes it clear that the original actors had nothing to do with it. The context seems pretty wildly different to me.

I'm guessing if any of the Harry Potter actors threatened the hobbyist with legal action the video would likely come down, though I doubt they would bother even if they didn't care for the video.

bottled_poe
0 replies
18h19m

There’s a big difference between a one off replica and person-as-a-service.

BadHumans
0 replies
18h42m

Is a billion dollar AI company utilizing someone's voice against their will in a flagship product after they said no twice different from a random Youtube channel making comedy videos?

I think so but that could just be me.

__loam
3 replies
18h40m

Why do you have an issue with them taking someone's likeness to use in their product but not with them taking someone's work to use in their product?

gedy
2 replies
18h3m

Because this isn't training an audio model along with a million other voices to understand English, etc. It's clearly meant to sound exactly like that one celebrity.

I suspect a video avatar service that looked exactly like her would fall afoul of fair use as well. Though an image gen that used some images of her (and many others) to train and spit out generic "attractive blonde woman" is fair use in my opinion.

numpad0
0 replies
16h45m

Chances are this is. It's basically the same as a LoRA. One of the go-to tools for these literally uses a diffusion model and works on spectrograms as images.

__loam
0 replies
15h41m

Okay so as long as we steal enough stuff then it's legal.

citizenpaul
0 replies
12h49m

An actress that specifically played the voice of AI in a movie about AI no less.

EasyMark
6 replies
18h45m

Sure, they could have taken her to court, but right now they don't want the bad publicity, especially since it would put everything else in the shadow of such a scandalous "story". Better to just back off, let S.J. win, move on, and start planning how they're gonna spend all that paper money they got with the announcement of a new, more advanced model. It's a financial decision and a fairly predictable one. I'm glad she won this time.

__loam
3 replies
18h41m

Paper money from the model they're giving away for free?

EasyMark
2 replies
18h29m

I mean, if you don't think these kinds of positive announcements increase the value of the company or parent company, then I don't really know how to convince you, as it's a standard business principle.

ml-anon
0 replies
18h12m

There isn’t a positive announcement here, what is wrong with you?

This reads like “we got caught red handed” and doing the bare minimum for it to not appear malicious and deliberate when the timeline is read out in court.

__loam
0 replies
15h40m

I believe there's a difference between building a sustainable and profitable business and pumping the stock.

mschuster91
0 replies
11h52m

Probably (and rightfully) feared that, had Disney stuck with their position, other MCU actors would be much, much harsher in new contract negotiations - or that some would go as far and say "nope, I quit".

dragonwriter
4 replies
18h54m

If the purpose is to trade on the celebrity voice and perceived association, and it's subject to California right of publicity law, then, yes, we're saying that that has been established law for decades.

Last5Digits
3 replies
18h2m

That's not the purpose though, clearly. If anything, you could make the argument that they're trading in on the association to the movie "Her", that's it. Neither Sky nor the new voice model sound particularly like ScarJo, unless you want to imply that her identity rights extend over 40% of all female voice types. People made the association because her voice was used in a movie that features a highly emotive voice assistant reminiscent of GPT-4o, which sama and others joked about.

I mean, why not actually compare the voices before forming an opinion?

https://www.youtube.com/watch?v=SamGnUqaOfU

https://www.youtube.com/watch?v=vgYi3Wr7v_g

-----

https://www.youtube.com/watch?v=iF9mrI9yoBU

https://www.youtube.com/watch?v=GV01B5kVsC0

cowsup
1 replies
17h20m

People made the association because her voice was used in a movie that features a highly emotive voice assistant reminiscent of GPT-4o, which sama and others joked about.

Whether you think it sounds like her or not is a matter of opinion, I guess. I can see the resemblance, and I can also see the resemblance to Jennifer Lawrence and others.

What Johansson is alleging goes beyond this, though. She is alleging that Altman (or his team) reached out to her (or her team) to lend her voice, she was not interested, and then she was asked again just two days before GPT-4o's announcement, and she rejected again. Now there's a voice that, in her opinion, sounds a lot like her.

Luckily, the legal system is far more nuanced than just listening to a few voices and mentally comparing them to other voices individuals have heard over the years. They'll be able to figure out, as part of discovery, what led to the Sky voice sounding the way it does (intentionally using Johansson's likeness? coincidence? directly trained off her interviews/movies?), whether OpenAI were willing to slap Johansson's name onto the existing Sky during the presentation, whether the "her" tweet and the combination of the Sky voice was supposed to draw the subtle connection... This allegation is just the beginning.

Last5Digits
0 replies
16h54m

I honestly don't think it is a matter of opinion, though. Her voice has a few very distinct characteristics, the most significant being the vocal fry / huskiness, that aren't present at all in either of the Sky models.

Asking for her vocal likeness is completely in line with just wanting the association with "Her" and the big PR hit that would come along with that. They developed voice models on two different occasions and hoped twice that Johansson would allow them to make that connection. Neither time did she accept, and neither time did they release a model that sounded like her. The two-day run-up isn't suspicious either, because we're talking about a general audio2audio transformer here. They could likely fine-tune it (if even that is necessary) on her voice in hours.

I don't think we're going to see this going to court. OpenAI simply has nothing to gain by fighting it. It would likely sour their relation to a bunch of media big-wigs and cause them bad press for years to come. Why bother when they can simply disable Sky until the new voice mode releases, allowing them to generate a million variations of highly-expressive female voices?

om2
0 replies
13h32m

I hadn't heard the GPT-4o voice before. Comparing the video to the video of Johansson's voice in "Her", it sounds pretty similar. Johansson's performance there sounds pretty different from her normal speaking voice in the interview: more intentional emotional inflection, bubbliness, generally higher pitch. The GPT-4o voice sounds a lot like it.

From elsewhere in the thread, likeness rights apparently do extend to intentionally using lookalikes / soundalikes to create the appearance of endorsement or association.

emmp
1 replies
18h57m

We can seriously say that, yes. The courts have been saying this in the US for over 30 years. See Midler v. Ford Motor Co.

Avshalom
0 replies
18h51m

Tom Waits won a lawsuit against Doritos too.

bigfishrunning
1 replies
18h58m

It could be trained on Scarlett's voice though, there's plenty of recorded samples for OpenAI to use. It's pretty damning for them to take down the voice right away like that

brandall10
0 replies
17h22m

Her statement claims the voice was taken down at her attorney's insistence.

visarga
0 replies
14h39m

Are we seriously saying we can't make voices that are similar to celebrities, when not using their actual voice?

I think the copyright industry wants to grab new powers to counter the infinite capacity of AI to create variations. But that move would kneecap the creative industry first; newcomers have no place in a fully copyrighted space.

It reminds me of how NIMBY blocks construction to keep up the prices. Will all copyright space become operated on NIMBY logic?

callalex
0 replies
18h16m

I think we should all be held to the standard of “Weird” Al Yankovic. In personal matters consent is important.

andrewinardeer
30 replies
18h36m

I thought it sounded like Jodie Foster.

ncr100
29 replies
18h19m

Scar Jo thought it sounded like herself, and so did people who knew her personally.

That is what matters. OWNERSHIP over her contributions to the world.

smt88
21 replies
17h24m

I mostly agree with you, but I actually don't think it matters if it sounded exactly like her or not. The crime is in the training: did they use her voice or not?

If someone licenses an impersonator's voice and it gets very close to the real thing, that feels like an impossible situation for a court to settle and it should probably just be legal (if repugnant).

XorNot
7 replies
14h33m

If OpenAI commissioned a voice actor to lend their voice to the Sky model, and cast them on the basis of sounding similar to Scarlett Johansson, but then did not advertise or otherwise use the voice model created to claim it was Scarlett Johansson - then they're completely in the clear.

Because then the actual case would be fairly bizarre: an entirely separate person, selling the rights to their own likeness as they are entitled to do, is being prohibited from doing that by the courts because they sound too much like an already famous person.

EDIT: Also, up front, I'm not sure you can read much into the timeline here, given how quickly this technology can be changed out. We have voice cloning systems that can do it with as little as 15 seconds of audio. So having a demo reel of what they wanted to do that they could've used on a few days' notice isn't unrealistic - and training a model and not using or releasing it also isn't illegal.

cycomanic
5 replies
13h29m

That's confidently incorrect. Many others have already posted that this has been settled case law for many years. I mean, would you argue that someone who built a MacBook lookalike, while not using the same components, would be completely in the clear?

XorNot
4 replies
13h21m

Then I ask you: what do you call the Framework [1]? Or Dell's offerings [2]? Compared to the MacBook [3]?

They look kind of similar, right? Lots of familiar styling cues? What would take it from "similar" to actual infringement? Well, if you slapped an Apple logo on there, that would do it. Did OpenAI make an actual claim? Did they actually use Scarlett Johansson's public image and voice as sampling for the system?

[1] https://images.prismic.io/frameworkmarketplace/25c9a15f-4374...

[2] https://i.dell.com/is/image/DellContent/content/dam/ss2/prod...

[3] https://cdn.arstechnica.net/wp-content/uploads/2023/06/IMG_1...

mrbungie
2 replies
12h29m

You're not arguing your way out of jurisprudence, especially when the subject is a human and not a device nor IP. They (OpenAI) fucked up.

XorNot
1 replies
11h47m

There is not clear jurisprudence on this. They're only in trouble if they actually used ScarJo's voice samples to train the model, or if they intentionally tried to portray their imitation as her without her permission.

The biggest problem on that front (assuming the former is not true) is Altman's tweets, but court-wise that's defensible (though I retract what I had here previously - probably not easily) as a reference to the general concept of the movie.

Because otherwise the situation you have is OpenAI seeking a particular style, hiring someone who can provide it, not trying to pass it off as that person (give or take the tweets), and the intended result effectively being: "random voice actress, you sound too much like an already rich and famous person. Good luck getting any more work in your profession" - which would be the actual outcome.

The question entirely hinges on, did they include any data at all which includes ScarJo's voice samples in the training. And also whether it actually does sound similar enough - Frito-Lay went down because of intent and similarity. There's the hilarious outcome here that the act of trying to contact ScarJo is the actual problem they had.

EDIT 2: Of note also - to have a case, they actually have to show reputational harm. Of course on that front, the entire problem might also be Altman. Continuing the trend I suppose of billionaires not shutting up on Twitter being the main source of their legal issues.

einherjae
0 replies
10h57m

Are you a lawyer?

einherjae
0 replies
10h58m

Grey laptops that share some ideas in their outline while being distinct enough to not get lawyers from Cupertino on their necks?

jerojero
0 replies
13h0m

Well Sam Altman tweeted "her" so that does seem to me like they're trying to claim a similarity to Scarlett Johannson.

aseipp
3 replies
16h44m

It is not an impossible situation, courts have settled it, and what you describe is not how the law works (despite how many computer engineers think to the contrary.)

smt88
2 replies
11h49m

Courts have settled almost nothing related to AI. We don't even know if training AI on copyrighted works is a violation of copyright law.

Please point to a case where someone was successfully sued for sounding too much like a celebrity (while not using the celebrity's name or claiming to be them).

davidgerard
0 replies
10h57m

Multiple cases already answering your question in this thread.

smt88
1 replies
11h42m

That case seems completely dissimilar to what OpenAI did.

Frito-Lay copied a song by Waits (with different lyrics) and had an impersonator sing it. Witnesses testified they thought Waits had sung the song.

If OpenAI were to anonymously copy someone's voice by training AI on an imitation, you wouldn't have:

- a recognizable singing voice

- music identified with a singer

- market confusion about whose voice it is (since it's novel audio coming from a machine)

I don't think any of this is ethical and think voice-cloning should be entirely illegal, but I also don't think we have good precedents for most AI issues.

jonathankoren
0 replies
9h32m

Let me connect the dots for you.

Company identifies celebrity voice they want. (Frito=Waits, OpenAi=ScarJo)

Company comes up with novel thing for the the voice to say. (Frito=Song, OpenAI=ChatGpt)

Company decides they don't need the celebrity they want (Frito=Waits, OpenAI=ScarJo) and instead hires an impersonator (Frito=singer, {OpenAI=impersonator or OpenAI=ScarJo-public-recordings}) to get what they want (Frito=a-facsimile-of-Tom-Waits's-voice-in-a-commercial, OpenAI=a-facsimile-of-ScarJo's-voice-in-their-chatbot)

When made public, people confuse the facsimile with the real thing.

I don't see how you don't see a parallel. It's literally beat for beat the same, particularly the part about using an impersonator as an excuse.

dv_dt
2 replies
16h38m

As I understand it (though I may be wrong), in music sampling cases it doesn't matter whether the "sample" uses an actual clip from a recording or was recreated from scratch in a new medium (e.g. a direct MIDI sequence); if a song sampling another song is recognizable, it is still infringing.

parineum
1 replies
15h44m

Sampling is not the same as duplication. Sampling is allowed as a derivative work as long as it's substantially different from the original.

It's a "I know it when I see it" situation so it's not clear cut.

Findecanor
0 replies
10h41m

Oh, the day when an artist could sample other artists without attribution and royalties is long gone. The music labels are very hard on this these days.

sangnoir
0 replies
17h0m

The crime is in the training: did they use her voice or not?

This is a civil issue, and actors get broad rights to their likeness. Kim Kardashian sued Old Navy for using a look-alike actress in an ad; Old Navy chose to settle, which makes it appear that "the real actress wasn't involved in any way" may not be a perfect defense. The timeline makes it clear they wanted it to sound like Scarlett's voice; the actual mechanics of how they got the AI to sound like that is only part of the story.

randoglando
0 replies
16h46m

If someone licenses an impersonator's voice and it gets very close to the real thing, that feels like an impossible situation for a court to settle and it should probably just be legal (if repugnant).

Does that mean if cosplayers dress up like some other character, they can use that version of the character in their games/media? I think it should be equally simple to settle. It's different if it's their natural voice. Even then, it brings into question whether they can use "doppelgangers" legally.

parineum
1 replies
15h48m

She doesn't own most (all probably) of her contributions to the world.

If the voice was only trained on the voice of the character she played in Her, would she have any standing in claiming some kind of infringement?

lesostep
0 replies
5h12m

IANAL, but – I think it's most likely to be an infringement on her right of publicity (i.e. the right to control the commercial value of your name, likeness, etc.)

She doesn't have to own anything to claim this right, if the value of her voice is recognizable.

minimaxir
1 replies
17h21m

More notably for legal purposes, there were several independent news reports corroborating the vocal similarity.

sangnoir
0 replies
16h55m

...and sama's tweet referencing "Her"

dyno12345
1 replies
16h23m

I'm not sure how much legal control you currently have over imitations of your own voice. There's a whole market for voice actors who can imitate particular famous voices.

mcphage
0 replies
18h6m

Clearly Sam Altman thought it sounded like ScarJo as well :-(

chii
5 replies
16h28m

get people to think it’s the real thing.

but did openAI make any claims about whose voice this is? Just because a voice sounds similar or familiar, doesn't mean it's fraudulent.

tedivm
4 replies
16h19m

Just read the top post of the thread you're responding to-

- Not receiving a response, OpenAI demos the product anyway, with Sam tweeting “her” in reference to Scarlett’s film.
dzhiurgis
3 replies
16h2m

To me the reference sounds more like it's towards Omni than her voice

altairprime
1 replies
15h41m

What’s “omni”?

nickthegreek
0 replies
15h29m

GPT-4o is the new model; the o stands for Omni.

nickthegreek
0 replies
15h30m

That’s not a gamble they are willing to take to court of law or public opinion.

dewbrite
1 replies
13h36m

They used a sound-alike and had her sing one of her songs. I believe that sets a different precedent, in that it's leveraging her fame.

Imo Sky's voice is distinct enough from Scarlett's, and it wasn't implied to _be_ her.

Sam's "Her" tweet could be interpreted as such, but defending the tweet as a reference to the concept of "Her", rather than the voice itself, is plausible.

sleepybrett
0 replies
11h15m

Doesn't the 'her' tweet give away the game? As you said, it was a Midler impersonator singing one of Midler's songs. In this case they have a voice for their AI assistant/phone sex toy that is very much like the actress that played a famous AI assistant/phone sex toy. Even if he is taken as meaning the concept, it's very very damning. If they had, instead, mimicked another famous actor's voice that hasn't played a robot/AI/whatever and used that, would that really be any better though? Christopher Walken, say, or hell, Bette Midler?

wkat4242
3 replies
17h43m

maybe not even a hiring of an impressionist

If they really hired someone who sounds just like her, it's fair game IMO. Johansson can't own the rights to a similar voice, just like many people can have the same name. I think if there really was another actress and she just happens to sound like her, then it's really ok. And no, I'm not a fan of Altman (especially his Worldcoin, which I view as a privacy disaster).

I mean, imagine if I happened to have a similar voice to a famous actor, would that mean that I couldn't work as a voice actor without getting their OK just because they happen to be more famous? That would be ridiculous. Pretending to be them would be wrong, yes.

If they hired someone to change their voice to match hers, that'd be bad. Yeah. If they actually just AI-cloned her voice that's totally not OK. Also any references to the movies. Bad.

101008
1 replies
17h10m

But clearly they are advertising as her (no pun intended), which is a gray area.

wkat4242
0 replies
17h1m

Yeah that was the bad part. Agreed there.

I wonder if they deliberately steered towards this for more marketing buzz?

confused_boner
0 replies
17h40m

Discovery process will be interesting

sneak
3 replies
17h15m

Why are you mad? We have no rights to the sound of our voice. There is nothing wrong with someone or something else making sounds that sound like us, even if we don’t want it to happen.

No one is harmed.

ethbr1
0 replies
16h40m

I think it's a different argument with respect to famous media celebrities* too.

If someone clones a random person's voice for commercial purposes, the public likely has no idea who the voice's identity is. Consequently, it's just the acoustic voice.

If someone clones a famous media celebrity's voice, the public has a much greater chance of recognizing the voice and associating it with a specific person.

Which then opens a different question of 'Is the commercial use of the voice appropriating the real person's fame for their own gain?'

Add in the facts that media celebrities' values are partially defined by how people see them, and that they are often paid for their endorsements, and it's a much clearer case that (a) the use potentially influenced the value of their public image & (b) the use was theft, because it was taking something which otherwise would have had value.

Neither consideration exists with 'random person's voice' (with deference to voice actors).

* Defined as 'someone for whom there is an expectation that the general public would recognize their voice or image'

mkehrt
0 replies
17h14m

Are you sure? You certainly have rights to your likeness--it can't be used commercially without permission. Do you know that this doesn't cover your voice?

barbariangrunge
3 replies
13h4m

Everyone is so mad about them stealing a beloved celebrity’s voice. What about the millions of authors and other creators whose copyrighted works they stole to create works that resemble and replace those people? Not famous enough to generate the same outrage?

surfingdino
1 replies
11h44m

Welcome to the world where the "fuck the creatives" brigade wants everything for free.

consf
0 replies
4h43m

A mentality that devalues creative work

creato
0 replies
11h1m

I think the unique thing about this case is not specifically the "voice theft", but that OpenAI specifically asked for permission and were denied, which eliminates most of the usual plausible deniability that gets trotted out in these cases.

ekam
2 replies
19h12m

Same here and that voice really was the only good one. I don't know why they don't bring the voices from their API over, which are all much better, like Nova or Shimmer (https://platform.openai.com/docs/guides/text-to-speech)

sanxiyn
1 replies
19h9m

I think because it is not text-to-speech. It probably isn't simple to transfer.

zwily
0 replies
5h56m

The previous chatgpt voice mode uses text to speech, and has different voices than the OpenAI API. Seemed weird to me too.

kapildev
1 replies
19h4m

I can still access the sky voice even though it is supposed to be "yanked".

thorum
0 replies
18h59m

There’s still a Sky option but the actual voice has been changed.

al_borland
0 replies
12h40m

I had to go look at what voice I picked once I heard the news, it was Sky. I listened to them all and thought it sounded the best. I didn’t make any connection to her (Scar Jo or the movie) when going through the voices, but I wasn’t listening for either. I don’t think I know her voice well enough to pick it out of a group like that.

Maybe I liked it best because it felt familiar, even if I didn’t know why. I’m a bit disappointed now that she didn’t sign on officially, but my guess is that Altman just burned his bridge to half of Hollywood if he is looking for a plan B.

IncreasePosts
19 replies
18h29m

I wouldn't necessarily call that damning. "Soundalikes" are very common in the ad industry.

For example, a car company approached the band Sigur Rós to include some of their music in a car commercial. Sigur Rós declined. A few months later the commercial aired with a song that sounds like an unreleased Sigur Rós song, but really they just paid a composer to make something that sounds like Sigur Rós but isn't. So maybe OpenAI just had a random lady with a voice similar to Scarlett's do the recording.

Taking down the voice could just be concern for bad press, or trying to avoid lawsuits regardless of whether you think you are in the right or not. Per this* CNN article:

Johansson said she hired legal counsel, and said OpenAI “reluctantly agreed” to take down the “Sky” voice after her counsel sent Altman two letters.

So, Johansson's lawyers probably said something like "I'll sue your pants off if you don't take it down". And then they took it down. You can't use that as evidence that they are guilty. It could just as easily be the case that they didn't want to go to court over this even if they thought they were legally above board.

* https://www.cnn.com/2024/05/20/tech/openai-pausing-flirty-ch...

ProjectArcturis
8 replies
16h31m

Case law says no.

There have been several legal cases where bands have sued advertisers for copying their distinct sound. Here are a few examples:

The Beatles vs. Nike (1987): The Beatles' company, Apple Corps, sued Nike and Capitol Records for using the song "Revolution" in a commercial without their permission. The case was settled out of court.

Tom Waits vs. Frito-Lay (1988): Tom Waits sued Frito-Lay for using a sound-alike in a commercial for their Doritos chips. Waits won the case, emphasizing the protection of his distinct voice and style.

Bette Midler vs. Ford Motor Company (1988): Although not a band, Bette Midler successfully sued Ford for using a sound-alike to imitate her voice in a commercial. The court ruled in her favor, recognizing the uniqueness of her voice.

The Black Keys vs. Pizza Hut and Home Depot (2012): The Black Keys sued both companies for using music in their advertisements that sounded remarkably similar to their songs. The cases were settled out of court.

Beastie Boys vs. Monster Energy (2014): The Beastie Boys sued Monster Energy for using their music in a promotional video without permission. The court awarded the band $1.7 million in damages.

IncreasePosts
6 replies
15h14m

Disregarding the cases settled out of court (which have nothing to do with case law):

1) Tom Waits vs Frito-Lay: Frito-Lay not only used a soundalike to Tom Waits, but the song they created was extremely reminiscent of "Step Right Up" by Waits.

2) Bette Midler vs. Ford Motor Company: Same thing - this time Ford literally had a very Midler-esque singer sing an exact Midler song.

3) Beastie Boys vs. Monster Energy: Monster literally used the Beastie Boys' music, because someone said "Dope!" when watching the ad and someone at Monster took that to mean "Yes you can use our music in the ad".

Does Scarlett Johansson have a distinct enough voice that she is instantly recognizable? Maybe, but, well, not to me. I had no clue the voice was supposed to be Scarlett's, and I think a lot of people who heard it also didn't think so either.

Cheer2171
3 replies
15h7m

Does Scarlett Johansson have a distinct enough voice that she is instantly recognizable?

It absolutely is if you've seen /Her/. It even nails her character's borderline-flirty cadence and tone in the film.

underlogic
1 replies
14h26m

Actually until recently I thought the voice actor for "her" was Rashida Jones

locusofself
0 replies
13h22m

I definitely thought "Sky" was Rashida Jones. I still do.

Y_Y
0 replies
7h40m

I've seen Her and the similarity of the voice didn't occur to me until I read about it. I guess it wasn't super distinct in the movie. Maybe if they'd had Christopher Walken or Shakira or someone with a really distinctive sound it would have been more memorable and noticable to me.

pseudalopex
0 replies
13h39m

The Midler v. Ford decision said her voice was distinctive. Not the song.

parpfish
0 replies
13h10m

And in an interesting coincidence: ScarJo recorded a Tom Waits cover album in 2008

mycologos
0 replies
15h19m

Sucks that he had to do it, but the notion of Tom Waits making Rain Dogs and then pivoting to spending a bunch of time thinking about Doritos must be one of the funnier quirks of music history.

romwell
3 replies
18h15m

Sorry, that's apples-to-pizzas comparison. You're conflating work and identity.

There's an ocean of difference between mimicking the style of someone's art in an original work, and literally cloning someone's likeness for marketing/business reasons.

You can hire someone to make art in the style of Taylor Swift, that's OK.

You can't start selling Taylor Swift figurines by the same principle.

What Sam Altman did, figuratively, was giving out free T-Shirts featuring a face that is recognized as Taylor Swift by anyone who knows her.

IncreasePosts
2 replies
17h42m

But they aren't doing anything with her voice(allegedly?). They're doing something with a voice that some claim sounds like hers.

But if it isn't, then it is more like selling a figurine called Sally that happens to look a lot like Taylor Swift. Sally has a right to exist even if she happens to look like Taylor Swift.

Has there ever been an up and coming artist who was not allowed to sell their own songs, because they happened to sound a lot like an already famous artist? I doubt it.

romwell
0 replies
9h24m

TL;DR: This question had already been settled in 2001 [3]:

The court determined that Midler should be compensated for the misappropriation of her voice, holding that, when "a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs and have committed a tort in California."

I hope there's going to be no further hypotheticals after this.

-----

They're doing something with a voice that some claim sounds like hers.

Yes, that's what a likeness is.

If you start using your own paintings of Taylor Swift in a product without her permission, you'll run afoul of the law, even though your painting is obviously not the actual Taylor Swift, and you painted it from memory.

But if it isn't, then it is more like selling a figurine called Sally that happens to look a lot like Taylor Swift. Sally has a right to exist even if she happens to look like Taylor Swift.

Sally has a right to exist, not the right to be distributed, sold, and otherwise used for commercial gain without Taylor Swift's permission.

California Civil Code Section 3344(a) states:

Any person who knowingly uses another’s name, voice, signature, photograph, or likeness, in any manner, on or in products, merchandise, or goods, or for purposes of advertising or selling, or soliciting purchases of, products, merchandise, goods or services, without such person’s prior consent, or, in the case of a minor, the prior consent of his parent or legal guardian, shall be liable for any damages sustained by the person or persons injured as a result thereof.

Note the word "likeness".

Read more at [1] on Common Law protections of identity.

Has there ever been an up and coming artist who was not allowed to sell their own songs, because they happened to sound a lot like an already famous artist? I doubt it.

Wrong question.

Can you give me an example of an artist which was allowed to do a close-enough impersonation without explicit approval?

No? Well, now you know a good reason for that.

Tribute bands are legally in the grey area[2], for that matter.

[1] https://www.dmlp.org/legal-guide/california-right-publicity-...

[2] https://lawyerdrummer.com/2020/01/are-tribute-acts-actually-...

[3] https://repository.law.miami.edu/cgi/viewcontent.cgi?article...

airstrike
0 replies
16h53m

The detail you're missing is that those who claim it sounds like "her" include the CEO of the company.

spuz
2 replies
18h6m

The damning part is that they tried to contact her and get her to reconsider their offer only 2 days before the model was demoed. That tells you that, at the very least, they felt either a moral or a legal obligation to get her to agree to their release of the model.

IncreasePosts
1 replies
18h3m

Or, they wanted to be able to say, yes, that is "her" talking to you.

I have no idea if they really used her voice, or it is a voice that just sounds like her to some. I'm just saying openai's behavior isn't a smoking gun.

johnnyanmac
0 replies
11h43m

a reference to an object or fact that serves as conclusive evidence of a crime or similar act, just short of being caught in flagrante delicto.

If this isn't a smoking gun, I don't know what is.

I think people forget the last part of the definition, though. A smoking gun is about as close as you get without having objective, non-doctored footage of the act. There's a small chance the gun is a red herring, but it's still suspicious.

dragonwriter
2 replies
18h17m

I wouldn't necessarily call that damning. "Soundalikes" are very common in the ad industry.

As are disclaimers that celebrity voices are impersonated when there is additional context which makes it likely that the voice would be considered something other than a mere soundalike, like direct reference to a work in which the impersonated celebrity was involved as part of the same publicity campaign.

And liability for commercial voice appropriation, even by impersonation, is established law in some jurisdictions, including California.

IncreasePosts
1 replies
17h37m

The most famous case of voice appropriation was Midler vs Ford, which involved Ford paying a Midler impersonator to perform a well known Midler song, creating the impression that it was actually Bette.

Where are the signs or symbols tying Scarlett to the OpenAI voice? I don't think a single-word, contextless message on a separate platform that 99% of OpenAI users will not see is significant enough to form that connection in users' heads.

pseudalopex
0 replies
15h22m

The Midler v. Ford decision said her voice was distinctive. Not the song.

The replies to Altman's message showed readers did connect it to the film. And people noticed the voice sounded like Scarlett Johansson and connected it to the film when OpenAI introduced it in September.[1]

How do you believe Altman intended people to interpret his message?

[1] https://www.reddit.com/r/ChatGPT/comments/177v8wz/i_have_a_r...

npunt
18 replies
19h5m

When people cheat on (relatively) small things, it's usually an indication they'll cheat on big things too

ncr100
4 replies
18h14m

Stealing someone's identity is indeed one of those "big things".

sneak
3 replies
17h13m

Impersonating someone’s voice isn’t stealing anything, and certainly not their identity.

iainctduncan
1 replies
15h33m

If they are a celebrity actor it sure is.

hn_user82179
0 replies
2h20m

Voice actors especially. Voice acting isn't what ScarJo is really "known for" but she's done a ton of work for animated films so her distinct voice really is a part of her livelihood.

LewisVerstappen
4 replies
17h49m

How did they even cheat here?

OpenAI did nothing wrong.

The movie industry does the same thing all the time. If an actor/actress says no they you find someone else who can play the same role.

tjmc
1 replies
17h20m

Nothing? If you're acting like the sea witch in "The Little Mermaid" you're probably doing something wrong.

ramenbytes
0 replies
15h17m

Key difference here is that Scarlett still has her voice.

ramenbytes
0 replies
15h13m

I don't think that's quite the same. Are they going out and hiring impersonators of the actors who declined the role or digitally enhancing the substitute to look like them? That seems closer to what happened here.

falloutx
0 replies
6h33m

If they are so "Open" they should reveal their training data which created this voice. I am sure it is just movie audio from S. Johansson's movies.

slg
2 replies
18h29m

Which is what makes me wonder if this might grow into a galvanizing event for the pro-creator protests against these AI models and companies. What happened here isn't particularly unique to voices or even Scarlett Johansson, it is just how these companies and their products operate in general.

bakuninsbart
1 replies
11h54m

I think the only way for these protests to get really tangible results is in case we reach a ceiling in LLM capabilities. The technology in its current trajectory is simply too valuable both in economic and military applications to pull out of, and "overregulation" can be easily swatted citing national security concerns in regards to China. As far as I know, China has significantly stricter data and privacy regulations than the US when it comes to the private sector, but these probably count for little when it comes to the PLA.

andy_ppp
0 replies
8h19m

We have almost run out of training data already so I’m not convinced they will get massively more generalised suddenly. If you give them reasoning tasks they haven’t seen before LLMs absolutely fall apart and produce essentially gibberish. They are currently search engines that give you one extremely good result that you can refine up to a point, they are not thinking even though there’s a little bit more understanding than search engines of the past.

iosjunkie
2 replies
18h43m

I would love to see the providence of their training data.

ojbyrne
0 replies
18h36m

I think you want the word “provenance.”

blackeyeblitzar
0 replies
17h56m

We need laws where companies are forced to reveal source of personal data. Like how did XYZ company get my contact info to spam me?

sneak
0 replies
17h14m

Who cheated whom? Out of what?

nwoli
0 replies
18h35m

OpenAI only hires and is built on the culture that data and copyright is somehow free for the taking, otherwise they would have zero ways to make a profit or “build agi”

minimaxir
12 replies
18h14m

Given the timeline, I’m still baffled Sam Altman tweeted “her.” That just makes plausible deniability go away for a random shitpost.

prepend
9 replies
17h38m

I thought it was about functionality more than the specific voice.

minimaxir
7 replies
17h31m

I suspect that'll be OpenAI's defense.

krisoft
6 replies
16h32m

That is what discovery is for, if this ever gets to that phase.

Someone from OpenAI hired the agency who hired the voice talent (or talents) for the voice data. They sent them a brief explaining what they were looking for, followed by a metric ton of correspondence over samples and contracts and such.

If anywhere during those written communications anyone wrote "we are looking for a ScarlettJ imitator", or words to that effect, that is not good for OpenAI. Similarly if they were selecting between options and someone wrote that one sample is more Johansson than another. Or if anyone at any point asked if they should clear the rights to the voice with Johansson.

Those are the discovery findings which can sink such a defense.

unraveller
2 replies
12h35m

Discovery works both ways. The original Her voice actress[1] was recast to someone more SoCal in post-production, so there is evidence of the flirty erotic AI style itself not being a unique enough selling point.

It will come down to what makes the complaining celebrity's voice iconic, which for ScarJo is the "gravelly" bit. Smooth Sky had none of that.

[1] actress reading poem: https://www.youtube.com/watch?v=eWEEAjRFJKc

krisoft
1 replies
6h45m

Discovery works both ways.

Ok? What materials would you suspect discovery can uncover from Scarlett or her team?

was recast to someone more SoCal in post-production

Was recast to Scarlett Johansson. Hardly a good argument if you want to argue that her voice is not unique.

unraveller
0 replies
3h46m

I argued that Scarlett Johansson's natural voice is iconic and unique, and that it is not apparent in the Sky voice. The "new" intoned voice of Sky 4o is receiving most claims of likeness ripoff, but this intoning part is not unique to ScarJo either, since the Her production already had a version of it with the other VA. I doubt Scarlett will suggest she was playing herself without creative direction from the Her team.

Are you arguing that Her performance is not the resembling factor here but simply the natural voice of ScarJo at rest, disregarding Her?

prepend
2 replies
16h13m

I recall the basics from my contracts law class that it’s not against the law to hire an impersonator as long as you don’t claim it’s the celebrity.

So it’s legal to hire someone who sounds like SJ. And likely legal to create a model that sounds like her. But there will likely need to be some disclaimer saying it’s not her voice.

I expect that OpenAI’s defense will be something like “We wanted SJ. She said no, so we made a voice that sounded like her but wasn’t her.” It will be interesting to see what happens.

staticman2
0 replies
2h11m

Odd. This wouldn't typically be covered in a contracts class, since it's an intellectual property issue beyond the scope of such a class. Scarlett Johansson didn't have a contract with OpenAI, after all.

What you recall also doesn't sound correct given right of publicity laws.

pseudalopex
0 replies
13h31m

Bette Midler and Tom Waits won cases where the companies didn't claim the impersonators were them.

jrflowers
0 replies
11h41m

What part of the functionality from the movie Her did you think it meant?

mvdtnz
0 replies
16h3m

The same egomaniac tendencies that cause people like Elon Musk or Paul Graham to post the first dumbass thing that comes to their mind because they think everyone absolutely has to see how smart and witty they are.

brown9-2
0 replies
17h17m

Some people are just addicted to posting

MrMetlHed
8 replies
19h18m

Would love to see this get far enough for discovery to see how that all played out behind the scenes.

hehdhdjehehegwv
7 replies
18h56m

They’ll settle as soon as they figure that out. Idiot tax.

ml-anon
5 replies
18h8m

She has no incentive to settle and actually could win big by being the figurehead of the creative industry against AI. It’s understandable why she accepted a settlement from Disney, but there’s no reason why she should settle with a random startup that has no other influence on her employability in Hollywood.

hehdhdjehehegwv
3 replies
15h10m

OpenAI will settle, not sure how you read that in reverse.

eurleif
2 replies
15h9m

Settling isn't unilateral. OpenAI can offer to settle, but if she doesn't accept, there will be no settlement.

hehdhdjehehegwv
1 replies
12h38m

I also said “they” instead of “her”, I’m confused as to why anybody misinterpreted what I said.

ml-anon
0 replies
11h36m

Everyone knows what you meant. But it’s not up to them to “settle”. If she brings forward a formal complaint they can offer to but she has no obligation or incentive to accept.

This may turn out to be something they can’t just buy their way out of with no other consequences.

bradchris
0 replies
55m

She has a net worth of $100-$200m dollars. I doubt she wants $5-10m more.

OpenAI cannot hurt her standing in the industry — in fact, "ScarJo takes on Big Tech and wins", in an era after the Hollywood unions called a strike and won protections from studios using generative AI for exactly this scenario, is ironically probably one of the best things she can do for her image right now.

She is also one of the most litigious actresses in the industry, taking on Disney and winning what’s estimated to be 8 figures.

Good luck OpenAI!

og_kalu
6 replies
18h59m

- Two days before the GPT-4o launch, they contacted her agent and asked that she reconsider. (Two days! This means they already had everything they needed to ship the product with Scarlett’s cloned voice.)

The new voice mode is a speech-predicting transformer. "Voice cloning" could be as simple as appending a sample of the voice to the context and instructing it to imitate it.

jprete
5 replies
18h32m

If they really did that then (A) it's not much better (B) they didn't even wait for an answer from Johansson (C) it's extraordinarily reckless to go from zero to big-launch feature in less than two days.

og_kalu
4 replies
18h24m

(A) it's not much better

OP seems to be on the "They secretly trained on her voice" train. The only reason "Two Days!" would be damning is if a finetune was in order to replicate ScarJo's voice. In that sense, it's much better.

(C) it's extraordinarily reckless to go from zero to big-launch feature in less than two days.

OpenAI has launched nothing. There's no date for the new voice mode other than "alpha testing in the coming weeks to Plus users". No one has access to it yet.

lolinder
3 replies
18h0m

I'm really confused by this claim. How is it that so many people tried this Sky voice if no one has access yet?

og_kalu
1 replies
17h48m

There is a voice mode in the GPT app that's been out (even for free users) for nearly a year now. There are a couple of voices to choose from, and Sky was one of them.

This mode works entirely differently from what OpenAI demoed a few days ago (the new voice mode), but both seem to utilize the same base Sky voice. All this uproar is from the demos of new Sky, which sounds like old Sky but is a lot more emotive, laughs, a bit flirty, etc.

tnias23
0 replies
13h51m

Idk the voice in the 4o demo and the existing Sky voice seemed quite different to me. And Scarlett’s letter says she and her friends were shocked when they heard the 4o demo. This whole situation is about the newly unveiled voice in the demo. It’s different.

jonpo
0 replies
12h44m

Sky voice is old

benreesman
6 replies
14h41m

I know I have a reputation as an OpenAI hater and I understand why: it’s maybe 5-10% of the time that the news gives me the opportunity to express balance on this.

But I’ve defended them from unfair criticism on more than a few occasions and I feel that of all the things to land on them about this one is a fairly mundane screwup that could be a scrappy PM pushing their mandate that got corrected quickly.

The leadership for the most part scares the shit out of me, and clearly a house-cleaning is in order.

But of all the things to take them to task over? There’s legitimately damning shit this week, this feels like someone exceeded their mandate from the mid-level and legal walked it back.

crznp
5 replies
14h16m

It really doesn't sound like a "mid-level exceeding their mandate".

It sounds like Altman was personally involved in recruiting her. She said no and they took what they wanted anyway.

benreesman
4 replies
14h4m

It feels weird to be defending Altman, but those of us who go hard on the legitimately serious shit are held to a high standard on being fair, and while multiple sources have independently corroborated, e.g., the plainly unethical and dubiously legal NDA shit Vox just reported on, his links to this incident seem thinly substantiated.

I’m not writing the guy a pass, he’s already been fired for what amount to ethics breaches in the last 12 months alone. Bad actor any way you look at it.

But I spent enough time in BigCo land to know stuff like this happens without the CEO’s signature.

I’d say focus on the stuff with clear documentary evidence and or credible first-hand evidence, there’s no shortage of that.

I get the sense this is part of an ambient backlash now that the dam is clearly breaking.

Of all the people who stand to be harmed by that leadership team, I think Ms. Johansson (of whom I am a fan) is more than capable of seeing her rights and privileges defended without any help from this community.

om2
0 replies
13h22m

There’s more evidence of Altman being personally involved in this incident than in him being personally involved in the OpenAI exit agreement, and he has denied the latter. I’m not sure I believe his denial in the latter case.

Having an NDA in exit terms you don’t get to see until you are leaving that claim ability to claw back your vested equity if you don’t agree seems more severely unethical, to be sure. But that doesn’t mean there’s more reason to blame it on Altman specifically. Or perhaps you take the stance that it reflects on OpenAI and their ethics whether or not Altman was personally involved, but then the same applies to the voice situation.

ml-anon
0 replies
8h38m

Scarlett Johansson literally mentioned that Sam personally reached out to her team.

The CTO was on stage presenting the thing and the CEO was tweeting about it.

Please explain for us which part of this is happening without the CEO's signature.

Of everyone who has been harmed and had their work stolen or copyright infringed by Sam's team, Scarlett Johansson is the one person (so far) who can actually force the issue and a change, and so the community is right to rally behind her because if they're so brazen about this, it paints a very clear picture of the disdain they hold the rest of us in.

gunsle
0 replies
13h39m

You’re allegedly “not writing the guy a pass” but then you go on to do so anyways. If Johansson isn't lying and Altman did personally reach out to her, I really don’t see how you can even attempt to argue this is some middle manager gunning for a promotion. In the same way you’re complaining that she needs no help from the community in defending herself, Altman needs no help from you reaching this hard. Like how can you not see that hypocrisy?

crznp
0 replies
13h24m

If this isn't the thing that makes your blood boil, that's fine. The world could probably do with less boiling blood, and it is still early, more evidence may come out. However, she indicated in her statement that Altman asked her, not OpenAI. It seems credible that he would want to be involved.

Both sides of the story feel like we're slowly being brought to a boil: Sutskever's leaving feels like it was just a matter of time. His leaving causing a mess seems predictable. Perhaps I am numb to that story.

But stealing a large part of someone's identity after being explicitly told not to? This one act is not the end of the world, but feels like an acceleration down a path that I would rather avoid.

ProjectArcturis
5 replies
16h36m

I'm beginning to think this Sam Altman guy isn't so trustworthy.

disqard
2 replies
16h10m

You beat me to it.

Bon mots apart, he really appears to have an innate capacity for betrayal.

throwaway635383
0 replies
10h39m

Was there something in the water? Lots of rumbles around the early OpenAI members and questionable behavior.

owlninja
0 replies
15h29m

And he must be a helluva pitchman, given the weird fired/hired debacle. Mixed with some of the resignations that made the HN front page recently, apparently anyone leaving OpenAI signed away the right to speak. I even find it odd that their statement says they hired a voice actress, but they want to protect her privacy? Seems like a helpful alibi if true, or likely, said actress has signed an agreement to never reveal she worked with OpenAI.

insane_dreamer
0 replies
12h40m

that took a while ;)

0xDEAFBEAD
0 replies
12h57m

And perhaps not consistently candid either.

spullara
4 replies
17h18m

The voice was shipped last september.

burntalmonds
3 replies
14h8m

Do you know if the voice was the same back in september?

tnias23
0 replies
13h46m

Scarlett says she was shocked to hear the voice in the 4o demo, and they had requested her consent (for the 2nd time) 2 days prior to the demo. If that demo voice was the same as the existing Sky voice, this wouldn’t be happening.

spullara
0 replies
14m

It was essentially the same; the new one is definitely more human-like. I have been calling it ScarJo since then.

    Sam Pullara @sampullara
    If you have the ChatGPT app, set it to the Sky voice and talk to it. It is definitely a clone of ScarJo from Her.
    6:21 PM · Dec 13, 2023
https://x.com/sampullara/status/1735122897663094853

exitb
0 replies
13h14m

The voices were available for some time for the ChatGPT TTS model, but it seems that they reused them for the 4o audio output, which sounds significantly more human-like. I’ve heard the Sky voice before and never made the connection. I did think of Johansson though during the live demo, as the voice + enhanced expressiveness made it sound much like the movie Her.

nox101
4 replies
11h27m

To each their own. I personally didn't get Scarlett Johansson vibes from the voice on the GPT-4o demo (https://openai.com/index/hello-gpt-4o/) even though I'm a huge fan of hers (loved Her, loved Jojo Rabbit, even loved Lucy, and many many others) and have watched those and others multiple times. I'd even say I have a bit of a celebrity crush.

To me it's about as close to her voice as saying "It's a woman's voice". Not to say all women sound alike, but the sound I heard from that video above could maybe best be described as "generic peppy female American spokesperson voice".

Even listening to it now with the suggestion that it might sound like her I don't personally hear Scarlett Johansson's voice from the demo.

There may be some damning proof where they find they sampled her specifically, but saying they negotiated and didn't come to an agreement is not proof that it's supposed to be her voice. Again, to me it just sounds like a generic voice. I've used the version before GPT-4o and I never got the vibe it was Scarlett Johansson.

I did get the "Her" vibe, but only because I was talking to a computer with a female voice and it was easy to imagine that something like "Her" was in the near future. I also imagined or wished that it was Majel Barrett from ST:TNG, if only because the computer on ST:TNG gave short and useful answers whereas ChatGPT always gives long-winded, repetitive, annoying answers.

GaryNumanVevo
3 replies
8h20m

OpenAI confirmed it by removing the voice immediately after Johansson's lawyers reached out

nox101
2 replies
6h57m

That's not confirmation. That's called prudence.

wasteduniverse
1 replies
6h3m

Ask to use Alice's likeness in my product, get declined. Ask again. Debut product, users say "this sounds a lot like Alice". The day my product releases, I make a public statement referencing a movie Alice starred in. When Alice asks how I made my product, I delete all access to the product.

How else am I supposed to interpret this?

dewbrite
0 replies
44m

"better safe than sorry"

- any legal team, ever, in the history of the universe

cjbgkagh
4 replies
17h20m

AFAIK they yanked it pretty quickly and the subsequent scandal has widely informed people that it was not authorized by Scarlett Johansson. So while it was clearly a violation resulting from a sequence of very stupid decisions by OpenAI, I am not sure if there would be much in the way of damages.

ml-anon
1 replies
8h49m

Johansson is rich. The real value she could get from this would be as an advocate for the rights of creatives, performers and rights holders in the face of AI. If this goes to discovery OpenAI is done.

How much do you think Disney or Universal Music or Google or NYT would give to peek inside OpenAI's training mixture to identify all the infringing content?

cjbgkagh
0 replies
4h21m

Not a lawyer, but as far as I understand: certainly, the lack of significant damages would usually prevent a lawsuit because there wouldn't be any money in it, but since the amount of damages is figured out during the trial and the lawsuit alone would be damaging to OpenAI, the threat of a lawsuit could be enough for OpenAI to try to settle quickly out of court. Though OpenAI would have to weigh that against encouraging other similar lawsuits - they might think that dragging out the lawsuit could be better for them, as they can continue establishing a foothold in AI in the interim even if they know they will lose and know that discovery would be damaging.

As you state, Mrs. Johansson would benefit reputationally from the lawsuit and could act as a front for other powerful institutions who would greatly benefit from it.

BeefWellington
1 replies
12h17m

In cases like this, don't damages essentially equate to the profit a company makes from the false association with the celebrity?

Otherwise, it'd be impossible to show damages if you weren't personally being denied business because of the association.

cjbgkagh
0 replies
3h52m

That is one way to measure damages, but it's difficult. It would be easier to measure what she would have been paid if she had agreed to do it. Mrs. Johansson is in the business of selling rights to use her likeness for large amounts of money - it would be easy to demonstrate that she would have been paid at least as much as for the use of her voice in the movie Her. Since she did decline the offer, it would be easy to suggest that the amount she would have agreed to would have been much more than for the movie Her.

The other aspect is that OpenAI's infringement is open-ended, in that it is not equivalent to a single use within a single movie; it is substantially more. People who want Mrs. Johansson to voice their projects could now instead use OpenAI to obtain the same likeness - so the damages are for the deprivation of all the future earnings Mrs. Johansson could have made.

Then there is reputational damage: if someone used OpenAI to generate and disseminate a voice message before committing a heinous crime, the public would associate the voice with the crime, and Mrs. Johansson would be intrinsically linked to it. People hearing her voice would be reminded of the crime. This would again prevent Mrs. Johansson from making money from her likeness, but would also negatively impact all aspects of her life and all future earnings, business dealings, and personal life.

Usually the initial remedy is an injunction to prevent further infringement, but OpenAI yanked it immediately, so no injunction was needed. One way to repair reputational damage is to pay for news media to widely and publicly correct the reputationally damaging falsehood. As this has been a substantial scandal, that media coverage has already been given for free, and it is unlikely that there is anyone left who still believes that OpenAI is using Mrs. Johansson's likeness with either tacit or explicit permission.

As stated by your peer comment, there are still reasons for Mrs. Johansson to bring a lawsuit for damages even where there are no significant damages - the lawsuit itself would be damaging to OpenAI, so it would be in OpenAI's interest to pay to avoid it. At the same time it would be in other large companies' interests for it to continue, so there may be a bidding war on whether or not Mrs. Johansson continues with the lawsuit. In addition, it would benefit Mrs. Johansson personally and professionally to be seen as a champion of the rights of artists - especially at a time when AI companies like OpenAI are trampling all over those rights.

sangupta
3 replies
18h43m

With the recent departures at OpenAI, it seems that all ethics and morals are going down the drain and OpenAI is becoming the big bully.

hyperhopper
2 replies
14h49m

There were never any. None of the models or code are actually open. It claims to be a nonprofit but is effectively a for-profit company pulling the strings of a nonprofit just to avoid taxes.

sangupta
0 replies
14h12m

I guess you are right. The era of "don't be evil" if there ever was one, is now long gone and forgotten.

lambdaxyzw
0 replies
9h20m

There were never any.

This is a bit unfair. Some people left OpenAI on the ground of ethics, because they were unsatisfied with how this supposed nonprofit operates. The ethics was there, but OpenAI got rid of it.

toss1
2 replies
16h1m

ChatGPT is way better than to need stupid ripoffs like this

Sam should be ashamed to have ever thought of ripping off anyone's voice, let alone done it and rolled it out.

They are building some potentially world-changing technology, but cannot rise above being basically creepy rip-off artists. Einstein was right about requiring improved ethics to meet new challenges, and also that we are not meeting that requirement.

sad to see

swiftcoder
0 replies
8h3m

It's part and parcel of the LLM field's usual disdain for any property rights that might belong to other people. What they did here is not categorically different than scraping every author and visual artist on the internet - but in this instance they've gone and brazenly "copied" (read: stolen) from one of the few folks with more media clout than they themselves have.

latexr
0 replies
5h41m

ChatGPT is way better than to need stupid ripoffs like this

Of course it’s not. All of ChatGPT is a ripoff. That’s what training data (which they did not license) is.

spuz
2 replies
17h52m

It's also worth noting that Sam Altman admitted he had only used GPT-4o for one week before it was released. It's possible that in the rush to release before Google's I/O event, they realised the voice's likeness to Scarlett Johansson way too late, hence the last-minute contact with her agent.

https://www.youtube.com/watch?v=fMtbrKhXMWc

mrbungie
0 replies
12h24m

The "Sky" voice and its likeness to SJo's have been in the ChatGPT app for months.

emsign
0 replies
13h27m

Then asking Johansson for permission months before was pure coincidence?

rvz
2 replies
18h38m

Almost as if they knew that they cloned her voice without her permission.

Don't hear any arguments on how this is fair-use. (It isn't)

Why? Because everyone (including OpenAI) knows it clearly isn't fair-use even after pulling the voice.

dragonwriter
1 replies
18h36m

Don't hear any arguments on how this is fair-use.

Why?

Because it's a right of personality issue, not copyright, and there is no fair use exception (the definition of the tort is already limited to a subset of commercial use which makes the Constitutional limitation that fair use addresses in copyright not relevant, and there is no statutory fair use exception to liability under the right of personality.)

kragen
0 replies
18h30m

you're doing god's work; ignorance is never-ending

dontupvoteme
2 replies
7h10m

Could they have made it look less like Midler vs Ford?

"Midler was asked to sing a famous song of hers for the commercial and refused. Subsequently, the company hired a voice-impersonator of Midler and carried on with using the song for the commercial, since it had been approved by the copyright-holder. Midler's image and likeness were not used in the commercial but many claimed the voice used sounded impeccably like Midler's."

As a casual mostly observer of AI, even I was aware of this precedent

OkGoDoIt
1 replies
5h11m

What was the result of that? Did Ford or Midler end up winning?

bradchris
0 replies
1h2m

Midler won, it’s a cornerstone case in protecting image/likeness.

In tech we’re used to IP law. In entertainment, there is unsurprisingly a whole area of case law on image and likeness.

Tech will need to understand this—and the areas of domain specific case law in many, many other fields—if AI is really to be adopted by the entire world.

rlt
1 replies
12h2m

I don’t think that’s quite right.

OpenAI first demoed and launched the “Sky” voice in November last year. The new demo doesn’t appear to have a new voice.

I doubt it would take them long to prepare a new voice, and who’s to say they wouldn’t delay the announcements for a ScarJo voice?

A charitable interpretation of the “her” tweet would be a comparison to the conversational and AI capabilities of the product, not the voice specifically, but it’s certainly not a good look.

GaggiX
0 replies
11h18m

I believe that "Sky" voice was first released in September last year and according to the blog post released by OpenAI they were working with "Sky" voice actress months before even contacting Scarlett Johansson for the first time.

arvinsim
1 replies
16h54m

Does it really cost a lot to train one voice?

Seems pretty reckless to not have alternatives just in case Scarlett refused.

numpad0
0 replies
14h47m

Probably 5-10 minutes worth of dataset and GPU time for finetuning on an existing base model. Could be done on a Blu-ray rip or an in-person audition recording, legality and ethics aside.

__loam
1 replies
18h43m

The tweet is so fucking brazen lol

RCitronsBroker
0 replies
12h12m

Yeah, that was just poking the hornet's nest. Even if I wasn't mad enough to make a stink over my voice before, plausible deniability and all, that would've sealed the deal for me.

LewisVerstappen
1 replies
17h48m

They approached Johansson and she said no. They found another voice actor who sounds slightly similar and paid her instead.

The movie industry does this all the time.

Johansson is probably suing them so they're forced to remove the Sky voice while the lawsuit is happening.

I'm not a fan of Sam Altman or OpenAI but they didn't do anything wrong here.

falloutx
0 replies
6h29m

Then they should credit that actress and we can see if it's legit; otherwise we'll believe they used copyrighted audio from S. Johansson's movies.

Havoc
1 replies
18h55m

OpenAI yanked the voice from their product line.

Still live for me? Unless the Sky I’m getting is a different one?

cjbillington
0 replies
13h18m

It is. They didn't remove the UI option, they just swapped it out under the hood for the "juniper" voice.

Aeolun
1 replies
12h25m

What I don’t understand is what they expected to happen?

Apparently they had no confidence in defending themselves, so why even release with the voice in the first place?

unraveller
0 replies
8h52m

They underestimated how quickly people would take off the headphones and jump on the bandwagon to claim affinity with an injured celebrity.

Are you suggesting they should have engineered the voice actress' voice to be more distinct from another actress they were considering for the part? Or just not gone near it with a 10ft pole? Because if the latter, the studios can just release a new Her and Him movie with different voices in different geo regions and prevent anyone from having any kind of familiar, engaging voice bot.

zombiwoof
0 replies
17h43m

Smug Silicon Valley entrepreneur. Sam is a trash human

nvy
0 replies
16h8m

It's not possible for me to express the full measure of my disdain for Sam Altman without violating the HN guidelines.

m_mueller
0 replies
14h10m

I still have the Sky voice. Is it because of my region?

gdilla
0 replies
3h15m

Well, if anyone had doubts Altman is the classic mold of fuck-it-we-know-better-than-anyone tech bro, this is your proof.

fakedang
0 replies
18h36m

Please tell us about the time you most successfully hacked some (non-computer) system to your advantage.

cm2012
0 replies
15h3m

People hire celebrity voice impersonators all the time. You've probably heard a few impersonators this month in ads. This is such a non-issue that's only blowing up because Johansson wrote a letter complaining about it and because people love "big tech is evil" stories.

azinman2
0 replies
14h34m

I still have the sky voice in my app.

aaronharnly
105 replies
19h49m

Well, this confirms that OpenAI have been shooting from the hip, not that we needed much confirmation. The fact that they repeatedly tried to hire Johansson, then went ahead and made a soundalike while explicitly describing that they were trying to make it be like her voice in the movie … is pretty bad for them.

infotainment
48 replies
19h37m

It’s definitely sketchy (classic OpenAI). But my question is: is what they did actually illegal? Can someone copyright their own voice?

crazygringo
17 replies
19h24m

Yes, absolutely illegal. You don't need to copyright anything; you simply own the rights to your own likeness -- your visual appearance and your voice.

A company can't take a photo from your Facebook and plaster it across an advertisement for their product without you giving them the rights to do that.

And if you're a known public figure, this includes lookalikes and soundalikes as well. You can't hire a ScarJo impersonator that people will think is ScarJo.

This is clearly a ScarJo soundalike. It doesn't matter whether it's an AI voice or clone or if they hired someone to sound just like her. Because she's a known public figure, that's illegal if she hasn't given them the rights.

(However, if you generate a synthetic voice that just happens to sound exactly like a random Joe Schmo, it's allowed because Joe Schmo isn't a public figure, so there's no value in the association.)

howbadisthat
10 replies
18h17m

Scarlet owns the voice of a stranger that happens to sound like her? That seems absurd.

Just find someone who sounds like her, then hire them for the rights to their voice.

callalex
8 replies
18h8m

It’s really hard to assume in good faith that you are unfamiliar with the concept of impersonation. Just in case: https://en.m.wikipedia.org/wiki/Impersonator

There is no doubt that the hired actor was an impersonator, this was explicitly stated by scama himself.

warcher
3 replies
16h13m

It’s just that her voice by itself is relatively unremarkable. Someone like, say, Morgan Freeman or Barack Obama, someone with a distinctive vocal delivery, that’s one thing. Scarlett Johansson, I couldn’t pick her voice out of a lineup. I’m sure it’s pleasant, I just can’t think of it.

llamaimperative
2 replies
16h2m

Scarlett Johansson does absolutely have a distinctive and very famous voice. I wouldn’t take your own ignorance (not meant disparagingly) as evidence otherwise.

That’s why she was the voice actor for the AI voice in Her.

serf
1 replies
11h44m

That’s why she was the voice actor for the AI voice in Her.

She was used in Her because she has a dry/monotone/lifeless form of diction that at the time seemed like a decent stand-in for a non-human AI.

IMDB is riddled with complaints about her vocal style/diction/dead-pan on every one of her movies. Ghost World, Ghost in the Shell, Lost in Translation, Comic-Book-Movie-1-100 -- take a line from one movie and dub it across the character of another and most people would be fooled; that's impressive given the breadth of quality/style/age across the movies.

When she was first on the scene I thought it was bad acting, but then it continued -- now I tend to think that it's an effort to cultivate a character personality similar to Steven Wright or Tom Waits; the fact that she's now litigating towards protection of her character and likeness reinforces that fact for me.

It's unique to her though, that's for sure.

kristiandupont
0 replies
8h14m

She was used in Her because she has a dry/monotone/lifeless form of diction that at the time seemed like a decent stand-in for a non-human AI

Do you have a source for this?

howbadisthat
1 replies
17h26m

The variance in voice is not that great. Just find someone who is very close to her voice naturally.

airstrike
0 replies
16h51m

Doesn't matter if the intent is to make the listener think they're hearing ScarJo

tivert
0 replies
13h9m

There is no doubt that the hired actor was an impersonator, this was explicitly stated by scama himself.

And here's some caselaw where another major corporation got smacked down for doing the exact same thing: https://en.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.

But given how unscrupulous Sam Altman appears to be, I wouldn't be surprised if OpenAI hired an impersonator as some kind of half-assed legal cover, and went about using Johansson's voice anyway. Tech people do stupid shit sometimes because they assume they're so much cleverer than everyone else.

sneak
0 replies
16h47m

I missed that; where did he say that?

planede
0 replies
9h19m

Impersonating is defined by intent. "Just find someone who sounds like her" implies intent.

zooq_ai
2 replies
19h19m

But is it Scarlett Johansson or the producers of Her who own the copyright?

If you imitate Darth Vader, I don't think James Earl Jones has as much of a case for likeness as the Star Wars franchise

crazygringo
0 replies
19h7m

It's both.

If you just want ScarJo's (or James Earl Jones') voice, you need the rights from them. Period.

If you want to reuse the character of her AI bot from the movie (her name, overall personality, tone, rhythm, catchphrases, etc.), or the character of Darth Vader, you also need to license that from the producers.

And also from ScarJo/Jones if you want the same voice to accompany the character. (Unless they've sold all rights for future re-use to the producers, which won't usually be the case, because they want to be paid for sequels.)

nickthegreek
2 replies
19h14m

If they didn’t use her actual voice for the training, didn’t hire voice talent to imitate her, didn’t pursue her for a voice contract, didn’t make a reference to the movie in which she voices an AI, I feel OpenAI would have been on more stable legal footing. But they aren’t playing with a strong hand now and folded fast.

GuB-42
0 replies
17h2m

Not only that but they didn't credit the voice actress who sounds like her. If she was semi-famous and just naturally sounded like Scarlett Johansson, maybe they could have an argument: "it is not Scarlett, it is the famous [C-list actress] who worked in [production some people may know]".

emmp
11 replies
19h27m

There are two similar famous cases I know offhand. Probably there are more.

https://en.m.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.

Bette Midler successfully sued Ford for impersonating her likeness in a commercial.

Then also:

https://casetext.com/case/waits-v-frito-lay-inc

Tom Waits successfully sued Frito Lay for using an imitator without approval in a radio commercial.

The key seems to be that if someone is famous and their voice is distinctly attributable to them, there is a case. In both of these cases, the artists in question were also solicited first and refused.

hooloovoo_zoo
4 replies
18h35m

Both cases seem to have also borrowed from the artists’ songs too however. That could perhaps make a difference.

pseudalopex
3 replies
18h8m

Bette Midler and Tom Waits didn't control their songs when they sued the companies.

hooloovoo_zoo
2 replies
17h53m

But it makes it more likely that the listener will associate the commercial with the artist than just using the voice.

pseudalopex
0 replies
14h27m

The Midler v. Ford decision said her voice was distinctive. Not the song.

OpenAI didn't just use a voice like Scarlett Johansson's. They used it in an AI system they wanted people to associate with AI from movies and the movie where Johansson played an AI particularly.[1][2]

[1] https://blog.samaltman.com/gpt-4o

[2] https://x.com/sama/status/1790075827666796666

deprecative
0 replies
14h42m

True to an extent. I'd argue that celebrity of a certain level would make one's voice recognizable and thus confusion can happen.

gcanyon
1 replies
18h12m

That's weird -- I would think Morgan Freeman would be able to sue over that, but I Am Not An Intellectual Property Lawyer.

kelnos
0 replies
13h5m

I feel like that's a little different. In the cases of Midler, Waits, and Johansson, the companies involved wanted to use their voices, were turned down, and then went with an imitator to make it seem to the audience that the celebrity was actually performing. In the case of this "Morgan Freeman" video, Freeman himself is very obviously not performing: the imitator appears on screen, so it's explicitly acknowledged in the ad.

But I'm not a lawyer of any sort either, so... ::shrug::

yread
0 replies
10h31m

The Tom Waits case had a payout of $2.6 million for services with a fair market cost of $100k. What would it cost OpenAI to train ChatGPT using her voice? Is she also going to get a payout 26 times that? That GPU budget is starting to look inexpensive...

kcplate
0 replies
17h39m

You would have to argue the distinctiveness of the voice (if they hadn’t already pursued her to do it). Tom Waits…that’s a pretty distinctive voice. Scarlett Johansson…not so much

duskwuff
11 replies
19h27m

It's not precisely copyright, but most states recognize some form of personality rights, which encompass a person's voice just as much as the person's name or visual appearance.

bhhaskin
10 replies
19h10m

But where it will get murky is that people sound like other people. Most voices are hardly unique. It will be interesting to see where this lands.

ocdtrekkie
8 replies
18h43m

It isn't murky, because law is about intent more than result. It doesn't matter if they hired someone who sounds like Scarlett, it matters if they intended to do so.

If they accidentally hired someone who sounds identical, that's not illegal. But if they intended to, even if it is a pretty poor imitation, it would be illegal because the intent to do it was there.

A court of law would be looking for things like emails about what sort of actress they were looking for, how they described that requirement, how they evaluated the candidate and selected her, and of course, how the CEO announced it alongside the title of a movie Scarlett starred in.

howbadisthat
7 replies
17h21m

Under what legal theory is intending to do something which is legal (hiring a person that has a voice you want) becomes illegal because there is another person who has a similar voice?

ocdtrekkie
6 replies
17h14m

It's not intending to do something legal, it's intending to do something illegal: Stealing their likeness. The fact you used an otherwise legal procedure to do the illegal activity doesn't make it less illegal.

howbadisthat
5 replies
15h57m

How can something be illegal if every step towards the objective is legal? This would result in an incoherent legal system where selective prosecution/corruption is trivial.

ocdtrekkie
3 replies
15h39m

It is legal to buy a gun, and legal to fire a gun, and it can even be legal to fire a gun at someone who is threatening to kill you in the moment, but if you fire a gun at someone with the intention of killing them, that happens to be very, very illegal.

howbadisthat
2 replies
15h14m

Very well. But in this case the end goal is the end of someone's unique life.

In the case of acquiring a likeness, if it's done legally you acquire someone else's likeness that happens to be shared with your target.

The likeness is shared and non-unique.

If your objective is to take someone's life, there is no other pathway to the objective but their life. With likeness, that isn't the case.

tivert
0 replies
12h59m

OpenAI should hire you as their lawyer.

kelnos
0 replies
13h1m

So? You're merely (correctly) pointing out that the acts have consequences of wildly differing severity. Not that one is legal and the other is not.

jcranmer
0 replies
13h41m

What's illegal, in general, is not the action itself but the intent to do an action and the steps taken in furtherance of that intent.

Hiring someone with a voice you want isn't illegal; hiring someone with a voice you want because it is similar to a voice that someone expressly denied you permission to use is illegal.

Actually, it's so foundational to the common law legal system that there's a specialized Latin term to represent the concept: mens rea (literally 'guilty mind').

tivert
0 replies
13h2m

But where it will get murky is people sound like other people. Most voices are hardly unique. It will be interesting to see where this lands.

Yes, it will be interesting in June 1988 when we will find out "where this lands": https://en.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.

simonsarris
0 replies
17h15m

This is known as personality rights or right to publicity. Impersonating someone famous (eg faking their likeness or voice for an ad) is often illegal.

https://en.wikipedia.org/wiki/Personality_rights

kcplate
0 replies
17h43m

The problem is they pursued her, were rejected, then approximated. Had they just approximated and made no references to the movie…then I bet social marketing would have made the connection organically and neither Ms Johansson nor the Her producers would have much ground, because they could reasonably claim that it was just a relatively generic woman’s voice with a faint NY/NJ accent.

foota
0 replies
19h29m

I think this will fall under what are termed personality rights, and the answer varies by state within the US.

bl4kers
0 replies
12h27m

Not here to weigh in on the answers to these questions. But it certainly feels pretty scary to have to ask such questions about a company leading the LLM space, considering the U.S. currently has little to no legal infrastructure to rein in these companies.

Plus the tone of the voice is likely an unimportant detail to their success. So pushing up against the legal boundaries in this specific domain is at best strange and at worst a huge red flag for their ethics and how they operate.

aaronharnly
0 replies
19h26m

I’m not a lawyer and don’t have any deep background this area of IP, but there is at least some precedent apparently:

In a novel case of voice theft, a Los Angeles federal court jury Tuesday awarded gravel-throated recording artist Tom Waits $2.475 million in damages from Frito-Lay Inc. and its advertising agency.

The U.S. District Court jury found that the corn chip giant unlawfully appropriated Waits’ distinctive voice, tarring his reputation by employing an impersonator to record a radio ad for a new brand of spicy Doritos corn chips.

https://www.latimes.com/archives/la-xpm-1990-05-09-me-238-st...

signal11
46 replies
19h14m

OpenAI claimed they hired a different professional actor who performed using her own voice [1].

If so, I suspect they’ll be okay in a court of law — having a voice similar to a celebrity isn’t illegal.

It’ll likely cheese off actors and performers though.

[1] https://www.forbes.com/sites/roberthart/2024/05/20/openai-sa...

hn_20591249
26 replies
19h6m

Seems like sama may have put a big hole in that argument when he tweeted "her", now it is very easy to say that they knowingly cloned ScarJo's likeness. When will tech leaders learn to stop tweeting.

chipweinberger
15 replies
18h26m

Or perhaps they cloned a character's likeness?

Is there a distinction?

Are they trying to make it sound like Her, or SJ? Or just trying to go for a similar style? i.e. making artistic choices in designing their product

Note: I've never watched the movie.

romwell
5 replies
18h8m

Is there a distinction?

Yes, that would be a copyright violation on top of everything else.

Great idea though!

I'm going to start selling Keanu Reeves T-Shirts using this little trick.

See, I'm not using Keanu's likeness if I don't label it as Keanu. I'm just going to write Neo in a Tweet, and then say I'm just cloning Neo's likeness.

Neo is not a real person, so Keanu can't sue me! Bwahahaha

moralestapia
2 replies
17h44m

If you find a guy that looks identical to him, however ...

romwell
1 replies
9h38m

If you find a guy that looks identical to him, however ...

...it wouldn't make any difference.

A Barack Obama figurine is a Barack Obama figurine, no matter how much you say that it's actually a figurine of Boback O'Rama, a random person who coincidentally looks identical to the former US President.

moralestapia
0 replies
5h33m

What?

That situation would be completely legal. Come on.

whoknowsidont
0 replies
17h38m

I love these takes that constantly pop up in tech circles.

There's no way "you" (the people that engage in these tactics) believe anyone is gullible enough not to see what's happening. You either believe yourselves to be exceedingly clever or think everyone else has the intelligence of a toddler.

With the gumption some tech "leaders" display, maybe both.

If you have to say "technically it's not" 5x in a row to justify a position in a social context just short-circuit your brain and go do something else.

planede
0 replies
9h22m

Nitpick: it's not copyright, it's personality rights and likeness. It's a violation of it nonetheless.

moralestapia
5 replies
17h54m

I honestly don't think Scarlett (the person, not her "her" character) has anything to favor their case, aside from the public's sympathy.

She may have something only if it turns out that the training set for that voice is composed of some recordings of her (the person, not the movie), which I highly doubt and is, unfortunately, extremely hard to prove. Even that wouldn't be much, though, as it could be ruled a derivative work, or something akin to any celebrity impersonator. Those guys can even advertise themselves using the actual name of the celebrities involved and it's allowed.

Me personally, I hope she takes them to court anyway, as it will be an interesting trial to follow.

An interesting facet is, copyright law goes to the substance of the copyrighted work; in this case, because of the peculiarities of her character in "her", she is pretty much only a voice. I wonder if that makes things look different in the eyes of a judge.

pseudalopex
4 replies
15h18m

Likeness rights and copyright are different.

moralestapia
3 replies
15h9m

Fictional characters cannot have personality rights, for obvious reasons.

That falls under copyright, trademarks, ...

pseudalopex
2 replies
14h27m

Actors who play fictional characters have personality rights.

moralestapia
1 replies
5h34m

Yes, but not the fictional character. Scarlett Johansson's character in her, credited only as "voice", is a fictional character.

pseudalopex
0 replies
5h9m

No one claimed the fictional character's personality rights were infringed.

People who want to use an actor's likeness can't get around likeness rights by saying they impersonated a specific performance actually.

gcr
2 replies
18h4m

That’s a weaksauce argument IMO. The character was played by SJ. Depictions of this character necessarily have to be depictions of the voice actress.

Your argument may be stronger if OpenAI said something like “the movie studio owns the rights to this character’s likeness, so we approached them,” but it’s not clear they attempted that.

moralestapia
1 replies
17h42m

They didn't approach her, they approached her agent, which should've been the point of contact in either case.

As to whether she owns the rights of that performance or somebody else, we'd have to read the contract; most likely she doesn't, though.

mrbungie
0 replies
12h17m

Probably anyone but her inner circle can "approach her" directly. I would expect any other kind of connection to be made through her agent.

apantel
4 replies
14h55m

They can say they had a certain thing in mind, which was to produce something like ‘Her’, and obviously ScarJo would have sold it for them if she participated. But given that she didn’t, they still went out and created what they had in mind, which was something LIKE ‘Her’. That doesn’t sound illegal.

jahewson
3 replies
13h56m

Not illegal, because this would be a civil case. But they’re on thin ice because there’s a big difference between “creating something like Her” and “creating something like Scarlett Johansson’s performance in Her”.

apantel
2 replies
13h33m

Creating something like Her is creating something like Scarlett Johansson’s performance in Her. The whole point is to hit the same note, which is a voice and an aesthetic and a sensibility. That’s the point! She wasn’t willing to do it. If they hit the same note without training on her voice, then I think that’s fair game.

a_wild_dandan
1 replies
12h21m

Yeah, influential people shouldn't get to functionally own a "likeness." It's not a fingerprint. An actor shouldn't have to worry about losing work just because a rich/famous doppelganger exists (which may threaten their clientele).

Explicit brand reference? Bad. Circumstantial insinuation? Let it go.

plasticchris
0 replies
4h53m

What else does a famous actor have but a likeness? What if their likeness is appropriated to support a cause they disagree with, or a job they declined for moral or ethical reasons? The public will still associate them first and they will bear the consequences.

diego_sandoval
2 replies
17h21m

It's also very easy to say that they were inspired by the concept of the movie, but the voice is different.

llamaimperative
1 replies
17h18m

Sure if they hadn’t contacted her twice for permission, including 2 days before launch.

lyu07282
0 replies
10h58m

I heard the voice before hearing this news and didn't recognize her, but it's crazy if they really cloned her voice without her permission. Even worse somehow since they did such a bad job at it.

crimsoneer
0 replies
18h51m

Yeeah, this was very stupid. Sigh.

catchnear4321
0 replies
18h42m

only when they can get a bigger fix from something else.

it takes more than money to fuel these types, and they would have far better minders and bumpers if the downside outweighed the upside. they aren’t stupid, just addicted.

musk was addict smart, owned up to his proclivities and bought the cartel.

zone411
12 replies
18h42m

It probably is illegal in CA: https://repository.law.miami.edu/cgi/viewcontent.cgi?article...

"when voice is sufficient indicia of a celebrity's identity, the right of publicity protects against its imitation for commercial purposes without the celebrity's consent."

charlieyu1
10 replies
18h1m

But why? Sounds like a violation of the rights of the voice actor

whoknowsidont
7 replies
17h43m

Because it's meant to give the _appearance_ or _perception_ that a celebrity is involved. Their actions demonstrate they were both highly interested and had the expectation that the partnership was going to work out, with the express purpose of using the celebrity's identity for their own commercial purposes.

If they had just screened a bunch of voice actors and chosen the same one no one would care (legally or otherwise).

whynotminot
2 replies
17h23m

Sounds like one of those situations you'd have to prove intent.

(and given the timeline ScarJo laid out in her Twitter feed, I'd be inclined to vote to convict at the present moment)

sangnoir
0 replies
16h43m

Sounds like one of those situations you'd have to prove intent.

The discovery process may help figuring the intent - especially any internal communication before and after the two(!) failed attempts to get her sign-off, as well as any notes shared with the people responsible for casting.

jahewson
0 replies
14h2m

Not necessarily. Because this would be a civil matter, the burden of proof is a preponderance of the evidence - it’s glaringly obvious that this voice is emulating the movie Her, and I suspect it wouldn’t be hard to convince a jury.

janalsncm
1 replies
14h15m

What OpenAI did here is beyond the pale. This is open and shut for me based off of the actions surrounding the voice training.

I think a lot of people are wondering about a situation (which clearly doesn’t apply here) in which someone was falsely accused of impersonation based on an accidental similarity. I have more sympathy for that.

But that’s giving OpenAI far more than just the benefit of the doubt: there is no doubt in this case.

sneak
0 replies
12h47m

I think "beyond the pale" is a bit hyperbolic. The voice actor has publicity rights, too.

charlieyu1
1 replies
11h39m

I guess the Trump lookalike satire guy would not want to go to California then

actionfromafar
0 replies
7h59m

Ah, so OpenAI does satire. That explains a lot.

mandmandam
0 replies
8h43m

Did you read the statement? They approached Scarlett twice, including two days before launch. Sam even said himself that Sky sounds like 'HER'.

This isn't actually complicated at all. OpenAI robbed her likeness against her express will.

csomar
0 replies
15h22m

I am guessing it's because you are trying to sell the voice as "that" actor's voice. I guess if the other voice becomes popular in its own right (a celebrity) then there is a case to be made.

romwell
0 replies
18h6m

I'd be surprised if it was legal anywhere in the US, but this just puts the final nail into Sky's coffin.

rockemsockem
2 replies
18h16m

It's almost certainly not legal exactly because of the surrounding context of openai trying to hire her along with the "her" tweet.

There's not a lot of precedent around voice impersonation, but there is for a very, very similar case against Ford

https://en.m.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.

signal11
0 replies
18h14m

Amazing case law, thank you! I suspect OpenAI have just realised this, hence the walking back of the “Sky” voice.

JeremyNT
1 replies
17h25m

Whether or not what they've done is currently technically illegal, they're priming the public to be pissed off at them. Making enemies of beloved figures from the broader culture is likely to not make OpenAI many friends.

OpenAI has gone the "it's easier to ask forgiveness than permission" route, and it seemed like they might get away with that, but if this results in a lot more stories like this they'll risk running afoul of public opinion and future legislation turning sharply against them.

klyrs
0 replies
16h49m

Problem is, they asked for permission, showing their hand multiple times. Can I say, I'm somewhat relieved to learn that Sam Altman isn't an evil genius.

ocdtrekkie
0 replies
18h47m

I mean, unless an investigation can find any criteria used to select this particular actress like "sounds like Scarlett" in an email somewhere, or you know, the head idiot intentionally and publicly posting the title of a movie starring the actress in relation to the soundalike's voice work.

tootie
5 replies
19h34m

This is so pointless and petty too. Like "hee hee our software is just like the movies". And continuing the trend of tech moguls watching bleak satire and thinking it's aspirational.

yazzku
1 replies
18h33m

If anyone thinks this demo is cool, I regret to inform you that your life is very, very sad.

tsimionescu
0 replies
18h9m

"Over the top" means exaggerated and corny, almost the opposite of "cool".

mateus1
0 replies
5h14m

I couldn’t take more than 10s of that. It’s so cringey and for some reason trying to be sexy? Like why would I want a bot to flirt with me while I’m trying to get information?

whamlastxmas
0 replies
19h20m

I think this is such a massively trivial detail it’s hard to draw broader conclusions from it

llamaimperative
0 replies
19h48m

“Shooting from the hip” is giving them too much credit. Actual knowing malice and dishonesty is more like it.

OrangeMusic
0 replies
6h38m

I honestly didn't think it sounded like Johansson. Because of the controversy I just now re-listened to the demos and I still find it very unlikely that someone would think it was her.

MBCook
50 replies
19h30m

I really hope she sues the company to hell and back.

She has the resources to fight back and make an example of them, and they have the resources to make it worthwhile.

option
38 replies
18h43m

not defending oai here, but why do you seem to hate them? did chatGPT make your life better or worse?

101008
14 replies
16h55m

It could make life better for a lot of people, but they trained their models on copyrighted material, which makes life worse for another set of people.

MBCook
6 replies
16h42m

Right. If what they’re doing is so incredibly valuable, just pay for the resources you’re using.

If your business model only works if you steal stuff from other people you don’t have a business.

rockemsockem
3 replies
16h31m

Not commenting on open AI specifically, but I feel like actually acquiring the data is borderline impossible if you only follow legal channels.

Where can I insert my money to buy every Penguin Random House book? How do I pay every DeviantArt artist whatever amount they're owed?

MBCook
1 replies
15h49m

You could go to Penguin directly. Or perhaps the Authors Guild would help you set something up across publishers.

All of deviant art? Probably not. But do you need to train on all of that? You could certainly run ads telling people you’d be willing to pay a small amount to train on their art and let them choose.

Would it be legal to train against the national archives?

Options exist. No, you won’t get as much stuff as you do by taking whatever you want, but people are compensated for their work, or at least given the choice to opt in.

tavavex
0 replies
12h41m

Ignoring the actual legality of using training data in machine learning, let's look at these "options" in a purely objective-driven way. If you do that, you'll quickly realize that what you describe would strengthen megacorporations to an even greater extent. If training on public data is banned, then the entities who have complete ownership of all their data would be granted a de facto monopoly. Stock image services, media conglomerates, music labels, industry giants would start in the winning position. When one side just gets their way for free and the other has to beg for scraps, do you really think this is a fair proposition?

The reason why open-source AI exists at all is that we've always been allowing use of public data - it was okay when Google did it, it was okay when the Internet Archive did it, it was even okay when text translation services used that same data to train their models - or really, that applies to basically anything ML-driven before generative AI.

There's, like, a sea of reasons to criticize OpenAI for - but arguing for extending IP laws even further and calling out opponents for "literal theft" is one of the weaker options that caught on with many people.

qbxk
0 replies
13h34m

You're not wrong, and I think the technology to accomplish this exists, and has for a few years. But the end result would be a system that funnels money from large numbers of people to different large numbers of people, with a lot of overlap and cross-payments too. Maybe there's a good business in transaction fees on that? But a system where big numbers go to other people while a much smaller number goes to the party owning the system is not an appealing business in our world.

Ipso facto, why does it not exist while Spotify buys podcasts and kills them?

throwaway115
1 replies
16h14m

Sounds like you're using the music industry's definition of "steal." Nothing was stolen, because nobody was deprived of the thing.

visarga
0 replies
15h9m

In this case it just sounds like her, not even a direct copy.

wraptile
5 replies
11h33m

I think the majority agrees that we're OK with a few privileged millionaire actors losing a small amount of their value in favor of personal AI assistants for the general populace. Why that's even a topic worth discussing is truly perplexing.

krainboltgreene
2 replies
11h9m

No man, a majority do not believe that and it's really weird that you think they do.

wraptile
1 replies
10h18m

Why? The data clearly shows that nobody will go out of their way to protect IP laws. In most of the world, nobody could care less about some Hollywood actor with a tough-to-spell name getting their voice cloned.

It's really weird that someone would think that IP law is more important than access to information. Must be some dystopian bubble.

krainboltgreene
0 replies
1h17m

Just because companies are violating people's artistic ownership and the State isn't punishing or preventing it fast enough is not an indication that a majority of people "could care less". If anything you can see the fomenting dislike with OpenAI right in front of you.

You can frame it however you like to make it justifiable, but I'm not going to get into the mud with you.

mepiethree
1 replies
5h35m

I don’t know a single person who has ever mentioned wanting an AI assistant, and I know a lot of people who are creeped out by the idea. I live in a swing state and work for a big tech company, but most of my friends aren’t in tech. So I’m not sure that a “majority agrees” about anything related to an AI assistant.

wraptile
0 replies
2h30m

Yet every single person uses an AI assistant every day when they Google something or ask their phone who performs the song playing in the background. You must know some of these people?

visarga
0 replies
15h11m

Creatives will use AI even more than regular people, and with better results. What I find dangerous is protecting all paraphrases and variations as well, which is akin to owning an idea. When not replicating expression, ideas should be free, even in copyrighted works.

z7
9 replies
17h25m

Very obvious bias against OpenAI in the comments here. Possible motives: a) there's a visceral human reaction against anyone extraordinarily successful and powerful (Nietzsche wrote about this). b) resentment for OpenAI's advancements in code generation and its possible impact on the job market. I don't think much of the outrage here is motivated by altruism, it's probably more about siding with whoever opposes your perceived enemy.

Ar-Curunir
5 replies
17h14m

Or maybe the prospect of theft and unpermissioned copying of an extremely personal quality (one's voice) is (correctly) being called out?

People aren't as selfish/petty as you're making them out.

eschaton
2 replies
15h38m

Most people aren’t that selfish or petty. But there are some people who are—and by an unfortunate quirk of human nature, they tend to believe everyone else is just like them.

z7
1 replies
8h58m

So your response is ... "No I'm not, you are."

Well. This reaction invites further psychological interpretations, lol.

eschaton
0 replies
8h36m

Uh, what? This is a pretty well-researched phenomenon: A very high percentage of people who engage in pathological behaviors that take advantage of other people believe that “everyone does it” and rely on that as justification. Penn & Teller did a pithy bit on the topic.

z7
0 replies
8h55m

Can you provide the evidence indicating that her voice was stolen? Maybe also an audio analysis of the vocal characteristics involved and the degree of similarity? And wouldn't it be prudent to investigate these matters before making such accusations?

8note
0 replies
16h15m

There are any number of people with equivalent voices whose voices Scarlett has effectively stolen by selling hers to movies, and all of those other people who sound the same deserve compensation from her for it.

maxbond
2 replies
16h12m

I propose the following razor:

Never attribute to jealousy that which can be explained by a difference of perspective.

Jealousy can explain any criticism of anybody. That makes it an epistemic hazard; you can always fall back to it, and you can develop a habit of doing so. And then you have deafened yourself. Any substantive criticisms will be lost on you. As will any opportunity to learn from those you disagree with. Your ideas will be hot house flowers, comfortable and safe in their controlled environment, but unable to contend in the wild. Aspire to weeds.

z7
1 replies
8h42m

That would be tautological, as 'difference of perspective' is a restatement of the phenomenon at a higher level of abstraction. Someone who is resentful will likely have a different perspective than they would otherwise have. Psychological motives are real and don't disappear by simply assuming a variant of a problem-solving heuristic. Notice also I didn't use the word 'jealousy.' I do think that speculations about psychological motives should be made only after careful consideration and generally remain unprovable. However, the uncertainty of psychological motives shouldn't prevent us from ever addressing them.

maxbond
0 replies
28m

You can use whatever high minded references to philosophy and psychology you please, but from what I've read from you in this thread - you're missing the point. You aren't understanding the criticism, and you're addressing the weakest form of the argument. I'm suggesting you aspire to address the strongest form, whether the weakest is present or not.

If you choose to do nothing with that information, that's up to you.

mepiethree
6 replies
18h27m

I think for most people the answer is “worse”.

wilg
2 replies
18h21m

lol

DavidPiper
1 replies
16h39m

Not OP, or the replier, but I think the longer answer here is that if Scarlett Johansson <insert any wealthy and/or popular figure> can't win a lawsuit against a company that has effectively:

(1) Used content she has created (vocal lines) in order to train a generative AI with the ability to create more of that content (vocal lines), without permission or payment, let alone acknowledgement

(2) In doing so removed the scarcity of the content she provides (Generative AI is effectively unlimited in the lines it can produce once effectively trained)

Then no smaller time actor, voice actor, artist, musician, etc, is likely to have any chance defending themselves against the theft of their work for AI purposes.

And, with that precedent set, the legal and financial landscape of art and creativity will have changed in a way that discourages anybody from creating original works for financial gain, because we've systematised the creation of original and derivative works with no legal or financial ramifications.

wilg
0 replies
12h37m

Maybe, but I don't think it's likely that they used any actual copyrighted material of her voice. It's not illegal to want one voice actor to play a role (or record voice data), then get a different, somewhat (but not actually very) similar sounding voice actor to do it instead.

rockemsockem
2 replies
17h31m

Curious, how?

mepiethree
1 replies
5h31m

The above comment from MBCook captures it well but basically I would say that for the majority of people I know:

Cons: the rise of scams, carbon emissions, and, some think, existential risk.

Pros: ??? You can be more efficient at the email job you hate. And it gives you advice on gardening and home improvement so you don’t have to call your uncle.

Cons outweigh the pros

rockemsockem
0 replies
2h0m

Your pros list being empty is the real answer. Basically you don't know the pros because you haven't taken the time to find out.

Kinda delegitimizes your cons list too. I'm not going to bother checking your work word by word.

lotsoweiners
2 replies
17h29m

Sam Altman has a smarmy, borderline vomit inducing face. The fact that I have to see his face every day while scrolling through the interwebs feeds has made my life worse.

mycologos
1 replies
15h8m

See, I don't like Sam Altman either, but this habit of criticizing people for their faces seems wrong-headed. Isn't it his behavior that we should be criticizing?

sooheon
0 replies
9h40m

I could be wrong but in these cases (or when people criticize voices, similarly), people are more put off by expressions than raw features. Nonverbal communication is high emotional bandwidth. I don’t begrudge someone disliking someone else’s self presentation.

MBCook
2 replies
16h43m

They stole someone’s voice after being told not to?

They unleashed a massive new wave of spam and scams and garbage onto the Internet?

Because they’re stealing every single bit of content on the Internet and everywhere else they can get their hands on without paying anything for it and then expect to sell it back to us in chewed up garbage form?

They helped cause MS to blast past their carbon commitments by 30%? And got everyone else into a big race over how many resources they could waste to power AI nonsense that doesn’t even work that well for what people want to use it for?

Perhaps the fact that they don’t seem to care one bit about any of the consequences, legal/moral/ethical/economical/etc. caused by what they’ve done as long as they make money?

I don’t have a grudge against open AI. I have a grudge against the AI industry and the way these kind of SV golden boys seem to think they’re immune from criticism.

Why do you think stealing a professional actor’s voice should be OK and immune from criticism? This is horrible.

wraptile
0 replies
11h35m

They stole someone’s voice after being told not to?

Stole someone's voice? That's not stealing; let's not give even more power to copyright propaganda, even if this particular case is legitimate.

eschaton
0 replies
15h39m

Amen. We need ethics and accountability and this wave of “AI” has been sorely lacking those, instead preferring the Uber model of “let’s just get big enough to bully things into working out in our favor.”

It’s important to nip that shit in the bud lest it spread.

threatofrain
7 replies
18h44m

I'm not sure she can do much since OpenAI withdrew so quickly. What damages are there?

MBCook
3 replies
16h48m

Withdrew too quickly?

They didn’t come to an agreement to use her voice, they used her voice. And they obviously knew it was a problem because they went back to her like the night before trying to get her approval again.

The correct thing to do was NOT use her voice.

You don’t get to steal something, get all the benefit from it (the press coverage), and then say “oops never mind it was just a few hours you can’t sue us”.

Why don’t we try selling tickets to watch a Disney movie “just one time” and see how well that goes. I don’t think Disney’s lawyers will look at it and say “oh well they decided not to do it again.“

8note
2 replies
16h17m

They make no representation that it is her voice, and there's a really good chance that they separately made a voice that sounds similar enough to where if they could tack her name to it, it'd be good for advertising, but otherwise isn't her voice.

Voices are really hard for people to distinguish as being a certain person without priming, so really she's doing for free the advertising they were hoping she'd do for pay

thih9
0 replies
15h11m

there's a really good chance that they separately made a voice that sounds similar enough to where if they could tack her name to it, it'd be good for advertising, but otherwise isn't her voice.

There’s also a really good chance this is in some way a deepfake. Would be interesting to see this get examined by courts.

MBCook
0 replies
14h39m

I don’t know if it’s actually her voice. At this point I wouldn’t put it past them.

But if they concocted a fake voice to sound as much like her as possible, that’s not really better.

Altman’s tweet, combined with his previous statements that Her is his favorite movie, and their trying to secure the rights twice, looks really, really damning.

so really she's doing for free the advertising they were hoping she'd do for pay

They didn’t want her to advertise for them. They wanted to use her voice. Do you not see a difference?

system2
0 replies
17h45m

The Sky voice is still there.

callalex
0 replies
17h51m

The way USA courts are set up, setting precedent and assessing damages are two distinct things. I agree that the precedent she would be targeting wouldn’t be all that financially rewarding but that’s not the only thing that motivates humans.

DevX101
0 replies
17h6m

The OpenAI demo was a massive marketing campaign for GPT-4o and led to the largest increase in revenue for their mobile app. The voice was a large part of why this release was a hit. The demo is still on YouTube with 4M views. She has a great case for financial remuneration even if they haven't yet launched the voice feature.

ml-anon
2 replies
18h52m

Scarlett Johansson made Disney cave. She’s going to absolutely destroy this band of grifters.

MBCook
1 replies
16h40m

Not only that, I don’t think Disney wants this precedent. They may also want to get back on her good side. Either way, given they own a number of movies with her in starring roles, I wouldn’t be surprised if they were happy to help with the legal fees in her lawsuit.

Hell they may sue on their own or join as another damaged party.

ml-anon
0 replies
11h34m

I’m sure they are salivating at the thought of discovery forcing OAI to crack open their datasets so they can put a dollar amount to every piece of infringing material in there.

alsodumb
20 replies
19h52m

Why do I feel like Sam's 'her' tweet pretty much gave Scarlett Johansson's legal counsel all the ammo they needed lol.

elevatedastalt
12 replies
19h30m

It probably made things worse, but the fact that they reached out to her to use her voice and she explicitly refused would be sufficient ammo I feel. (Not a lawyer of course).

Of course, Twitter continues to bring people with big egos to their own downfall.

not2b
10 replies
19h18m

Not to mention that it matches up pretty much exactly with the Bette Midler and Tom Waits cases, where courts ruled against companies using soundalikes after the person they really wanted turned them down. Doesn't matter if they hired a soundalike actress rather than clone her voice, it still violates her rights.

cortesoft
9 replies
18h44m

I am guessing this is only because they first tried to hire the originals before hiring sound-alikes... otherwise, would it mean that if your voice sounds similar to someone else, you can't do voice work?

recursive
6 replies
18h39m

would it mean that if your voice sounds similar to someone else, you can't do voice work?

Maybe only when the director's instructions are "I want you to sound like XYZ".

wilg
3 replies
18h23m

Of course, surely you can do this if you're playing a character, such as impersonating Trump or Obama or an actor on SNL?

not2b
2 replies
18h16m

Yes, parody's fine when it's clearly parody. But if you try to pretend that Trump or Obama (rather than an impersonator) is endorsing a product, you're in trouble.

wilg
1 replies
11h53m

but OpenAI has only ever said that the ChatGPT voice has nothing to do with ScarJo

cortesoft
0 replies
2h53m

What you say after the fact doesn't really matter...

smugma
1 replies
18h12m

Exception: Parody is covered under the First Amendment.

meat_machine
0 replies
9h26m

Wheel of Fortune hostess Vanna White had established herself as a TV personality, and consequently appeared as a spokesperson for advertisers. Samsung produced a television commercial advertising its VCRs, showing a robot wearing a dress and with other similarities to White standing beside a Wheel of Fortune game board. Samsung, in their own internal documents, called this the "Vanna White ad". White sued Samsung for violations of California Civil Code section 3344, California common law right of publicity, and the federal Lanham Act. The United States District Court for the Southern District of California granted summary judgment against White on all counts, and White appealed.

The Ninth Circuit reversed the District Court, finding that White had a cause of action based on the value of her image, and that Samsung had appropriated this image. Samsung's assertion that this was a parody was found to be unavailing, as the intent of the ad was not to make fun of White's characteristics, but to sell VCRs.

https://en.wikipedia.org/wiki/White_v._Samsung_Electronics_A....

Maybe it depends on which court will handle the case, but OpenAI's core intent isn't parody, but rather to use someone's likeness as a way to make money.

(I am not a lawyer)

planede
0 replies
9h10m

It's really only about whether the voice was intended to sound like the original. Reaching out to the original first implies intent, so it makes the case easier.

It would be harder to make a case if they had simply hired someone who sounds similar, but if they did that with the intention of sounding like the original, that's still impersonation; it's just harder to prove.

If they just happened to hire someone that sounded like the original, then that's fair game IMO.

IANAL

not2b
0 replies
18h17m

Good voice actors can do a whole range of voices, including imitating many different people. The cases where someone got in trouble are where there's misrepresentation. If it goes to court, there's discovery, and if the OpenAI people gave specific instructions to the voice actor to imitate Scarlett Johansson, after denying it, there could be big trouble. We don't know that, but it looks likely given how they first approached her and how they seemed to be going for something like the "Her" film.

joegibbs
0 replies
19h6m

Definitely. GPT-4o has a voice that sounds like Scarlett Johansson? They'd probably get away with it; I'm sure there are a lot of people that sound like her. Tweeting a reference to a movie she was in is a bit more murky, because it starts to sound like they deliberately cloned her voice. Asking to use her voice, then using a soundalike, then referencing the movie? 100%, no doubt.

CharlesW
2 replies
19h51m

Also, it shows that today’s blog post was fiction.

crimsoneer
1 replies
18h41m

The sky voice they took down has existed for more than a year. It's different to the new demo that kicked this all off.

CharlesW
0 replies
18h32m

The voice didn’t change, just the ability of the model to “output laughter, singing, or express emotion”.

akr4s1a
1 replies
19h30m

So was asking her to reconsider 2 days before the demo. How blatant can you get?

OrangeMusic
0 replies
6h35m

They really wanted her voice yes. Does that prove anything?

dclowd9901
0 replies
12h21m

They’ll make the case that the abilities of the device are what he was referring to, but I think the fact that they were pushing her so hard for her involvement will actually be the damning aspect for them with that line of defense.

dilap
19 replies
19h49m

4. Naughtiness

Though the most successful founders are usually good people, they tend to have a piratical gleam in their eye. They're not Goody Two-Shoes type good. Morally, they care about getting the big questions right, but not about observing proprieties. That's why I'd use the word naughty rather than evil. They delight in breaking rules, but not rules that matter. This quality may be redundant though; it may be implied by imagination.

Sam Altman of Loopt is one of the most successful alumni, so we asked him what question we could put on the Y Combinator application that would help us discover more people like him. He said to ask about a time when they'd hacked something to their advantage—hacked in the sense of beating the system, not breaking into computers. It has become one of the questions we pay most attention to when judging applications.

"What We Look for in Founders", PG

https://paulgraham.com/founders.html

I think the more powerful you become, the less endearing this trait is.

aeurielesn
11 replies
19h37m

This quote actually disgusts me. I don't think this is a quality to encourage, especially since, despite the tone, it reads more like abuse.

laborcontract
4 replies
19h34m

You are on "Hacker News", surely it's not that much of a surprise?

christina97
3 replies
19h20m

It’s a different type of hacking. This was about hacking a system for one’s advantage, not the computer type of hacking.

laborcontract
2 replies
18h34m

I always interpreted the hacker in HN as a spirit, irrespective of vocation.

jprete
1 replies
17h44m

What is endearing and admirable as the underdog very easily becomes contemptible abuse in the biggest dog of the pack. It's not a contradiction - the hacking spirit is a trait that causes more damage the more power you have.

abvdasker
0 replies
9h34m

The persistent belief in the tech industry as some kind of underdog I think explains much of the recent deplorable behavior by some of the richest people on the planet. A bunch of unbelievably wealthy nerds are mentally trapped in the past and too out of touch to realize they have become the bullies.

dylan604
3 replies
19h21m

It's actually the most rational thing I've heard quoted from him. You have to be willing to open the box and see what's inside to know if you can do it better/cheaper/faster/smaller. There are ways of doing that without breaking laws or doing something unethical with what you learn. There are also ways of doing it without destroying something or violating anyone/anything. It also lets you hear a person's response and see where they stand in that rationale. Do they toe the line, do they run right across it, do they bend but not break, do they scorch the earth on everything they touch?

croes
1 replies
17h51m

Those are the same people who break laws and exploit people when they know those people can't fight back.

Usually such people are considered sociopaths.

Maybe it's time to ask the employees of OpenAI who fought to get Altman back how this behavior is compatible with their moral standards, or whether money is the most important thing.

dylan604
0 replies
16h49m

Is that something that needs to be asked? I thought it was pretty evident when the coup was happening.

talldayo
0 replies
18h21m

Makes me glad there aren't people betting on how far I'm willing to go to bend the law. When you lay it all out like that you make PG sound like a cockfighter paying to get his champion bloodied.

whamlastxmas
0 replies
19h22m

I think governance is overly restrictive and stifles innovation. For example, I love that Uber and Airbnb exist, even though they both sort of skirt, or acceptably break, rules that a complete rule-follower wouldn’t have violated.

Taking taxis 15 years ago was an absolute scammy shitty experience and it’s only marginally better now thanks to an actual competitive marketplace

blibble
0 replies
19h20m

Personally, I think it sums up Silicon Valley perfectly.

shombaboor
1 replies
19h44m

It seems most of the big companies try to break the rules and, in the process, become so strong that the penalty amounts to a marginal fine, a cost of doing business. Facebook and Uber come to mind first. This may be just the same.

astrange
0 replies
19h29m

Everyone let Uber get away with breaking taxi rules because those rules were only good for the people with the taxi medallion monopoly.

(Which wasn't even the taxi drivers, although they were plenty bad enough on their own.)

themagician
0 replies
19h34m

Sure, a bit of rebellion can fuel innovation in founders, but as they gain power, it's important to keep things ethical. What seems charming at the startup phase might raise eyebrows as the company expands.

rachofsunshine
0 replies
19h21m

The problem is this line:

They delight in breaking rules, but not rules that matter.

The question becomes "what rules matter?". And the answer inevitably becomes "only the ones that work in my favor and/or that I agree with".

I think someone trying to defend this would go "oh come on, does it really matter if a rich actress gets slightly richer?" And no, honestly, it doesn't matter that much. Not to me, anyway. But it matters that it establishes (or rather, confirms and reinforces) a culture of disregard and makes it about what you think matters, and not about what someone else might think matters about the things in their life. Their life belongs to them, a fact that utopians have forgotten again and again everywhere and everywhen. And once all judgement is up to you, if you're a sufficiently ambitious and motivated reasoner (and the kind of person we're talking about here is), you can justify pretty much whatever you want without that pesky real-world check of a person going "um actually no I don't want you to do that".

Sometimes I think anti-tech takes get this wrong. They see the problem as breaking the rules at all, as disrupting the status quo at all, as taking any action that might reasonably be foreseen to cause harm. But you do really have to do that if you want to make something good sometimes. You can't foresee every consequence of your actions - I doubt, for example, that Airbnb's founders were thinking about issues with housing policy when they started their company. But what differentiates behavior like this from risk-taking is that the harm here is deliberate and considered. Mistakes happen, but this was not a mistake. It was a choice to say "this is mine now".

That isn't a high bar to clear. And I think we can demand that tech leaders clear it without stifling the innovation that is tech at its best.

ixaxaar
0 replies
19h20m

So a kind of lack of empathy? Do these guys have this image of "autists" and are basically filtering for them? Because this criterion seems to favor oppositional defiant disorder.

I mention this specifically because I remember Marc Andreessen commenting on something similar on Lex Fridman's podcast, something along the lines of getting "those creative people" together to build on AI.

idontknowtech
0 replies
19h24m

They delight in breaking rules, but not rules that matter.

To them*

Which is the whole problem. These narcissistic egotists think they, alone, individually, are capable of deciding what's best not just for their companies but for humanity writ large.

deletedie
0 replies
18h53m

The more accurate (though somewhat academic) term for this trait is 'narcissism'.

keepamovin
16 replies
19h41m

I think it’s interesting that Johansson chose to forgo substantial royalties and collaboration potential.

But it must feel pretty fucking weird and violating when you spend your entire life thinking about how you are going to deliver certain lines, and that's your creative body of work, and then someone just takes that voice and applies it to any random text that can be generated.

I get why she wouldn’t want to let it go.

In a way it is similar to how a developer might feel about their code being absorbed, generalized, and then regurgitated almost verbatim as part of some AI responses.

But in the case of voice it’s even worse, as the personality impression is contained in the slightest utterance, whereas a style of coding or a piece of code might be less recognizable, and generally applicable to such a wide range of productions.

Voice is the original human technology. To try to take that from someone without their consent is a pretty all-encompassing grab.

steveBK123
7 replies
19h36m

She chose to forgo being a billion lonely guys' AI girlfriend

Not a bad call for someone already rich

keepamovin
4 replies
19h30m

She chose to forgo being a billion lonely guys' AI girlfriend

To suggest that Johansson’s only appeal is to the opposite gender (and ‘lonely’ ones at that!) I think is myopic and reductive of her impact

steveBK123
1 replies
19h27m

That is not her only appeal, she is a renowned actress.

However Sam tweeted "her" which is literally the movie where she voices the AI girlfriend. And then made a synthetic replica of her voice the star of their new demo against her wishes.

It's pretty direct what he is pitching at.

keepamovin
0 replies
19h25m

Fair enough, but I think your comment was reductive.

It doesn't really matter though, because how he may have marketed it in a 140-character tweet does not encompass the entirety of how it could be used, of course.

skywhopper
1 replies
18h31m

That was not an exclusive statement. It can be a billion lonely guys, AND lonely women, AND gregarious enbies, AND everyone else.

steveBK123
0 replies
18h29m

Indeed, they can find the male voice equivalent. Though to be fair I think men are MUCH more susceptible to this than women.

That said, Krazam covered this topic well already https://www.youtube.com/watch?v=KiPQdVC5RHU

consumer451
0 replies
6h13m

I would have to imagine that having your voice being spit out of millions of machines, all day long, might dilute one's brand, no?

Also, suddenly your voice is being used to say things that you would never say. What could go wrong?

Suddenly, instead of ScarJo, the famous movie star, you are that crazy voice from OpenAI.

chemmail
0 replies
18h32m

More like delaying the inevitable. AI voice generation is so good now that it's used to stage ransom calls, and parents cannot tell the difference.

logrot
2 replies
19h35m

I think you're pretty naive.

keepamovin
1 replies
19h28m

I think you're pretty naive.

I don’t think it’s about me, but since you brought it up, I try to maintain my innocence in this world. I try to be biased towards that rather than cynicism, at least… I think that’s important. Cynicism is a kind of arrogance: where you think you’ve seen it all before… but you’re wrong.

But thank you for the opportunity your comment provides to speak to that: I do appreciate the chance to add clarity.

thelastCube
0 replies
10h0m

I think you are a class act.

suddenexample
1 replies
19h22m

Were royalties in the picture? Don't think it's crazy to think that a company obsessed with replacing artists wouldn't value one enough to pay royalties.

And in terms of collaboration potential... OpenAI is a big draw for businesses and a subset of tech enthusiasts, but I don't think artists in any industry are dying to collaborate with them.

cdrini
0 replies
11h35m

According to Open AI's post about the Sky voice controversy:

"Each actor receives compensation above top-of-market rates, and this will continue for as long as their voices are used in our products."

https://openai.com/index/how-the-voices-for-chatgpt-were-cho...

Not sure if this is royalties, but it seems like there's some form of long term compensation. But it's a little vague so not sure.

whamlastxmas
0 replies
19h19m

SJ is a major character in like 5 of the top grossing movies of all time. The royalties from her voice being used by ChatGPT would be meaningless to her

tootie
0 replies
19h24m

ScarJo probably has enough money for ten lifetimes already. The potential downside of signing your identity away to a SV plutocrat with questionable morality and world-changing technology is enormous. And it seems like she was instantly vindicated in not trusting him.

fxd123
0 replies
18h4m

chose to forgo substantial royalties and collaboration potential

We know nothing about their offer to her. Could have just been a bad deal

worstspotgain
14 replies
18h50m

Most of the reactions here are in unison, so there's little left to contribute in agreement.

I'll ask the devil's advocate / contrarian question: How big a slice of the human voice space does Scarlett lay a claim to?

The evidence would be in her favor in a civil court case. OTOH, a less famous woman's claim that any given synthesized voice sounds like hers would probably fail.

Contrast this with copyrighted fiction. That space is dimensionally much bigger. If you're not deliberately trying to copy some work, it's very unlikely that you'll get in trouble accidentally.

The closest comparison is the Marvin Gaye estate's case. Arguably, the estate laid claim to a large fraction of what is otherwise a dimensionally large space. https://en.wikipedia.org/wiki/Pharrell_Williams_v._Bridgepor...

jakelazaroff
8 replies
18h20m

(Not a lawyer) I think the issue is not just that they sound similar, but that OpenAI sought to profit from that perceived similarity. It’s pretty clear from Sam Altman’s “her” tweet that OpenAI, at least, considers it a fairly narrow slice of the human voice space.

worstspotgain
7 replies
17h44m

That's true. As I suggested, this case may well be open and shut. But what if it was Google's or Meta's voice that sounded exactly like Sky, i.e. without a history of failed negotiations? The amount of "likeness" would technically be identical.

Are companies better off not even trying to negotiate to begin with?

MrMetlHed
5 replies
15h14m

It'd be fine to not negotiate if you weren't going to use a voice that sounded famous. If I were offering something that let you make any kind of voice you want, I would definitely not market any voice that sounded familiar in any way. Let the users do that (which would happen almost immediately after launch). I would use a generic employee in the example, or the CEO, or I'd go get the most famous person I could afford that would play ball. I would then make sure the marketing materials showed the person I was cloning and demonstrated just how awesome my tool was at getting a voice match.

What I wouldn't do is use anything that remotely sounds famous. And I would definitely not use someone that said "no thanks" beforehand. And I would under no circumstances send emails or messages suggesting staff create a voice that sounds like someone famous. Then, and only then, would I feel safe in marketing a fake voice.

worstspotgain
4 replies
15h3m

Sounds judicious. You probably wouldn't get sued, and would prevail if sued. However, the question of how much of the human voice space Scarlett can lay claim to remains unsettled. Your example suggests that it might be quite a bit, if law and precedent cause people to take the CYA route.

Consider the hypothetical: EvilAI, Inc. would secretly like to piggyback on the success of Her. They hire Nancy Schmo for their training samples. Nancy just happens to sound mostly like Scarlett.

No previous negotiations, no evidence of intentions. Just a "coincidental" voice doppelganger.

Does Scarlett own her own voice more than Nancy owns hers?

Put another way: if you happen to look like Elvis, you're not impersonating him unless you also wear a wig and jumpsuit. And the human look-space is arguably much bigger than the voice-space.

kelnos
1 replies
13h16m

However, the question of how much human voice space Scarlett can lay claim to remains unsettled

I don't think it's that unsettled, at least not legally. There seems to be precedent for this sort of thing (cf. cases involving Bette Midler or Tom Waits).

I think the hypothetical you create is more or less the same situation as what we have now. The difference is that there maybe isn't a paper trail for Johansson to use in a suit against EvilAI, whereas she'd have OpenAI dead to rights, given their communication history and Altman's moronic "Her" tweet.

Does Scarlett own her own voice more than Nancy owns hers?

Legally, yes, I believe she does.

MrMetlHed
0 replies
4m

There are other ways public figures are treated differently in the courts in the US. It's much more difficult for them to prove libel or slander, for instance. They have to prove actual malice and intent, whereas a private citizen just has to prove negligence. I imagine "owning" their likeness at a broader sense is the flip side of that coin.

jakelazaroff
1 replies
14h49m

I know toying with these edge cases is the “curious” part of HN discussions, but I can’t help but think of this xkcd: https://xkcd.com/1494/

worstspotgain
0 replies
14h34m

HN discussions, grad school case studies, and Supreme Court cases alike. Bad cases make bad laws, edge cases make extensive appeals.

jakelazaroff
0 replies
17h34m

Not sure legally. I chose the words “perceived similarity” intentionally to encompass scenarios in which the similarity is coincidental but widely recognized. Even in that case, I believe the original person should be entitled to a say.

worstspotgain
1 replies
17h49m

It's indeed pretty similar. However, it involved singing a song Midler was known for. This case is at best peripheral to the movie Her, in that the OpenAI voice does not recite lines from the movie.

pseudalopex
0 replies
13h47m

The decision said Midler's voice was distinctive. Not the song.

wraptile
0 replies
11h13m

This sort of copyright seems completely unethical to me.

We have 8 billion people; the probability that any given voice and intonation is unique is extremely low. Imagine someone else owning your voice, someone much richer and more powerful. No entertainment is worth putting fellow human beings through such discrimination and cruelty.

TaroEld
0 replies
11h57m

My concern is that cases like this would set the precedent that synthetic voices can't be too close to the voice of a real, famous person. But where does that leave us? There have been lots of famous people since the recording age began, and the number is only going to increase. It seems unlikely that you can distinguish your synthetic voice from every somewhat public or famous real voice in existence, especially going forward. Will this not result in a situation where synthetic voices must either sound clearly fake and non-human, to avoid being confused with an existing famous voice, or where companies and producers must in every case pay royalties to the owner of whichever famous voice sounds closest, even if their intent was never to copy that voice or any like it, just to avoid getting sued afterwards? Are we going to pay famous people for being famous?

steveBK123
13 replies
19h37m

Incredibly stupid

The wink wink at creating an AI girlfriend is so bizarre

I guess we know who their target user base is

talldayo
8 replies
19h31m

Worse than that, good luck positioning yourself as a paragon of "AI safety" when you can't even handle basic human business relationships honestly.

imperialdrive
3 replies
18h55m

Absolutely. Good for Scarlett, and my gosh Sam and that org need to learn a few lessons. What were they thinking?? So gross.

steveBK123
2 replies
18h31m

Will be funny if Sam was the bad guy all along

eschaton
1 replies
15h36m

Uh, we all should know exactly who and what Sam Altman is by now.

He’s absolutely been the bad guy all along.

crazygringo
3 replies
19h14m

Seriously. This is utterly baffling to me.

OpenAI is trying to demonstrate how it's so trustworthy, and is always talking about how important it is to be trustworthy when it comes to something as important and potentially dangerous as AI.

And then they do something like this...??

I literally don't understand how they could be this dumb. Do they not have a lawyer? Or do they not tell their corporate counsel what they're up to? Or just ignore the counsel when they do?

steveBK123
1 replies
19h12m

Also retired the entire safety team in the same week too, lol.

bigiain
0 replies
18h37m

I wonder how much their anti-disparagement clauses are about covering up how this went down internally?

prawn
0 replies
17h46m

Especially considering that the advantage gained by having an AI sound like this one individual is absolutely minimal. It's not as though any significant portion of a target market is going to throw a tantrum, saying "No, no, I refuse to accept this simulated companionship unless it sounds exactly like the voice in that one particular movie several years ago." Baffling that the company didn't recognise the risk here and retire that voice as soon as they were turned down the first time.

option
1 replies
18h41m

Tell me: who is the target user base?

__loam
0 replies
18h39m

Lonely losers who think computers are magic.

mrieck
1 replies
12h11m

I can't believe this demo hasn't been deleted yet:

https://twitter.com/OpenAI/status/1790089521985466587

Giggly, flirty AI voice demos were already weird, but now it's even creepier knowing the backstory of how they try to get their voices.

steveBK123
0 replies
7h10m

This demo and the one I linked just seem so open about the AI GF use case, it's bizarre.

If you actually wanted a voice assistant AI, having a giggly, chatty computer acting like it has a huge crush on you is not remotely useful in day-to-day real-world use. Unless that's exactly what you want.

nabla9
13 replies
19h15m

Johansson has money to hire lawyers and immediate access to media, so they backed off.

Altman and OpenAI will walk over everyone here without any difficulty if they decide to take what's ours.

ecjhdnc2025
11 replies
18h29m

I often wonder why tech people think so positively about companies they idolise who are Uber-ing their way through regulations. Where do they think it stops?

Why would people not want laws? The answer is so they can do the things that the laws prevent.

This is POSIWID territory [0]. "The purpose of a system is what it does". Not what it repeatedly fails to live up to.

What was the primary investment purpose of Uber? Not any of the things it will forever fail to turn a profit at. It was to destroy regulations preventing companies like Uber doing what they do. That is what it succeeded at.

The purpose of OpenAI is to minimise and denigrate the idea of individual human contributions.

[0] https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...

rockemsockem
4 replies
17h34m

But like, Uber gave us taxis on our phones. No taxi company was going to do that without a force like Uber making them do it.

ecjhdnc2025
3 replies
17h28m

It also gave you cab drivers who don't earn enough to be able to replace their vehicles.

You can cheer on "forces" like Uber all you like but I would prefer it if progress happened without criminal deception:

https://www.theguardian.com/news/2022/jul/10/uber-files-leak...

I don't see how anyone can read this and think the uber app is a net positive.

dclowd9901
1 replies
12h0m

There’s gotta be a middle ground. The registration system was shit and encouraged a ridiculous secondary market that kept a lot of people under someone else’s thumb too.

Why does everything keep getting worse? Why do people keep making less? We need to figure out the answers to these questions. And no, nobody here knows them.

ecjhdnc2025
0 replies
50m

Black cab regulation in London is not shit. It's simultaneously archaic and valuable; a cabbie can solve problems an Uber driver will never manage.

If there is a middle ground I'd like to think it involves not smashing through regulations that protect individual businesspeople (like cabbies) who have learned a difficult job.

rockemsockem
0 replies
16h52m

Citation needed on the "drivers who can't replace their vehicles" part. Lots of cities in the US have passed laws about how much such workers must be paid and I generally think the government is the one that should be solving that problem.

I read that whole article. I didn't know about the intentional strategy to send Uber drivers into likely violent situations. That's fucked up.

Most of that article seemed to focus on Uber violating laws about operating taxi services, though. Sounds good to me? There's nothing intrinsically morally correct about taxi service operation laws. This sort of proves my point, too. Some company was going to have to fight through all that red tape to get app-based taxis working, and maybe it's possible to do that without breaking the law, but if it's easier to just break the law and do it, then whatever. I can't emphasize enough how little I care about those particular laws being broken, and maybe if I knew more about them I'd even be specifically happy that those laws were broken.

nerdponx
3 replies
17h41m

I often wonder why tech people think so positively about companies they idolise who are Uber-ing their way through regulations. Where do they think it stops?

Because they don't think about the consequences, and don't want to. Better to retreat into the emotional safety of techno-fantasy and feeling like you're on the cutting edge of something big and new (and might make some good money in the process). Same reason people got into NFTs.

mycologos
1 replies
15h14m

Because they don't think about the consequences, and don't want to.

This is a dangerous way of thinking about people who disagree with you, because once you decide somebody is stupid, it frees you from ever having to seriously weigh their positions again, which is a kind of stupidity all its own.

timeon
0 replies
8h54m

because once you decide somebody is stupid

You just made up an argument. There is no stated or implied stupidity.

You can't dismiss critique of carelessness like that.

ecjhdnc2025
0 replies
17h26m

Same reason people got into NFTs.

Same people who got into NFTs.

wilg
0 replies
18h22m

Nobody besides cab medallion owners really liked the regulations that Uber violated, which is probably a big part of it.

afro88
0 replies
14h4m

POSIWID

You need to be honest about what it actually does then. Cherry picking the thing you don't like and ignoring the rest will bring you no closer to true understanding

mateus1
0 replies
17h14m

I agree. They clearly have no ethics beyond “steal whatever data is out there as fast as you can”.

HarHarVeryFunny
13 replies
18h27m

I found the whole ChatGPT-4o demo to be cringe inducing. The fact that Altman was explicitly, and desperately, trying to copy "her" at least makes it understandable why he didn't veto the bimbo persona - it's actually what he wanted. Great call by Scarlett Johansson in not wanting to be any part of it.

One thing these trained voices make clear is that it's a tts engine generating ChatGPT-4o's speech, same as before. The whole omni-modal spin suggesting that the model is natively consuming and generating speech appears to be bunk.

monroewalker
5 replies
17h55m

One thing these trained voices make clear is that it's a tts engine generating ChatGPT-4o's speech, same as before.

I'm not familiar with the specifics of how AI models work, but doesn't the ability shown in some of the demos rule out what you've said above? E.g., the speeding up and slowing down of speech and the sarcasm don't seem possible if TTS were a separate component.

nabakin
1 replies
15h7m

Azure Speech tts is capable of speeding up, slowing down, sarcasm, etc with SSML. I wouldn't be surprised if it's what OpenAI is using on the backend.
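For anyone who hasn't seen SSML, here's a minimal sketch of what that kind of markup looks like with Azure's extensions. (The voice name, style, and prosody values are just illustrative examples from Azure's public catalog; nothing here is known about OpenAI's actual backend.)

```xml
<speak version="1.0"
       xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts"
       xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <!-- express-as sets an emotional speaking style; prosody adjusts rate/pitch -->
    <mstts:express-as style="cheerful">
      <prosody rate="+25%" pitch="+5%">
        Sure! I can say this faster and brighter.
      </prosody>
    </mstts:express-as>
  </voice>
</speak>
```

So an LLM that merely emits markup like this in front of a stock TTS engine could fake a fair amount of the "emotion" in the demo.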

vessenes
0 replies
15h2m

Greg has specifically said it's not an SSML-parsing text model; he's said it's an end to end multimodal model.

FWIW, I would find it very surprising if you could get the low latency expressiveness, singing, harmonizing, sarcasm and interpretation of incoming voice through SSML -- that would be a couple orders of magnitude better than any SSML product I've seen.

mmcwilliams
1 replies
17h34m

I have no special insight into what they're actually doing, but speeding up and slowing down speech have been features of SSML for a long time. If they are generating a similar markup language it's not inconceivable that it would be possible to do what you're describing.

GrilledChips
0 replies
16h12m

It's also possible that any such enunciation is being hallucinated from the text by the speech model.

AI models exist to make up bullshit that fills a gap. When you have a conversation with any LLM it's merely autocompleting the next few lines of what it thinks is a movie script.

HarHarVeryFunny
0 replies
16h51m

The older formant-based (vs speech sample based) speech sythesizers like DECTalk could do this too. You could select one of a half dozen voices (some male, some female), but also select the speed, word pronunciation/intonation, get it to sing, etc, because these are all just parameters feeding into the synthesizer.

It would be interesting to hear the details, but what OpenAI seem to have done is build a neural net based speech synthesizer which is similarly flexible because it is generating the audio itself (not stitching together samples), conditioned on the voice ("Sky", etc) it is meant to be mimicking. Dialing the emotion up/down is basically affecting the prosody and intonation. The singing is mostly extending vowel sounds and adding vibrato. In the demo Brockman refers to the "singing voice", so it's not clear if they can make any of the 5 (now 4!) voices sing.

In any case, it seems the audio is being generated by some such flexible tts, not just decoded from audio tokens generated by the model (which anyways would imply there was something - basically a tts - converting text tokens to audio tokens). They also used the same 5 voices in the previous ChatGPT which wasn't claiming to be omnimodal, so maybe basically the same tts being used.

leumon
3 replies
17h56m

I think it is more than a simple tts engine. At least from the demo, they showed it can control the speed, and it can sing when requested. Maybe it's still a separate speech engine, but more closely connected to the LLM.

sooheon
0 replies
17h33m

tts with separate channels for style would do it, no?

nabakin
0 replies
15h9m

Azure Speech tts is capable of doing this with SSML. I wouldn't be surprised if it's what OpenAI is using on the backend.

kromem
0 replies
15h31m

Most impressive was the incredulity at the 'okay' during the counting demo, after the nth interruption.

It quickly became apparent that text alone is a poor medium for the variety and scope of signals that could be communicated by these multimodal networks.

aabhay
1 replies
18h5m

I wouldn’t go as far as your last statement. While shocking, it’s not inconceivable that there’s native token I/O for audio. In fact tokenizing audio directly actually seems more efficient since the tokenization could be local.
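To make the "tokenizing audio directly" idea concrete, here's a toy illustration: 8-bit mu-law companding, as used by WaveNet-era models, maps each waveform sample to one of 256 discrete tokens that a transformer could model like text. (Purely illustrative of what "audio tokens" can mean; nothing is known publicly about GPT-4o's internals.)

```python
import math


def mulaw_encode(x: float, mu: int = 255) -> int:
    """Map a sample in [-1, 1] to one of 256 discrete tokens (mu-law companding)."""
    # Compand: compress dynamic range logarithmically, preserving sign.
    y = math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)
    # Quantize the companded value to an integer token in [0, mu].
    return int((y + 1) / 2 * mu + 0.5)


def mulaw_decode(token: int, mu: int = 255) -> float:
    """Invert the companding: token in [0, mu] back to a sample in [-1, 1]."""
    y = 2 * token / mu - 1
    return math.copysign((math.exp(abs(y) * math.log1p(mu)) - 1) / mu, y)


# A waveform becomes a sequence of integer "audio tokens" a transformer could model:
wave = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(16)]
tokens = [mulaw_encode(s) for s in wave]
```

The point is just that once audio is a discrete token stream, next-token prediction applies to it the same way it does to text.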

Nevertheless, this is still incredibly embarrassing for OpenAI, and it totally hurts the company's aspiration to be good for humanity.

timeon
0 replies
9h0m

company’s aspiration to be good for humanity

Seems like they abandoned it pretty early - if it was real in the first place.

og_kalu
0 replies
17h27m

One thing these trained voices make clear is that it's a tts engine generating ChatGPT-4o's speech, same as before. The whole omni-modal spin suggesting that the model is natively consuming and generating speech appears to be bunk.

This doesn't make any sense. If it's a speech-to-speech transformer, then 'training' could just be a sample at the beginning of the context window. Or it could be one of several voices used for the instruct-tuning or RLHF process. Either way, it doesn't debunk anything.

stavros
12 replies
19h14m

This is the first I've heard of this, but I've used the "Sky" voice extensively and never once thought it sounded like Johansson. Has anyone else noticed a similarity? To me they sound pretty different, Johansson's voice is much more raspy.

muglug
3 replies
16h11m

There are two different things: the Sky voice they launched last year, as heard here: https://www.youtube.com/watch?v=RcgV2u9Kxh0. The voice actor is the same, but the intonation is fairly flat.

They changed the voice to intone like Scarlett Johansson's character. It's like they changed the song the voice was singing to one that lots of people recognise.

wraptile
0 replies
11h38m

That's a big stretch. Allowing intonation to be copyrighted would be incredibly silly. The timeline and intention make sense, but this being "similar" would never win, and it shouldn't. Sam should have kept his mouth shut.

stavros
0 replies
16h4m

Oh, interesting, I didn't know that, thank you. Do you have an example of the changed voice anywhere?

kromem
0 replies
15h28m

I don't think that's exactly accurate.

What's likely different is that GPT-4o can output the tonality instructions for text to speech now.

It's probably the same voice, but different instructions for generations. One was without tonal indicators, one with.

z7
1 replies
18h27m
WrongAssumption
0 replies
16h16m

I mean, the first post you linked to explicitly says they do sound similar. The last one says they don't know what Scarlett's voice sounds like to begin with.

sooheon
1 replies
9h44m

Yes, I feel gaslit by the whole situation

stavros
0 replies
9h42m

Yeah, very odd. Don't get me wrong, I'd absolutely love Johansson's voice on the thing, but Sky is not it.

nmeofthestate
1 replies
10h3m

Yes when the demos first came out I don't remember seeing anyone comparing the voice to Scarlett Johansson's. It seems to be a meme that's taken hold subsequently with the news about OpenAI trying to license her voice.

chemmail
1 replies
18h25m

Bottom line: anyone who watched the movie "Her" will make an immediate connection. It also does not help that Sam posted "her" on X the night before. Pretty much a slam-dunk case.

stavros
0 replies
17h51m

Anyone who watched the movie "Her" will make an immediate connection with any of the female voices. It's kind of the entire point of the film.

xbmcuser
11 replies
14h42m

The more I find out about Altman, the more I agree with the previous board about removing him. The guy just feels sleazy to me. Though he is doing what I want, and that is not giving a fuck about artificial monopolies granted by government, i.e. copyright.

titanomachy
3 replies
14h32m

It would be helpful if you linked to something more specific (even a timestamp). I’m not going to watch this nearly 3-hour video to decide whether it supports your statement or not.

titanomachy
0 replies
27m

What's the quote from this that indicates he is trying to "obtain an artificial monopoly on AI"?

SahAssar
0 replies
1h28m

If you make a claim and link a source without saying which part of the source actually backs your claim, and then, when asked, you just link another source that does not explicitly say what you initially said, it makes your argument look weak.

Link and quote the parts that back your claim, and if it is not clear how they back it, say what your interpretation is.

insane_dreamer
3 replies
12h36m

Actually it’s the copyright that is supposed to prevent a monopoly arising in the form of an OpenAI, or Google News back in the day.

xbmcuser
2 replies
11h59m

Copyright was supposed to protect a small inventor or maker for a short period of time. But as those small makers became corporations, they lobbied for and kept extending copyright periods, from a few years to decades after the death of the individual, and the actual essence of the reason for copyright is gone. I am of the opinion that copyright, apart from having a fixed term, should also carry a form of value tax: if the government is granting you a monopoly on something, you should pay a yearly tax on its valuation. And if it is not worth paying the yearly tax to keep the copyright, then the work is released to the commons.

rideontime
0 replies
4h23m

You seem to be talking about patents.

insane_dreamer
0 replies
4h59m

Without copyright you have Google News sucking up all news organization content and robbing them of revenue (which is why news orgs sued and most went behind paywalls). Now OpenAI is doing the same thing to all content creators. Copyright law is the only thing protecting content creators or giving them any leverage at all. Copyright terms might be too long (we can debate that), but getting rid of it altogether would be the greater evil.

wraptile
0 replies
14h24m

The board reinstating Altman was what broke the camel's back for me. It showed that the board is completely powerless and that Altman is simply a liar.

oblio
0 replies
14h2m

You're sort of funny. He's ignoring the existing artificial monopolies just to make money, when it suits him.

He's using trade secrets, copyright, patents, NDAs liberally.

This is not a principled stand, just opportunism.

ThinkBeat
11 replies
19h26m

SJ mentions deep fakes.

It is quite possible that OpenAI has synthesized the voice from SJ material.

However, if OpenAI can produce the woman who is the current voice, and she has a voice nearly identical to SJ's, would that mean OpenAI had done something wrong?

Does SJ, since she is a celebrity, hold a "patent" right to sound like her?

The more likely scenario is that they hired a person and told her to try to imitate how SJ sounds.

What is the law on something like that?

not2b
8 replies
19h14m

The answer, based on two different court precedents (Bette Midler, Tom Waits), is that the company can't do that. Companies cannot hire soundalike people to advertise their products after the person with a distinctive voice they really wanted declined. Doesn't matter if they hired a soundalike and used her voice.

crimsoneer
3 replies
19h6m

There is a big, big difference between actively and intentional imitating a voice, and having a broadly similar voice though!

talldayo
0 replies
18h45m

A difference that gets increasingly narrow when you are desperately trying to license the original likeness.

crimsoneer
0 replies
18h48m

The CEO tweeting a jokey reference to your voice really doesn't help though.

bigiain
0 replies
18h40m

Yeah but...

Them trying (and failing) to negotiate the rights, and then them vaguely attempting again 2 days before launch, and fucking Altman tweeting a quite obvious reference to a movie in which SJ is the voice of an AI girlfriend - leans very very strongly in the direction of "active and intentional imitation".

Anybody trying to claim some accidental or coincidental similarity here has a pretty serious credibility hole they need to start digging themselves out of.

chipweinberger
3 replies
17h15m

If they are not trying to trick people into believing xyz person did the voice acting, and instead are going for a certain style, I think they would be protected by freedom of expression. Think of how authors of books describe voices in great detail.

i.e. intent matters.

In this case, since the other voice actor has a clearly different voice than SJ, it seems like their intent is to just copy the general 'style' of the voice, and not SJ's voice itself. Speculative though.

Ar-Curunir
2 replies
17h11m

In this case, since the other voice actor has a clearly different voice than SJ, it seems like their intent is to just copy the general 'style' of the voice, and not SJ's voice itself.

How can you say that when they literally approached SJ for her voice, and then asked the voice actor to reproduce SJ's voice?!

rvz
0 replies
13h30m

Exactly this.

The fact that Johansson did not give OpenAI permission to use her voice, that they then hired a voice actor to copy her voice's likeness, and that Altman tweeted a reference to the film 'Her', in which Johansson was the starring voice actor, tells you that OpenAI intended to clone and use her voice even without permission.

OpenAI HAD to pull the voice down to not risk yet another lawsuit.

The parent comment clearly has the weakest defense I have seen on this discussion.

chipweinberger
0 replies
16h53m

and then asked the voice actor to reproduce SJ's voice?!

you are just making that up afaict.

How can you say that when they literally approached SJ for her voice

Almost by definition, SJ's voice will match the style of 'Her', at least for a while (*). So why not ask SJ first?

(*) voices change significantly over time.

prawn
1 replies
17h39m

SJ's side would use pre-action discovery and trawl through internal OpenAI communications to check. If there was any suggestion of instruction from management to find a similar voice, they'd potentially be in trouble. There's already the indication that they wanted her voice through official channels.

And would they need to use a voice actor when there is a substantial body of movie dialogue and interviews? I'd be surprised if they'd bothered.

ThinkBeat
0 replies
7h53m

This happens a lot in movies, though. If the lead famous actor turns down a part, especially in a sequel, the filmmaker will spend time deliberately finding an actor who looks a lot like the lead star.

(Or they rewrite the role.)

kashyapc
10 replies
9h35m

Altman often uses tactical charisma to trap gullible people, government entities, and any unsuspecting powerful person for his ends. He will not bat an eyelid at taking whatever unethical route gives him "moat". He relentlessly talks as if "near-term AGI" is straining to get out of the bottle in his ClosedAI basement. He will tell you with great concern how "nervous" or "scared" he is (he said this to the US Congress[1]) of what he thinks his newest LLM model is gonna let loose on humanity.

So he's here to help regulate it all with an "international agency" (see the reference[2] by windexh8er in this thread)! Don't forget that Altman is the same hack who came up with "Worldcoin" and the so-called "Orb" that'll scan your eyeballs for "proof of personhood".

Is this sleazy marketer the one to be trusted to lead an effort that has a lasting impact on humanity? Hell no.

[1] https://news.ycombinator.com/item?id=38312294

[2] https://news.ycombinator.com/item?id=40423483

ml-anon
9 replies
9h0m

"Tactical charisma" is a good one.

Honestly though, if you actually listen to him and read his words, he seems to be more devoid of basic empathetic human traits than even Zuckerberg, who gets widely lampooned as a robot or a lizard.

He is a grifter through-and-through.

jajko
7 replies
8h47m

Emotional intelligence / true empathy cannot be learned or acquired, at least IMHO.

But it can be mimicked almost to perfection, either by endless trial and error or by highly intelligent, motivated people. It usually breaks apart when a completely new, intense, or stressful situation happens. Sociopaths belong here very firmly and form the majority.

If you know what to look for, you will see it in most if not all politicians, 'captains of industry', or people who otherwise got to serious power by their own deeds.

Think about it a bit: what sort of nasty battles did they have to keep winning against similar folks to get where they are? This is not a place for decent human beings; you or I would be outmatched quickly. Jordan Peterson once claimed sociopaths make up circa 1 in 20 of the general population, say 15 million just in the US. Not every one of them is highly intelligent and capable of getting far, but many do. Jobs, Gates, Zuckerberg, Bezos, Musk, Altman, and so on. The world is owned and run by them, I'd say without exception.

ml-anon
3 replies
7h41m

In the case of Peterson, I'd say it takes one to know one.

sherkaner
2 replies
5h9m

On the contrary, sociopaths are not emotional or political. They manipulate, while Peterson fights. We might disagree with him, but I'm 100% sure he is an honest human being.

ml-anon
0 replies
4h26m

This is so off topic, but he's literally a Grifting-101 playbook follower. Harks on about strength of will, male manliness, and clean rooms. Gets addicted to benzos, goes to Russia to get himself practically lobotomized, and films himself disheveled and crying in a filthy room. He's 100% a grifter.

tim333
0 replies
1h20m

World is owned and run by them

There's a song about that - Jarvis Cocker - "Running the World"

Doesn't get played on radio because of the lyrics

raxxorraxor
0 replies
7h30m

Almost any public figure that displays empathy will do it for show or as a statement. On many occasion not showing an emotional reaction is the empathetic thing to do as well.

You cannot really judge people by their public appearance, it will in most cases be a fake persona. So the diagnosis of Jobs or Zuckerberg isn't really grounded in reality if you do not know them personally.

ecjhdnc2025
0 replies
8h9m

Emotional intelligence /true empathy cannot be learned or acquired, at least IMHO.

I think you are right in general here in this comment but I am not sure if you are right on this bit.

Peterson might be slightly overstating the number of sociopaths (others put it at more like one in thirty).

Those people have to fake it (if they can be bothered; it doesn't seem to hold people back from the highest office if they don't).

The vast majority of people with noticeably low empathy, though, simply haven't ever been taught how to nurture that small seed of empathy, how to use it to see the world, how to feel the reciprocal benefits of others doing the same. How to breathe it in and out, basically. It's there, like a latent ability to sing or draw or be a parent, it's just that we're not good at nurturing it specifically.

Schools teach "teamwork" instead, which is a lite form of empathy (particularly when there is an opposing team to "other")

I was never a team player, but I have learned to grow my own empathy over the years from a rather shaky sense of it as a child.

kashyapc
0 replies
8h6m

I agree. By tactical charisma, I didn't mean to imply that he has genuine empathy. I mean that he says things the other person finds pleasing, in just the right words, and with credible-sounding seriousness. Tactical in a tempting sense: "Don't you want to be the bridge between man and machine, Scarlett?" or, "Imagine comforting the whole planet with your voice". I've slightly rephrased here, but this is how he tried to persuade Scarlett Johansson into "much consideration" (her words).

Yes, I've listened to Altman. A most recent one is him waffling with a straight-face about "Platonic ideals"[1], while sitting on a royal chair in Cambridge. As I noted here[2] six months ago, if he had truly read and digested Plato's works, he simply will not be the ruthless conman he is. Plato would be turning in his grave.

[1] https://www.youtube.com/watch?v=NjpNG0CJRMM&t=3632s

[2] https://news.ycombinator.com/item?id=38312875

PixelPaul
10 replies
19h49m

I am really not liking this Sam guy and how he does things. He has an attitude of “my way and only my way, and I don’t care what you think or do”

dylan604
2 replies
19h38m

This is pretty much quintessential founder behavior. I have had my run-ins with people like this, and the relationship is usually short lived. I do not drink the kool-aid, and question pretty much everything. These types of personalities are like oil and water: they do not mix. You almost need a third person to act as an emulsifier to allow the oil & water to mix without separating.

suzzer99
0 replies
18h57m

Yeah this explains why most of my interviews with startups haven't gone well. I've even had friends ask me to do PoC work for them for equity, and they still get all bent out of shape when I'm not instantly smitten by their one-sentence pitch, and instead ask for more details about their business plan.

If I thought I had a great idea, I would want people to try to poke holes in it. Yet founders almost universally seem to be incredibly sensitive and insecure about their ideas.

laborcontract
0 replies
19h34m

An emulsifier role is a great way to put it. In basketball, that's the glue guy.

oglop
1 replies
19h43m

That’s all successful tech companies out of Silicon Valley.

It is a silly place.

dylan604
0 replies
19h25m

On second thought, let's not go there.

It's not just successful companies though. There is a bit of ego necessary in a founder that makes them think their idea or their implementation of a thing is better, so that it needs to be its own company. Sometimes, though, they even get caught up in their own reality distortion fields, with obviously bad ideas or ideas implemented badly due to their own arrogance that ultimately fail.

octopoc
0 replies
18h14m

"Sam and Jack, I know you remember my Torah portion was about Moses forgiving his brothers. “Forgive them father for they know not what they’ve done” Sexual, physical, emotional, verbal, financial, and technological abuse. Never forgotten."

That is...not a pretty picture. We desperately need someone else at the helm of OpenAI.

aeurielesn
1 replies
19h42m

Him and pretty much the entire SV culture.

talldayo
0 replies
19h37m

Relax guys, he's Open!

Open for business, Open to suggestions, and Open season for any lawyers that want a piece of the sizable damages.

vundercind
0 replies
18h41m

Heavily self-promoting, brash aren’t-I-great business sorts are a pretty damn consistent type across time, and it’s not a good type.

neilv
9 replies
16h55m

Isn't OpenAI mostly built upon disregarding the copyright of countless people?

And hasn't OpenAI recently shown that they can pull off a commercial coup d'état, unscathed?

Why would they not simply also take the voice of some actress? That's small potatoes.

No one is going to push back against OpenAI meaningfully.

People are still going to use ChatGPT to cheat on their homework, to phone-in their jobs, and to try to ride OpenAI's coattails.

The current staff have already shown they're aligned with the coup.

Politicians and business leaders befriend money.

Maybe OpenAI will eventually settle with the actress, for a handful of coins they found in the cushions of their trillion-dollar sofa.

johnnyanmac
6 replies
11h9m

Isn't OpenAI mostly built upon disregarding the copyright of countless people?

It sure was. But OpenAI decided to poke the bear and is being sued by NYT. And apparently, as a side quest, they thought it best to put their head in a lion's mouth. I wouldn't call the PR clout and finances of an A-list celebrity small potatoes.

They could have easily flown under the radar and have been praised as the next Google if they kept to petty thievery on the internet instead of going for the high profile content.

People are still going to use ChatGPT to cheat on their homework, to phone-in their jobs, and to try to ride OpenAI's coattails.

Sure, and ChatGPT isn't going to make lots of money from these small-time users. They want to target corporate, and nothing scares off corporate more than pending litigation. So I think this will bite them sooner rather than later.

Maybe OpenAI will eventually settle with the actress, for a handful of coins they found in the cushions of their trillion-dollar sofa.

I suppose we'll see. I'm sure she was offered a few pennies as is, and she rejected that. She may not be in it for the money. She very likely doesn't need to work another day in her life as is.

flanked-evergl
5 replies
8h7m

> Isn't OpenAI mostly built upon disregarding the copyright of countless people?

It sure was.

Can you cite something that elaborates on this point? Do people who read books and then learn from it also disregard copyright? How is what OpenAI does meaningfully different from what people do?

NicuCalcea
3 replies
6h29m

Are those people then reselling the contents of the books?

flanked-evergl
2 replies
5h44m

How is that relevant? Is OpenAI reselling the contents of books? If so, this is the first I have heard of it.

mepiethree
1 replies
5h25m

Yes. You can (could?) paste the “article preview” into GPT-4 and get full articles for free, basically pirating a NYT subscription. Here are “one hundred examples of GPT-4 memorizing content from the New York Times”:

https://nytco-assets.nytimes.com/2023/12/Lawsuit-Document-dk...

flanked-evergl
0 replies
5h22m

Thanks for sharing, that is indeed quite a compelling case, and I did not expect it to be regurgitating verbatim text to this extent. It would be interesting to see how OpenAI addresses this.

mateus1
0 replies
5h8m

DALL-E replicates artists' styles with no regard to copyright.

Sora is doing the same on YouTube videos. It blocks queries like "in the style of Wes Anderson" but still uses that work as training data for generating content.

tony_cannistra
1 replies
15h12m

No one is going to push back against OpenAI meaningfully.

Couldn't, perhaps, one of the more famous people on Earth be responsible for "meaningfully" taking OpenAI to task for this? Perhaps even being the impetus for legislative action?

neilv
0 replies
14h22m

And allied with an army of other artists.

If they tell the story of OpenAI, in a way that reaches people, that would be a triumph of the real artists, over the dystopian robo-plagiarists.

I love it already.

callalex
8 replies
18h12m

I think we should all strive to meet the standards of “Weird” Al Yankovic. He set out to do something that was widely hated by a very powerful and litigious sector, and yet after a full career he is widely revered by both the public and the industry. He masterfully sidestepped any problems by adhering to the basic concept of consent while still getting what he wanted 99% of the time.

justin66
5 replies
16h36m

Did anyone say no to him other than Prince?

morkalork
3 replies
15h51m

Coolio didn't say yes to "Amish Paradise", but the record label that had the rights agreed to the deal without him (which certainly shows who's in charge in that industry, eh?). Many years later, Coolio admitted that he was wrong about the whole thing, though.

justin66
2 replies
14h40m

Oh my. That’s disappointing, in the sense that I believe I’ve heard Al say that he always checks with the artist.

zamadatix
1 replies
13h58m

If you take Weird Al's word for it, he was told Coolio had approved and only later found out it was the other way around:

"…two separate people from my label told me that they had personally talked to Coolio… and that he told them that he was okay with the whole parody idea…Halfway into production, my record label told me that Coolio’s management had a problem with the parody, even though Coolio personally was okay with it. My label told me… they would iron things out — so I proceeded with the recording and finished the album."

https://www.vulture.com/2011/12/gangstas-parodist-revisiting...

justin66
0 replies
12h52m

Thanks. Yeah, I don't think Weird Al would lie about something like that.

apengwin
0 replies
16h26m

Paul McCartney

willis936
0 replies
16h11m

It worked for Weird Al because he's always had good intentions. If all he ever tried to do was scam people by overselling, fear mongering, saber rattling, and stealing whatever was in reach then it wouldn't have worked out for him. We'll see if OpenAI can last as long as Weird Al.

paxys
0 replies
5h45m

There was never an argument about whether what Weird Al was doing was legal or not. Parody has been 100% covered by fair use laws from day one, and getting permission was just the cherry on top. OpenAI exists on much murkier legal grounds.

z7
6 replies
18h50m

So they wanted Johansson's voice, she declined, they chose another voice actress who sounds somewhat similar. Can someone explain why that is bad? I don't really get it.

rangerelf
5 replies
18h18m

Did they really hire another voice actress? Who?

As for why it's bad: they established that they wanted specifically Scarlett Johansson's voice, got declined, doubled down, got declined again, and then went ahead and did it anyway. They can say in their own defense that it's some other voice actress who sounds similar; ok, so produce that name, tell us who she is.

Absent that, it's Johansson's voice, clipped from movies and interviews and shows and whatever.

drcode
2 replies
16h28m

Yeah, and ScarJo asked them to provide evidence of this before suing them, and they couldn't... and they have a track record of lying.

dorkwood
1 replies
12h45m

Why shouldn't we trust what OpenAI says? If they say they used another actress, we should give them the benefit of the doubt. Questioning them only gives fuel to their detractors, which could in turn slow progress and reduce investment in the space.

mrbungie
0 replies
12h2m

Do they get a free pass because of accelerationist reasons? If anything this is a reason to stall/brake.

sama tweet after the demo + SJo's press release + OpenAI not even risking it and pulling out the voice from ChatGPT should raise enough doubts if anything.

minimaxir
0 replies
17h4m

The training for a voice decoder model at GPT-4o's quality would require specific and numerous high-quality examples.

This is different from how voice-cloning models like ElevenLabs work.

nicklecompte
6 replies
18h24m

From the Ars Technica story[1], this is very funny:

But OpenAI's chief technology officer, Mira Murati, has said that GPT-4o's voice modes were less inspired by Her than by studying the "really natural, rich, and interactive" aspects of human conversation, The Wall Street Journal reported.

People made fun of Murati when she froze after being asked what Sora was trained on. But behavior like that indicates understanding that you could get the company sued if you said something incriminating. Altman just tweets through it.

[1] https://arstechnica.com/tech-policy/2024/05/openai-pauses-ch...

gkanai
5 replies
13h21m

Bloomberg's Odd Lots Podcast had an ex-CIA officer, Phil Houston, on in April of 2024. He was promoting a new book but he had a lot of great advice for anyone to use regarding 'tells' when people are lying. Murati was clearly lying- that's obvious then and now.

https://podcasts.apple.com/us/podcast/an-ex-cia-officer-expl...

IceDane
2 replies
8h10m

Is there actually any evidence for this? AFAIK, other similar claims about people doing certain things when lying have been debunked(like fidgeting, avoiding eye contact, etc)

j-bos
1 replies
7h8m

It's context- and person-to-person specific, with many possibilities for false positives and negatives.

nicklecompte
0 replies
5h32m

Right - the only reason Murati's behavior was "a tell" was that rational observers had a good reason to believe her statement was evasive and misleading, regardless of how she said it. In that context her emotional response is a funny (yet only barely rational) indication that she wasn't being honest. But trying to go the other direction - "her emotional response seems dishonest, what part of her statement was a lie?" - is as scientific as astrology[1], and a disaster for real people to use in real life. You'll quickly end up accusing innocent people for mean and bigoted reasons.

[1] Modern quack science might be worse than modern astrology/etc, since the stance of "I know it's not real, but..." means astrology folks can freely disregard horoscopes that are socially or ethically objectionable. If you're claiming your BS is actually Science, I think for a lot of people there's a sort of vicious feedback loop. Especially with social stuff, where these claims are essentially unfalsifiable. ("Body language reading is Science and cannot fail, it can only be failed by my incompetence.")

coolandsmartrr
1 replies
11h45m

Could you explain what "to use regarding 'tells'" means in this context?

applecrazy
0 replies
11h33m

A "tell" in this case is domain-specific terminology to denote a behavior that provides information that the person may have been trying to keep secret. I believe the term comes from poker:

https://en.wikipedia.org/wiki/Tell_(poker)

mxstbr
6 replies
19h48m

There is no source; black text on a white background. How do we know this is real?

Animats
0 replies
15h42m

Variety article updated: [UPDATE: Johansson released a statement saying Altman had reached out to ask her to lend her voice to ChatGPT but she declined; when she heard the demo, “I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine.”]

llamaimperative
0 replies
19h45m

It was posted by the tech reporter at NPR. Inb4 “journos can’t be trusted” blah blah blah, here in reality NPR is a reputable org and a reasonable person’s Bayesian priors would put this at “almost certainly an actual statement from ScarJo.”

endisneigh
6 replies
19h47m

Sadly not much will come of this. Even if they’re fined, so what?

MrSkelter
2 replies
19h40m

You have no idea. They will settle or be forced to admit they used a movie studio's IP without payment to clone a voice model. They will cut a check for tens of millions and maybe stock as well. They will run from this, as is clearly obvious from the immediate takedown. They are in crisis mode.

astrange
1 replies
19h27m

Who says creating a voice model from a movie would require you to pay anyone?

rangerelf
0 replies
18h22m

I'd say it's a given, every detail in a movie is an artistic expression of some kind.

fullshark
1 replies
19h16m

I don't agree, if Johansson is serious about wanting a legal precedent set and doesn't care about the money (tbd) then a hypothetical lawsuit could go to the US Supreme court and lead to a decision that has significant ramifications.

bigiain
0 replies
18h26m

"US Supreme court"

Yeah. _That_ well known completely rational and unquestionably incorruptible institution.

I would bet Altman has been to more teenage sex parties and paid for more holidays with SC judges than Scarlett has... :sigh:

polynomial
0 replies
19h0m

Exactly, cost of doing business.

rockemsockem
4 replies
18h18m

This is interesting to hear and if she decides to sue there's extremely clear precedent on her side.

The fact that they reached out to her multiple times and insinuated it was supposed to sound like her with Sam's "her" tweet makes a pretty clear connection to her. Without that they'd probably be fine.

Bette Midler sued Ford under very similar circumstances and won.

https://en.m.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.

thefourthchime
1 replies
16h48m

I'm not sure. In the case of *Midler v. Ford Motor Co.*, the advertising agency hired a singer to do an impression of Bette Midler herself, not a character she performs. The singer was instructed to sound as much like Bette Midler as possible while performing her hit song "Do You Want to Dance?" for the Ford commercial. This use of a sound-alike voice led to the lawsuit, as it mimicked Midler's distinctive voice without her permission.

Scarlett Johansson's character in the movie is not Scarlett Johansson although her voice is very similar. I wouldn't say it's identical.

rockemsockem
0 replies
16h37m

I haven't seen the commercial, but I feel like it's also probably not identical. My read of the case is that the context connecting Midler to the ad without her consent was a key feature, which makes me think Scarlett Johansson would be in a similarly strong position if she brought a case.

But ultimately I'm also not sure. There are some differences that courts could find important. I hope she sues and refuses to settle, so we can find out!

thefourthchime
1 replies
17h5m

I wonder if Scarlett Johansson really has a legal copyright since Warner Bros. owns the rights to the movie "Her." It would be like Dan Castellaneta trying to get a copyright for Homer Simpson when the character is owned by Fox.

rockemsockem
0 replies
16h41m

It's not copyright, you can't copyright a voice, it's down to likeness laws. It seems to me that they clearly invoked her likeness as a celebrity.

It'll be interesting to see what happens if she sues and refuses to settle.

GaggiX
4 replies
18h31m

"Sky" voice has been the default for about 8 months now, I think. If it resembles Scarlett Johansson so much, why does no one seem to have mentioned it before?

zamadatix
2 replies
17h27m

It's been mentioned so much in the last 8 months it prompted me to go watch "Her" 11 years after it came out.

GaggiX
1 replies
17h14m

I only saw Scarlett Johansson and "Her" being mentioned after the presentation of ChatGPT-4o. I saw people asking for a Scarlett Johansson voice, though.

zamadatix
0 replies
16h54m

This was prior to 4o. The demo did seem to make the discussion louder, though, as everyone was suddenly talking about the voice feature, whereas previously it was a slow side option for mobile devices only.

ec109685
0 replies
16h32m

They added much more Her-like emotion to the GPT-4o version, so the similarities are more striking.

dheera
3 replies
19h50m

What are the chances, among 7 billion people in the world, that there are always going to be 100 people who sound like you? If Sam Altman was going for a particular voice, there are probably 100 people with an indistinguishable voice, and it just becomes a question of a headhunt.

karaterobot
0 replies
19h31m

Trying to imitate her voice to get around paying her wouldn't be okay either. Waits v. Frito-Lay taught us that, if nothing else did. The question is whether people would think they were hearing Scarlett Johansson's voice when using that product, and the answer is yes, so they have to pay her to trade on her identity.

guhidalg
0 replies
19h42m

One word: "Her"

vagab0nd
2 replies
18h18m

What if the model asks the user to input an audio sample of the person they'd like to hear, and use that? Would that be legal?

prawn
0 replies
17h44m

Presumably the software could have terms of use that put any onus of use on the individual, assuming their marketing didn't promote being able to do this with random celebrities' voices.

MarioMan
0 replies
17h27m

That’s how Elevenlabs voice cloning works. They put the onus on the person making the clone to have gotten consent.

https://elevenlabs.io/voice-cloning

unraveller
2 replies
11h54m

Damning would be a side by side comparison of voices to assess the claim. We have the technology.

ChatGPT using Sky voice (not 4o - original release): https://youtu.be/JmxjluHaePw?t=129

Samantha from "Her" (voiced by ScarJo): https://youtu.be/GV01B5kVsC0?t=134

Rashida Jones Talking about herself https://youtu.be/iP-sK9uAKkM

I challenge anyone to leave prejudice at the door by describing each voice in totality first and seeing if your descriptions overlap entirely with others. They each have an obvious unique wispiness and huskiness to them.

dilyevsky
1 replies
11h51m

The first and second samples have this super noticeable vocal fry that Rashida doesn't have (as much of)

unraveller
0 replies
10h34m

They would all sound alike when itching for agreement, I bet. The narrow likeness in accent is there at first, but that isn't true likeness if other significant details emerge on further listening that don't overlap.

iainctduncan
2 replies
16h4m

Sam Altman appears to not be smart enough to realize how much damage his unbridled selfishness and weaselry are capable of doing to Open AI.

They have no moat, they can't fix hallucinations, and people are starting to realize it's nowhere near as useful or close to AGI as he's been saying. If they hate him too, this ship is sunk.

What a bloody arrogant idiot.

dsign
0 replies
10h1m

I wish you were right and that ship would be sunk, by the grace of Sam Altman the bloody arrogant idiot. There are burning issues that AI could tackle, unsolved problems that cost billions of lives and whose solutions require non-human information processing. But instead of those life-or-death matters, we are using all of that compute to wreck our species' cultural backbone and identity. If Sam Altman the bloody arrogant idiot by his greed manages to convince us that we have taken a wrong turn with our application of AI, then I will hang a portrait of him in my living room.

abakker
0 replies
13h45m

It seems he is either using ChatGPT instead of talking to experts…or just listening to only himself. Either way, Ilya should have stuck to his guns.

heyoni
2 replies
19h50m

Deleted?

jaykru
1 replies
19h40m

Try another browser; I wasn't able to open it on Librewolf (Firefox fork.)

heyoni
0 replies
19h30m

I’m on safari on iPhone though I do use the safari extension for adblocking…

causality0
2 replies
19h19m

I'm sure Jen Taylor would be open to an offer.

causality0
0 replies
16h9m

That's what I was referencing. That, and the fact Microsoft has given up on Cortana so she's probably free.

wyldfire
1 replies
14h35m

Evil genius territory.

When the offer was declined by scarjo, they could still train on her works of art and just hire a soundalike to make recordings regardless of whether they used it during training.

Then, at release time - either they get the buzz of artist-licensed "Her" or they get the buzz /outrage/Streisand of unlicensed "Her". Even if they take it down, OpenAI benefits.

I feel like the folks who fear the tech are wrong. But when the supposed stewards do such a moustache-twirling announcement, it seems like maybe we do need some restraint.

If a trade group can't put some kind of goodwill measures in place, we will inevitably end up with ham fisted legislation.

talldayo
0 replies
14h30m

There's not a lick of genius to it. Sam Altman wasn't willing to compromise on his vision and had to change things after-the-fact because he was threatened.

For me to believe this was genius I'd have to see some actual response from Sam. From the outside looking in, it appears that he was caught with his pants down when Johansson said no, and went ahead even though he was rejected a second time and obviously knew it was the wrong choice. There's no Streisand effect at play here; OpenAI already owned the news cycle with their 4o announcement and could have kept it quiet. But Sam just had to have his One More Thing, and now he's getting his just deserts.

wojciechpolak
1 replies
12h9m

What would happen if there was someone else in the world with exactly the same voice as Scarlett (or very, very similar) and they expressed a desire to work with OpenAI? Would Scarlett still have the right to prohibit its use?

wraptile
0 replies
11h21m

That's a fundamental flaw with this sort of copyright. First come, first served, and it's up to courts to "assign rights", so if you don't have that, tough luck: someone else owns your identity because they're bigger.

People cheering for this sort of copyright are completely lost imo. That's not a world anyone but the select few wants to live in.

Nevertheless that's not what happened here.

swat535
1 replies
18h23m

Does Sam have any more reputation to burn? Seriously, who would genuinely trust him at this point?

I mean, I know he has hundreds of blind followers, but good Lord, you would think that the man, with all his years of experience, would have the sense to introspect about what he is trying to achieve vs. how he is going about it.

Money really does blind all our senses, doesn't it?

rvz
0 replies
13h44m

Does Sam have any more reputation to burn? Seriously, who would genuinely trust him at this point?

Sam doesn’t care.

After the board threw him out of his own company, why would he allow that to happen again? With that, he now trusts far fewer people.

Money really does blind all our senses, doesn't it?

That is why the cultishness was full on display last year when he was fired by the board.

sebzim4500
1 replies
19h13m

Maybe I'm crazy but I don't think the voices are even that similar. I simply don't believe that her closest friends could not tell the difference.

cdrini
0 replies
11h29m

I didn't think the old Sky sounded anything like her, but the Sky they unveiled at the 4o event seemed super similar. While watching the event I was genuinely wondering, "wait, did they actually partner with Scarlett Johansson? That's wild!"

This voice: https://x.com/OpenAI/status/1790072174117613963

reducesuffering
1 replies
18h39m

Remember when the OpenAI board said Sam was "not consistently candid" and most people here advocated he be reinstated and how dare the board? They are speedrunning the Google "Don't Be Evil" rug-pull. Not allocating the superalignment team resources they were promised, "don't criticize OpenAI or mention there's an NDA or you lose all your equity"...

globalnode
0 replies
17h55m

I think the 2 tech leads that just resigned are actually dodging a HUGE bullet.

ml-anon
1 replies
18h59m

Combined with Murati’s reaction when asked if they trained Sora on YouTube videos, it’s obvious that OpenAI has trained their TTS systems on a whole bunch of copyrighted content including the output of professional actors and voice actors who definitely weren’t compensated for their work.

Altman and Murati are world-class grifters but until now they were stealing from print media and digital artists. Now they’re clashing with some of the most litigious industries with the deepest pockets. They’re not going to win this one.

meowface
0 replies
15h50m

You claim to work for either Anthropic or DeepMind (almost certainly DeepMind). I'm doubtful their AI products don't use people's works in similar ways.

minimaxir
1 replies
18h12m

One interesting legal caveat is that the Sky voice isn't "ScarJo", it's ScarJo as acted in the movie Her.

An issue with voice actors having their voice stolen by AI models/voice-cloning tech is that they have no legal standing: their performance is owned by their client, so they have no ownership. ScarJo may not have standing, depending on the contract (I suspect hers is much different than a typical VA's). It might have to be Annapurna Pictures that sues OpenAI instead.

Forbes had a good story about performer rights of voices: https://www.forbes.com/sites/rashishrivastava/2023/10/09/kee...

IANAL of course.

muglug
0 replies
16h20m

IANAL either, but that's not the caveat you think it is.

Bette Midler was able to sue Ford Motor Co. for damages after they hired a sound-alike voice: https://en.wikipedia.org/wiki/Midler_v._Ford_Motor_Co. Ford had acquired the rights to the song (which Midler didn't write).

fareesh
1 replies
19h50m

am I wrong to think this was the plan all along?

mainstream adoption hasn't been that great - now there's drama

heyoni
0 replies
19h49m

You can read that tweet?

djaykay
1 replies
18h33m

At this point it’s time to create lists of “ethical” AI services, ones that aren’t OpenAI nor other bad actors. I’m dropping my ChatGPT Plus today. Any suggestions on what to replace it with?

1ark
0 replies
15h21m

HuggingFace?

divbzero
1 replies
13h23m

Wanting to imitate Her is rather ironic: It's like watching Wall Street and wanting to be Gordon Gekko, or watching Gattaca and wanting to genetically engineer humans.

gkanai
0 replies
13h19m

Either _that_ is the joke for Altman, or there are a lot of Johansson fans at OpenAI.

brcmthrowaway
1 replies
19h36m

This reads like a PR stunt. Why did they clone the voice from Her?

whamlastxmas
0 replies
19h16m

SJ doesn’t have a monopoly on the sultry giggly flirty American female voice. There are a million women who could imitate this pretty closely

blibble
1 replies
19h35m

consent appears to be optional for everything OpenAI does

ml-anon
0 replies
18h50m

Those are the rumors about Sam…

UberFly
1 replies
19h39m

OpenAI has successfully stolen the intellectual property of millions of people to incorporate into their product, so why would they fear stealing someones voice at this point? I hope she wins. Maybe it'll set some kind of precedent.

whamlastxmas
0 replies
19h15m

Calling it stealing is a stretch. They have, at worst, infringed on copyright and terms of service.

SLHamlet
1 replies
17h53m

Really not a smart idea for OpenAI to do this when one of the top Congresspeople represents the Hollywood area, is about to be elected Senator, and already has a bill ready to require AI companies to abide by copyright:

https://nwn.blogs.com/nwn/2024/04/adam-schiff-ai-video-games...

xyst
0 replies
16h55m

VCs and angel investors feeling the burn right now. I hope all the firms on Sand Hill Rd get bent

zmmmmm
0 replies
8h26m

If it spurs on the movement to create legislation controlling how likenesses are used in AI models, Sam Altman has done himself a great disservice here.

xyst
0 replies
16h59m

Sam Altman is an absolute scumbag. Fuck ClosedAI. Hope this company and its VCs crash and burn like Theranos.

Maybe Altman lands in jail or files for bankruptcy after all the dust settles.

xlii
0 replies
12h43m

Just couple of days ago I discussed “Her” in context of Sky voice of ChatGPT, and how it reminds me of the movie.

It’s interesting to see how it's unfolding.

wnevets
0 replies
19h29m

An AI company stealing someone elses IP for profit? Unheard of.

whoknowsidont
0 replies
17h50m

Has this industry learned nothing?

I don't know guys, the super hyped up company with next-gen technology might just be using crime, underhanded tactics, and overstating their capabilities to pull in the thing we all love... and it's not each other or your friend's mother!

It's money!

whatever1
0 replies
18h8m

Judging from this, YouTube is gonna make a ton of money from Sora.

totalhack
0 replies
17h4m

Go f'in get 'em Scarlett.

tmsh
0 replies
18h30m

i think openai would do better if they had principles, values, etc. around responsibility and ownership.

it doesn't seem like principles should matter. but then the bill of rights doesn't seem like it should matter either if you were to cold read the constitution (you might be like - hmm, kinda seems important maybe...).

it compounds culturally over time though. principles ^ time = culture.

"Audacious, Thoughtful, Unpretentious, Impact-driven, Collaborative, and Growth-oriented."

https://archive.is/wLOfC#selection-1095.112-1095.200

maybe "thoughtful" was the closest (and sam is apologetic and regretful and transparent - kudos to him for that). but it's not that clear without a core principle around responsibility. you need that imho to avoid losing trust.

throwaway5752
0 replies
18h38m

Maybe this "non-profit" shouldn't be entrusted with one of the most potentially dangerous and world changing techologies in 80 years if they can't ethically handle providing a voice to their model.

thih9
0 replies
15h18m

and the passage of appropriate legislation to help ensure that individual rights are protected.

Very interesting to see this there. Does anyone know how could that be legislated?

surfingdino
0 replies
12h6m

It reads to me like he used a big dollop of flattery to get her to agree then asked her to reconsider when she said no? That's cringy.

surfingdino
0 replies
11h42m

I wonder how they'll handle this? A pot of gold and an iron-clad NDA for an out of court settlement?

starspangled
0 replies
13h50m

Too bad nothing substantive will happen to them.

The worst of it is not that this one person is being ripped off (that's bad enough and I hope she gets some kind of resolution). The worst of it is that it shows the company and the people behind it who are making the big decisions are dishonest and unethical.

All the alleged "safety" experts in corporations and in government policy and regulators? All bullshit. The right way to read any of these "safety" laws and policies and regulators is that they are about ensuring the safety of the ruling class.

slim
0 replies
8h6m

  move fast break people
hurting people is just a risk Sam Altman is willing to incorporate into his equation

skilled
0 replies
12h53m

Has Sam already tweeted how sorry he is? All things considered, this might actually give people perspective on how weird OpenAI has been when it comes to respecting other people’s property.

The top comment in this thread is crazy too, they probably contacted her two days prior to launch on the off chance that they could use her as a marketing puppet.

Lost for words on this one.

skepticATX
0 replies
18h14m

There are models that are nearly as good as GPT-4 now. For personal usage, I've been using them for a while now. OpenAI has jumped the shark so much that I'm going to advocate for moving to Anthropic/Google models at work now. OpenAI simply can't be trusted while Sam is at the helm.

sirsinsalot
0 replies
9h10m

Again, I must recommend everyone read Jaron Lanier's "Who Owns The Future".

It's an excellent book, and so so many of the issues raised in it are playing out blow-by-blow.

sashank_1509
0 replies
19h29m

This is hilarious. OpenAI didn’t even need to press for this voice, their technical demo was impressive enough, but they did and now it’ll cast a shadow over a pretty impressive AI advancement. In the long term though, this won’t matter.

samcat116
0 replies
16h55m

What a stupid self own by OpenAI that could have easily been avoided.

rvz
0 replies
19h10m

Again. Johansson was approached to be hired for the voice, and then another AI company tried to clone her voice without her permission.

The degree of similarity doesn't matter. There was nothing fair-use about this voice, which is exactly why OpenAI yanked it and indirectly admitted to cloning her voice.

[0 https://news.ycombinator.com/item?id=38154733]

risenshinetech
0 replies
18h32m

It's OK everyone. I saw some people on HN say that "it sounds nothing like her voice". I believe them over Her.

rileytg
0 replies
16h47m

i’m confused about the timeline, it is still available to me? did they decide to reactivate?

raverbashing
0 replies
11h42m

Sam: double the grift and half the taste of Steve Jobs

ptelomere
0 replies
17h38m

First they lie to you saying they will save the world. Then they take from you saying they're using them to make the world a better place. Then they rule you, saying "there are no other ways".

All the while many people believe them at every step.

petre
0 replies
12h45m

He could still ask Morgan Freeman I suppose.

ozten
0 replies
19h44m

Nooo. I've been enjoying that voice for a few months on my iPhone ChatGPT app. Launched... and tested... the voice is someone else now.

nyolfen
0 replies
14h34m

so scarlet johansson has rights over every VA that sounds like her as well because she's famous?

nycdatasci
0 replies
14h45m

Could she at least release an audio statement instead of just the text?

nikolay
0 replies
18h25m

Except that Sky doesn't sound like Scarlett Johansson. I'm sick and tired of Hollywood!

nick137381
0 replies
14h55m

Too bad she didn't agree to it. It's the only voice they currently have that I can stand. Hey OpenAI, can you maybe try stealing Morgan Freeman's voice next instead?

mtnGoat
0 replies
18h1m

Oh wait Altman and team acting like they rule the world?!

If that box wasn’t in your bingo card I’m sorry, it’s basically the center/free box at this point.

msoad
0 replies
12h35m

With all of those stories coming out of OpenAI I'm happy I didn't join them. A lot of sleazy and shady practices.

ml-anon
0 replies
17h48m

Last week, OpenAI CTO Mira Murati told me the Sky voice was not patterned after ScarJo.

"I don't know about the voice. I actually had to go and listen to Scarlett Johansson's voice," Murati said.

Seems like a big part of Mira’s job is not knowing things. How is no one questioning how she landed a VP job at OpenAI 2 years after being an L5 PM?

mepian
0 replies
14h51m

Didn't expect Scarlett Johansson to become the flashpoint of the next AI winter.

mensetmanusman
0 replies
17h53m

The little mermaid story was true!

Who would have thought we would be discussing voice theft someday.

lovemenot
0 replies
18h24m

Reminiscent of the movie The Congress, in which Robin Wright's character, a famous actor, is hustled by a movie studio into giving up her likeness for them to continue making films starring her, in perpetuity.

lokhura
0 replies
8h7m

I'm not a fan of Sam Altman, but this is such a non-issue. The solution is simple: adapt and find new ways to be creative in the world of AI. Copyright is becoming a thing of the past and rightly so. We just have to collectively accept this and move on because laws won't stop it.

leobg
0 replies
12h6m

He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people.

To me, that reads like the same kind of snake oil he sold Elon when he proposed the joint founding of OpenAI.

I can just about imagine the books in his private library. The Prince. 48 Laws of Power. Win Friends and Inference People.

j-bos
0 replies
18h34m

Only partially related: One thing I often wonder with generative tools, and even before that with the explosion of gobal artists' publishing online:

When is it infringing to make something that looks or sounds like somebody famous? I mean, there's only so many ways a human voice can sound or a face can look. At what point are entire concepts locked down just because somebody famous exists or existed that pattern matches.

iamleppert
0 replies
1h47m

She should be humbled to have been selected. If AI is going to do one thing, it's going to be the dethroning of the Hollywood elite. If a regular person had their voice cloned, it would not be news. The only reason it is news is because she is rich and powerful.

Celebrities need to get used to the fact that they will soon be no more important to a corporation than any other rank and file employee. AI is already able to conjure up whatever voices and personas we need at the ready and make the concept of actors all but a thing of the past.

hurtuvac78
0 replies
13h45m

I am wondering if many Open AI engineers feel mistaken today for having promised to follow Sam Altman to MSFT after the board action.

hotdogscout
0 replies
17h28m

Such a nothing burger. Do moralists not get tired of picking up pitchforks?

Imitating a movie AI was a cool idea and imitation was the only legal way to do it.

Do you pull your hair when companies advertise with Elvis impersonators?

Nobody was significantly harmed by this, I can guarantee the rich people that use hacker news consume things from much less savory standards than imitating a celebrity.

Nestlé is strong but you pull the plug at THIS?

Pg has done worse and he owns this forum.

Have some perspective.

helsinki
0 replies
17h52m

Statement from Scarlett Johansson’s publicist* FTFY

hatenberg
0 replies
3h57m

The CEO of Huggingface had to do the "hold my beer" thing and suggest people train an open source model of her voice on Twitter and Linkedin. Can't make this level of detachment up.

gnicholas
0 replies
14h8m

Does anyone have a link to examples? I'm curious to hear how close this sounds to the actress's voice.

gcanyon
0 replies
18h21m

I watched the keynote and many of the demo videos and never once thought, "That sounds just like ScarJo."

That said, the timeline she lays out is damning indeed.

fnord77
0 replies
13h15m

Sam is a creep

floor_
0 replies
6h14m

LLMs are built on top of mass theft and get away with it due to it being "derivative work". I find it strange that stealing her voice was the "this has gone too far" moment and not when they started mass scraping the internet.

exodust
0 replies
4h9m

While GPT-4o tried hard to sound like the movie Her, there's no way the voice is a "clone" of SJ. It's only similar. We shouldn't be able to own the rights to voices that merely sound similar to our own.

Their efforts to copy the Her character mannerisms is still annoying. They were aiming for the Her-like personality. But it's a stretch to say it's a copy of Johansson's voice. The haze in her real voice isn't there in GPT. Maybe they decided to pull the voice because it's not good publicity to have Scarlett Johansson pissed off with you.

Nevertheless these tech companies should hire professional film writers and artists to help with difficult concepts such as original ideas and not copying others' work.

ethagnawl
0 replies
4h47m

Move fast and steal people's likenesses.

erichmond
0 replies
17h24m

This is why OpenAI being the "leader" in the space worries me. We need to be building trust in AI systems, not leaning into what the public perception is. On the other hand, maybe it's good they are showing who they really are.

encoderer
0 replies
16h8m

Quite the scarlet letter on Openai

edpichler
0 replies
4h20m

I understand her concerns but I think it's a very hard problem. Being "similar" is very subjective.

Moreover, what if some actor has a similar voice to her and records and sells an audiobook? What if this actor signs a contract with OpenAI?

ecjhdnc2025
0 replies
18h43m

OpenAI: it's like Uber for not respecting common decency.

dudeinjapan
0 replies
5h8m

“Sora” means “Sky” in Japanese.

drooby
0 replies
19h2m

If we're talking about the voice from the "Say hello to GPT-4o", then this is clearly not Scarlett J.

They have similar voices, but SJ has more bass and rasp.

And if it's true that OpenAI hired a different actor, then this should basically be case closed.

The voice of Sky (assuming that's the same as the demo video), sounds like a run of the mill voice actor tbh. Great, but not that interesting or unique.

dorkwood
0 replies
15h44m

What's the difference between an algorithm training on someone's voice, and a human baby listening to a voice and growing up to speak the same way? Would you punish the baby for that? It's exactly the same thing.

dino1729
0 replies
12h32m

Given the sequence of events, Scarlett Johansson suing OpenAI is a logical outcome. Sam Altman, of all people, should be anticipating this outcome for sure.

Assuming Sam Altman is not stupid, this could be part of some elaborate plan and a calculated strategy. The end goals could range from immediate practical outcomes like increased publicity (see ChatGPT's mobile app revenue doubled overnight: https://finance.yahoo.com/news/chatgpts-mobile-app-revenue-s...) and market impact, to more complex objectives like influencing future legal frameworks and societal norms around AI.

deadbabe
0 replies
15h30m

Are we doomed to someday have all the popular AI voices sound like submissive, borderline sexually subservient females?

dbg31415
0 replies
15h57m

It’s stuff like this that gives me 0 trust in Altman. Drama follows the guy everywhere and it’s likely because he brings it on himself with questionable morales and actions.

dangoodmanUT
0 replies
19h36m

Alright who left the dwight schrute cloner on overnight in the comments

danans
0 replies
13h48m

What an unforced error on OpenAI's part, but revelatory to all of us how their leaders actually see the world around them: either people whose likeness and style to copy like Johansson or chumps like the rest of us who would marvel at the regurgitated synthetic likeness.

And really, how much worse would the demo have been if they hadn't cloned Johansson's voice, and instead used another unknown voice? If it was similarly flirty, we'd have fallen for it anyways.

consf
0 replies
4h50m

I don't even know if I would agree to such use of my voice. It's interesting to think about this.

chimney
0 replies
19h21m

Sounds like someone has not been consistently candid with their communication.

cdme
0 replies
19h25m

Steal everything they possibly can and hope they end up too big to kill. I sincerely hope they fail spectacularly.

camillomiller
0 replies
12h19m

This is such a classic Altman move. Why everyone keeps defending this guy as he if he wasn’t such a manipulative power hungry weasel is beyond me.

browningstreet
0 replies
16h38m

Sam is aligned with the Elon playbook.

braza
0 replies
8h5m

The most interesting aspect of this debacle, in my opinion, is that with new technologies that allow you to impersonate and/or recruit artists with minor modifications, the figure of "movie star" and the artist itself will be significantly diluted.

For example, I would love to see all of the Bourne books adapted into live-action films, but I know that will be impossible. In the future, I believe it would be great to see some AI actors who are not related to any famous actors/actresses perform the same screenplay: of course, if the book is licensed to that AI movie.

[1] - https://bourne.fandom.com/wiki/The_Bourne_Directory

bpiche
0 replies
16h15m

Luke skywalker shooting torpedoes into the Death Star vibes. Burn it down.

blinding-streak
0 replies
6h44m

Everything the board said about Altman, the reasons for firing him, were correct.

badrunaway
0 replies
11h38m

Sam Altman doesn't inspire confidence in where AI companies are going with user consent. And the board can't even remove him even if he takes OpenAI to the wrong path. He is the board and the company.

amateuring
0 replies
4h33m

sama is as sleazy as sbf

akr4s1a
0 replies
19h30m

I can't fathom such a bad decision as asking someone for permission to use their voice and doing it anyway after they say no. It's almost like NYT is currently suing them for unauthorized use and they should really not be making such an amateur mistake.

ab5tract
0 replies
16h57m

As soon as you heard an AI laughing, you should have rejected it categorically. Simps.

_rm
0 replies
19h29m

It's a real shame she didn't take the deal though

_giorgio_
0 replies
14h21m

I chose the voice a lot of time ago just because it sounded nice, I've never thought of a similarity to Scarlett even after the Sama tweet.

The real problem, now, is that they don't have a nice working voice anymore.

SaintSeiya
0 replies
12h51m

Sam messed with the wrong girl. In the end his firing was the correct thing to do. The "bad guys" of the company were doing the correct thing and, like Jesus, we crucified them.

Madmallard
0 replies
11h39m

The fact that they did it anyway and only took it down after legal threat tells you these are not the people you want to be in charge of such powerful systems. They want their cake and to eat it too, and regular humans be screwed over in the process. I think a relinquishment of power is in order. OpenAI should truly be open and there should be large public discussion forums regarding changes moving forward.

Jayakumark
0 replies
13h32m

At this level of copyright infringement, I now 100% believe that it's fully trained on YouTube and other copyrighted videos, audio, books, etc. They don't care about using any public data or paying a dime as long as they can build a model with it. They will never disclose the data used and won't allow anyone who quit to talk about it, blackmailing them with equity.

JaviFesser
0 replies
5h9m

Out of curiosity, how can this be enforced?

I assume that they didn't get Scarlett's voice directly but hired someone who sounds similar to voice the system.

Is it illegal to hire similar sounding voice actor?

If I, for example, sound very similar to Stephen Fry, would I not be allowed to record audio books because he _owns_ his voice and any similar voice?

Imnimo
0 replies
19h45m

Many of OpenAI's productization ideas make more sense when you remember that the guy in charge also thought Worldcoin and its eye-scanning orb were a good idea.

Havoc
0 replies
18h52m

Doesn’t sound all that similar to me

DavidPiper
0 replies
15h47m

Interesting to see how the more upvotes and comments this thread gets, the further DOWN it goes on the frontpage, despite being more recent than almost everything above it.

ClassyJacket
0 replies
19h50m

Since Microsoft has given up on her, they should hire Jen Taylor and do almost-Cortana.

ChrisMarshallNY
0 replies
7h17m

If they used a different voice actress, then it should be trivial to simply tell everyone who she is (She could probably benefit from the publicity), and show the hundreds of audio samples, all dated before this kerfuffle.

Problem solved.

BadHumans
0 replies
19h16m

I know there are people here who think you should be able to use a person's likeness for whatever but regardless of how you feel, I don't think you can disagree this is a pretty bad look and does not reflect well on Altman or OpenAI.

Art9681
0 replies
16h31m

Hot take. The lesson for OpenAI is to STFU. Always. This is always the best thing to do. STFU. You wanted to emulate her voice? Should have done it anyway and not told a soul. You know why? Because there are tens of thousands of women who sound like that. It's a very generic voice and accent. Let's be real here. Many of us have seen her movies and had we not read the controversy we would not have made the connection. All OpenAI had to do was move forward with intent and let HER prove its HER voice, and not a generalization of many similar women's voices that could be found in the public domain applied towards a process that collapsed into something resembling her and many other women's voices.

In the not so distant future, when the world's top AI models can generate endless accents and voices at will, the probability of one of those sounding just like you (and thousands of other people) will be high. It will be VERY high.

All this dealing with Hollywood and music industry and all the crap i've been reading about OpenAI trying to wiggle their way into those industries is absolute damn nonsense. What is Sama thinking?! GO BACK TO BEING NERDS AND STFU.

If you really believe you are going to create a real AGI, none of this is relevant. No one is going to thank you for creating something that can replicate what they value in seconds. Do it anyway.

And remember, STFU.

0xWTF
0 replies
17h54m

Anyone care to bet Microsoft and other investors are actually ok with this narrative? I think they are, and in fact may be advocating for Sam to advance this narrative, because they 1) want a court decision, no matter which way it goes, and 2) they're confident OpenAI has more capabilities in the pipeline