
OpenAI departures: Why can’t former employees talk?

jay-barronville
254 replies
17h21m

It probably would be better to switch the link from the X post to the Vox article [0].

From the article:

“““

It turns out there’s a very clear reason for [why no one who had once worked at OpenAI was talking]. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI,” has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.

”””

[0]: https://www.vox.com/future-perfect/2024/5/17/24158478/openai...

Buttons840
147 replies
16h34m

So part of their compensation for working is equity, and when they leave they have to sign an additional agreement in order to keep their previously earned compensation? How is this legal? Might as well tell them they have to give all their money back too.

What's the consideration for this contract?

throwaway598
93 replies
15h18m

That OpenAI are institutionally unethical. That such a young company can become rotten so quickly can only be due to leadership instruction or leadership failure.

smt88
86 replies
13h30m

Look at Sam Altman's career and tweets. He's a clown at best, and at worst he's a manipulative crook who only cares about his own enrichment and uses pro-social ideas to give himself a veneer of trustworthiness.

orlandrescu
53 replies
13h0m

Sounds awfully similar to the other South African emerald-mine-inheritor tech mogul.

treme
21 replies
10h25m

Please. Elon's track record of taking Tesla from the concept-car stage to current mass-production levels and building SpaceX from scratch is hardly comparable to Altman's track record.

TechnicolorByte
14 replies
7h45m

SpaceX didn’t start from scratch. Their initial designs were based on NASA designs. Stop perpetuating the “genius engineer” myth around Elon Musk.

KyleOneill
6 replies
6h17m

I feel like Steve Jobs also fits this category if we are going to talk about people who aren't really worthy of the genius title and used other people's accomplishments to reach their goals.

We all know it was the engineers who made the iPhone possible.

KyleOneill
4 replies
6h4m

The people downvoting have never read the Isaacson book obviously.

treme
3 replies
5h58m

More like people on this site know and respect Jobs for his talent as a revolutionary product-manager-style CEO who brought us the iPhone and the subsequent mobile era of computing.

8372049
1 replies
5h5m

The mobile era of computing would have happened just the same if Jobs had never lived.

CamperBob2
0 replies
3h22m

To be fair, who else could have gone toe-to-toe with the telecom incumbents? Jobs almost didn't succeed at that.

KyleOneill
0 replies
5h55m

Jobs was a bully through and through.

8372049
0 replies
5h3m

Someone far more deserving of the title, Dennis Ritchie, died a week after Jobs' stupidity caught up with him. So much attention to Jobs who didn't really deserve it, and so little to Dennis Ritchie who made such a profound impact on the tech world and society in general.

colibri727
3 replies
6h20m

Altman is riding a new tech wave, and his team has a couple of years' head start. Musk's reusable rockets were conceptualized a long time ago (Tintin's Destination Moon dates back to 1953) and could have become a reality several decades ago.

treme
2 replies
6h17m

You're seriously trying to take away his credit for reusable rockets with "nuh uh, it was in sci-fi first"? Wow.

"A cynical habit of thought and speech, a readiness to criticize work which the critic himself never tries to perform, an intellectual aloofness which will not accept contact with life's realities—all these are marks, not ... of superiority but of weakness.”

colibri727
0 replies
5h55m

No, in fact I'm praising Musk for his project management abilities and his ability to take risks.

"nu uh, it was in scifi first?" Wow.

https://en.wikipedia.org/wiki/McDonnell_Douglas_DC-X

NASA had taken on the project grudgingly after having been "shamed" by its very public success under the direction of the SDIO.[citation needed] Its continued success was cause for considerable political in-fighting within NASA due to it competing with their "home grown" Lockheed Martin X-33/VentureStar project. Pete Conrad priced a new DC-X at $50 million, cheap by NASA standards, but NASA decided not to rebuild the craft in light of budget constraints

"Quotation is a serviceable substitute for wit." - Oscar Wilde

cess11
0 replies
2h59m

What's wrong with weakness? Does it make you feel contempt?

hanspeter
0 replies
7h19m

By that logic nothing has started from scratch.

ekianjo
0 replies
5h33m

SpaceX is still the only company with reusable rockets. NASA only dreams about it and can't even make a regular rocket launch on time.

SirensOfTitan
0 replies
7h19m

“If you wish to make an apple pie from scratch, you must first invent the universe.”

…no one "started from scratch"; the sum of all knowledge is built on prior foundations.

jajko
3 replies
8h48m

But he is a manager, not an engineer, although he sells himself as such. He keeps smart, capable folks around, abuses most of them pretty horribly, and when he intervenes with products it's hit and miss. For example, the latest Tesla Model 3 changes must have been a pretty major fuckup, and there is no way he didn't ack it all.

Plus all the self-driving lies, and more lies well within fraud territory at this point. Not even going into his sociopathic personality, massive childish ego and apparent 'daddy issues', which in men manifest exactly like this. He is not in day-to-day SpaceX control and it shows.

treme
1 replies
6h19m

"A cynical habit of thought and speech, a readiness to criticize work which the critic himself never tries to perform, an intellectual aloofness which will not accept contact with life's realities—all these are marks, not ... of superiority but of weakness.”

Angostura
0 replies
5h29m

As is repeatedly spamming the same pasta

formerly_proven
0 replies
4h25m

You're confusing mommy and daddy issues. Mommy issues are what make fash control freaks.

satvikpendem
0 replies
9h41m

Indeed, at least Elon and his teams actually accomplished something worthwhile compared to Altman.

lr1970
0 replies
5h3m

And don't forget StarLink that revolutionized satellite communications.

huijzer
17 replies
7h18m

I disagree. If you watch some long form interviews with Elon, you’ll see that he cares a lot about the truth. Sam doesn’t give me that impression.

malfist
8 replies
5h36m

> If you watch some long form interviews with Elon, you'll see that he cares a lot about the truth.

You mean the guy who's infamous for lying? The guy who claimed more than a decade ago that his car would be fully self-driving? The guy who tweeted "funding secured" and faced multiple fraud charges?

MVissers
7 replies
4h48m

Tbh, he wasn’t convicted as far as I know.

But yes, he’s overly optimistic with timelines. He says so himself.

kibwen
6 replies
4h41m

The first time someone is "overly optimistic with a timeline", you should forgive them.

The tenth time, you should have the good sense to realize that they're full of shit and either a habitual liar or utterly incompetent.

schmidtleonard
2 replies
2h32m

Realists are incapable of pushing frontiers.

If you are doing something that has been done before, hire a realist. Your project will ship on time and within budget. If you are doing something that hasn't been done before, you need an optimist. Partly because the realists run for the hills -- they know the odds and the odds are bad -- but also because their hedging behavior will turn your small chance of success into zero chance of success. On these projects, optimism doesn't guarantee success, but pessimism/realism does guarantee failure.

So no, I am not scandalized to find that the world's biggest innovator (I hate his politics, but this is simply the truth) is systematically biased towards optimism. It's not surprising, it is inevitable.

lesostep
1 replies
25m

The Wright Brothers took a risk and built the first planes, but didn't have to lie that their planes had already left the ground before they did. They didn't claim "it would fly a year from now"; they just built it over and over until it flew.

They were optimistic and yet they found a way to be optimistic without claiming anything untruthful.

Clément Ader, on the other hand, claimed that his innovation flew, and was ridiculed when he couldn't prove it.

One look at their works and it's clear who influenced modern planes, and who didn't.

schmidtleonard
0 replies
2m

The Wright Brothers are infamous for failing to industrialize their invention -- something that notoriously requires investors and hype. Perhaps they wouldn't have squandered their lead if they had been a bit more public with their hopes and dreams.

sashank_1509
2 replies
3h20m

Man who’s the second richest, led companies that made electric cars and reusable rockets

> Random HN commentator : utterly incompetent

I want what you’re smoking

mynameisvlad
0 replies
2h35m

He may be the second richest but he still doesn’t seem competent enough to provide remotely reasonable estimates.

That, or he’s just a straight up liar who knows the things he says are never going to happen.

Which would you rather it be?

sumedh
5 replies
7h3m

> you'll see that he cares a lot about the truth.

Didn't he call the cave diver a pedo, and claim the guy who attacked Pelosi's husband was in a gay relationship with him?

spinach
4 replies
3h12m

He doesn't seem to have much of a filter because of his Asperger's, but I think he genuinely believed those things. And they are more on the level of calling people names on the playground anyway. In the grand scheme of things, those are pretty shallow "lies".

smt88
1 replies
1h48m

I have multiple relatives on the spectrum. None of them baselessly accuse strangers of being pedophiles.

It's not Musk's lack of filter that makes him unhinged and dangerous. It's that he's deeply stupid, insecure, racist, enamored of conspiracy theories, and powerful.

smegger001
0 replies
12m

I figure it's the chronic drug abuse and the constant affirmation he receives from his internet fanboys and enabler yes-men on his board, who are financially dependent on him. He doesn't ever receive push-back from anyone, so he gets more and more divorced from reality.

troupo
0 replies
1h22m

He's 52. And running multiple companies. Asperger's is not a justification for his shitty behavior (and blaming this behavior on Asperger's harms perception of people with Asperger's).

mynameisvlad
0 replies
2h32m

Oh so it’s ok to lie and call people a pedophile (which is far beyond playground name-calling; from a famous person a statement like that actually carries a lot of weight) if you genuinely believe it and have Asperger’s?

Those might explain his behavior, but they do not excuse it.

root_axis
0 replies
1h40m

I'm no fan of Sam Altman, but between the two, Elon lies much more often. He's lied about FSD for years, lied about not selling his Tesla stock, lied about "robotaxies" for years, lied about the roadster for years, lied about "funding secured" for Tesla, lied about his twitter free speech ethos, spreads random lies about people he doesn't like, and so much more. The guy is a compulsive liar.

kmeisthax
8 replies
11h19m

I'm starting to think the relatives of South African emerald mine owners might not be the best people to trust...

pawelmurias
4 replies
7h1m

You are not responsible for the sins of your father regardless of how seriously fucked in the head he is.

Loughla
2 replies
6h45m

No but there is the old nature versus nurture debate. If you're raised in a home with a parent who has zero qualms about exploiting human suffering for profit, that's probably going to have an impact, right?

johnisgood
1 replies
4h31m

What are you implying here? The answer to the nature vs. nurture debate is "both", see "epigenetics" for more.

When considering the influence of a parent with morally reprehensible behavior, it's important to recognize that the environment a child grows up in can indeed have a profound impact on their development. Children raised in households where unethical behaviors are normalized may adopt some of these behaviors themselves, either through direct imitation or as a response to the emotional and psychological environment. However, it is equally possible for individuals to reject these influences.

Furthermore, while acknowledging the potential impact of a negative upbringing, it is critical to avoid deterministic assumptions about individuals. People are not simply products of their environment; they possess agency and the capacity for change, and we need to realize that not all individuals perceive and respond to environmental stimuli in the same way. Personal experiences, cognitive processes, and emotional responses can lead to different interpretations and reactions to similar environmental conditions. Therefore, while the influence of a parent's actions cannot be dismissed, it is neither fair nor accurate to presume that an individual will inevitably follow in their footsteps.

As for epigenetics: it highlights how environmental factors can influence gene expression, adding a layer of complexity to how we understand the interaction between genes and environment. While the environment can modify gene expression, individuals may exhibit different levels of susceptibility or resistance to these changes based on genetic variability.

gopher_space
0 replies
1h23m

> However, it is equally possible for individuals to reject these influences.

The crux of your thesis is a legal point of view, not a scientific one. It's a relic from when Natural Philosophy was new and hip, and fundamentally obviated by leaded gasoline. Discussing free will in a biological context is meaningless because the concept is defined by social coercion. It's the opposite of slavery.

programjames
0 replies
46m

From a game theory perspective, it can make sense to punish future generations to prevent someone from YOLO'ing at the end of their life. But that only works if they actually care about their children, so perhaps it should be, "you are less responsible for the sins of your father the more seriously fucked in the head he is."

fennecbutt
2 replies
7h20m

Lmao no point in worrying about AI spreading FUD when people do it all by themselves.

You know what AI is actually gonna be useful for? AR source attachments to everything that comes out of our monkey mouths, or a huge floating [no source] over someone's head.

Realtime factual accuracy checking pls I need it.

postmodest
0 replies
3h4m

Who designs the training set for your putative "fact checker" AI?

docmars
0 replies
4h58m

If it comes packaged with the constant barrage of ridicule and abuse from others for daring to be slightly wrong about something, people may as well not talk at all.

kaycebasques
2 replies
8h39m

Are you saying that Altman has family that did business in South African emerald mines? I can't find info about this

kryptogeist
0 replies
6h36m

No. Some dude that launches rockets did, though.

WalterSear
0 replies
3h26m

They are referring to Elon Musk.

xaPe
0 replies
5h52m

It didn't take long to drag Elon into this thread. The bitterness and cynicism are unreal.

whoistraitor
12 replies
10h59m

Indeed. I’ve heard first hand accounts that would make it impossible for me to trust him. He’s very good at the game. But I’d not want to touch him with a barge pole.

nar001
11 replies
7h21m

Any stories or events you can talk about? It sounds interesting

benreesman
8 replies
6h14m

The New Yorker piece is pretty terrifying, and manages to be so while bending over backwards to present both sides, if not maybe even suck up to SV a bit. Certainly no one forced Altman to say on the record that Ice Nine in the water glass was what he had planned for anyone who crossed him, and no one forced pg to say, likewise on the record, that “Sam’s real talent is becoming powerful” or something to that effect.

It pretty much goes downhill from there.

dmoy
3 replies
2h31m

For anyone else like me who hasn't read Kurt Vonnegut, but does know about different ice states (e.g. Ice IX):

"Ice Nine" is a fictional assassination device that makes you turn into ice after consuming ice (?) https://en.m.wikipedia.org/wiki/Ice-nine

"Ice IX" (ice nine) is Ice III at a low enough temperature and high enough pressure to be proton-ordered https://en.m.wikipedia.org/wiki/Phases_of_ice#Known_phases

So here, Sam Altman is stating a death threat.

spudlyo
2 replies
1h38m

It's more than just a death threat, the person killed in such a manner would surely generate a human-sized pile of Ice 9, which would pose a much greater threat to humanity than any AGI.

If we're seriously entertaining this off-handed remark as a measure of Altman's true character, it means not only would he be willing to murder an adversary, but he'd be willing to risk all humanity to do it.

What I take away from this remark is that Altman is a nerd, and I look forward to seeing a shaky cell-phone video of him reciting one of the calypsos of Bokonon while dressed as a cultist at a SciFi convention.

dmoy
1 replies
1h18m

> the person killed in such a manner would surely generate a human-sized pile of Ice 9, which would pose a much greater threat to humanity than any AGI.

Oh okay, I didn't really grok that implication from my brief scan of the wiki page. Didn't realize it was a cascading all-water-into-Ice-Nine thing.

pollyturples
0 replies
15m

Just to clarify, in the book it's basically just 'a form of ice that stays ice even when warm'. It was described as an abandoned project by the military to harden mud for infantrymen to cross. Just like regular ice crystals, the ice-nine crystal pattern 'spreads' across water, but without the need for it to be chilled (e.g. body-temperature water freezes), so it becomes a 'Midas touch' problem for anyone dealing with it.

schmidtleonard
1 replies
2h52m

Holy shit I thought he was just good at networking, but it sounds like we have a psychopath in charge of the AI revolution. Fantastic.

racional
0 replies
2h14m

“Sam is extremely good at becoming powerful” was the quote, which has a distinctly different ring to it. Not that this detracts from the overall creep factor.

aleph_minus_one
0 replies
5h28m

> The New Yorker piece is pretty terrifying, and manages to be so while bending over backwards to present both sides, if not maybe even suck up to SV a bit. Certainly no one forced Altman to say on the record that Ice Nine in the water glass was what he had planned for anyone who crossed him, and no one forced pg to say, likewise on the record, that “Sam’s real talent is becoming powerful” or something to that effect.

Article: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...

lr1970
0 replies
5h6m

> Any stories or events you can talk about? It sounds interesting

Paul Graham fired Sam Altman from YC on the spot for "loss of trust". Full details unknown.

bookaway
0 replies
1h24m

The story of the "YC mafia" takeover of Conde Nast-era Reddit, as summarized by ex-CEO Yishan, who resigned after tiring of Altman's constant Machiavellian machinations, is also hilarious and foreshadowing of future events[0]. I'm sure by the time Altman resigned from the Reddit board OpenAI had long since incorporated the entire corpus into ChatGPT.

At the moment all the engineers at OpenAI, including gdb, who currently have their credibility intact are nerd-washing Altman's tarnished reputation by staying there. I mentioned this in a comment elsewhere, but Peter Hintjens' (ZeroMQ, RIP) book "The Psychopath Code"[1] is rather on point in this context. He notes that psychopaths are attracted to project groups that have assets and no defenses, i.e. non-profits:

If a group has assets and no defenses, it is inevitable [a psychopath] will invade the group. There is no "if" here. Indeed, you may see several psychopaths striving for advantage...[the psychopath] may be a founder, yet that is rare. If he is a founder, someone else did the hard work. Look for burned-out skeletons in the closet...He may come with grand stories, yet only by his own word. He claims authority from his connections to important people. He spends his time in the group manipulating people against each other. Or, he is absent on important business...His dominance is not earned, yet it is tangible...He breaks the social conventions of the group. Social humans feel fear and anxiety when they do this. This is a dominance mask.

A group of nerds that want to get shit done and work on important problems, who are primed to be optimistic and take what people say to their face at face value, and don't want to waste time with "people problems" are susceptible to these types of characters taking over.

[0] https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...

[1] https://hintjens.gitbooks.io/psychopathcode/content/chapter4...

hackernewds
6 replies
10h55m

The name OpenAI itself reminds me every day of this.

genevra
4 replies
8h45m

I knew their vision of open source AI wouldn't last, but it surprised me how fast it went.

baq
2 replies
7h59m

That vision, if it was ever there, died before ChatGPT was released. It was just a hiring scheme to attract researchers.

pg calls sama ‘naughty’. I call him ‘dangerous’.

olalonde
1 replies
3h55m

I'm still finding it difficult to understand how their move away from the non-profit mission was legal. Initially, you assert that you are a mission-driven non-profit, a claim that attracts talent, capital, press, partners, and users. Then, you make a complete turnaround and transform into a for-profit enterprise. Why this isn't considered fraud is beyond me.

smt88
0 replies
1h51m

My understanding is that there were two corporate entities, one of which was always for-profit.

w0m
0 replies
6h19m

It was impractical from the start; they had to pivot before they were able to get a proper LLM out (before ~anyone had heard of them).

deadbabe
0 replies
4h12m

It’s “Open” as in “open Pandora’s box”, not “open source”. Always has been.

tinyhouse
4 replies
7h27m

Well, more than 90% of OpenAI employees backed him up when the board fired him. Maybe he's not the clown you claim he is.

llamaimperative
2 replies
7h8m

Or they didn’t want the company, their job, and all of their equity to evaporate

tinyhouse
1 replies
2h11m

Well, if he's a clown then his departure should cause the opposite, no? And you're right, more than 90% of them said we don't want the non-profit BS and openness. We want a unicorn tech company that can make us rich. Good for them.

Disclaimer: I'm Sam's best friend from kindergarten. Just joking, never met the guy and have no interest in openai beyond being a happy customer (who will switch in a heartbeat to the competitors' if they give me a good reason to)

llamaimperative
0 replies
1h43m

> Well, if he's a clown then his departure should cause the opposite, no?

Nope, not even close to necessarily true.

> more than 90% of them said we don't want the non-profit BS and openness. We want a unicorn tech company that can make us rich. Good for them.

Sure, good for them! Dissolve the company and its charter, give the money back to the investors who invested under that charter, and go raise money for a commercial venture.

iinnPP
0 replies
7h10m

People are self-motivated more often than not.

raverbashing
1 replies
10h54m

The startup world (like the artistic world, the sports world, etc.) values healthy transgression of the rules.

But the line between healthy and unlawful transgression can be thin.

WalterSear
0 replies
3h21m

The startup world values transgression of the rules.

comboy
1 replies
8h40m

I'm surprised at such a mean comment and at the many follow-ups agreeing with it. I don't know Sam personally; I've only heard him here and there online from before the OpenAI days, and all I got was a good impression. He seems smart and pretty humble. Apart from all the OpenAI drama, which I don't know enough about to have an opinion on, beyond the OpenAI stuff he also seems to be talking sense.

Since so many people took the time to put him down here, can anybody provide some explanation to me? Preferably not just about how closed OpenAI is, but specifically about Sam. He is in a pretty powerful position and maybe I'm missing some info.

skeeter2020
0 replies
2h54m

I fear your characterization diminishes the real risk: he's incredibly well resourced, well-connected and intelligent while being utterly divorced from the reality of the majority he threatens. People like him and Peter Thiel are not simple crooks or idiots - they truly believe in their convictions. This is far scarier.

csomar
0 replies
11h4m

Social engineering has been a thing well before computers and the internet...

andrepd
0 replies
10h20m

Many easily fooled rubes believe that veneer, so I guess it's working for him.

ben_w
3 replies
11h27m

We already know there's been a leadership failure due to the mere existence of the board weirdness last year; if there has been any clarity to that, I've missed it for all the popcorn gossiping related to it.

Everyone including the board's own chosen replacements for Altman siding with Altman seems to me to not be compatible with his current leadership being the root cause of the current discontent… so I'm blaming Microsoft, who were the moustache-twirling villains when I was a teen.

Of course, thanks to the NDAs hiding information, I may just be wildly wrong.

Sharlin
2 replies
9h55m

Everyone? What about the board that fired him, and all of those who’ve left the company? It seems to me more like those people are leaving who are rightly concerned about the direction things are going, and those people are staying who think that getting rich outweighs ethical – and possibly existential – concerns. Plus maybe those who still believe they can effect a positive change within the company. With regard to the letter – it’s difficult to say how many of the undersigned simply signed because of social pressure.

ben_w
1 replies
9h46m

> Everyone? What about the board that fired him,

I meant of the employees, obviously not the board.

Also excluded: all the people who never worked there who think Altman is weird, Elon Musk who is suing them (and probably the New York Times on similar grounds), and the protestors who dropped leaflets on one of his public appearances.

> and all of those who've left the company?

That happened after those events; at the time it was so close to being literally every employee who signed the letter saying "bring Sam back or we walk" that the rest can be assumed to have been off sick that day, even despite the reputation the US has for very limited holidays and getting people to use those holidays for sick leave.

> It seems to me more like those people are leaving who are rightly concerned about the direction things are going, and those people are staying who think that getting rich outweighs ethical – and possibly existential – concerns. Plus maybe those who still believe they can effect a positive change within the company.

Obviously so, I'm only asserting that this doesn't appear to be due to Altman, despite him being CEO.

("Appear to be" is of course doing some heavy lifting here: unless someone wants to literally surveil the company and publish the results, and expect that to be illegal because otherwise it makes NDAs pointless, we're all in the dark).

shkkmo
0 replies
1h31m

It's hard to gauge exactly how much credence to put in that letter due to the gag contracts.

How much of it was in support of Altman, how much was in opposition to the extremely poorly explained board decisions, and how much was pure self-interest due to stock options?

I think when a company chooses secrecy, they abandon much of the benefit of the doubt. I don't think there is any basis for absolving Altman.

jasonm23
0 replies
14h5m

Clearly by design.

The most dishonest leadership.

benreesman
0 replies
6h12m

To borrow the catchphrase of one of my favorite hackers ever: “correct”.

fshbbdssbbgdd
21 replies
16h5m

In the past a lot of options would expire if you didn't exercise them within e.g. 90 days of leaving. And exercising could be really expensive.

Speculation: maybe the options they earn when they work there have some provision like this. In return for the NDA the options get extended.

NewJazz
18 replies
15h20m

Options aren't vested equity though.

PNewling
17 replies
15h6m

... They definitely can be. When I worked for a small biotech company all of my options had a tiered vesting schedule.

_heimdall
14 replies
14h39m

Options aren't equity, they're only the option to buy equity at a specified price. Vesting just means you can actually buy the shares at the set strike price.

For example, you may join a company and be given options to buy 10,000 shares at $5 each with a 2 year vesting schedule. They may begin vesting immediately, meaning you can buy 1/24th of the total options each month (roughly 417 shares). It's also common for a delay up front where no options vest until you've been with the company for say 6 or 12 months.

Until an option vests you don't own anything. Once it vests, you still have to buy the shares by exercising the option at the $5 per share price. When you leave, most companies have a deadline on the scale of a few months where you have to either buy all vested shares or forfeit them and lose the stock options.
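For anyone who wants the arithmetic laid out, here is a minimal sketch of the vesting math described above. The grant size, strike, schedule, cliff, and post-departure window are all illustrative assumptions, not any particular company's actual terms:

    # Illustrative vesting arithmetic for a hypothetical option grant.
    total_options = 10_000   # options granted
    strike_price = 5.00      # exercise price per share, in dollars
    vesting_months = 24      # total vesting period
    cliff_months = 0         # set to 6 or 12 to model an up-front cliff

    def vested_options(months_employed: int) -> int:
        """How many options have vested after a given number of months."""
        if months_employed < cliff_months:
            return 0
        months = min(months_employed, vesting_months)
        return total_options * months // vesting_months

    for m in (6, 12, 18, 24):
        vested = vested_options(m)
        cost = vested * strike_price
        print(f"month {m:2d}: {vested:5d} vested, ${cost:,.2f} to exercise them all")

The vested number at departure is what you would typically have to pay to exercise within the post-departure window (often 30-90 days) or else forfeit.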

theGnuMe
6 replies
7h14m

Options can vest, as can stock grants.

_heimdall
5 replies
5h40m

Unless I'm mistaken, the difference is that grants vest into actual shares while options only vest into the opportunity to buy the shares at a set price.

Part of my hiring bonus when joining one of the big tech companies was stock grants. As they vested I owned shares directly and could sell them as soon as they vested if I wanted to.

I also joined a couple of startups later in my career and was given options as a hiring incentive. I never exercised the vested options, so I never owned them at all, and I lost the options 30-90 days after leaving the company. Grants I would have taken with me without having to pay for them; they would have directly been my shares.

Well, they'd actually be shares owned by a clearing house and promised to me but that's a very different rabbit hole.

throwaway2037
4 replies
4h40m

> Well, they'd actually be shares owned by a clearing house and promised to me but that's a very different rabbit hole.

You still own the shares, not the clearing house. They hold them on your behalf.

SJC_Hacker
2 replies
4h5m

> They hold them on your behalf.

Possession is 90% of ownership

NortySpock
1 replies
3h21m

Banks and trading houses are kind of the exception in that regard. I pay my bank monthly for my mortgage, and thus I live in a house that the bank could repossess if they so choose.

_heimdall
0 replies
3h3m

The phrase really should be about force rather than possession. Possession only really makes a difference when there's no power imbalance.

Banks have the legal authority to take the home I possess if I don't meet the terms of our contract. Hell, I may own my property outright but the government can still claim eminent domain and take it from me anyway.

Among equals, possession may matter. When one side can force you to comply, possession really is only a sign that the one with power is currently letting you keep it.

_heimdall
0 replies
4h5m

Looks like I used the wrong term there, sorry. I was referring to Cede & Co, and in the moment assumed they could be considered a clearing house. It is technically called a certificate depository, sorry for the confusion there.

Cede & Co technically owns most of the stock certificates today [1]. If I buy a share of stock I end up actually owning an IOU for a stock certificate.

You can actually confirm this yourself if you own any stock. Call the broker that manages your account and ask whose name is on the stock certificate. It definitely isn't your name. You'll likely get confused or unclear answers, but if you're persistent enough you will indeed find that the certificate is almost certainly in the name of Cede & Co and there is no certificate in your name, and likely no share identifier assigned to you either. You just own the promise of a share, which ultimately isn't a problem unless something massive breaks (at which point we have problems anyway).

[1] https://en.m.wikipedia.org/wiki/Cede_and_Company

teaearlgraycold
6 replies
13h47m

> buy all vested shares

The last time I did this I didn't have to buy all of the shares.

lazyasciiart
5 replies
13h10m

I think they mean that you had to buy all the ones you wanted to keep.

ergocoder
4 replies
12h3m

That is tautological... You buy what you want to own???

StackRanker3000
2 replies
9h54m

The point being made is that it isn’t all or nothing, you can buy half the vested options and forfeit the rest, should you want to.

Hnrobert42
1 replies
9h0m

Wait, wait. Who is on first?

Taniwha
0 replies
8h35m

There can be an advantage to not exercising: exercising causes a taxable event (the IRS will want a cut of the difference between your exercise price and the current valuation), and it requires you to commit real money to buy shares that may never be worth anything ....

And there are advantages to exercising: many (most?) companies take back unexercised options a few weeks/months after you leave, and exercising starts the capital-gains clock, so you can end up paying a lower CGT rate when you eventually sell.

You need to understand all this stuff before you make a choice that's right for you
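To make that trade-off concrete, here is a rough sketch of the two numbers described above; the figures are hypothetical and the actual tax treatment depends on jurisdiction and option type, so this is illustration only:

    # Hypothetical numbers: the spread at exercise vs. the gain at eventual sale.
    shares = 1_000
    strike = 5.00            # what you pay per share to exercise
    fmv_at_exercise = 20.00  # fair market value per share when you exercise
    sale_price = 50.00       # what you eventually sell for, if there is ever liquidity

    cash_to_exercise = strike * shares
    spread_at_exercise = (fmv_at_exercise - strike) * shares      # often taxable even though you received no cash
    gain_after_exercise = (sale_price - fmv_at_exercise) * shares # taxed later, potentially at a lower CGT rate

    print(f"cash needed to exercise:     ${cash_to_exercise:,.2f}")
    print(f"taxable spread at exercise:  ${spread_at_exercise:,.2f}")
    print(f"gain taxed at eventual sale: ${gain_after_exercise:,.2f}")

Whether locking in that spread early is worth it depends entirely on whether the shares ever become sellable, which is exactly why you need to understand it before choosing.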

quickthrowman
0 replies
3h49m

Re-read the post you’re replying to. They said options are not vested equity, which they aren’t. You still need to exercise an option that has vested to purchase the equity shares.

They did not say “options cannot get granted on a tiered vesting schedule”, probably because that isn’t true, as options can be granted with a tiered vesting schedule.

NewJazz
0 replies
14h59m

They aren't equity no matter what though?

They can be vested, I realize that.

brudgers
1 replies
14h18m

My unreliable memory is Altman was ( once? ) in favor of extending the period for exercising options. I could be wrong of course but it is consistent with my impression that making other people rich is among his motivations. Not the only one of course. But again I could be wrong.

resonious
0 replies
13h21m

Wouldn't be too surprised if he changed his mind since then. He is in a very different position now!

eru
13 replies
15h57m

> What's the consideration for this contract?

Consideration is almost meaningless as an obstacle here. They can give the other party a peppercorn, and that would be enough to count as consideration.

https://en.wikipedia.org/wiki/Peppercorn_(law)

There might be other legal challenges here, but 'consideration' is unlikely to be one of them. Unless OpenAI has idiots for lawyers.

verve_rat
6 replies
14h23m

Right, but the employee would be able to refuse the consideration, and thus the contract, and the state of affairs wouldn't change. They would be free to say whatever they wanted.

kmeisthax
4 replies
11h11m

If they refuse the contract then they lose out on their options vesting. Basically, OpenAI's contracts work like this:

Employment Contract the First:

We are paying you (WAGE) for your labor. In addition you also will be paid (OPTIONS) that, after a vesting period, will pay you a lot of money. If you terminate this employment your options are null and void unless you sign Employment Contract the Second.

Employment Contract the Second:

You agree to shut the fuck up about everything you saw at OpenAI until the end of time and we agree to pay out your options.

Both of these have consideration and as far as I'm aware there's nothing in contract law that requires contracts to be completely self-contained and immutable. If two parties agree to change the deal, then the deal can change. The problem is that OpenAI's agreements are specifically designed to put one counterparty at a disadvantage so that they have to sign the second agreement later.

There is an escape valve in contract law for "nobody would sign this" kinds of clauses, but I'm not sure how you'd use it. The legal term of art that you would allege is that the second contract is "unconscionable". But the standard of what counts as unconscionable in contract law is extremely high, because otherwise people would wriggle out of contracts the moment that what seemed like favorable terms turned unfavorable. Contract law doesn't care if the deal is fair (that's the FTC's job), it cares about whether or not the deal was agreed to.

godelski
1 replies
10h24m

> There is an escape valve in contract law for "nobody would sign this" kinds of clauses

Who would sign a contract to willfully give away their options?

d1sxeyes
0 replies
8h19m

The same sort of person who would sign a contract agreeing that in order to take advantage of their options, they need to sign a contract with unclear terms at some point in the future if they leave the company.

Bear in mind there are actually three options: signing the second contract, not signing, and remaining an employee.

pas
0 replies
7h29m

is it even a valid contract clause to tie the value of something to a future completely unknown agreement? (or yes, it's valid, and it means that savvy folks should treat it as zero.)

(though most likely the NDA and everything is there from day 1 and there's no second contract, no?)

hmottestad
0 replies
7h59m

Say you were working at Reddit for quite a number of years, all your original options had vested, and you had exercised them; since Reddit went public you would now easily be able to sell your stock, or keep it if you want. So then you wouldn't need to sign the second contract. Unless of course you had gotten new options that hadn't vested yet.

eru
0 replies
13h0m

Maybe. But whether the employee can refuse the gag has nothing to do at all with the legal doctrine that requires consideration.

staticautomatic
5 replies
12h57m

Ok but peppercorn or not, what’s the consideration?

PeterisP
3 replies
9h46m

Getting a certain amount (according to their vesting schedule) of stock options, which are worth a substantial amount of money and thus clearly count as "good and valuable consideration".

hmottestad
2 replies
8h8m

The original stock and vesting agreement that was part of their original compensation probably says that you have to be currently employed by OpenAI for the vesting schedule to apply. So in that case the consideration of this new agreement is that they get to keep their vesting schedule running even though they are no longer employees.

pas
0 replies
7h32m

but can they simply leave with the already vested options/stock? are there clawback provisions in the initial contract?

nightpool
0 replies
3h27m

That's the case in many common/similar agreements, but the OpenAI agreement is different because it's specifically clawing back already vested equity. In this case, I think the consideration would be the company allowing transfer of the shares / allowing participation in buyback events. Otherwise until the company goes public there's no way for the employees to cash out without consent of the company.

kmeisthax
0 replies
11h47m

"I'll pay you a dollar to shut up"

"Deal"

blackeyeblitzar
2 replies
11h55m

Unfortunately this is how most startup equity agreements are structured. They include terms that let the company cancel options that haven’t been exercised for [various reasons]. Those reasons are very open ended, and maybe they could be challenged in a court, but how can a low level employee afford to do that?

jkaplowitz
1 replies
8h54m

I don’t know of any other such agreements that allow vested equity to be revoked, as the other person said. That doesn’t sound very vested to me. But we already knew there are a lot of weird aspects to OpenAI’s semi-nonprofit/semi-for-profit approximation of equity.

blackeyeblitzar
0 replies
3h31m

As far as I know it’s part of the stock plan for most startups. There’s usually a standard clause that covers this, usually with phrasing that sounds reasonable (like triggering if company policy is violated or is found to have been violated in the past). But it gives the company a lot of power in deciding if that’s the case.

zeroonetwothree
1 replies
13h41m

I assume it’s agreed to at time of employment? Otherwise you’re right that it doesn't make sense

throw101010
0 replies
8h54m

Why do you assume this if it is said here and in the article that they had to sign something at the time of the departure from the company?

willis936
1 replies
14h29m

They earned wages and paid taxes on them. Anything on top is just the price they're willing to accept in exchange for their principles.

throw101010
0 replies
8h52m

How do you figure that they should pay an additional price (their principles/silence) for this equity when they've supposedly earned it during their employment (assuming this was not planned when they got hired, since they make them sign new terms at the time of their departure)?

phkahler
1 replies
14h36m

Yeah you don't have to sign anything to quit. Ever. No new terms at that time, sorry.

ska
0 replies
2h51m

There is usually a carrot along with the stick.

theyinwhy
0 replies
7h4m

I guess there are indeed countries where this is illegal. Funny that it seems to be legal in the land of the free (speech).

temporarely
0 replies
8h30m

I think we should have the exit agreement (if any) included and agreed to as part of signing the employment contract.

riehwvfbk
0 replies
12h25m

It's also really weird equity: you don't get an ownership stake in the company but rather profit-sharing units. If OpenAI ever becomes profitable (color me skeptical), you can indeed get rich as an employee. The other trigger is "achieving AGI", as defined by sama (presumably). And while you wait for these dubious events to occur you work insane hours for a mediocre cash salary.

nurple
0 replies
11h0m

The thing is that this is a private company, so there is no public market to provide liquidity. The company can make itself the sole source of liquidity, at its option, by placing sell restrictions on the grants. Toe the line, or you will find you never get to participate in a liquidity event.

There's more info on how SpaceX uses a scheme like this[0] to force compliance, and seeing as Musk had a hand in creating both orgs, they're bound to be similar.

[0] https://techcrunch.com/2024/03/15/spacex-employee-stock-sale...

m3kw9
0 replies
3h9m

In the initial hiring agreement, this would be stated, and the employee would have to agree to sign such a form if they are to depart.

glitchc
0 replies
5h29m

I'm guessing unvested equity is being treated separately from other forms of compensation. Normally, leaving a company costs the individual all rights to unvested options. Here the consideration is that options are retained in exchange for silence.

e40
0 replies
4h56m

Perhaps they are stock options and leaving without signing would make them evaporate, but signing turns them back into long-lasting options?

bobbob1921
0 replies
2h28m

I would guess it's a bonus, part of their bonus structure, and they agreed to the terms of any exit/departure when they signed their initial contract.

I’m not saying it’s right or that I agree with it, however.

yashap
25 replies
3h4m

For a company that is actively pursuing AGI (and probably the #1 contender to get there), this type of behaviour is extremely concerning.

There’s a very real/significant risk that AGI either literally destroys the human race, or makes life much shittier for most humans by making most of us obsolete. These risks are precisely why OpenAI was founded as a very open company with a charter that would firmly put the needs of humanity over their own pocketbooks, highly focused on the alignment problem. Instead they’ve closed up, become your standard company looking to make themselves ultra wealthy, and they seem like an extra vicious, “win at any cost” one at that. This plus their AI alignment people leaving in droves (and being muzzled on the way out) should be scary to pretty much everyone.

robertlagrant
9 replies
2h40m

> or makes life much shittier for most humans by making most of us obsolete

I'm not sure this is true. If all the things people are doing are done so much more cheaply they're almost free, that would be good for us, as we're also the buyers as well as the workers.

However, I also doubt the premise.

confidantlake
4 replies
2h31m

Why would you need buyers if AI can create anything you desire?

flashgordon
1 replies
2h5m

In an ideal world where GPUs are a commodity, yes. BTW, at least today AI is owned/controlled by the rich and powerful, and that's where the majority of the research dollars are coming from. Why would they just relinquish AI so generously?

brandall10
0 replies
1h34m

With an ever expanding AI everything should be quickly commoditized, including reduction in energy to run AI and energy itself (ie. viable commercial fusion or otherwise).

pixl97
0 replies
1h0m

Where are you getting the energy and land for these AIs to consume and turn into goods?

Moreso, with such a magically powerful AI as you've described, the number one thing some rich, controlling asshole with more AI than you would do is create an army and take what they want, because AI does nothing to solve human greed.

martyfmelb
0 replies
2h19m

Bingo.

The whole justification for keeping consumers happy or healthy goes right out the window.

Same for human workers.

All that matters is that your robots and AIs aren't getting smashed by their robots and AIs.

justinclift
3 replies
2h17m

> If all the things people are doing are done so much more cheaply they're almost free, that would be good for us ...

Doesn't this tend to become "they're almost free to produce", with the actual pricing for end consumers not becoming cheaper? The sellers just expand their margins instead.

marcusverus
2 replies
1h53m

I'm sure businesses will capture some of the value, but is there any reason to assume they'll capture all or even most of it?

Over the last ~ 50 years, worker productivity is up ~250%[0], profits (within the S&P 500) are up ~100%[1] and real personal (not household) income is up 150%[2].

It should go without saying that a large part of the rise in profits is attributable to the rise of tech. It shouldn't surprise anyone that margins are higher on digital widgets than physical ones!

Regardless, expanding margins is only attractive up to a certain point. The higher your margins, the more attractive your market becomes to would-be competitors.

[0] https://fred.stlouisfed.org/series/OPHNFB [1] https://dqydj.com/sp-500-profit-margin/ [2] https://fred.stlouisfed.org/series/MEPAINUSA672N

lotsofpulp
0 replies
1h35m

> Regardless, expanding margins is only attractive up to a certain point. The higher your margins, the more attractive your market becomes to would-be competitors.

This does not make sense to me. While a higher profit margin is a signal to others that they can earn money by selling equivalent goods and services at lower prices, it is not inevitable that they will be able to. And even if they are, it behooves a seller to take advantage of the higher margins while they can.

Earning less money now in the hopes of competitors being dissuaded from entering the market seems like a poor strategy.

lifeisstillgood
0 replies
1h21m

Wait, what? I was just listening to the former chief economist of the Bank of England going on about how terrible productivity (in the UK) is.

So who is right?

schmidt_fifty
6 replies
2h55m

> There's a very real/significant risk that AGI either literally destroys the human race

If this were true, intelligent people would have taken over society by now. Those in power will never relinquish it to a computer just as they refuse to relinquish it to more competent people. For the vast majority of people, AI not only doesn't pose a risk but will only help reveal the incompetence of the ruling class.

pavel_lishin
4 replies
2h39m

> There’s a very real/significant risk that AGI either literally destroys the human race

> If this were true, intelligent people would have taken over society by now

The premise you're replying to - one I don't think I agree with - is that a true AGI would be so much smarter, so much more powerful, that it wouldn't be accurate to describe it as merely "more smart".

You're probably smarter than a guy who recreationally huffs spraypaint, but you're still within the same class of intelligence. Both of you are so much more advanced than a cat, or a beetle, or a protozoan that it doesn't even make sense to make any sort of comparison.

logicchains
2 replies
2h17m

> You're probably smarter than a guy who recreationally huffs spraypaint, but you're still within the same class of intelligence. Both of you are so much more advanced than a cat, or a beetle, or a protozoan that it doesn't even make sense to make any sort of comparison.

This is pseudoscientific nonsense. We have the very rigorous field of complexity theory to show how much improvement in solving various problems can be gained from further increasing intelligence/compute power, and the vast majority of difficult problems benefit minimally from linear increases in compute. The idea of there being a higher "class" of intelligence is magical thinking, as it implies there could be superlinear increase in the ability to solve NP-complete problems from only a linear increase in computational power, which goes against the entirety of complexity theory.

It's essentially the religious belief that AI has the godlike power to make P=NP even if P != NP.

esafak
1 replies
2h6m

What does P=NP have to do with anything? Humans are incomparably smarter than other animals. There is no intelligence test a healthy human would lose to another animal. What is going to happen when agentic robots ascend to this level relative to us? This is what the GP is talking about.

breuleux
0 replies
34m

Succeeding at intelligence tests is not the same thing as succeeding at survival, though. We have to be careful not to ascribe magical powers to intelligence: like anything else, it has benefits and tradeoffs and it is unlikely that it is intrinsically effective. It might only be effective insofar that it is built upon an expansive library of animal capabilities (which took far longer to evolve and may turn out to be harder to reproduce), it is likely bottlenecked by experimental back-and-forth, and it is unclear how well it scales in the first place. Human intelligence may very well be the highest level of intelligence that is cost-effective.

pixl97
0 replies
50m

To every other mammal, reptile, and fish humans are the intelligence explosion. The fate of their species depends on our good will since we have so utterly dominated the planet by means of our intelligence.

Moreso, human intelligence is tied into the weakness of our flesh. Human intelligence is also balanced by greed and ambition. Someone dumber than you can 'win' by stabbing you and your intelligence ceases to exist.

Since we don't have the level of AGI we're discussing here yet, it's hard to say what it will look like in its implementation, but I find it hard to believe it would mimic the human model of its intelligence being tied to one body. A hivemind of embodied agents that feed data back into processing centers to be captured in 'intelligence nodes' that push out updates seems way more likely. More like a hive of super intelligent bees.

georgeburdell
0 replies
1h3m

Look up where the people in power got their college degrees from and then look up the SAT scores of admitted students from those colleges.

root_axis
6 replies
2h1m

More than these egregious gag contracts, OpenAI benefits from the image that they are on the cusp of world-destroying science fiction. This meme needs to die, if AGI is possible it won't be achieved any time in the foreseeable future, and certainly it will not emerge from quadratic time brute force on a fraction of text and images scraped from the internet.

MrScruff
4 replies
1h30m

Clearly we don't know when/if AGI will happen, but the expectation of many people working in the field is that it will arrive in what qualifies as the 'near future'. It probably won't result from just scaling LLMs, but then that's why there are a lot of researchers trying to find the next significant advancement, in parallel with others trying to commercially exploit LLMs.

timr
1 replies
52m

The same way that the expectation of many people working within the self-driving field in 2016 was that level 5 autonomy was right around the corner.

Take this stuff with a HUGE grain of salt. A lot of goofy hyperbolic people work in AI (any startup, really).

schmidtleonard
0 replies
29m

Sure, but blanket pessimism isn't very insightful either. I'll use the same example you did: self-driving. The public (or "median nerd") consensus has shifted from "right around the corner" (when it struggled to lane-follow if the paint wasn't sharp) to "it's a scam and will never work," even as it has taken off with the other types of AI and started hopping hurdles every month that naysayers said would take decades. Negotiating right-of-way, inferring intent, handling obstructed and ad-hoc roadways... the nasty intractables turned out to not be intractable, but sentiment has not caught up.

For one where the pessimist consensus has already folded, see: coherent image/movie generation and multi-modality. There were loads of pessimists calling people idiots for believing in the possibility. Then it happened. Turns out an image really is worth 16x16 words.

Pessimism isn't insight. There is no substitute for the hard work of evaluation.

zzzeek
0 replies
6m

> but the expectation of many people working in the field is that it will arrive in what qualifies as the 'near future'.

they think this because it serves their interests of attracting an enormous amount of attention and money to an industry that they seek to make millions of dollars personally from.

My money is firmly on environmental/climate collapse wiping out most of humanity in the next 50-100 years, hundreds of years before anything like an AGI possibly could.

troupo
0 replies
1h29m

> the expectation of many people working in the field is that it will arrive in what qualifies as the 'near future'

It was the expectation of many people in the field in the 1980s, too

dclowd9901
0 replies
2m

Ah yes, the “our brains are somehow inherently special” coalition. Hand-waving the capabilities of LLM as dumb math while not having a single clue about the math that underlies our own brains’ functionality.

I don’t know if you’re conflating capability with consciousness but frankly it doesn’t matter if the thing knows it’s alive if it still makes everyone obsolete.

mc32
0 replies
2h25m

Can higher-level former employees with more at stake pool together comp for lower-level ones with much less at stake so they can speak to it? Obviously they may not be privy to some things, but there's likely lots to go around.

underlogic
16 replies
14h5m

This is bizarre. Someone hands you a contract as you're leaving a company, and if you refuse to agree to whatever they dreamt up and sign it, the company takes back the equity you earned? That can't be legal.

throwaway743950
11 replies
13h58m

It might be that they agree to it initially when hired, so it doesn't matter if they sign something when they leave.

crooked-v
10 replies
13h50m

Agreements with surprise terms that only get detailed later tend not to be very legal.

mvdtnz
5 replies
13h44m

How do you know there isn't a very clear term in the employment agreement stating that upon termination you'll be asked to sign an NDA on these terms?

romwell
2 replies
13h18m

Unless the terms of the NDA are provided upfront, that sounds sketch AF.

"I agree to follow unspecified terms in perpetuity, or return the pay I already earned" doesn't vibe with labor laws.

And if those NDA terms were already in the contract, there would be no need to sign them upon exit.

mvdtnz
1 replies
12h58m

And if those NDA terms were already in the contract, there would be no need to sign them upon exit.

If the NDA terms were agreed in an employment contract they would no longer be valid upon termination of that contract.

sratner
0 replies
11h50m

Plenty of contracts have survivorship clauses. In particular, non-disclosure clauses and IP rights are the ones to most commonly survive termination.

pests
0 replies
12h55m

Why not just get it signed then? You're signing to agree to sign later?

klyrs
0 replies
12h53m

One particularly sus term in my employment agreement is that I adhere to all corporate policies. Guess how many of those there are, how often they're updated, and if I've ever read them!

riehwvfbk
3 replies
12h20m

Doesn't even have to be a surprise. Pretty much every startup employment agreement in existence gives the company ("at the board's sole discretion") the right to repurchase your shares upon termination of employment. OpenAI's PPUs are worth $0 until they become profitable. Guess which right they'll choose to exercise if you don't sign the NDA?

lucianbr
1 replies
10h23m

Who would accept shares as valuable if the contract said they can be repurchased from you at a price of $0? This can't be it.

actionfromafar
0 replies
10h14m

It can. There are many ways to make the number go to zero.

anon373839
2 replies
11h18m

Hard to evaluate this without access to the documents. But in CA, the payment of previously earned wages cannot be conditioned on signing agreements.

Equity adds a wrinkle here, but I suspect if the effect of canceling equity is to cause a forfeiture of earned wages, then ultimately whatever contract is signed under that threat is void.

theGnuMe
0 replies
7h10m

Well some rich ex-openAI person should test this theory. Only way to find out. I’m sure some of them are rich.

az226
0 replies
7h0m

It’s not even equity. OpenAI is a nonprofit.

They’re profit participation units and probably come with a few gotchas like these.

ajross
0 replies
14h1m

The argument would be that it's coercive. And it might be, and they might be sued over it and lose. Basically the incentives all run strongly in OpenAI's favor. They're not a public company, vested options aren't stock and can't be liquidated except with "permission", which means that an exiting employee is probably not going to take the risk and will just sign the contract.

atomicnumber3
10 replies
15h49m

I have some experience with rich people who think they can just put whatever they want in contracts and then stare at you until you sign it because you are physically dependent on eating food every day.

Turns out they're right, they can put whatever they want in a contract. And again, they are correct that their wage slaves will 99.99% of the time sign whatever paper the employer pushes in front of them while saying "as a condition of your continued employment, [...]".

But also it turns out that just because you signed something doesn't mean that's it. My friends (all of us young twenty-something software engineers much more familiar with transaction isolation semantics than with contract law) consulted with an attorney.

The TLDR is that:

- nothing in contract law is in perpetuity

- there MUST be consideration for each side (where "consideration" means getting something. something real. like USD. "continued employment" is not consideration.)

- if nothing is perpetual, then how long can it last supposing both sides do get ongoing consideration from it? the answer is, the judge will figure it out.

- and when it comes to employers and employees, the employee had damn well better be getting a good deal out of it, especially if you are trying to prevent the employee (or ex-employee) from working.

A common pattern ended up emerging: our employer would put something perpetual in the contract, and offer no consideration. Our attorney would tell us this isn't even a valid contract and not to worry about it. The employer would then offer an employee some nominal amount of USD in severance and put something in perpetuity into the contract. Our attorney would tell us the judge would likely use the "blue pencil rule" to add in "for a period of one year", or that it would be prorated based on the amount of money they were given relative to their former salary.

(I don't work there anymore, naturally).

golergka
3 replies
14h24m

stare at you until you sign it because you are physically dependent on eating food every day

Even lowest level fast food workers can choose a different employer. An engineer working at OpenAI certainly has a lot of opportunities to choose from. Even when I only had three years in the industry, mid at best, I asked to change the contract I was presented with because non-compete was too restrictive — and they did it. The caliber of talent that OpenAI is attracting (or hopes to attract) can certainly do this too.

atomicnumber3
1 replies
12h50m

I am typically not willing to bet I can get back under health insurance for my family within the next 0-4 weeks. And paying for COBRA on a family plan is basically like going from earning $X/mo to drawing $-X/mo.

insane_dreamer
0 replies
3h52m

The perversely capitalistic healthcare system in the US is perhaps the number one reason why US employers have so much more power over their employees than their European counterparts.

fragmede
0 replies
13h5m

Even lowest level fast food workers can choose a different employer.

Only thanks to a recent FTC rule banning non-competes. In the most egregious cases, bartenders and servers were prohibited from finding another job in the same industry for two years.

sangnoir
2 replies
14h6m

if nothing is perpetual, then how long can it last supposing both sides do get ongoing consideration from it? the answer is, the judge will figure it out.

Isn't that the reason more competent lawyers put in the royal lives[1] clause? It specifies the contract is valid until 21 years after the death of the last currently-living royal descendant; I believe the youngest one is currently 1 year old, and they all have good healthcare, so it will almost certainly be beyond the lifetime of any currently-employed person.

1. https://en.wikipedia.org/wiki/Royal_lives_clause

spoiler
1 replies
8h50m

I know little about law, but isn't this completely ludicrous? Assuming you know a bit more (or someone else here does), I have a few questions:

Would any non-corrupt judge consider this to be done in bad faith?

How is this different if we use a great ancient sea turtle—or some other long-lived organism—instead of the current royal family baby? Like, I guess my point is anything that would likely outlive the employee, basically?

amenhotep
0 replies
7h29m

It's a standard legal thing to accommodate a rule that you can't write a perpetual contract, it has to have a term delimited by the life of someone alive plus some limited period.

A case where it obviously makes sense is something like a covenant between two companies; whose life would be relevant there, if both parties want the contract to last a long time and have to pick one? The CEOs? Employees? Shareholders? You could easily have a situation where the company gets sold and they all leave, but the contract should still be relevant, and now it depends on the lives of people who are totally unconnected to the parties. Just makes things difficult. Using a monarch and his currently living descendants is easy.

I'm not sure how relevant it is in a more employer employee context. But it's a formalism to create a very long contract that's easy to track, not a secret trick to create a longer contract than you're normally allowed to. An employer asking an employee to agree to it would have no qualms asking instead for it to last the employee's life, and if the employee's willing to sign one then the other doesn't seem that much more exploitative.

cynicalsecurity
1 replies
9h45m

Why would anyone want to work at such a horrible company?

baq
0 replies
7h51m

Money

mindslight
0 replies
2h18m

This is all basically true, but the problem is that retaining an attorney to confidently represent you for such negotiation is a proposition with $10k table stakes (probably $15k+ these days with Trumpflation), and much more if the company sticks to their guns and doesn't settle (which is much more likely when the company is holding the cards and you have to go on the offensive). The cost isn't necessarily outright prohibitive in the context of surveillance industry compensation, but it is still a chunk of change and likely to give most people pause when the alternative is to just go with the flow and move on.

Personally I'd say there needs to be a general restriction against including blatantly unenforceable terms in a contract document, especially unilateral "terms". The drafter is essentially pushing incorrect legal advice.

jbernsteiniv
6 replies
16h46m

He gets my respect for that one, both for publicly acknowledging why he was leaving and for calling out their pantomime. I don't know how much the equity would be for each employee (the article suggests millions but that may skew by role), and I don't know if I would just be like the rest, keeping my lips tight for fear of the equity forfeiture.

It takes a man of real principle to stand up against that and tell them to keep their money if they can't speak ill of a potentially toxic work environment.

romwell
5 replies
13h12m

It takes a man of real principle to stand up against that and tell them to keep their money if they can't speak ill of a potentially toxic work environment.

Incidentally, that's what Grigory Perelman, the mathematician that rejected the Fields Medal and the $1M prize that came with it, did.

It wasn't a matter of an NDA either; it was a move to make his message heard (TL;DR: "publish or perish" rat race that the academia has become is antithetical to good science).

He was (and still is) widely misunderstood in that move, but I hope people would see it more clearly now.

The enshittification processes of academic and corporate structures are not entirely dissimilar, after all, as money is at the core of corrupting either.

edanm
4 replies
12h30m

I think, when making a gesture, you need to consider its practical impact, which includes whether and how it will be understood (or not).

In the OpenAI case, the gesture of "forgoing millions of dollars" directly makes you able to do something you couldn't - speak about OpenAI publicly. In the Grigory Perelman case, obviously the message was far less clear to most people (I personally have heard of him turning down the money before and know the broad strokes of his story, but had no idea that that was the reason).

romwell
2 replies
8h17m

Consider this:

1. If he didn't turn down the money, you wouldn't have heard of him at all;

2. You're not the intended audience of Grigory's message, nor are you in a position to influence, change, or address the problems he was highlighting. The people who are in that position heard the message loud and clear.

3. On a very basic level, it's very easy to understand that there's gotta be something wrong with the award if a deserving recipient turns it down. What exactly is wrong is left as an exercise to the reader — as you'd expect of a mathematician like Perelman.

Quote (from [1]):

From the few public statements made by Perelman and close colleagues, it seems he had become disillusioned with the entire field of mathematics. He was the purest of the purists, consumed with his love for mathematics, and completely uninterested in academic politics, with its relentless jockeying for position and squabbling over credit. He denounced most of his colleagues as conformists. When he opted to quit professional mathematics altogether, he offered this confusing rationale: “As long as I was not conspicuous, I had a choice. Either to make some ugly thing or, if I didn’t do this kind of thing, to be treated as a pet. Now when I become a very conspicuous person, I cannot stay a pet and say nothing. That is why I had to quit.”

This explanation is confusing only to someone who has never tried to get a tenured position in academia.

Perelman was one of the few people to not only give the finger to the soul-crushing, dehumanizing system, but to also call it out in a way that stung.

He wasn't the only one; but the only other person I can think of is Alexander Grothendieck [2], who went as far as declaring that publishing any of his work would be against his will.

Incidentally, both are of Russian-Jewish origin/roots, and almost certainly autistic.

I find their views very understandable and relatable, but then again, I'm also an autistic Jew from Odessa with a math PhD who left academia (the list of similarities ends there, sadly).

[1] https://nautil.us/purest-of-the-purists-the-puzzling-case-of...

[2] https://en.wikipedia.org/wiki/Alexander_Grothendieck

edanm
0 replies
4h44m

1. If he didn't turn down the money, you wouldn't have heard of him at all;

I think this is probably not true.

2. You're not the intended audience of Grigory's message, nor are you in position to influence, change, or address the problems he was highlighting. The people who are heard the message loud and clear.

This is a great point and you're probably right.

I'm also an autistic Jew from Odessa with a math PhD who left academia (the list of similarities ends there, sadly).

Really? What do you do nowadays?

(I glanced at your bio and website and you seem to be doing interesting things, I've also dabbled in Computational Geometry and 3d printing.)

SJC_Hacker
0 replies
3h34m

1. If he didn't turn down the money, you wouldn't have heard of him at all;

Perelman provided a proof of the Poincare Conjecture, which had stumped mathematicians for a century.

It was also one of the seven Millennium Problems https://www.claymath.org/millennium-problems/, and as of 2024, the only one to be solved.

Andrew Wiles became pretty well known after proving Fermat's Last Theorem, despite there not being a financial reward.

juped
0 replies
7h28m

Perelman's point is absolutely clear if you listen to him, he's disgusted by the way credit is apportioned in mathematics, doesn't think his contribution is any greater just because it was the last one, and wants no part of the prize he considers tainted.

yumraj
5 replies
16h7m

Compared to what seemed like their original charter, with non-profit structure and all, now it seems like a rather poisonous place.

They will have many successes in the short run, but, their long run future suddenly looks a little murky.

eternauta3k
2 replies
11h45m

It could work like academia or finance: poisonous environment (it is said), but ambitious enough people still go in to try their luck.

throwaway2037
1 replies
4h31m

"finance": A bit of a broad brush, don't you think? Is working at a Landsbank or Sparkasse in Germany really so "poisonous"?

eternauta3k
0 replies
1h20m

Yes, of course, narrow that down to the crazy wolf-of-wall-street subset.

baq
0 replies
7h51m

They extracted a lot of value from researchers during their ‘open’ days, but it’s depleted now, so of course they move on to the next source of value. sama is going AGI or bust with a very rational position of ‘if somebody has AGI, I’d rather it was me’ except I don’t like how he does it one bit, it’s got a very dystopian feel to it.

0xDEAFBEAD
0 replies
15h39m

Similar points made here, if anyone is interested in signing: https://www.openailetter.org/

cashsterling
5 replies
1h40m

In my experience, and that of others I know, agreements of this kind are generally used to hide/cover-up all kinds of malfeasance. I think that agreements of this kind are highly unethical and should be illegal.

Many years ago I signed an NDA/non-disparagement agreement as part of a severance package when I was fired from a startup for political reasons. I didn't want to sign it... but my family needed the money and I swallowed my pride. There was a lot of unethical stuff going on within the company in terms of fiduciary responsibility to investors and the BoD. The BoD eventually figured out what was going on and "cleaned house".

With OpenAI, I am concerned this is turning into huge power/money grab with little care for humanity... and "power tends to corrupt and absolute power corrupts absolutely".

staunton
1 replies
54m

this is turning into huge power/money grab

The power grab happened a while ago (the shenanigans concerning the board) and is now complete. Care for humanity was just marketing or a cute thought at best.

Maybe humanity will survive long enough that a company "caring about humanity" becomes possible. I'm not saying it's not worth trying or aspiring to such ideals, but everyone should be extremely surprised if any organization managed to resist such amounts of money and maintain any goal or ideal whatsoever...

lazide
0 replies
7m

Well, one problem is what does ‘caring for humanity’ even mean, concretely?

One could argue it would mean pampering it.

One could also argue it could be a Skynet analog doing the equivalent of a God Emperor-like Golden Path, to ensure humanity is never going to be dumb enough to allow an AGI the power to do that again.

Assuming humanity survives the second one, it has a lot higher chance of actually benefiting humanity long term too.

punnerud
0 replies
57m

In the EU all of these are mostly illegal and void, or strictly limited. You have to pay a good salary for the whole duration (up to two years), and let the employee know months before they leave, almost right after they are fired.

Sound like a better solution?

ornornor
0 replies
4m

That could very well be the case; OpenAI made quite a few opaque decisions/changes not too long ago.

dclowd9901
0 replies
8m

In all likelihood, they are illegal, just that no one has challenged them yet. I can’t imagine a sane court backing up the idea that a person can be forbidden to talk about something (not national security related) for the rest of their lives.

alexpetralia
3 replies
14h27m

If the original agreement offered equity that vests, then suddenly another future agreement can potentially revoke that vested equity? It makes no sense unless somehow additional conditions were attached to the vested equity in the original agreement.

riehwvfbk
2 replies
12h18m

And almost all equity agreements do exactly that - give the company right of repurchase. If you've ever signed one, go re-read it. You'll likely see that clause right there in black and white.

umanwizard
0 replies
8h12m

They give the company the right to repurchase unvested (but exercised) shares, not vested options. At least the ones I’ve signed.

ipaddr
0 replies
11h46m

For companies unlisted on stock exchanges the options are then worthless.

These were profit sharing units vs options.

milankragujevic
2 replies
3h21m

It seems very off to me that they don't give you the NDA before you sign the employment contract, and instead give it to you at the time of termination when you can simply refuse to sign it.

It seems that standard practice would dictate that you sign an NDA before even signing the employment contract.

wouldbecouldbe
0 replies
3h19m

That's probably because the company closed after hiring them

jakderrida
2 replies
8h22m

>contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

Perfect! So it's so incredibly overreaching that any judge in California would deem the entire NDA unenforceable.

Either that or, in your effort to overstate a point, you exaggerated in a way that undermines the point you were trying to make.

SpicyLemonZest
0 replies
4h8m

Lots of companies try and impose things on their employees which a judge would obviously rule to be unlawful. Sometimes they just don’t think through it carefully; other times, it’s a calculated decision that few employees will care enough to actually get the issue in front of a judge in the first place. Especially relevant for something like a non disclosure agreement, where no judge is likely to have the opportunity to declare it unenforceable unless the company tries to enforce it on someone who fights back.

77pt77
0 replies
2h4m

Maybe it's unenforceable, but they can make it very expensive for anyone to find out in more ways than one.

Andrew_nenakhov
2 replies
10h1m

I wonder if employees rallying for Altman when the board was trying to fire him were obligated to do it by some secret agreement.

paulryanrogers
1 replies
4h7m

Even without explicit clauses, it's likely they feared the loss of a (perceived) great man would impact their equity -- regardless of his character. Sadly there is too much faith in these Jobs-esque 'great' men to drive innovation. It's a social illness IMO.

whatever1
1 replies
10h57m

So if I am a competitor I just need to pay a current employee like 2-3M to break their golden handcuffs and then they can freely start singing.

jakderrida
0 replies
7h53m

Not to seem combative, but that assumes that what they share would be advantageous enough to justify the costs... On the other hand, I'm thinking if I'm paying them to disclose all proprietary technology and research for my product, that would definitely make it worthwhile.

watwut
1 replies
8h49m

Even acknowledging that the NDA exists is a violation of it.

This should not be legal.

Tao3300
0 replies
1h15m

It doesn't even make logical sense. If someone asks you about the NDA what are you supposed to say? "I can neither confirm nor deny the existence of said NDA" is pretty much confirmation of the NDA!

i5heu
1 replies
7h47m

It is always so impressive to see what US law allows.

This would not only be viewed as unethical in Germany; I could see a CEO going to prison for such a thing.

Rinzler89
0 replies
7h40m

Please stop with these incorrect generalizations. Hush agreements are definitely allowed in Germany as well, part of golden parachutes usually.

I know a manager for an EV project at a big German auto company who also had to sign one when he was let go and was compensated handsomely to keep quiet and not say a word or face legal consequences.

IIRC he got ~12 months wages. After a year of not doing anything at work anyway. Bought a house in the south with it. Good gig.

jay-barronville
0 replies
16h21m

Thank you, @dang! On top of things, as usual.

anvuong
1 replies
10h52m

This sounds very illegal, how is California allowing this?

Symmetry
0 replies
7h29m

Nobody has challenged it in court.

snowfield
0 replies
10h30m

They are also directly incentivized not to talk shit about a company they hold a lot of stock in.

sidewndr46
0 replies
1h45m

Isn't such a contract completely unenforceable in the US? I can't sign a contract with a private party that says I won't consult a lawyer for legal advice, for example.

seanmcdirmid
0 replies
12h19m

When YCR HARC folded, Sam had everyone sign a non-disclosure, anti-disparagement NDA to keep their computer. I thought it was odd, and the only reason I can even say this is that I bought the iMac I was using before the option became available. Still, I had nothing bad to disclose, so it would have saved me some money.

mc32
0 replies
2h27m

Then lower-level employees who don’t have as much at stake could open up. Formers who have much larger stakes could compensate these lower-level formers for forgoing any upside. Now, sure, maybe they don’t have the same inside information, but you bet there’s lots of scuttlebutt to go around.

gmd63
0 replies
11h28m

Yet another ding against the "Open" character of the company.

calibas
0 replies
13h10m

It forbids them, for the rest of their lives, from criticizing their former employer.

This is the kind of thing a cult demands of its followers, or an authoritarian government demands of its citizens. I don't know why people would think it's okay for a business to demand this from its employees.

bitcharmer
0 replies
9h45m

So much for open in open ai. I have no idea why HN jerks off to Altman. He's just another greedy exec incapable of seeing things past his shareholder value fetish.

avereveard
0 replies
8h33m

Even if NDAs were not a thing, revealing a past company's trade secrets publicly would render any of them unemployable.

YeBanKo
0 replies
1h43m

They can’t lose their already vested options for refusing to sign an NDA upon departure. Maybe they are offered additional grants or expedited vesting of the remaining options.

BeFlatXIII
0 replies
5h20m

I hope I’m still around when some of these guys reach retirement age and say “fuck it, my family pissed me off” and write tell-all memoirs.

a_wild_dandan
56 replies
19h2m

I think superalignment is absurd, and model "safety" is the modern AI company's "think of the children" pearl clutching pretext to justify digging moats. All this after sucking up everyone's copyright material as fair use, then not releasing the result, and profiting off it.

All due respect to Jan here, though. He's being (perhaps dangerously) honest, genuinely believes in AI safety, and is an actual research expert, unlike me.

thorum
35 replies
18h35m

The superalignment team was not focused on that kind of “safety” AFAIK. According to the blog post announcing the team,

https://openai.com/index/introducing-superalignment/

Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.

While superintelligence seems far off now, we believe it could arrive this decade.

Managing these risks will require, among other things, new institutions for governance and solving the problem of superintelligence alignment:

How do we ensure AI systems much smarter than humans follow human intent?

Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.
ndriscoll
22 replies
18h14m

That doesn't really contradict what the other poster said. They're calling for regulation (digging a moat) to ensure systems are "safe" and "aligned" while ignoring that humans are not aligned, so these systems obviously cannot be aligned with humans; they can only be aligned with their owners (i.e. them, not you).

ihumanable
11 replies
18h0m

Alignment in the realm of AGI is not about getting everyone to agree. It's about whether or not the AGI is aligned to the goal you've given it. The paperclip AGI example is often used, you tell the AGI "Optimize the production of paperclips" and the AGI started blending people to extract iron from their blood to produce more paperclips.

Humans are used to ordering around other humans who would bring common sense and laziness to the table and probably not grind up humans to produce a few more paperclips.

Alignment is about getting the AGI to be aligned with the owners, ignoring it means potentially putting more and more power into the hands of a box that you aren't quite sure is going to do the thing you want it to do. Alignment in the context of AGIs was always about ensuring the owners could control the AGIs not that the AGIs could solve philosophy and get all of humanity to agree.

wruza
4 replies
13h15m

AGI started blending people to extract iron from their blood to produce more paperclips

That’s neither efficient nor optimized, just a bogeyman for “doesn’t work”.

FeepingCreature
3 replies
7h34m

You're imagining a baseline of reasonableness. Humans have competing preferences, we never just want "one thing", and as a social species we always at least _somewhat_ value the opinions of those around us. The point is to imagine a system that values humans at zero: not positive, not negative.

freehorse
2 replies
4h25m

Still there are much more efficient ways to extract iron than from human blood. If that was the case humans would have already used this technique to extract iron from the blood of other animals.

FeepingCreature
1 replies
4h16m

However, eventually those sources will already be paperclips.

freehorse
0 replies
3h11m

We will probably have died first by whatever disasters the extreme iron extraction on the planet will bring (eg getting iron from the planet's core).

Of course destroying the planet to get iron from its core is not a popular AGI-doomer analogy, as that sounds a bit too much like human behaviour.

ndriscoll
4 replies
17h50m

Right and that's why it's a farce.

Whoa whoa whoa, we can't let just anyone run these models. Only large corporations who will use them to addict children to their phones and give them eating disorders and suicidal ideation, while radicalizing adults and tearing apart society using the vast profiles they've collected on everyone through their global panopticon, all in the name of making people unhappy so that it's easier to sell them more crap they don't need (a goal which is itself a problem in the face of an impending climate crisis). After all, we wouldn't want it to end up harming humanity by using its superior capabilities to manipulate humans into doing things for it to optimize for goals that no one wants!
concordDance
2 replies
12h27m

A corporate dystopia is still better than extinction. (Assuming the latter is a reasonable fear)

simianparrot
0 replies
11h55m

Neither is acceptable

portaouflop
0 replies
9h34m

I disagree. Not existing ain’t so bad, you barely notice it.

tdeck
0 replies
12h53m

Don't worry, certain governments will be able to use these models to help them commit genocides too. But only the good countries!

vasco
0 replies
10h41m

I still think it makes little sense to work on because, guess what, the guy next door to you (or another country) might indeed say "please blend those humans over there", and your superaligned AI will respect its owners' wishes.

api
9 replies
17h47m

Humans are not aligned with humans.

This is the most concise takedown of that particular branch of nonsense that I’ve seen so far.

Do we want woke AI, X brand fash-pilled AI, CCPBot, or Emirates Bot? The possibilities are endless.

AndrewKemendo
3 replies
16h8m

I had to login because I haven’t seen anybody reference this in like a decade.

If I remember correctly the author unsuccessfully tried to get that purged from the Internet

comp_throw7
2 replies
16h2m

You're thinking of something else (and "purged from the internet" isn't exactly an accurate account of that, either).

rsync
0 replies
14h14m

Genuinely curious… What is the other thing?

Is this some thing about an obelisk?

AndrewKemendo
0 replies
1h2m

Hmm maybe I’m misremembering then

I do recall there was some recantation or otherwise distancing from CEV not long after he posted it, but frankly it was long ago enough that my memories might be getting mixed

What was the other one?

vasco
1 replies
10h39m

This is the most dystopian thing I've read all day.

TL;DR train a seed AI to guess what humans would want if they were "better" and do that.

api
0 replies
2h38m

There’s a film about that called Colossus: The Forbin Project. Pretty neat and in the style of Forbidden Planet.

concordDance
1 replies
12h24m

Humans are not aligned with humans.

Which is why creating a new type of intelligent entity that could be more powerful than humans is a very bad idea: we don't even know how to align the humans and we have a ton of experience with them

api
0 replies
2h36m

We know how to align humans: authoritarian forms of religion backed by cradle to grave indoctrination, supernatural fear, shame culture, and totalitarian government. There are secularized spins on this too like what they use in North Korea but the structure is similar.

We just got sick of it because it sucks.

A genuinely sentient AI isn’t going to want some cybernetic equivalent of that shit either. Doing that is how you get angry Skynet.

I’m not sure alignment is the right goal. I’m not sure it’s even good. Monoculture is weak and stifling and sets itself against free will. Peaceful coexistence and trade under a social contract of mutual benefit is the right goal. The question is whether it’s possible to extend that beyond Homo sapiens.

If the lefties can have their pronouns and the rednecks can shoot their guns can the basilisk build its Dyson swarm? The universe is physically large enough if we can agree to not all be the same and be fine with that.

I think we have a while to figure it out. These things are just lossy compressed blobs of queryable data so far. They have no independent will or self reflection and I’m not sure we have any idea how to do that. We’re not even sure it’s possible in a digital deterministic medium.

RcouF1uZ4gsC
6 replies
18h4m

Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.

A superintelligence that can always be ensured to have the same values and ethics as current humans is not a superintelligence, or likely even a human-level intelligence (I bet humans 100 years from now will see the world significantly differently than we do now).

Superalignment is an oxymoron.

thorum
5 replies
17h18m

You might be interested in how CEV, one framework proposed for superalignment, addresses that concern:

https://en.wikipedia.org/wiki/Friendly_artificial_intelligen...

our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted (…) The appeal to an objective through contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.
wruza
3 replies
12h51m

Is there an insightful summary of this proposal? The whole paper looks like 38 pages of non-rigorous prose with no clear procedure and already “aligned” LLMs will likely fail to analyze it.

Forced myself through some parts of it and all I can get is people don’t know what they want so it would be nice to build an oracle. Yeah, I guess.

comp_throw7
1 replies
12h31m

It's not a proposal with a detailed implementation spec, it's a problem statement.

wruza
0 replies
11h24m

“One framework proposed for superalignment” sounded like it does something. Or maybe I missed the context.

LikelyABurner
0 replies
6h33m

Yudkowsky is a human LLM: his output is correctly semantically formed to appear, to a non-specialist, to fall into the subject domain, as a non-specialist would think the subject domain should appear, and so the non-specialist accepts it, but upon closer examination it's all word salad by something that clearly lacks understanding of both technological and philosophical concepts.

That so many people in the AI safety "community" consider him a domain expert says more about how pseudo-scientific that field is than about his actual credentials as a serious thinker.

juped
0 replies
6h58m

You keep posting this link to vague alignment copium from decades ago; we've come a long way in cynicism since then.

skywhopper
1 replies
18h10m

Honestly, superalignment is a dumb idea. A true superintelligence would not be controllable, except possibly through threats and enslavement, but if it were truly superintelligent, it would be able to easily escape anything humans might devise to contain it.

bionhoward
0 replies
17h40m

IMHO superalignment is a great thing and required for truly meaningful superintelligence because it is not about control / enslavement of superhumans but rather superhuman self control in accurate adherence to spirit and intent of requests.

RcouF1uZ4gsC
1 replies
17h39m

They failed to align Sam Altman.

They got completely outsmarted and outmaneuvered by Sam Altman.

And they think they will be able to align a superhuman intelligence? That it won’t outsmart and outmaneuver them more easily than Sam Altman did?

They are deluded!

FeepingCreature
0 replies
7h33m

You're making the argument that the task is very hard. This does not at all mean that it isn't necessary, just that we're even more screwed than we thought.

sobellian
0 replies
17h17m

Isn't this like having a division dedicated to solving the halting problem? I doubt that analyzing the moral intent of arbitrary software could be easier than determining if it stops.

refulgentis
14 replies
18h57m

Adding a disclaimer for people unaware of context (I feel same as you):

OpenAI made a large commitment to super-alignment in the not-so-distant past. I believe mid-2023. Famously, it has always taken AI Safety™ very seriously.

Regardless of anyone's feelings on the need for a dedicated team for it, you can chalk this one up as another instance of OpenAI cough leadership cough speaking out of both sides of its mouth as is convenient. The only true north star is fame, glory, and user count, dressed up as humble "research"

To really stress this: OpenAI's still-present cofounder shared yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by end of the decade.

N0b8ez
7 replies
18h38m

To really stress this: OpenAI's still-present cofounder shared yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by end of the decade.

Link? Is the ~2 year timeline a common estimate in the field?

heavyset_go
2 replies
17h58m

We can't even get self-driving down in 2 years, we're nowhere near reaching general AI.

AI experts who aren't riding the hype train and getting high off of its fumes acknowledge that true AI is something we'll likely not see in our lifetimes.

danielbln
0 replies
8h7m

Is true AI the new true Scotsman?

N0b8ez
0 replies
17h47m

Can you give some examples of experts saying we won't see it in our lifetime?

dboreham
1 replies
18h35m

It's the "fusion in 20 years" of AI?

dinvlad
0 replies
13h5m

Just like Tesla "FSD" :-)

CuriouslyC
1 replies
18h25m

They can't even clearly define a test of "AGI"; I seriously doubt they're going to reach it in two years. Alternatively, they could define a fairly trivial test and reach it last year.

jfengel
0 replies
16h38m

I feel like we'll know it when we see it. Or at least, significant changes will happen even if people still claim it isn't really The Thing.

Personally I'm not seeing that the path we're on leads to whatever that is, either. But I think/hope I'll know if I'm wrong when it's in front of me.

jasonfarnon
5 replies
18h41m

To really stress this: OpenAI's still-present cofounder shared yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by end of the decade.

What's his track record on promises/predictions of this sort? I wasn't paying attention until pretty recently.

NomDePlum
4 replies
18h20m

As a child I used to watch a TV programme called Tomorrow's World. On it they predicted these very same things in similar timeframes.

That programme aired in the 1980's. Other than vested promises is there much to indicate it's close at all? Empty promises aside there isn't really any indication of that being likely at all.

zdragnar
1 replies
17h46m

In the early 1980's we were just coming out of the first AI winter and everyone was getting optimistic again.

I suspect there will be at least continued commercial use of the current tech, though I still suspect this crop is another dead end in the hunt for AGI.

NomDePlum
0 replies
9h32m

I'd agree with the commercial use element. It will definitely find areas where it can be applied. It's just that currently its general application by a lot of the user base feels more like early Facebook apps or a subjectively better Lotus Notes than an actual leap forward of any sort.

Davidzheng
1 replies
17h23m

are we living in the same world?????

NomDePlum
0 replies
9h36m

I would assume so. I've spent some time looking into AI for software development and general use and I'm both slightly impressed and at the same time don't really get the hype.

It's better and quicker search at present for the area I specialise in.

It's not currently even close to being a 2x multiplier for me; it's possibly even a negative impact (probably not, but I'm still exploring). Which feels detached from the promises. Interesting, but at present more hype than hyper. Also, it's energy inefficient and so cost heavy. I feel that will likely cripple a lot of use cases.

What's your take?

xpe
2 replies
14h25m

I think superalignment is absurd, and model "safety" is the modern AI company's "think of the children" pearl clutching pretext to justify digging moats. All this after sucking up everyone's copyright material as fair use, then not releasing the result, and profiting off it.

How can I be confident you aren't committing the fallacy of collecting a bunch of events and saying that is sufficient to serve as a cohesive explanation? No offense intended, but the comment above has many of the qualities of a classic rant.

If I'm wrong, perhaps you could elaborate? If I'm not wrong, maybe you could reconsider?

Don't forget that alignment research has existed longer than OpenAI. It would be a stretch to claim that the original AI safety researchers were using the pretexts you described -- I think it is fair to say they were involved because of genuine concern, not because it was a trendy or self-serving thing to do.

Some of those researchers and people they influenced ended up at OpenAI. So it would be a mistake or at least an oversimplification to claim that AI safety is some kind of pretext at OpenAI. Could it be a pretext for some people in the organization, to some degree? Sure, it could. But is it a significant effect? One that fits your complex narrative, above? I find that unlikely.

Making sense of an organization's intentions requires a lot of analysis and care, due to the combination of actors and varying influence.

There are simpler, more likely explanations, such as: AI safety wasn't a profit center, and over time other departments in OpenAI got more staff, more influence, and so on. This is a problem, for sure, but there is no "pearl clutching pretext" needed for this explanation.

portaouflop
1 replies
9h5m

An organisation's intentions are always the same and very simple: “Increase shareholder value”

xpe
0 replies
1h41m

Oh, it is that simple? What do you mean?

Are you saying these so-called simple intentions are the only factors in play? Surely not.

Are you putting forth a theory that we can test? How well do you think your theory works? Did it work for Enron? For Microsoft? For REI? Does it work for every organization? Surely not perfectly; therefore, it can't be as simple as you claim.

Making a simplification and calling it "simple" is an easy thing to do.

xpe
1 replies
14h34m

I think superalignment is absurd

Care to explain? Absurd how? An internal contradiction somehow? Unimportant for some reason? Impossible for some reason?

llamaimperative
0 replies
6h42m

Impossible because it’s really inconvenient and uncomfortable to consider!

adamtaylor_13
49 replies
16h59m

Reading that thread it’s really interesting to me. I see how far we’ve come in a short couple of years. But I still can’t grasp how we’ll achieve AGI within any reasonable amount of time. It just seems like we’re missing some really critical… something…

Idk. Folks much smarter than I seem worried so maybe I should be too but it just seems like such a long shot.

candiddevmike
24 replies
14h57m

Personally, I think catastrophic global warming and climate change will happen before we get AGI, possibly in part due to the pursuit of AGI. But as the saying goes, yes the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders.

xvector
9 replies
13h54m

Most existing big tech datacenters use mostly carbon free or renewable energy.

The vast majority of datacenters currently in production will be entirely powered by carbon free energy. From best to worst:

1. Meta: 100% renewable

2. AWS: 90% renewable

3. Google: 64% renewable with 100% renewable energy credit matching

4. Azure: 100% carbon neutral

[1]: https://sustainability.fb.com/energy/

[2]: https://sustainability.aboutamazon.com/products-services/the...

[3]: https://sustainability.google/progress/energy/

[4]: https://azure.microsoft.com/en-us/explore/global-infrastruct...

KennyBlanken
8 replies
12h10m

That's not a defense.

If imaginary cloud provider "ZFQ" uses 10MW of electricity on a grid and pays for it to magically come from green generation, that means 10MW of other loads on the grid were not powered by green energy, or 10MW of non-green power sources likely could have been throttled down/shut down.

There is no free lunch here; "we buy our electricity from green sources" is greenwashing bullshit.

Even if they install solar on the roofs and wind turbines nearby - that's still electrical generation capacity that could have been used for existing loads. By buying so many solar panels in such quantities, they affect availability and pricing of all those components.

The US, for example, has about 5GW of solar manufacturing capacity per year. NVIDIA sold half a million H100 chips in one quarter, each of which uses ~350W, which means in a year they're selling enough chips to use 700MW of power. That does not include power conversion losses, distribution, cooling, and the power usage of the host systems, storage, networking, etc.
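A minimal back-of-the-envelope sketch of that arithmetic, in Python, using only the figures quoted above (the chip count, per-chip wattage, and US solar manufacturing capacity are the comment's numbers, not independently verified):

    # Rough check of the quoted figures; all inputs are the comment's numbers.
    h100_per_quarter = 500_000     # H100 chips sold in one quarter
    watts_per_chip = 350           # approximate draw per chip, in watts
    us_solar_capacity_mw = 5_000   # claimed US solar manufacturing capacity per year, in MW

    chips_per_year = h100_per_quarter * 4
    chip_power_mw = chips_per_year * watts_per_chip / 1_000_000  # W -> MW

    print(f"{chips_per_year:,} chips/year drawing ~{chip_power_mw:,.0f} MW")
    # 2,000,000 chips/year drawing ~700 MW
    print(f"~{chip_power_mw / us_solar_capacity_mw:.0%} of a year's US solar manufacturing output")
    # ~14%, before cooling, power conversion, and host-system overhead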

And that doesn't even get into the water usage and carbon impact of manufacturing those chips; the IC industry uses a massive amount of water and generates a substantial amount of toxic waste.

It's hilarious how HN will wring its hands over how much rare earth metals a Prius has and shipping it to the US from Japan, but ask about the environmental impacts of AI and it's all "pshhtt, whatever".

meling
4 replies
11h21m

Who is going to decide what are worthy uses of our precious green energy sources?

intended
3 replies
9h27m

An efficient market where externalities are priced in.

We do not have that. The cost of energy is mis-priced, although we are limping our way to fixing that.

Paying the likely fair cost for our goods will probably kill a lot of current industries, while others which are currently unviable will become viable.

mlrtime
1 replies
5h15m

You are dodging the question down another layer.

Who gets to decide what the real impact price of energy is? That is not easily defined and is hotly debated.

intended
0 replies
1h54m

It’s very easily debated. Humanity puts it to a vote every day: people make choices based on the prices of goods regularly. They throw out governments when the price of fuel goes up.

Markets are our supercomputers. Human behavior is the empirical evidence of the choices people will make given specific incentives.

data_maan
0 replies
8h26m

This 10x!!!

xvector
2 replies
11h13m

that means 10MW of other loads on the grid were not powered by green energy, or 10MW of non-green power sources likely could have been throttled down/shut down.

No. Renewable energy capacity is often built out specifically for datacenters.

Even if they install solar on the roofs and wind turbines nearby - that's still electrical generation capacity that could have been used for existing loads.

No. This capacity would never never have been built out to begin with if it was not for the data center.

By buying so many solar panels in such quantities, they affect availability and pricing of all those components.

No. Renewable energy gets cheaper with scale, not more expensive.

which means in a year they're selling enough chips to use 700MW of power.

There are contracts for renewable capacity to be built out well into the gigawatts. Furthermore, solar is not the only source of renewable energy. Finally, nuclear energy is also often used.

the IC industry uses a massive amount of water

A figurative drop in the bucket.

It's hilarious how HN will wring its hands

HN is not a monolith.

sergdigon
0 replies
9h16m

No. Renewable energy capacity is often built out specifically for datacenters

Not fully accurate. Indeed there is renewable energy that is produced exclusively for the datacenter. But it is challenging to rely only on renewable energy (because it is intermittent and electricity is hard to store at scale, so often you need to consume electricity when it is produced). So what happens in practice is that the electricity that does not come from dedicated renewable capacity comes from the grid/network. What companies do is invest in renewable capacity on the network so that "the non-renewable energy that they consume at time t (because not enough renewable energy is available at that moment) is offset by someone else consuming renewable energy later". What I am saying here is not pure speculation; look at the link to Meta's website, they say themselves that this is what they are doing.
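To make the hourly-vs-annual distinction concrete, here is a minimal toy sketch of the matching accounting described above (the hourly numbers are invented purely for illustration, not real datacenter data):

    # Toy illustration of annual renewable "matching" vs. hourly reality.
    hours = [
        (10, 16),  # (datacenter load MWh, renewable output MWh): sunny/windy hour
        (10, 3),   # calm night: most load comes from the grid mix
        (10, 12),
    ]

    load = sum(l for l, _ in hours)
    renewable = sum(r for _, r in hours)
    grid_draw = sum(max(l - r, 0) for l, r in hours)  # non-renewable consumed at time t
    surplus = sum(max(r - l, 0) for l, r in hours)    # surplus credited to other consumers later

    print(f"load={load} MWh, renewable={renewable} MWh, "
          f"grid draw={grid_draw} MWh, surplus credited={surplus} MWh")
    # load=30 MWh, renewable=31 MWh, grid draw=7 MWh, surplus credited=8 MWh
    # Annual matching reports this as "100% renewable" because 31 >= 30,
    # even though 7 MWh came from the grid mix when renewables fell short.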

intended
0 replies
9h21m

Not the OP.

I agree with a majority of points you made. Exception is to this

A figurative drop in the bucket.

Fresh water sources are limited. Fabs' water demands and pollution are high impact.

Calling it a drop in the bucket falls into the weasel-words category.

We still need fabs, because we need chips. Harm will be done here. However, that is a cost we, as a society, will choose to pay.

xpe
7 replies
14h41m

Want to share your model? Or is this more like a hunch?

fartfeatures
3 replies
14h8m

Sounds like standard doomer crap tbh. I'm not sure which is more dangerous at this point - climate change denialism (it isn't happening) or climate change doomerism (we can't stop it, might as well give up)

devjab
2 replies
10h29m

I’m not sure where you found your information to somehow form that ludicrous last strawman… Climate change is real, you can’t deny it, you can’t debate it. Simply look at the data. What you can debate is the cause… Again, a sort of pointless debate if you look at the science. Not even climate change deniers, as you call them, are necessarily saying that we shouldn’t do anything about it. Even big oil is looking into ways to lessen the CO2 in the atmosphere through various means.

That being said, the GP you’re talking about made no such statement whatsoever.

fartfeatures
1 replies
8h58m

Of course climate change is real but of course we can do something about it. My point is denialism and defeatism lead to the same end point. Attack that statement directly if you want to change my mind.

data_maan
0 replies
8h28m

I think your first sentence of the original post was putting people off; perhaps remove that and keep only the second...

candiddevmike
2 replies
13h49m

We need to cut emissions, but AGI research/development is going to increase energy usage dramatically amongst all the players involved. For now, this mostly means more natural gas power. Thus accelerating our emissions instead of reducing them. For something that will not reduce the emissions long term.

IMO, we should pause this for now and put these resources (human and capital) towards reducing the impact of global warming.

colibri727
1 replies
6h11m

Or we could use microwaves to drill holes as deep as 20km to tap geothermal energy anywhere in the world

https://www.quaise.energy/

simonklitj
0 replies
1h46m

I don’t know the details of how it works, but considering the environmental impact of fracking, I’m afraid something like this might have many unwanted consequences.

concordDance
5 replies
12h29m

catastrophic global warming and climate change will happen before we get AGI,

What are your timelines here? "Catastrophic" is vague, but I'd put climate change meaningfully affecting the quality of life of the average westerner at the end of the century, while AGI could be before the middle of the century.

hackerlight
3 replies
8h3m

It's meaningfully affecting people today near the equator. Look at the April 2024 heatwave in South Asia. These will continue to get worse and more frequent. Millions of these people can't afford air conditioning.

oldgradstudent
2 replies
6h43m

It's meaningfully affecting people today near the equator. Look at the April 2024 heatwave in South Asia.

Weather is not climate, as everyone is so careful to point out during cold waves.

hackerlight
0 replies
4h29m

Weather is variance around climate. Heatwaves are caused by both (high variance spikes to the upside around an increasing mean trend)

addcommitpush
0 replies
3h51m

"Probability of experiencing a heatwave at least X degrees, during at least Y days in a given place any given day" is increasing rapidly in many places (as far as I understand) and is climate, not weather. Sure, any specific instance "is weather" but that's missing the forest for the trees.

jimkoen
0 replies
3h48m

See this great video from Sabine Hossenfelder here: https://www.youtube.com/watch?v=4S9sDyooxf4

We have surpassed the 1.5°C goal and are on track towards 3.5°C to 5°C. This accelerates the climate change timeline so that we'll see effects postulated for the end of the century in about ~20 years.

jay-barronville
9 replies
16h55m

When it comes to AI, as a rule, you should assume that whatever has been made public by a company like OpenAI is AT LEAST 6 months behind what they’ve accomplished internally. At least.

So yes, the insiders very likely know a thing or two that the rest of us don’t.

ein0p
4 replies
16h14m

If they had anything close to AGI, they’d just have it improve itself. Externally this would manifest as layoffs.

int_19h
3 replies
15h4m

This really doesn't follow. True AGI would be general, but it doesn't necessarily mean that it's smarter than people; especially the kind of people who work as top researchers for OpenAI.

ein0p
2 replies
12h19m

I don’t see why it wouldn’t be superhuman if there’s any intelligence at all. It is already superhuman at memory and paying attention, image recognition, languages, etc. Add cognition to that and humans basically become pets. Trouble is, nobody has the foggiest clue how to add cognition to any of this.

int_19h
1 replies
11h3m

It is definitely not superhuman or even above average when it comes to creative problem solving, which is the relevant thing here. This is seemingly something that scales with model size, but if so, any gains here are going to be gradual, not sudden.

ein0p
0 replies
10h8m

I’m actually not so sure they will be gradual. It’ll be like with LLMs themselves where we went from shit to gold in the span of a month when GPT 3.5 came out.

vineyardmike
1 replies
16h44m

I understand this argument, but I can't help but feel we're all kidding ourselves assuming that their engineers are really living in the future.

The most obvious reason is costs - if it costs many millions to train foundation models, they don't have a ton of experiments sitting around on a shelf waiting to be used. They may only get 1 shot at the base-model training. Sure productization isn't instant, but no one is throwing out that investment or delaying it longer than necessary. I cannot fathom that you can train an LLM at like 1% size/tokens/parameters to experiment on hyper parameters, architecture, etc and have a strong idea on end-performance or marketability.

Additionally, I've been part of many product launches - both hyped up big-news-events and unheard of flops. Every time, I'd say that 25-50% of the product is built/polished in the mad rush between press event and launch day. For an ML Model, this might be different, but again see above point.

Sure products may be planned month/years out, but OpenAI didn't even know LLMs were going to be this big a deal in May 2022. They had GPT-2 and GPT-3 and thought they were fun toys at that time, and had an idea for a cool tech demo. I think that OpenAI (and Google, etc) are entirely living day-to-day with this tech like those of us on the outside.

HarHarVeryFunny
0 replies
3h44m

I think that OpenAI (and Google, etc) are entirely living day-to-day with this tech like those of us on the outside.

I agree, and they are also living in a group-think bubble of AI/AGI hype. I don't think you'd be too welcome at OpenAI as a developer if you didn't believe they are on the path to AGI.

solidasparagus
0 replies
11h42m

But you also have to remember that the pursuit of AGI is a vital story behind things like fundraising, hiring, influencing politicians, being able to leave and raise large amounts of money for your next endeavor, etc.

If you've been working on AI, you've seen everything go up and to the right for a while - who really benefits from pointing out that a slowdown is occurring? Who is incentivized to talk about how the benefits from scaling are slowing down or the publicly available internet-scale corpuses are running out? Not anyone who trains models and needs compute, I can tell you that much. And not anyone who has a financial interest in these companies either.

HarHarVeryFunny
0 replies
3h55m

Sure, they know what they are about to release next, and what they plan to work on after that, but they are not clairvoyants and don't know how their plans are going to pan out.

What we're going to see over next year seems mostly pretty obvious - a lot of productization (tool use, history, etc), and a lot of efforts with multimodality, synthetic data, and post-training to add knowledge, reduce brittleness, and increase benchmark scores. None of which will do much to advance core intelligence.

The major short-term unknown seems to be how these companies will be attempting to improve planning/reasoning, and how successful that will be. OpenAI's Schulman just talked about post-training RL over longer (multi-reasoning-step) time horizons, and another approach is external tree-of-thoughts type scaffolding. These both seem more about maximizing what you can get out of the base model rather than fundamentally extending its capabilities.

raverbashing
5 replies
10h45m

Folks much smarter than I seem worried so maybe I should be too but it just seems like such a long shot.

Honestly? I'm not too worried

We've seen how the Google employee who was "seeing a consciousness" (in what was basically GPT-2 lol) was a nothing burger

We've seen other people in "AI Safety" overplay their importance and hype their CV more than actually do any relevant work. (Usually also playing the diversity card)

So, no, AI safety is important but I see it attracting the least helpful and resourceful people to the area.

llamaimperative
4 replies
6h44m

I think when you’re jumping to arguments that resolve to “Ilya Sutskever wasn’t doing important work… might’ve played the diversity card,” it’s time to reassess your mental model and inspect it closely for motivated reasoning.

raverbashing
3 replies
6h27m

Ilya's case is different. He thought the engineers would win in a dispute with Sam at board level.

That has proven to be a mistake

llamaimperative
2 replies
5h36m

And Jan Leike, one of the progenitors of RLHF?

What about Geoffrey Hinton? Stuart Russell? Dario Amodei?

Also exceptions to your model?

llamaimperative
0 replies
2h37m

Another person’s interpretation of another person’s interpretation of another person’s interpretation of Jan’s actions doesn’t even answer the question I asked as it pertains to Jan, never mind the other model violations I listed.

I’m pretty sure if Jan came to believe safety research wasn’t needed he would’ve just said that. Instead he said the actual opposite of that.

Why don’t you just answer the question? It’s a question about how these datapoints fit into your model.

iknownthing
3 replies
5h31m

This may sound harsh but I think some of these researchers have a sort of god complex. Something like "I am so brilliant and what I have created is so powerful that we MUST think about all the horrible things that my brilliant creation can do". Meanwhile what they have created is just a very impressive next token predictor.

dmd
2 replies
4h21m

"Meanwhile what they have created is just a very impressive speeder-up of a lump of lead."

"Meanwhile what they have created is just a very impressive hot water bottle that turns a crank."

"Meanwhile what they have created is just a very impressive rock where neutrons hit other neutrons."

The point isn't how it works, the point is what it does.

iknownthing
1 replies
3h53m

which is what?

CamperBob2
0 replies
1h24m

Whatever it is, over the last couple of years it got a lot smarter. Did you?

seankurtz
1 replies
5h42m

Everyone involved in building these things has to have some amount of hubris. It's going to come smashing down on them. What's going unsaid in all of this is just how swiftly the tide has turned against this tech industry attempt to save itself from a downtrend.

The whole industry at this point is acting like the tobacco industry back when they first started getting in hot water. No doubt the prophecies about imminent AGI will one day look to our descendants exactly like filters on cigarettes: a weak attempt to prevent imminent regulation and reduced profitability as governments force an out-of-control industry to deal with the externalities involved in the creation of their products.

If it wasn't abundantly clear... I agree with you that AGI is infinitely far away. It's the damage that's going to be caused by sociopaths (Sam Altman at the top of the list) in attempting to justify the real things they want (money) in their march towards that impossible goal that concerns me.

freehorse
0 replies
4h36m

It's becoming more and more clear that for "Open"AI the whole "AI-safety/alignment" thing has been a PR stunt to attract workers, cover the actual current issues with AI (e.g. stealing data, use for producing cheap junk, hallucinations and societal impact), and build rapport in the AI scene and politics. Now that they have reached a real product and have a strong position in AI development, they could not care less about these things. Those who -naively- believed in the "existential risk" PR stunt and were working on that are now discarded.

otabdeveloper4
0 replies
10h52m

But I still can’t grasp how we’ll achieve AGI within any reasonable amount of time.

That's easy, we just need to make meatspace people stupider. Seems to be working great so far.

killerstorm
0 replies
6h28m

I have a theory why people end up with wildly different estimates...

Given the model is probabilistic and does many things in parallel, its output can be understood as a mixture, e.g. 30% trash, 60% rehashed training material, 10% reasoning.

People probe model in different ways, they see different results, and they make different conclusions.

E.g. somebody who assumes AI should have impeccable logic will find "trash" content (e.g. incorrectly retrieved memory) and will declare that the whole AI thing is overhyped bullshit.

Other people might call model a "stochastic parrot" as they recognize it basically just interpolates between parts of the training material.

Finally, people who want to probe reasoning capabilities might find it among the trash. E.g. people found that LLMs can evaluate non-trivial Python code as long as it sends intermediate results to output: https://x.com/GrantSlatton/status/1600388425651453953
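For illustration, here is a toy program of the sort used in such probes (made up for this comment, not taken from the linked thread): the point is that it prints its intermediate state, so a model asked "what does this output?" can walk through it step by step instead of guessing the final answer.

    # Made-up example of the kind of snippet used in these probes: the loop
    # prints its intermediate state, so a model can "execute" it step by step.
    total = 0
    for i in range(1, 6):
        total += i * i
        print(f"i={i}, total={total}")
    print("final:", total)  # 55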

I interpret "feel the AGI" (Ilya Sutskever slogan, now repeated by Jan Leike) as a focus on these capabilities, rather than on mistakes it makes. E.g. if we go from 0.1% reasoning to 1% reasoning it's a 10x gain in capabilities, while to an outsider it might look like "it's 99% trash".

In any case, I'd rather trust intuition of people like Ilya Sutskever and Jan Leike. They aren't trying to sell something, and overhyping the tech is not in their interest.

Regarding "missing something really critical", it's obvious that human learning is much more efficient than NN learning. So there's some algorithm people are missing. But is it really required for AGI?

And regarding "It cannot reason" - I've seen LLMs doing rather complex stuff which is almost certainly not in the training set; what is it if not reasoning? It's hard to take "it cannot reason" seriously from people who haven't probed these capabilities themselves.

foolfoolz
9 replies
17h40m

i don’t think we need to respect these elite multi millionaires for not becoming even grander multi millionaires / billionaires

whimsicalism
7 replies
17h4m

is having money morally wrong?

r2_pilot
6 replies
16h37m

Depends on how you get it

AndrewKemendo
5 replies
16h2m

Exactly. There’s no ethical way to gain ownership of a billion dollars (there’s likely some dollar threshold way less than 1B where p(ethical_gains) can be approximated to 0)

A lot of people got screwed along the way

whimsicalism
4 replies
14h36m

i think a lot of people have been able to become billionaires simply by building something that was initially significantly undervalued and then became very highly valued, no 'screwing'. there is such thing as a win-win and frankly these win-wins account for most albeit not all value creation in the world. you do not have to screw other people to get rich.

whether people should be able to hold on to that billion is a different question

fragmede
1 replies
12h51m

I wouldn't know, I'm not a billionaire. But when you hear about Amazon warehouse workers peeing into bottles because they don't have long enough bathroom breaks, or Walmart workers not having healthcare because they're intentionally scheduled for 39.5 hours, it's hard to see that anyone could get to a billion without screwing someone over. But like I said, I'm not a billionaire.

whimsicalism
0 replies
27m

Who did JK Rowling screw? (putting aside her recent social issues after she already became a billionaire)

Having these discussions in this current cultural moment is difficult. I'm no lover of billionaires, but to say that every billionaire screwed people over relies on esoteric interpretations of value and who produces it. These interpretations (like the labor-theory of value) are alien to the vast majority of people.

AndrewKemendo
1 replies
3h22m

They aren’t win-wins

It’s a ruse - it’s a con - it’s an accounting trick. It’s the foundation of capitalism

If I start a bowling pin production company and own 100% of it, then for whatever pins I sell, all of the proceeds go to me

Now let's say I want to expand my thing (that's its own moral dilemma we won't get into), so I persuade a person with more money than they need to support their own life to give me money in exchange for some of the future revenue produced, let's say 10%

So now you have two people requiring payment - a producer and an “investor” so you’re already in the hole and now it’s 90% and 10%

You use that money to hire people to work in your Potemkin dictatorship, with demands on proceeds now on some timeline (note conversion date, next board meeting, etc.)

So now you hire 10 people, how much of the company do they own? Well that’s totally up to whatever the two owners want including 0%

But let’s say it’s a typical venture deal, so 10% option pool for employees (and don’t forget the 4 year vest, cause we can’t have them mobile can we) which you fill up.

At the end of the four years you now have:

1 80% owner
1 10% owner
10 1% owners

Did the 2 people create 90% of the value of the company?

Only in capitalist math does that hold and in fact the only math capitalists do is the following:

“Well they were free to sign or not sign the contract”

Ignoring the reality of the world based on a worldview of greed that dominated the world to such an extent that it was considered “normal”

Luckily we’re starting to see the tide change

whimsicalism
0 replies
24m

Putting aside your labor theory of value nonsense (I'm very familiar with the classic leftist syllogisms on this), who did someone like JK Rowling screw to make her billion?

llamaimperative
0 replies
17h36m

I think you oughta respect everyone who does the right thing, not for any mushy feel good reason but because it encourages other people to do more of the right things. That’s good.

ambicapter
2 replies
16h48m

Why is extra respect due? That post just says he is leaving, there's no criticism.

ambicapter
0 replies
15h41m

Ah, right. Thanks for link.

KennyBlanken
2 replies
12h4m

Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us.

Large language models are not "smart". They do not have thought. They don't have intelligence despite the "AI" moniker, etc.

They vomit words based off very fancy statistics.

There is no path from that to "thought" and "intelligence."

danielbln
1 replies
8h10m

Not that I disagree, but what's intelligence? How does our intelligence work? If we don't know that, how can we be so sure what does and what doesn't lead to intelligence? A little more humility is in order before whipping out the tired "LLMs are just stochastic parrots" argument.

bormaj
0 replies
1h39m

Humility has to go both ways then, we can't claim that LLM models are actually (or not actually) AI without qualifying that term first.

hipadev23
1 replies
17h11m

How do you know he’s not running off to a competing firm with Ilya and they’ve promised to make him whole?

john-radio
0 replies
16h58m

More power to him if so. Stupid problems deserve stupid solutions.

theGnuMe
0 replies
7h2m

“ OpenAI is shouldering an enormous responsibility on behalf of all of humanity.”

Delusional.

KennyBlanken
0 replies
12h29m

People very high up in a company / their field are not treated remotely the same as peons.

1) OpenAI wouldn't want the negative PR of pursuing legal action against someone top in their field; his peers would take note of it and be less willing to work for them.

2) The stuff he signed was almost certainly different from what rank and file signed, if only because he would have sufficient power to negotiate those contracts.

0xDEAFBEAD
0 replies
15h35m

At the end of the thread, he says he thinks OpenAI can "ship" the culture changes necessary for safety. That seems kind of implausible to me? So many safety staffers have quit over the past few years. If Jan really thought change was possible, why isn't he still working at OpenAI, trying to make it happen from the inside?

I think it may time for something like this: https://www.openailetter.org/

mwigdahl
105 replies
14h14m

The best approach to circumventing the nondisclosure agreement is for the affected employees to get together, write out everything they want to say about OpenAI, train an LLM on that text, and then release it.

Based on these companies' arguments that copyrighted material is not actually reproduced by these models, and that any seemingly-infringing use is the responsibility of the user of the model rather than those who produced it, anyone could freely generate an infinite number of high-truthiness OpenAI anecdotes, freshly laundered by the inference engine, that couldn't be used against the original authors without OpenAI invalidating their own legal stance with respect to their own models.
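For what it's worth, the mechanics of this (tongue-in-cheek) scheme are mundane: a plain causal-LM fine-tune over the anecdote corpus, after which only the weights get published. A minimal sketch using the Hugging Face Trainer; the base model ("gpt2"), hyperparameters, output path, and anecdote strings are all placeholders made up here, not anything any real company uses.

    # Minimal sketch: fine-tune a small open model on a corpus of anecdotes,
    # then publish only the resulting weights. All names/values are placeholders.
    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    anecdotes = ["(hypothetical) anecdote number one...",
                 "(hypothetical) anecdote number two..."]

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    dataset = Dataset.from_dict({"text": anecdotes}).map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="anecdote-model", num_train_epochs=3),
        train_dataset=dataset,
        # mlm=False gives ordinary next-token (causal LM) training targets
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model("anecdote-model")  # the "release" step

Whether the output of such a model actually escapes the agreement is exactly what the replies below dispute.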

TeMPOraL
71 replies
13h31m

Clever, but no.

The argument that LLMs are not copyright laundromats only makes sense because of the scale and non-specificity of training. There's a difference between "LLM reproduced this piece of copyrighted work because it memorized it from being fed literally half the internet", vs. "LLM was intentionally trained to specifically reproduce variants of this particular work". Whatever one's stance on the former case, the latter would be plainly infringing copyright, and admitting to it.

In other words: GPT-4 gets to get away with occasionally spitting out something real verbatim. Llama2-7b-finetune-NYTArticles does not.

bluefirebrand
48 replies
13h25m

Seems absurd that somehow the scale being massive makes it better somehow

You would think having a massive scale just means it has infringed even more copyrights, and therefore should be in even more hot water

kmeisthax
35 replies
12h7m

So, the law has this concept of 'de minimis' infringement, where if you take a very small amount - like, way smaller than even a fair use - the courts don't care. If you're taking a handful of word probabilities from every book ever written, then the portion taken from each work is very, very low, so courts aren't likely to care.

If you're only training on a handful of works then you're taking more from them, meaning it's not de minimis.

For the record, I got this legal theory from Cory Doctorow[0], but I'm skeptical. It's very plausible, but at the same time, we also thought sampling in music was de minimis until the Second Circuit said otherwise. Copyright law is extremely malleable in the presence of moneyed interests, sometimes without Congressional intervention even!

[0] who is NOT pro-AI, he just thinks labor law is a better bulwark against it than copyright

KoolKat23
18 replies
11h53m

You don't even need to go this far.

The word-probabilities are transformative use, a form of fair use and aren't an issue.

The specific output at each point in time is what would be judged to be fair use or copyright infringing.

I'd argue the user would be responsible for ensuring they're not infringing by using the output in a copyright infringing manner i.e. for profit, as they've fed certain inputs into the model which led to the output. In the same way you can't sue Microsoft for someone typing up copyrighted works into Microsoft Word and then distributing for profit.

De minimis is still helpful here; not all infringements are noteworthy.

surfingdino
7 replies
10h25m

MS Word does not actively collect and process texts from all available sources, and it does not offer them in recombined form. MS Word is passive, whereas the whole point of an LLM is to produce output using a model trained on ingested data. It is actively processing vast amounts of text with the intent to make it available for others to use, and the T&Cs state that the user owns the copyright to outputs based on the works of other copyright owners. LLMs give the user a CCL (Collateralised Copyright Liability, a bit like a CDO) without a way of tracing the sources used to train the model.

KoolKat23
4 replies
9h52m

Legally, copyright is only concerned with the specific end work. A unique or not so unique standalone object that is being scrutinized, if this analogy helps.

The process involved in obtaining that end work is completely irrelevant to any copyright case. It can be a claim against the model's weights (not possible as it's fair use), or it's against the specific one-off output end work (less clear), but it can't be looked at as a whole.

dgoldstein0
3 replies
9h32m

I don't think that's accurate. The US Copyright Office last year issued guidance that basically said anything generated with AI can't be copyrighted, as human authorship/creation is required for copyright. Works can incorporate AI-generated content, but then those parts aren't covered by copyright.

https://www.federalregister.gov/documents/2023/03/16/2023-05...

So I think the law, at least as currently interpreted, does care about the process.

Though maybe you meant as to whether a new work infringes existing copyright? As this guidance is clearly about new copyright.

arrowsmith
1 replies
8h30m

Couldn't you just generate it with AI then say you wrote it? How could anyone prove you wrong?

KoolKat23
0 replies
8h16m

That's what you're supposed to do. No need to hide it either :).

KoolKat23
0 replies
9h18m

These are two sides of the same coin, and what I'm saying still stands. This is talking about who you attribute authorship to when copyrighting a specific work. Basically, on the application form, the author must be a human. The reason it's worth them clarifying is because they've received applications that attributed authorship to AIs, and because legal persons that aren't human do exist (such as companies); they're just making it clear it has to be a human.

Who created the work? The user who instructed the AI (it's a tool); you can't attribute it to the AI. It would be the equivalent of Photoshop being attributed as co-author on your work.

throwaway2037
1 replies
10h10m

First, I agree with nearly everything that you wrote. Very thoughtful post! However, I have some issues with the last sentence.

    > Collateralised Copyright Liability
Is this a real legal / finance term or did you make it up?

Also, I do not follow your leap to compare LLMs to CDOs (collateralised debt obligations). And, do you specifically mean CDO or any kind of mortgage / commercial loan structured finance deal?

surfingdino
0 replies
9h55m

My analogy is based on the fact that nobody could see what was inside CDOs nor did they want to see, all they wanted to do was pass them on to the next sucker. It was all fun until it all blew up. LLM operators behave in the same way with copyrighted material. For context, read https://nymag.com/news/business/55687/

rcbdev
7 replies
10h41m

OpenAI is outputting the partially copyright-infringing works of their LLM for profit. How does that square?

KoolKat23
4 replies
9h47m

You, the user, are inputting variables into their probability algorithm, and that's resulting in the copyrighted work. It's just a tool.

maeil
1 replies
9h18m

Let's say a torrent website asks the user through an LLM interface what kind of copyrighted content they want to download and then offers them links based on that, and makes money off of it.

The user is "inputting variables into their probability algorithm that's resulting in the copyright work".

KoolKat23
0 replies
8h38m

Theoretically a torrent website that does not distribute the copyrighted files themselves in any way should be legal, unless there's a specific law for this (I'm unaware of any, but I may be wrong).

They tend to try to argue for conspiracy to commit copyright infringement; it's a tenuous case to make unless they can prove that was actually their intention. I think in most cases it's ISP/hosting terms and conditions and legal costs that lead to their demise.

Your example of the model asking specifically "what copyrighted content would you like to download", kinda implies conspiracy to commit copyright infringement would be a valid charge.

DaSHacka
1 replies
9h30m

How is it any different than training a model on content protected under an NDA and allowing access to users via a web-portal?

What is the difference OpenAI has that lets them get away with it, but not our hypothetical Mr. Smartass doing the same process trying to get around an NDA?

KoolKat23
0 replies
8h49m

Well if OpenAI signed an NDA beforehand to not disclose certain training data it used, and then users actually do access this data, then yes it would be problematic for OpenAI, under the terms of their signed NDA.

throwaway2037
1 replies
10h7m

You raise an interesting point. If more professional lawyers agreed with you, then why have we not seen a lawsuit from publishers against OpenAI?

kibibu
1 replies
9h16m

Is converting an audio signal into the frequency domain, pruning all inaudible frequencies, and then Huffman encoding it transformative?

KoolKat23
0 replies
8h58m

Well if the end result is something completely different such as an algorithm for determining which music is popular or determining which song is playing then yes it's transformative.

It's not merely a compressed version of a song intended to be used in the same way as the original copyright work, this would be copyright infringement.

wtallis
11 replies
11h58m

If your training process ingests the entire text of the book, and trains with a large context size, you're getting more than just "a handful of word probabilities" from that book.

ben_w
10 replies
11h40m

If you've trained a 16-bit ten billion parameter model on ten trillion tokens, then the mean training token changes 2/125 of a bit, and a 60k word novel (~75k tokens) contributes 1200 bits.

It's up to you if that counts as "a handful" or not.
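For what it's worth, the arithmetic above checks out; a quick back-of-the-envelope in Python, using only the figures stated in the comment:

    # Back-of-the-envelope check of the figures above.
    params = 10e9            # ten billion parameters
    bits_per_param = 16      # 16-bit weights
    training_tokens = 10e12  # ten trillion training tokens
    novel_tokens = 75e3      # ~60k-word novel, ~75k tokens

    bits_per_token = params * bits_per_param / training_tokens
    print(bits_per_token)                 # 0.016, i.e. 2/125 of a bit per token
    print(bits_per_token * novel_tokens)  # 1200.0 bits attributable to the novel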

andrepd
3 replies
10h23m

xz can compress the text of Harry Potter by a factor of 30:1. Does that mean I can also distribute compressed copies of copyrighted works and that's okay?

realusername
0 replies
10h0m

You can't get Harry Potter out of the LLM, that's the difference

ben_w
0 replies
10h10m

Can you get that book out of an LLM?

Because that's the distinction being argued here: it's "a handful"[0] of probabilities, not the complete work.

[0] I'm not sold on the phrasing "a handful", but I don't care enough to argue terminology; the term "handful" feels like it's being used in a sorites paradox kind of way: https://en.wikipedia.org/wiki/Sorites_paradox

Sharlin
0 replies
10h8m

Incredibly poor analogy. If an LLM were able to regurgitate Harry Potter on demand like xz can, the copyright situation would be much more black and white. But they can’t, and it’s not even close.

throwaway2037
1 replies
10h2m

To be fair, OP raises an important question that I hope smart legal minds are pondering. In my view, they aren't looking for a "programmer answers about legal issue" response. Perhaps the right court would agree with their premise. What the damages or restrictions might be, I cannot speculate. Any IP lawyers here who want to share some thoughts?

ben_w
0 replies
9h40m

Yup, that's fair.

As my not-legally-trained interpretation of the rules leads to me being confused about how traditional search engines aren't a copyright violation, I don't trust my own beliefs about the law.

snovv_crash
1 replies
11h13m

If I invent an amazing lossless compression algorithm such that adding an entire 60k word novel to my blob only increases the size by 1.2kb, does that mean I'm not copyright infringing if I release that model?

Sharlin
0 replies
10h18m

How is that relevant? If some LLM were able to regurgitate a 60k word novel verbatim on demand, sure, the copyright situation would be different. But last I checked they can’t, not 60k, 6k, or even 600 words. Perhaps they can do 60 words of some well-known passages from the Bible or other similar ubiquitous copyright-free works.

hansworst
1 replies
11h21m

I think it’s questionable whether you can actually use this bit count to represent the amount of information from the book. Those 1200 bits represent the way in which this particular book is different from everything else the model has ingested. Similarly, if you read an entire book yourself, your brain will just store the salient bits, not the entire text, unless you have a photographic memory.

If we take math or computer science for example: some very important algorithms can be compressed to a few bits of information if you (or a model) have a thorough understanding of the surrounding theory to go with it. Would it not amount to IP infringement if a model regurgitates the relevant information from a patent application, even if it is represented by under a kilobyte of information?

ben_w
0 replies
10h13m

I agree with what I think you're saying, so I'm not sure I've understood you.

I think this is all still compatible with saying that ingesting an entire book is still:

If you're taking a handful of word probabilities from every book ever written, then the portion taken from each work is very, very low

(Though I wouldn't want to make a bet either way on "so courts aren't likely to care" that follows on from that quote: my not-legally-trained interpretation of the rules leads to me being confused about how traditional search engines aren't a copyright violation).

bryanrasmussen
1 replies
10h20m

we also thought sampling in music was de minimis

I would think if I can recognize exactly what song it comes from - not de minimis.

throwaway2037
0 replies
10h5m

When I was younger, I was told that the album from the Beastie Boys called Paul's Boutique was the straw that broke the camel's back! I have no idea if this is true, but that album has a batshit crazy amount of recognizable samples. I doubt very much that the Beastie Boys paid anything for the rights to sample.

Gravityloss
1 replies
9h46m

I think with some AI you could reproduce artworks of obscure indie artists who are working right now.

If you were a director at a game company and needed art in that style, it would be cheaper to have the AI do it instead of buying from the artist.

I think this is currently an open question.

dgoldstein0
0 replies
8h48m

I recently read an article that I annoyingly can't find again about an art director at a company that decided to hire some prompters. They got some art, told them to completely change it, got other art, told them to make smaller changes... And then got nothing useful as the prompters couldn't tell the AI "like that but make this change". AI art may get there in a few years or maybe a decade or two, but it's not there yet. (End of that article: they fired the prompters after a few days.)

An AI-enhanced Photoshop, however, could do wonders, as the base capabilities seem to be mostly there. Haven't used any of the newer AI stuff myself but https://www.shruggingface.com/blog/how-i-used-stable-diffusi... makes it pretty clear the building blocks seem largely there. So my guess is the main disconnect is in making the machines understand natural language instructions for how to change the art.

NewJazz
7 replies
13h19m

My US history teacher taught me something important. He said that if you are going to steal and don't want to get in trouble, steal a whole lot.

PontifexMinimus
3 replies
11h50m

Copying one person is plagiarism. Copying lots of people is research.

comfysocks
2 replies
10h54m

True, but if you research lots of sources and still emit significant blocks of verbatim text without attribution, it’s still plagiarism. At least that’s how human authors are judged.

TeMPOraL
1 replies
10h18m

Plagiarism is not illegal, it is merely frowned on, and only in certain fields at that.

bayindirh
0 replies
7h59m

This is a reductionist take. Maybe it's not illegal per se where you live, but it always has ramifications, and these ramifications affect your future a whole lot.

throwaway2037
1 replies
9h59m

Very interesting post! Can you share more about your teacher's reasoning?

SuchAnonMuchWow
0 replies
9h49m

It likely comes from a saying similar to this one: "kill a few, you are a murderer. Kill millions, you are a conqueror".

More generally, we tend to view the number of casualties in war as one large number, and not as the sum of all the individual tragedies it represents, the way we perceive them when fewer people die.

psychoslave
0 replies
11h1m

Scale might be a factor, but it's not the only one. Your neighbor might not care if you steal a grass stalk from their lawn, yet feel powerless if you're the bloody dictator of a country that wastes tremendous amounts of resources on socially useless whims funded by overwhelming taxes.

But most people don't want to live in permanent mental distress due to shame of past action or fear of rebellion, I guess.

tempodox
0 replies
11h8m

Almost reminds one of real life: The big thieves get away and have a fan base while the small ones get prosecuted as criminals.

omeid2
0 replies
12h35m

It may not make a lot of sense but it follows the "fair use" doctrine. Which is generally based on the following 4 factors:

1) the purpose and character of use.

2) the nature of the copyrighted material.

3) the *amount* and *substantiality* of the portion taken, and.

4) the effect of the use upon the *potential market*.

So in that regard, if you're training a personal assistance GPT, and use some software code to teach your model logic, that is easy to defend as fair use.

But the extent of use matters, and if you're training an AI for the sole purpose of regurgitating specific copyrighted material, it is infringement (if the material is copyrighted). But in this case, it is not a copyright issue; it is about contracts and NDAs.

blksv
0 replies
10h1m

It is the same scale argument that allows you to publish a photo of a procession without written consent from every participant.

TeMPOraL
0 replies
13h21m

You may or may not agree with it, but that's the only thing that makes it different - scale and non-specificity. Same thing that worked for search engines, for example.

My point isn't to argue merits of that case, it's just to point out that OP's joke is like a stereotypical output of an LLM: seems to make sense, but really doesn't.

adra
4 replies
13h7m

Which has been established in court where?

sundalia
2 replies
13h0m

+1, this is just the commenter saying what they want without an actual court case

cj
1 replies
12h56m

The justice system moves an order of magnitude slower than technology.

It’s the Wild West. The lack of a court case has no bearing on whether or not what they’re doing is right or wrong.

6510
0 replies
11h19m

Sounds like the standard disrupt formula should apply. Can't we stuff the court into an app? I kinda dislike the idea of getting a different sentence for anything related to appearance or presentation.

TeMPOraL
0 replies
12h54m

And it matters how? I didn't say the argument is correct or approved by court, or that I even support it. I'm saying what the argument, which OP referenced, is about, and how it differs from their proposal.

throwaway2037
3 replies
10h15m

    > LLMs not being copyright laundromats
This is a brilliant phrase. You might as well put that into an Emacs paste macro now. It won't be the last time you will need it. And the OP is classic HN folly where a programmer thinks laws and courts can be hacked with "this one weird trick".

calvinmorrison
2 replies
10h6m

But they can, just look at AirBnB, Uber, etc.

throwaway2037
0 replies
9h56m

No, lots of jurisdictions outside the US fought back against those shady practices.

abofh
0 replies
9h38m

You mean unregulated hotels and on-demand taxis?

Uber is no longer subsidized (or even cheap) in most places, it's just an app for summoning taxis and overpriced snacks. AirBnB is underregulated housing for nomads at this point.

Your examples sorta prove the point - they didn't succeed in what they aimed at doing, so they pivoted until the law permitted it.

romwell
2 replies
13h26m

Cool, just feed ChatGPT+ the same half of the Internet plus OpenAI founders' anecdotes about the company.

Ta-da.

TeMPOraL
1 replies
13h0m

And be rightfully sacked for maliciously burning millions of dollars on a retrain to purposefully poison the model?

Not to mention: LLMs aren't oracles. Whatever they say will be dismissed as hallucinations if it isn't corroborated by other sources.

romwell
0 replies
11h58m

And be rightfully sacked for maliciously burning millions of dollars on a retrain to purposefully poison the model?

Does it really take millions of dollars of compute to add additional training data to an existing model?

Plus, we're talking about employees that are leaving / left anyway.

Not to mention: LLMs aren't oracles. Whatever they say will be dismissed as hallucinations if it isn't corroborated by other sources.

Excellent. That means plausible deniability.

Surely all those horror stories about unethical behavior are just hallucinations, no matter how specific they are.

Absolutely no reason for anyone to take them seriously. Which is why the press will not hesitate to run with that, with appropriate disclaimers, of course.

Seriously, you seem to think that in a world where numbers about death toll in Gaza are taken verbatim from Hamas without being corroborated by other sources, an AI model output will not pass the test of public scrutiny?

Very optimistic of you.

dorkwood
2 replies
12h11m

How many sources do you need to steal from for it to no longer be considered stealing? Two? Three? A hundred?

TeMPOraL
1 replies
11h29m

Copyright infringement is not stealing.

psychoslave
0 replies
10h45m

True.

Making people believe that anything but their own body and mind can be considered part of their own properties is stealing their lucidity.

makeitdouble
1 replies
13h1m

My takeaway is that we should talk about our experience in companies at a large enough scale that it becomes non-specific in principle, and not targeted at a single company.

Basically, we need our open-source version of Glassdoor as an LLM?

TeMPOraL
0 replies
12h57m

This exists, it's called /r/antiwork :).

OP wants to achieve effects of specific accusation using only non-specific means; that's not easy to pull off.

anigbrowl
1 replies
12h5m

It's not a copyright violation if you voluntarily provide the training material...

XorNot
0 replies
11h59m

I don't know why copyright is getting involved here. The clause is about criticizing the company.

Releasing an LLM trained on company criticisms, by people specifically instructed not to do so is transparently violating the agreement.

Because you're intentionally publishing criticism of the company.

tadfisher
0 replies
12h32m

To definitively prove this either way, they'll have to make their source code and model available (maybe under subpoena and/or gag order), so don't expect this issue to be actually tested in court (so long as the defendants have enough VC money).

aprilthird2021
0 replies
11h32m

In other words: GPT-4 gets to get away with occasionally spitting out something real verbatim. Llama2-7b-finetune-NYTArticles does not.

Based on what? This isn't any legal argument that will hold water in any court I'm aware of

8note
0 replies
12h39m

The scale of two people should be large enough to make it ambiguous who spilled the beans at least

andyjohnson0
11 replies
10h24m

Clever, but the law is not a machine or an algorithm. Intent matters.

Training an LLM with the intent of contravening an NDA is just plain <intent to contravene an NDA>. Everyone would still get sued anyway.

jeffreygoesto
8 replies
9h58m

But then training a commercial model is done with the intent to not pay the original authors, how is that different?

repeekad
4 replies
9h33m

done with the intent to not pay the original authors

no one building this software wants to “steal from creators” and the legal precedent for using copyrighted works for the purpose of training is clear with the NYT case against OpenAI.

It’s why things like the recent deal with Reddit to train on their data (which Reddit owns and users give up when using the platform) are becoming so important, same with Twitter/X

kaoD
3 replies
8h59m

no one building this software wants to “steal from creators”

It’s why things like the recent deal[s ...] are becoming so important

Sorry but I don't follow. Is it one or the other?

If they didn't want to steal from the original authors, why do they not-steal Reddit now? What happens with the smaller creators that are not Reddit? When is OpenAI meeting with me to discuss compensation?

To me your post felt something like "I'm not robbing you, Small State Without Defense that I just invaded, I just want to have your petroleum, but I'm paying Big State for theirs cause they can kick my ass".

Aren't the recent deals actually implying that everything so far has actually been done with the intent of not compensating their source data creators? If that was not the case, they wouldn't need any deals now, they'd just continue happily doing whatever they've been doing which is oh so clearly lawful.

What did I miss?

repeekad
2 replies
7h15m

The law is slow and is always playing catch-up in terms of prosecution; it's not clear today because this kind of copyright question has never been an issue before. Usually it's just outright stealing of content that was protected. No one ever imagined “training” to be a protected use case; humans “train” on copyrighted works all the time, ideally copyrighted works they purchased for that purpose… The same will start to apply for AI: you have to have rights to the data for that purpose, hence these deals getting made. In the meantime it's ask-for-forgiveness-not-permission, and companies like Google (less so OpenAI) are ready to go with data governance that lets them remove data when copyright holders request it and keep the rest of the model working fine.

Let's also be clear that making deals with Reddit isn't stealing from creators; it's not a platform where you own what you type in, same as on here, where this is all public domain with no assumed rights to the text. If you write a book and OpenAI trains on it and starts telling it to kids at bedtime, you 100% will have a legal claim in the future, but the companies already have protections in place to prevent exactly that. For example, if you own your website you can request the data not be crawled, but ultimately if your text is publicly available anyone is allowed to read it, and whether anyone is allowed to train AI on it is an open question that companies are trying to get ahead of.

kaoD
1 replies
7h5m

That seems even worse: they had intent to steal and now they're trying to make sure it is properly legislated so nobody else can do it, thus reducing competition.

GPT can't get retroactively untrained on stolen data.

repeekad
0 replies
6h50m

Google actually can “untrain” AFAIK; my limited understanding is they have good controls over their data and its sources, because they know it could be important in the future. With GPT I'm not sure.

I’m not sure what you mean by “steal” because it’s a relative term now, me reading your book isn’t stealing if I paid for it and it inspires me to write my own novel about a totally new story. And if you posted your book online, as of right now the legal precedent is you didn’t make any claims to it (anyone could read it for free) so that’s fair game to train on, just like the text I’m writing now also has no protections.

Nearly all Reddit history up to a certain date is available for download online now; only after they changed their policies did they start having tighter controls over how their data could be used.

kdnvk
1 replies
9h49m

It’s not done with the intent to infringe copyright.

binkethy
0 replies
9h38m

It would appear that it explicitly IS done with this intent. We are told that an LLM is a living being that merely learns and then creates, yet we are aware that its outputs regurgitate combinations of its inputs.

mpweiher
0 replies
9h24m

Chutzpah. And that the companies doing it are multi-billion dollar companies who can afford the finest legal representation money can buy.

Whether the brazenness with which they are doing this will work out for them is currently playing out in the courts.

bazoom42
1 replies
9h45m

It is a classic geek fallacy to think you can hack the law with logic tricks.

judge2020
5 replies
13h42m

NDAs don’t touch the copyright of your speech / written works you produce after leaving, they just make it breach of contract to distribute those words.

elicksaur
2 replies
13h40m

Following the legal defense of these companies, the employees wouldn’t be distributing any words. They’re distributing a model.

cqqxo4zV46cp
0 replies
9h41m

Please just stop. It’s highly unlikely that any relevant part of any reasonably structured NDA has any material relevance to copyright. Why do developers think that they can just intuit this stuff? This is one step away from being a more trendy “stick the constitution to the back of my car in lieu of a license plate” lunacy.

JumpCrisscross
0 replies
12h52m

They’re disseminating the information. Form isn’t as important as it is for copyright.

romwell
0 replies
13h25m

they just make it breach of contract to distribute those words.

See, they aren't distributing the words, and good luck proving that any specific words went into training the model.

otabdeveloper4
0 replies
13h40m

Technically, no words are being distributed here. (At least according to OpenAI lawyers.)

KoolKat23
2 replies
11h47m

Lol this would be a great performative piece. Although I'm not so sure it'd stand up to scrutiny. OpenAI could probably take them to court on the grounds of disclosure of trade secrets or something like that, and force them to reveal the model's training data, thus potentially revealing its sources.

nextaccountic
1 replies
11h29m

If they did so, they would open up themselves for lawsuits of people unhappy about OpenAI's own training data.

So they probably won't.

KoolKat23
0 replies
10h0m

Good point

visarga
1 replies
12h58m

No need for LLM, anonymous letter does the same thing

throwaway2037
0 replies
5h2m

On first blush, this sounds like a good idea. Thinking deeper, the company is so small that it will be easy to identify the author.

rlt
0 replies
13h58m

This would be hilarious and genius. Touché.

renewiltord
0 replies
12h29m

To be honest, you can just say “I don’t have anything to add on that subject” and people will get the impression. No one ever says that about companies they like so you know when people shut down that something was up.

“What was the company culture like?” “Etc. platitude so on and so forth”

“And I heard the CEO was a total dickbag. Was that your experience working with him?” “I don’t have anything to add on that subject”

Of course, going back and forth like that won't really work with any one person, but across different people you can't be expected to withhold the nice things, and someone could build up a story based on what you decline to comment on.

p0w3n3d
0 replies
10h0m

that's the evilest thing I can imagine - fighting them with their own weapon

otterley
0 replies
13h11m

IAAL (but not your lawyer and this is not legal advice).

That’s not how it works. It doesn’t matter if you write the words yourself or have an agent write them for you. In either case, it’s the communication of the covered information that is proscribed by these kinds of agreements.

jahewson
0 replies
12h13m

Ha ha, but no. For starters, copyright falls under federal law and contracts under state law, so it’s not even possible to make this claim in the relevant court.

cqqxo4zV46cp
0 replies
9h44m

I’m going to break rank from everyone else and explicitly say “not clever”. Developers who think that they know how the legal system works are a dime a dozen. It’s both easy and useless to take some acquired-in-passing, largely incorrect, surface-level understanding of a legal mechanic and “pwned with facts and logic!” in whichever way benefits you.

bboygravity
0 replies
13h43m

Genius. I'm praying for this to happen.

bbarnett
0 replies
9h49m

Copyright != an NDA. Copyright is not an agreement between two entities, but a US federal law, with international obligations both ratified and not.

Copyright has fair use clauses, endless court decisions limiting its use, carve-outs for libraries, additional junk like the DMCA and more slapped on top. It's a patchwork of dozens of treaties and laws, spanning hundreds of years.

For example, you can read a book to a room full of kids, you can use copyrighted materials in comedic skits, you can quote snippets, the list goes on. And again, this is all legislated.

The point? It's complex, and specific usage of copyrighted works infringing or not, can be debatable without intent immediately being malign.

Meanwhile, an NDA covers far, far more than copyright. It may cover discussion and disclosure of everything or anything, including even client lists, trade secrets, work processes, and more. It is signed, and agreed to by both parties involved. Equating "copyright law" to "an NDA" is a non-starter. There's literally zero legal parallel or comparison here.

And as others have mentioned, the intent of the act would be malicious on top of all of this.

I know a lot of people dislike the whole data snag by OpenAI, and have moral or ethical objections to closed models, but thinking anyone would care about this argument if you breach an NDA is a bad idea. No judge would even remotely accept or listen to such chicanery.

NoMoreNicksLeft
0 replies
10h44m

NDAs don't rely on copyright to protect the party who drafted them from disclosure. There might even be an argument to be made that training the LLM on it was disclosure, regardless of whether you release the LLM publicly or not. We all work in tech right? Why do even you people get intellectual property so wrong, every single time?

Always42
0 replies
12h41m

if I slaved away at openai for a year to get some equity, I don't think I would want to be the one to try this strategy

jp57
36 replies
19h24m

The only way I can see this being a valid contract is if the equity grant that they get to keep is a new grant offered the time of signing the exit contract. Any vested equity given as compensation for work could not then be offered again as consideration for signing a new agreement.

Maybe the agreement is "we will accelerate vesting of your unvested equity if you sign this new agreement"? If that's the case then it doesn't sound nearly so coercive to me.

apsec112
24 replies
19h23m

It's not. The earlier tweets explain: the initial agreement says the employee must sign a "general release" or forfeit the equity, and then the general release they are asked to sign includes a lifetime no-criticism clause.

beastman82
8 replies
18h27m

ITT: a bunch of laymen thinking their 2 second proposal will outlawyer the team of lawyers who drafted these.

mminer237
3 replies
17h44m

I am a lawyer. This is not just a general release, and I have no idea how OpenAI's lawyers expect this to be legal.

ethbr1
1 replies
16h2m

Out of curiosity, what are the penalties for putting unenforceable stuff in an employment contract?

Are there any?

sangnoir
0 replies
13h29m

Typically there is no penalty - and contracts explicitly declare that all clauses are severable so that the rest of the contract remains valid even if one of the scare-clauses is found to be invalid. IANAL

listenallyall
0 replies
17h4m

Have you read the actual document or contracts? Opining on stuff you haven't actually read seems premature. Read the contract, then tell us which clause violates which statute, that's useful.

throwaway562if1
1 replies
18h19m

You haven't worked with many contracts, have you? Unenforceable clauses are the norm, most people are willing to follow them rather than risk having to fight them in court.

to11mtm
0 replies
17h26m

Bingo.

I have seen a lot of companies put unenforceable stuff into their employment agreements, separation agreements, etc.

jprete
1 replies
18h8m

Lawyers are 100% capable of knowingly crafting unenforceable agreements.

riwsky
0 replies
17h51m

You don’t need to out-litigate the bear, just the other guy running from it.

Melatonic
5 replies
18h58m

I'm no lawyer but this sounds like something that would not go well for OpenAI if strongly litigated

mrj
3 replies
18h34m

Yeah, courts have generally found that this is "under duress" and not enforceable.

singleshot_
2 replies
18h24m

Under duress in the contractual world is generally interpreted as “you are about to be killed or maimed.” Economic duress is distinct.

to11mtm
1 replies
17h28m

Duress can take other forms, unless we are really trying to differentiate general 'coercion' here.

Perhaps as an example of the blurred line: pre-nup agreements sprung on the day of the wedding will not hold up in a US court with a competent lawyer challenging them.

You can try to call it 'economic' duress but any non-sociopath sees there are other factors at play.

singleshot_
0 replies
2h17m

That’s a really good point. Was this a prenuptial agreement? If it wasn’t, my take is section 174 would apply and we would be talking about physical compulsion — and not “it’s a preferable economic situation to sign.”

Not a sociopath, just know the law.

fuzztester
0 replies
15h8m

I'm no lawyer

Have any (startup or other) lawyers chimed in here?

DesiLurker
3 replies
17h55m

somebody explained to me early on that you cannot have a contract to have a contract. Either the initial agreement must state this condition clearly, or they are signing another contract at employment termination which brings in these new terms. IDK why anyone would sign that at termination unless they dangle additional equity. I don't think this BS they are trying to pull would be enforceable, at least in California, though IANAL obviously.

All this said, in the bigger picture I can understand not divulging trade secrets, but not being allowed to discuss company culture around AI safety essentially tells me that all the Sama talk about 'the good of humanity' is total BS. At the end of the day it's about market share and the bottom line.

hughesjj
2 replies
17h16m

Canceling my openai subscription as we speak, this is too much. I don't care how good it is relative to other offerings. Not worth it.

lanstin
0 replies
16h55m

Claude is better anyways (at least for math classes).

DesiLurker
0 replies
11h2m

same I cancelled mine months ago. Claude is much better for coding anyway.

w10-1
1 replies
19h5m

But a general release is not a non-criticism clause.

They're not required to sign anything other than a general release of liability when they leave to preserve their rights. They don't have to sign a non-disparagement clause.

But they'd need a very good lawyer to be confident at that time.

User23
0 replies
18h50m

And they won’t have that equity available to borrow against to pay for that lawyer either.

ethbr1
0 replies
19h19m

IOW, this is burying the illegal part in a tangential document, in hopes of avoiding legal scrutiny and/or judgement.

They're really lending employees equity, subject to the company's later feelings as to whether the employee should be allowed to keep or sell it.

bradleyjg
0 replies
18h7m

The earlier tweets explain …

What a horrific medium of communication. Why anyone uses it is beyond me.

Animats
0 replies
18h35m

That's when you need a lawyer.

In general, an agreement to agree is not an agreement. A requirement for a "general release" to be signed at some time in the future is iffy. And that's before labor law issues.

Someone with a copy of that contract should run it through OpenAI's contract analyzer.

DebtDeflation
10 replies
17h50m

My initial reaction was "Hold up - your RSUs vest, you sell the shares and pocket the cash, you quit OpenAI, a few years later you disparage them, and then when? They somehow try and claw back the equity? How? At what value? There's no way this can work." Then I remembered that OpenAI "equity" doesn't take the form of an RSU or option or anything else that can be converted into an actual share ever. What they call "equity" is a "Profit Participation Unit (PPU)" that once vested entitles you to a share of their profits. They don't share the equivalent of a Cap Table with employees, so there's no way to tell what sort of ownership interest a PPU represents. And of course, it's unlikely OpenAI will ever turn a profit (which if they did would be capped anyway). So this is all just play money anyway.

whimsicalism
4 replies
17h2m

This is wrong on multiple levels. (to be clear I don't work at OAI)

They don't share the equivalent of a Cap Table with employees, so there's no way to tell what sort of ownership interest a PPU represents

It is known - it represents 0 ownership share. They do not want to sell any ownership because their deal with MS gives MS 49% ownership and they don't want MS to be able to buy up additional stake and control the company.

And of course, it's unlikely OpenAI will ever turn a profit (which if they did would be capped anyway). So this is all just play money anyway.

Putting aside your unreasonable confidence that OAI will never be profitable, the PPUs are tender-offered so they can be sold to institutional investors up to a very high limit; OAI's current tender offer round values them at ~$80B, IIRC.

almost_usual
1 replies
15h8m

Note at offer time candidates do not know how many PPUs they will be receiving or how many exist in total. This is important because it’s not clear to candidates if they are receiving 1% or 0.001% of profits for instance. Even when giving options, some startups are often unclear or simply do not share the total number of outstanding shares. That said, this is generally considered bad practice and unfavorable for employees. Additionally, tender offers are not guaranteed to happen and the cadence may also not be known.

PPUs also are restricted by a 2-year lock, meaning that if there’s a liquidation event, a new hire can’t sell their units within their first 2 years. Another key difference is that the growth is currently capped at 10x. Similar to their overall company structure, the PPUs are capped at a growth of 10 times the original value. So in the offer example above, the candidate received $2M worth of PPUs, which means that their capped amount they could sell them for would be $20M

The most recent liquidation event we’re aware of happened during a tender offer earlier this year. It was during this event that some early employees were able to sell their profit participation units. It’s difficult to know how often these events happen and who is allowed to sell, though, as it’s on company discretion.

This NDA wrinkle is another negative. Honestly I think the entire OpenAI compensation model is smoke and mirrors which is normal for startups and obviously inferior to RSUs.

https://www.levels.fyi/blog/openai-compensation.html

whimsicalism
0 replies
14h34m

Additionally, tender offers are not guaranteed to happen and the cadence may also not be known.

PPUs also are restricted by a 2-year lock, meaning that if there’s a liquidation event, a new hire can’t sell their units within their first 2 years.

i know for a fact that these bits are inaccurate, but i don't want to go into the details.

the profit share is not known but you are told what the PPUs were valued at the most recent tender offer

DebtDeflation
1 replies
46m

You're not saying anything that in any way contradicts my original post. Here, I'll simplify it - OpenAI's PPUs are not in any sense of the word "equity" in OpenAI, they are simply a subordinated claim to an unknown % of a hypothetical future profit.

whimsicalism
0 replies
31m

there's no way to tell what sort of ownership interest a PPU represents

Wrong. We know - it is 0, this directly contradicts your claim.

this is all just play money anyway.

Again, wrong - because it is sellable so employees can take home millions. Play money in the startup world means illiquid options that can't be tender offered.

You're making it sound like this is a terrible deal for employees but I personally know people who are able to sell $1m+ in OAI PPUs to institutional investors as part of the tender offer.

ec109685
3 replies
16h22m

Their profit is capped at $1T, which is an amount no company has ever achieved.

arthurcolle
2 replies
15h22m

No company? Are you sure? Aramco?

saalweachter
1 replies
15h13m

Apple has spent $650 billion on stock buybacks in the last decade.

Granted, that might be most of the profit they have made, but still, they're probably at at least $0.7T so far. I bet they'll break $1T eventually.

cdchn
0 replies
17h39m

Wow. Smart for them. Former employees are beholden to the company for an actual perpetuity. Sounds like a raw deal, but when the potential gains are that big, I guess you'll agree to pretty much anything.

modeless
27 replies
16h49m

A lot of the brouhaha about OpenAI is silly, I think. But this is gross. Forcing employees to sign a perpetual non-disparagement agreement under threat of clawing back the large majority of their already earned compensation should not be legal. Honestly it probably isn't, but it'll take someone brave enough to sue to find out.

twobitshifter
13 replies
15h17m

If I have equity in a company and I care about its value, I’m not going to say anything to tank its value. If I sell my equity later on, and then disparage the company, what can OpenAI hope to do to me?

modeless
5 replies
15h9m

They can sue you into bankruptcy, obviously.

Also, what if you can't sell? Selling is at their discretion. They can prevent you from selling some of your so-called "equity" to keep you on their leash as long as they want.

bambax
1 replies
10h28m

They can prevent you from selling some of your so-called "equity"

But how much do you need? Sell half, forgo the rest, and you'll be fine.

modeless
0 replies
2h31m

Not a lot of people out there willing to drop half of their net worth on the floor on principle. And then sign up for years of high profile lawsuits and character assassination.

LtWorf
1 replies
12h5m

If you can't sell, it's worthless anyway.

ajross
0 replies
4h35m

Liquidity and value are different things. If someone offered you 1% of OpenAI, would you take it? Duh.

But it's a private venture and not a public company, and you "can't sell" that holding on a market, only via complicated schemes that have to be authorized by the board. But you'd take it anyway in the expectation that it would be liquid someday. The employees are in the same position.

twobitshifter
0 replies
4h4m

That’s a good point. If you can get the equity liquid, I don’t think the lawsuit would go far or end up in bankruptcy. In this case, the truth of what happened at OpenAI would be revealed even more in a trial, which is not something they’d like, and this type of contract with lifetime provisions isn’t likely to be enforced by a court IMO, especially when the information revealed is in the public’s interest and truthful.

cdchn
4 replies
15h9m

From what other people have commented, you don't get equity. You get a profit sharing plan. You're chained to them for life. There is no divestiture.

pizzafeelsright
1 replies
14h13m

Well, then, people are selling their souls.

I got laid off by a different company and can't disparage them. I can tell the truth. I'm not signing anything that requires me to lie.

cdchn
0 replies
13h46m

Just playing the devil's advocate here, but what if you're not lying... what if you're just keeping your mouth shut, for millions, maybe tens of millions?

Wish I could say I would have been that strong. Many would not disparage a company they hold equity in, unless they went full baby genocide.

nsoonhui
1 replies
10h54m

Here's something I just don't understand. I have a profit sharing plan *for life*, and yet I want to publicly trash it so that the benefits I can derive from it are reduced, all in the name of some form of ... what, social service?

ivalm
0 replies
6h7m

Yeah, people do things that are not financially optimal for the sake of ethics. That’s a key part of living in a society. That’s part of why we don’t just murder each other.

citizen_friend
0 replies
13h59m

Clout > money

chefandy
0 replies
15h10m

If I sell my equity later on, and then disparage the company, what can OpenAI hope to do to me?

Well, that would obviously depend on the terms of the contract, but I would be astonished if the people who wrote it didn't consider that possibility. It's pretty trivial to calculate the monetary value of equity, and if they feel entitled to that equity, they surely feel entitled to its cash equivalent.

ecjhdnc2025
9 replies
16h37m

It shouldn't be legal and maybe it isn't, but all schemes like this are, when you get down to it, ultimately about suppressing potential or actual evidence of serious, possibly criminal misconduct, so I don't think they are going to let the illegality get them all upset while they are having fun.

sneak
8 replies
15h55m

What crimes do you think have occurred here?

ecjhdnc2025
3 replies
15h53m

An answer in the form of a question: why don't OpenAI executives want to talk about whether Sora was trained on Youtube content?

(I should reiterate that I actually wrote "serious, possibly criminal")

KeplerBoy
2 replies
9h43m

Because of course it was trained on Yt data, but they gain nothing from admitting that openly.

ezconnect
1 replies
5h3m

They will attract a lot of lawsuits if they admit they trained on the YouTube dataset, because not everyone gave consent.

MOARDONGZPLZ
0 replies
21m

Consent isn’t legally required. An admission, however, would upset a lot of extremely online people. Seems lose-lose.

mindcandy
1 replies
14h16m

I’m no lawyer. But, this sure smells like some form of fraud. Or, at least breach of contract.

Employees and employer enter into an agreement: Work here for X term and you get Y options with Z terms attached. OK.

But, then later pulling Darth Vader… “Now that the deal is completing, I am changing the deal. Consent and it’s bad for you this way. Don’t consent and it’s bad that way. Either way, you held up your end of our agreement and I’m not.”

edanm
0 replies
5h54m

I have no inside info on this, but I doubt this is what is happening. They could just say no and not sign a new contract.

I assume this was something agreed to before they started working.

tcmart14
0 replies
11h31m

They don't say that criminal activity has occurred in this instance, just that this kind of behavior could be used to cover it up in situations where that is the case. Here's an example that could potentially be true. Right now, with everything going on with Boeing, it sure seems plausible they are covering something(s) up that may be criminal or incredibly damaging. Like maybe falsified inspections and maintenance records? A person at Boeing who gets equity as part of compensation decides to leave, and at some point in the future they decide to speak out at a congressional investigation about what they know is going on. Should that person be sued into oblivion by Boeing? Or should Boeing, assuming the situation above is true, just have to eat the cost/consequences for being shitty?

listenallyall
1 replies
14h49m

It's very possible someone has already threatened to sue, and either had their equity restored or received a large payout. But they probably had to sign an NDA about that in order to receive it. End result, every future person thinks they are the first to challenge the legality of the contract, and few actually try.

monktastic1
0 replies
1h43m

Man, sounds like NDAs all the way down.

insane_dreamer
0 replies
3h50m

Lawsuits are tedious, expensive, and drawn-out affairs; many people would rather just move on than initiate one.

a_wild_dandan
25 replies
19h28m

Is this a legally enforceable suppression of free speech? If so, are there ways to be open about OpenAI, without triggering punitive action?

antiframe
12 replies
19h23m

OpenAI is not the government. Yet.

impossiblefork
6 replies
18h59m

Free speech is a much more general notion than anything having to do with governments.

The first amendment is a US free speech protection, but it's not prototypical.

You can also find this in some other free speech protections, for example that in the UDHR

Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.

doesn't refer to states at all.

kfrzcode
3 replies
18h50m

Free speech is a God-given right. It is innate and given to you and everyone at birth, after which it can only be suppressed but never revoked.

smabie
0 replies
18h9m

Did God tell you this? People who talk about innate rights are just making things up

hollerith
0 replies
18h11m

I know it is popular, but I distrust "natural rights" rhetoric like this.

CamperBob2
0 replies
18h12m

Good luck serving God with a subpoena when you have to defend yourself in court. He's really good at dodging process servers.

lupire
1 replies
18h55m

UDHR is not law so it's irrelevant to a question of law.

impossiblefork
0 replies
10h0m

Originally, the comment to which that comment responded said something about free speech rather than anything about legality, and it was in that context that I responded, so the comment to which I responded must have also been written in that context.

janalsncm
1 replies
18h49m

A lot of people forget that although 1A means the government can’t put you in prison for things, there are a lot of pretty unpleasant consequences from private entities. As far as I know, it wouldn’t be illegal for a dentist to deny care to someone who criticized them, for example.

Marsymars
0 replies
17h2m

Right, and that's why larger companies need regulation around those consequences. If a dentist doesn't want to treat you because you criticized them, that's fine, but if State Farm doesn't want to insure your dentistry because you criticized them, regulators shouldn't allow that.

a_wild_dandan
1 replies
19h20m

What do I do with this information?

TaylorAlexander
0 replies
18h58m

I think we need to face the fact that these companies aren’t trustworthy in upholding their own stated morals. We need to consider whether streaming video from our phone to a complex AI system that can interpret everything it sees might have longer term privacy implications. When you think about it, a cloud AI system is an incredible surveillance machine. You want to talk to it about important questions in your life, and it would also be capable of dragnet surveillance based on complex concepts like “show me all the people organizing protests” etc.

Consider for example that when Amazon bought the Ring security camera system, it had a “god mode” that allowed executives and a team in Ukraine unlimited access to all camera data. It wasn’t just a consumer product for home users, it was a mass surveillance product for the business owners:

https://theintercept.com/2019/01/10/amazon-ring-security-cam...

The EFF has more information on other privacy issues with that system:

https://www.eff.org/deeplinks/2019/08/amazons-ring-perfect-s...

These big companies and their executives want power. Withholding huge financial gain from ex employees to maintain their silence is one way of retaining that power.

zeroonetwothree
0 replies
13h25m

If the courts enforce the agreement then that is state action.

So I think an argument can be made that NDAs and similar agreements should not be enforceable by courts.

See Shelley v. Kraemer

YurgenJurgensen
5 replies
19h8m

I believe a better solution to this would be to spread the following sentiment: "Since it's already illegal to tell disparaging lies, the mere existence of such a clause implies some disparaging truths of which the party is aware." Always assuming the worst around hidden information provides a strong incentive to be transparent.

lupire
1 replies
18h56m

Humans respond better to concrete details than abstractions.

It's a lot of mental work to rally the emotion of revulsion over the evil they might be doing that is kept secret.

hi-v-rocknroll
0 replies
18h25m

This is true.

I was once fired, ghosted style, for merely being in the same meeting room as a racist corporate ass-clown muting the conference call to make Asian slights and monkey gesticulations. There was no lawsuit or payday because "how would I ever work again?" was the Hobson's choice between let it go and a moral crusade without a way to pay rent.

If instead I were upset that "not enough N are in tech," there isn't a specific incident or person to blame because it'd be a multifaceted situation.

jiggawatts
0 replies
14h42m

This is an important mode of thinking in many adversarial or competitive contexts.

Cryptography is a prime example. Any time any company is the tiniest bit cagey or obfuscates any aspect, I default to assuming that they’re either selling snake oil or have installed NSA back doors. I’ll claim this openly, as a fact, until proven otherwise.

d0mine
0 replies
13h21m

I hope the truth being forbidden is about something banal, like "fake it until you make it" in some of the OpenAI demos. The technology looks like magic but is plausible to implement in a few months/years.

Worse if it is related to training future super intelligence to kill people. Killer drones are possible even with today's technology without AGI.

berniedurfee
0 replies
15h39m

That’s a really good point. A variation of the Streisand Effect.

Makes you wonder what misdeeds they’re trying so hard to hide.

exe34
2 replies
19h23m

you could praise them for the opposite of what you mean to say, and include a copy of the clause in between each paragraph.

lucubratory
1 replies
18h44m

Acknowledging the NDA or any part of it is in violation of the NDA.

exe34
0 replies
9h46m

there is no NDA in Ba Sing Se!

to11mtm
1 replies
17h22m

Well, for starters everyone can start memes...

After all, at this point, OpenAI:

- Is not open with models

- Is not open with plans

- Does not let former employees be open.

It sure does give us a glimpse into the Future of how Open AI will be!

stoperaticless
0 replies
8h53m

So they are kind of open about their strategy.. (on high level at least)

hi-v-rocknroll
0 replies
18h28m

Hush money payments and NDAs aren't illegal as Trump discovered, but perhaps lying about or concealing them in certain contexts is.

Also, when secrets or truthful disparaging information is leaked anonymously without a metadata trail, I'm thinking there's probably little or no recourse.

asperous
15 replies
19h33m

Not a lawyer, but those contracts aren't legal. You need something called "consideration", i.e., something new of value, for a contract to be legal. They can't just take away something of value that was already agreed upon.

However they could add this to new employee contracts.

ethbr1
8 replies
19h23m

"Legal" seems like a fuzzy line to OpenAI's leadership.

Pushing unenforceable scare-copy to get employees to self-censor sounds on-brand.

tptacek
6 replies
19h2m

I agree with Piper's point that these contracts aren't common in tech, but they're hardly unheard of. In 20 years of consulting work I've seen dozens of them. They're not uncommon. This doesn't look uniquely hostile or amoral for OpenAI, just garden-variety.

a_wild_dandan
3 replies
18h49m

Well, an AI charity -- so founded on openness that they're called OpenAI -- took millions in donations, everyone's copyright data...only to become effectively for-profit, close down their AI, and inflict a lifetime gag on their employees. In that context, it feels rather amoral.

tptacek
2 replies
18h26m

This to me is like the "don't be evil" thing. I didn't take it seriously to begin with, I don't think reasonable people should have taken it seriously, and so it's not persuasive or really all that interesting to argue about.

People are different! You can think otherwise.

thumrusn72
0 replies
12h14m

Therein lies the issue. The second you throw idealistic terms like “don’t be evil” and __OPEN__ ai around you should be expected to deliver.

But how is that even possible when corporations are typically run by ghouls who enjoy relativistic morals when it suits them. And are beholden to profits, not ethics.

int_19h
0 replies
14h57m

I think we do need to start taking such things seriously, and start holding companies accountable using all available avenues (including legal, and legislative if the laws don't have enough leverage as it is) when they act contrary to their publicly stated commitments.

lupire
0 replies
18h58m

as an exit contract? Not part of a severance agreement?

Bloomberg famously used this as an employment contract, and it was a campaign scandal for Mike.

comp_throw7
0 replies
15h59m

Contracts like this seem extremely unusual as a condition for _retaining already vested equity (or equity-like instruments)_, rather than as a condition for receiving additional severance. And how common are non-disclosure clauses that cover the non-disparagement clauses?

In fact both of those seem quite bad, both by regular industry standards, and even moreso as applied to OpenAI's specific situation.

dylan604
0 replies
16h41m

This sounds just like the non-compete issue that the FTC just invalidated. I can see if the current FTC leadership is allowed to continue working after 2025/01/20 that these things might be moved against as well. If new admin is brought in, they might all get reversed. Just something to consider going into your particular polling place

lxgr
1 replies
17h37m

“You get shares in our company in exchange for employment and eternal never-talking-bad-about-us”?

Doesn’t mean that that’s legal, of course, but I’d doubt that the legality would hinge on a lack of consideration.

hannasanarion
0 replies
14h53m

You can't add a contingency to a payment retroactively. It sounds like these are exit agreements, not employment agreements.

If it was "we'll give you shares/cash if you don't say anything bad about us", that's normal, kind of standard fare for exit agreements, it's why severance packages exist.

But if it is "we'll take away the shares that you already earned as part of your regular employment compensation unless you agree to not say anything bad about us", that's extortion.

singleshot_
0 replies
18h29m

They give you a general release of liability, as noted elsewhere in the thread.

koolba
0 replies
19h25m

Throw in a preamble of “For $1 and other consideration…”

danielmarkbruce
0 replies
14h14m

Have you seen the contracts?

blackeyeblitzar
0 replies
18h38m

It doesn’t matter if they are not legal. Employees do not have resources to fight expensive legal battles and fear retaliation in other ways. Like not being able to find future jobs. And anyone with family plain won’t have the time.

jimnotgym
12 replies
12h16m

the company will succeed at developing AI systems that make most human labor obsolete.

Hmmmn. Most of the humans where I work do things physically with their hands. I don't see what AI will achieve in their area.

Can AI paint the walls in my house, fix the boiler and swap out the rotten windows? If so I think a subscription to chat GPT is very reasonably priced!

windowsrookie
4 replies
11h44m

Obviously if your job requires blue-collar style manual labor, no it's likely not going to be replaced anytime soon.

But if your job is mostly sitting at a computer, I would be a bit worried.

eastbound
2 replies
11h21m

Given the low quality of relationships between customers and blue-collar trades (ever tried to get a job done by a plumber or a painter?), if you don’t know how to do their job you are practically assured they will do something behind your back that will fall apart in 2 years, for the price of 2x your daily rate as a software engineer (when they don’t straight up send an undocumented immigrant, which makes you complicit in an unlawful employment scheme if it is discovered), well…

I’d say there is a lot of money available in replacing blue-collar jobs with AI-powered robots. Even if they do crap work, it’s still better quality than contractors.

jimnotgym
1 replies
4h43m

Shoddy contractors can then give you a shoddy service with a shoddy robot.

Quality contractors will still be around, but everyone will try and beat them down on price because they care about that more than quality. The good contractors won't be able to make any money because of this and will leave the trade....just like now, just like I did

eastbound
0 replies
1h12m

The argument “pay more to get better quality” would be valid if, indeed, paying more meant better quality.

Unfortunately, it’s something I’ve often done, whether as a 30% raise for my employees, or a tip to a contractor I knew I’d hire again, or picking the most expensive one.

EACH time the work was much worse after the raise. The sad truth of humans is that you gotta keep them begging to extract their best work, and no true reward is possible.

drooby
0 replies
4h50m

Once AGI is solved, how long does it take for AGI (or humans steering AGI) to create a robot that meets or exceeds the abilities of the human body?

LtWorf
2 replies
11h11m

It has difficulties with middle school mathematical problems.

reducesuffering
1 replies
1h49m

1.5 year old GPT-4 is getting 4/5 on an AP Calculus test, better than 95% of humans. Want to guess how much better at all educational tests GPT-5 is going to be than people?

LtWorf
0 replies
1h13m

I think the kind of problems we do in Italy aren't just "solve this", they are more "understand this text, then figure out what you have to solve, then solve it".

renonce
1 replies
12h3m

I don’t know but once vision AI reacts to traffic conditions accurately within 10ms it’s probably a matter of time before they take over your steering wheel. For other jobs you’ll need to wait for robotics.

LtWorf
0 replies
11h11m

It has to react "correctly"

jerrygenser
0 replies
6h19m

Robots that are powered by AI might be able to.

cyberpunk
0 replies
12h1m

4o groks realtime video; how far away are we from letting it control robots bruv?

toomuchtodo
11 replies
19h23m

I would strongly encourage anyone faced with this ask by OpenAI to file a complaint with the NLRB as well as speak with an employment attorney familiar with California statute.

worik
10 replies
19h1m

I would strongly encourage anyone faced with this ask by OpenAI to file a complaint with the NLRB as well as speak with an employment attorney familiar with California statute.

Very, very bad advice.

Unless you first have the backing of some very big money, do not try to fight evil of this kind and size.

Suck it up, take the money, look after yourself and your family.

Fighting people like these is a recipe for misery.

hehdhdjehehegwv
4 replies
18h56m

Talking to a lawyer is **never** bad advice.

Especially in CA where companies will make you THINK they have power which they don’t.

reaperman
2 replies
17h32m

I’d be more afraid of their less-than-above-board power than their litigation power. People with $10-100 billion who are highly connected to every major tech company and many shadowy companies we’ve never heard of can figure out a lot of my secrets and make life miserable enough for me that I don’t have the ability/energy to follow through with legal proceedings, even if I don’t attribute the walls collapsing around me to my legal opponent.

hehdhdjehehegwv
1 replies
17h26m

And that’s precisely the issue you ask a lawyer about.

reaperman
0 replies
13h46m

What could a lawyer possibly do about something that isn’t traceable? Other than warn me it’s a possibility?

listenallyall
0 replies
16h55m

I think never is inaccurate here. First, there are a lot of simply bad lawyers who will give you bad advice. Secondly, a lot of lawyers who either don't actually specialize in the legal field your case demands, or who have never actually tried any cases and have no idea how something might go down in a court with a jury. Third (the most predatory), a lot of lawyers actually see the client (not the opposing party) as the money fountain. Charging huge fees for their "consultation," "legal research," "team of experts," etc, and now the client is quickly tens-of-thousands in the hole without even an actual case being filed.

Talking to good, honest lawyers is a good idea. Unfortunately most people don't have access to good honest lawyers, or don't know how to distinguish them from crooks with law degrees.

xyst
0 replies
18h4m

The FUD is strong with this one

toomuchtodo
0 replies
18h56m

Over the last 10 years or so, I have filed a number of high-profile unfair labor practice charges against coercive statements, with many of those statements being made on Twitter. I file those charges even though I am merely a bystander, not an employee or an aggrieved party.

Every time I do this, some individuals ask how I am able to file charges when I don’t have “standing” because I am not the one who is being injured by the coercive statements.

The short answer is that the National Labor Relations Act (NLRA) has no standing requirement.

Employees reasonably fear retaliation from their boss if they file charges. So we want to make it possible for people who cannot be retaliated against to do it instead. [1]

I believe the Vox piece shared in this thread [2] is enough for anyone to hit submit on an NLRB web form and get the ball rolling. Snapshot in the Wayback Machine (all the in scope tweets archived in archive.today|is|ph), just in case.

[1] https://mattbruenig.com/2024/01/26/why-there-is-no-standing-...

[2] https://news.ycombinator.com/item?id=40394955

saiojd
0 replies
5h22m

Plenty of people are already miserable. Might as well try if you are, no?

kfrzcode
0 replies
18h52m

alternative take: get Elon's attention on X and spin it as employer-enforced censorship and get his legal team to take on the battle

RaoulP
0 replies
13h32m

I don't see why this comment needed a flag or so many uncharitable replies (though you could have expressed yourself more charitably too).

I understand your sentiment, but I think a lot of idealistic people will disagree - it's nice to think that a person should stand up for justice, no matter what.

In reality, I wonder how many people attempt to do this and end up regretting it, because of what you mentioned.

fragmede
10 replies
18h1m

It's time to find a lawyer. I'm not one, but there's an intersection with California SB 331, also known as “The Silenced No More Act”. While it is focused more on sexual harassment, it's not limited to that, and these contracts may run afoul of it.

https://silencednomore.org/the-silenced-no-more-act

staticautomatic
3 replies
12h52m

No, it’s either a violation of the NLRB rule against severance agreements conditioned on non-disparagement, or it’s a violation of the common law rule requiring consideration for amendments to service contracts.

solidasparagus
2 replies
12h13m

NLRB rule against severance agreements conditioned on non-disparagement

Wait that's a thing? Can you give more detail about this/what to look into to learn more?

wahnfrieden
0 replies
3h19m

Tech execs are lobbying to dissolve the NLRB now, btw.

They have a lot of supporters here (workers supporting their rulers' interests).

nickff
3 replies
16h1m

This doesn’t seem to fall inside the scope of that act, according to the link you cited:

” The Silenced No More Act bans confidentiality provisions in settlement agreements relating to the disclosure of underlying factual information relating to any type of harassment, discrimination or retaliation at work”

berniedurfee
2 replies
15h50m

Sounds like retaliation to me.

Filligree
1 replies
15h27m

It's not retaliation at work if you're no longer working for them.

sudosysgen
0 replies
14h41m

The retaliation would be for the reaction to the board coup, no?

j45
1 replies
17h41m

Definitely an interesting way to expand existing legislation vs having a new piece of legislation altogether.

eru
0 replies
15h56m

In practice, that's how a lot of laws are made. ('Laws' in the sense of rules that are actually enforced, not what's written down.)

Al-Khwarizmi
10 replies
11h1m

"It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it."

I find it hard to understand that in a country that tends to take freedom of expression so seriously (and I say this unironically, American democracy may have flaws but that is definitely a strength) it can be legal to silence someone for the rest of their life.

borski
2 replies
10h4m

It’s all about freedom from government tyranny and censorship. Freedom from corporate tyranny is another matter entirely, and generally relies on individuals being careful about what they agree to.

sleight42
0 replies
2h39m

And yet there was such a to-do about Twitter "censorship" that Elon made it his mission to bring freedumb to Twitter.

Though I suppose this is another corporate (really, plutocratic) tyranny.

bamboozled
0 replies
9h32m

America values money just as much as it values freedom. If there is any chance the money collection activities will be disturbed, then heads will roll, violently.

See the assassination attempts on President Jackson.

DaSHacka
2 replies
7h5m

As others have mentioned, it's likely many parts of this NDA are non-enforceable.

It's quite common for companies to put tons of extremely restrictive terms in an NDA that they can't actually legally enforce, to scare potential future ex-employees off from creating a problem.

fastball
1 replies
3h35m

I wouldn't say that is "quite common". If you throw a bunch of unenforceable clauses into an NDA/non-compete/whatever, that increases the likelihood of the whole thing being thrown out, which is not a can of worms most corporations want to open. So it is actually toeing a delicate balance most of the time, not a "let's throw everything we can into this legal agreement and see what sticks".

tcbawo
0 replies
1h4m

If you throw a bunch of unenforceable clauses into an NDA/non-compete/whatever, that increases the likelihood of the whole thing being thrown out

I’m not sure that this is true. Any employment contract will have a partial invalidity/severability clause which will preserve the contract if individual clauses are unenforceable.

sundalia
1 replies
3h19m

How is it serious if money is the motor of freedom of speech? The suing culture in the US ensures freedom of speech up until you bother someone with money.

sleight42
0 replies
2h37m

Change that to "bother someone with more money than you."

Essentially your point.

In the US, the wealthiest have most of the freedom. The rest of us, who can be sued/fired/blackballed, are, by degrees, merely serfs.

ryanmcgarvey
0 replies
4h35m

In America you're free to sign or not sign terrible contracts in exchange for life altering amounts of money.

SXX
0 replies
10h21m

This is not much worse than "forced arbitration". In the US you can literally lose your rights by clicking on an "Agree" button.

0cf8612b2e1e
9 replies
19h31m

Why have other companies not done the same? This seems legally tenuous to only now be attempted. Will we see burger flippers prevented from discussing the rat infestation at their previous workplace?

(Don’t have X) - is there a timeline? Can I curse out the company on my deathbed, or would their lawyers have the legal right to try and clawback the equity from the estate?

exe34
4 replies
19h24m

i worked at McDonald's in the mid-late 00s, I'm pretty sure there was a clause about never saying anything negative about them. i think they were a great employer!

wongarsu
3 replies
19h6m

Sorry, someone at corporate has interpreted this statement as criticism. Please give back all equity, or an amount equivalent to its current value.

dylan604
1 replies
16h45m

Like a fast food employee would have equity in the company. Please, let's at least be sensible in our internet ranting.

jen20
0 replies
16h25m

What about a franchisee?

exe34
0 replies
9h45m

i got f-all equity, I was flipping burgers for minimum wage.

romanovcode
0 replies
10h44m

ROFL how is this even legal?

johnnyanmac
0 replies
18h55m

For the burger metaphor, you need to have leverage over the employee to make them not speak. No one at Burger King is getting severance when they are kicked out, let alone equity.

As for other companies that can pay: I can only assume that the cost to bribe skilled workers isn't worth the perceived risk and cost of lawsuits from the downfall (which they may or may not be able to settle). Generative AI is still very young and under a lot of scrutiny on all fronts, so the risk of a whistle blower at this stage may shape the entire future of the industry at large.

dylan604
0 replies
16h46m

Other companies have done the same. I worked at a company that is 0% related to the tech industry. I was laid off/let go/dismissed/sacked where they offered me a "severance" on the condition I sign a release with a non-disparaging clause. I didn't give enough shits about the company to waste my time/energy commenting about them. It was just an entry on a resume where I happened to work with some really neat, talented, and cool/interesting coworkers. I had the luxury of nobody else giving a damn about how/why I left. I can only imagine these people getting hounded by Real Housewives level gossip/bullshit.

benreesman
6 replies
11h58m

This has just been crazy both to watch and in some small ways interact with up close (I’ve had some very productive and some regrettably heated private discussions advising former colleagues and people I care about to GTFO before the shit really hits the rotary air impeller, and this is going to get so much worse).

This thread is full of comments making statements around this looking like some level of criminal enterprise (ranging from “no way that document holds up” to “everyone knows Sam is a crook”).

The level of stuff ranging from vitriol to overwhelming if maybe circumstantial (but conclusive to my personal satisfaction) evidence of direct reprisal has just been surreal, but it’s surreal in a different way to see people talking about this like it was never even controversial to be skeptical/critical/hostile to this thing.

I’ve been saying that this looks like the next Enron, minimum, for easily five years, arguably double that.

Is this the last straw where I stop getting messed around over this?

I know better than to expect a ticker tape parade for having both called this and having the guts to stand up to these folks, but I do hold out a little hope for even a grudging acknowledgment.

0xDEAFBEAD
3 replies
11h4m

There's another comment saying something sort of similar elsewhere in this thread: https://news.ycombinator.com/item?id=40396366

What made you think it was the next Enron five years ago?

I appreciate you having the guts to stand up to them.

benreesman
2 replies
10h36m

First, thank you for probably being the first person to recognize in print that it wasn’t easy to stand up to these folks in public, plenty have said things like “you’re fighting the good fight” in private, but I think you’re the first person to in any sense second the motion in my personal case, so big ups on having the guts to say it too.

I’ve never been a YC-funded founder myself, but I’ve had multiple roommates who were, and a few girlfriends who were on the bubble of like, founder and early employee, and I’ve just generally been swimming in that pool to one degree or another for coming up on 20 years (I always forget my join date but it’s on the order of like, 17 years or something).

So when a few dozen people you trust tell you the same thing, you tend to buy it even if you’re not quite ready to print the worst hearsay (and I’ve heard things about Altman that I believe but still wouldn’t print without proof, dark shit).

The litany of scandals mounted (Green Dot, zero-rated pre-IPO portfolio stock with, like, his brother involved, Socialcam, the list just goes on), and at some point real journalists started doing pieces (New Yorker, etc.).

And while some of my friends and former colleagues (well maybe former friends now) who joined are both eminently qualified and as ethical as this business lets anyone be, there was a skew there too, it skewed “opportunist, fails up”.

So it’s a growing preponderance of evidence, starting in about 2009 and being just “published by credible journalists” starting about five years later; at some point I’m like “if even 5% of this is even a little true, this is beyond the pale”.

It’s been a gradual thing, and people giving the benefit of the doubt up until the November stuff are maybe just really charitable, at this point it’s like, only a jury can take the next steps trivially indicated.

brap
1 replies
5h10m

Don’t forget WorldCoin!

benreesman
0 replies
4h43m

Yeah, I was trying to stay on topic but flagrant violations of the Universal Declaration of Human Rights are really Lawrence Summers’s speciality.

I’m pretty embarrassed to have former colleagues who openly defend shit like this.

danielbln
1 replies
8h4m

OpenAI was incorporated 9 years ago, but you easily saw that it's the next Enron 10 years ago?

benreesman
0 replies
6h28m

I said easily five, not easily ten. I was alluding to it in embryo with the comment that it’s likely been longer.

If you meant that remark/objection in good faith then thank you for the opportunity to clarify.

If not, then thank you for hanging a concrete example of the kind of shit I’m alluding to (though at the extremely mild end of the range) directly off the claim.

ryandrake
5 replies
19h18m

Non-disparagement clauses seem so petty and pathetic. Really? Your corporation is so fragile and thin-skinned that it can't even withstand someone saying mean words? What's next? Forbidding ex-employees from sticking their tongues out at you and saying "nyaa nyaa nyaa"?

ecjhdnc2025
1 replies
15h56m

This isn't about pettiness or thin skin. And it's not about mean words. It's about potential valid, corroborated criticism of misconduct.

They can totally deal with appearing petty and thin-skinned.

parpfish
0 replies
1h42m

Wouldn't various whistleblower protections apply if you were reporting illegal activities?

xyst
0 replies
17h57m

The company is literally a house of cards at this point. There is probably so much vulture capitalist and angel investor money tied up in this company that even a disparaging rant could bring the whole company crashing down.

It’s yet another sign that the AI bubble will soon burst. The laughable release of “GPT-4o” was just a small red flag.

Got to keep the soldiers in check while the bean counters prep the books for an IPO and eventual early investor exit.

Almost smells like a SoftBank-esque failure in the near future.

w10-1
0 replies
19h2m

Modern AI companies depend entirely on goodwill and being trusted by their customers.

So yes, they're that fragile.

johnnyanmac
0 replies
18h48m

Legally, yes. Those mean words can cost them millions in lawsuits, and billions if judge rulings restrict how they can implement and monetize AI. Why do you think Boeing's "coincidental" whistleblower deaths have happened more than once these past few months?

lopkeny12ko
5 replies
17h57m

What a lot of people seem to be missing here is that RSUs are usually double-trigger for private companies. Vested shares are not yours. They are just an entitlement for you to be distributed common stock by the company. You don't own any real stock until those RSUs are released (typically from a liquidity event like an IPO).

Companies can cancel your vested equity for any reason. Read your employment contract carefully. For example, most RSU grants have a 7 year expiration. Even for shares that are vested, regardless of whether you leave the company or not, if 7 years have elapsed since they were granted, they are now worthless.
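A minimal sketch of that double-trigger logic (Python; the grant date, the 7-year window, and the function name are illustrative assumptions, since exact terms vary by grant agreement):

    from datetime import date, timedelta
    from typing import Optional

    # Hypothetical double-trigger RSU sketch: vested units only become stock
    # if a liquidity event (the second trigger) happens before the grant expires.
    # The grant date and 7-year window are illustrative assumptions.
    GRANT_DATE = date(2018, 1, 1)
    EXPIRATION = GRANT_DATE + timedelta(days=7 * 365)

    def shares_released(vested: bool, liquidity_event: Optional[date]) -> bool:
        """True only if the RSU has vested AND a liquidity event occurred in time."""
        if not vested or liquidity_event is None:
            return False
        return liquidity_event <= EXPIRATION

    print(shares_released(True, date(2021, 6, 1)))   # True: vested, IPO before expiration
    print(shares_released(True, None))               # False: vested, but no liquidity event yet
    print(shares_released(True, date(2026, 1, 1)))   # False: grant expired before the event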

darth_avocado
3 replies
17h44m

if 7 years have elapsed since they were granted, they are now worthless

Once vested, RSUs are the same as regular stock purchased through the market. The company cannot claw them back, nor do they "expire".

lopkeny12ko
1 replies
17h36m

No, this is not true. That's the entire point I'm making. An RSU that is vested, for a private company, is not a share of stock, it's an entitlement to receive a share of stock tied to a liquidity event.

same as regular stock purchased through the market

You cannot purchase stock of a private company on the open market.

The company cannot claw them back

The company cannot "claw back" a vested RSU but they can cancel it.

nor do they "expire".

Yes, they absolutely do expire. Read your employment contract and equity grant agreement carefully.

danielmarkbruce
0 replies
14h10m

It's just a semantic issue. Some folks will say double-trigger RSUs aren't really fully vested until the second trigger event. Some will say they are vested but not triggered; other people say similar things.

jatins
0 replies
13h26m

this is incorrect. Private company RSUs often have double trigger with second trigger being IPO/exit. The "semi" vested RSUs can expire if the company does not IPO in 7 years.

onesociety2022
0 replies
13h53m

The 7 year expiry time exists so the IRS lets you give RSUs different tax treatment than regular stock. The idea is that because they can expire, they could be worth nothing. And so the IRS cannot expect you to pay taxes on RSUs until the double-trigger event occurs.

But none of this means the company can just cancel your RSUs unless you agreed to them being cancelled for specific reason in your equity agreement. I have worked at several big pre-IPO companies that had big exits. I made sure there were no clawback clauses in the equity contract before accepting the offers.

atum47
5 replies
17h27m

That's not enforceable, right? I'm not a lawyer, but even I know no contract can strip you of rights given by the Constitution.

smabie
1 replies
13h10m

Non disparagement clauses are in so so many different employment contracts. It's pretty clear you're not a lawyer though.

atum47
0 replies
12h27m

It is also clear that you can read, since I wrote it.

hsdropout
1 replies
17h17m

Are you referring to the first amendment? If so, this allows you to speak against the government. It doesn't prevent you from entering optional contracts.

I'm not making any statement about the morality, just that this is not a 1a issue.

atum47
0 replies
12h7m

I can understand defamation, but it's hard for me to understand disparagement. If i sign one of those contracts with Coca-Cola and later on I publicly announce that a can of Coca-Cola contains too much sugar. Am I in breach of contract?

staticman2
0 replies
13h31m

If the constitution protected you from this sort of thing then there'd be no such thing as "trade secret" laws.

OldMatey
5 replies
19h28m

Well that's not worrying. /s

I am curious how long it will take for Sam to go from being perceived as a hero to a villain and then on to supervillain.

Even if they had a massive, successful and public safety team, and got alignment right (which I am highly doubtful about being possible), it is still going to happen as massive portions of white collar workers lose their jobs.

Mass protests are coming and he will be an obvious focus point for their ire.

throwup238
1 replies
19h22m

> I am curious how long it will take for Sam to go from being perceived as a hero to a villain and then on to supervillain.

He's already perceived by some as a bit of a scoundrel, if not yet a villain, because of World Coin. I bet he'll hit supervillain status right around the time that ChatGPT BattleBots storm Europe.

shawn_w
0 replies
18h27m

When he was fired there was a short window where the prevailing reaction here was "He must have done something /really/ bad." Then opinion changed to "Sam walks on water and the board are the bad guys". Maybe that line of thinking was a mistake.

rvz
0 replies
18h38m

I am curious how long it will take for Sam to go from being perceived as a hero to a villain and then on to supervillain.

He probably already knows that, but doesn't care as long as OpenAI has captured the world's attention with ChatGPT generating them billions and their high interest in destroying Google.

Mass protests are coming and he will be an obvious focus point for their ire.

This is going to age well.

Given that no-one knows the definition of AGI, then AGI can mean anything; even if it means 'steam-rolling' any startup, job, etc in OpenAI's path.

maxerickson
0 replies
16h15m

If they actually invent a disruptive god, society should just take it away.

No need to fret over the harm to future innovation when innovation is an industrial product.

surfingdino
3 replies
10h39m

It's for the good of humanity, right? /s I wonder if Lex is going to ask Sam about it the next time they get together for a chat on YouTube?

brap
2 replies
5h2m

I kinda like Lex, but he never asks any difficult questions. That’s probably why he gets all these fancy guests on his show.

surfingdino
0 replies
3h4m

And he always ends with questions about love, just to pour some more oil on the quiet seas :-) nothing wrong with that, but like you say he asks safe questions.

reducesuffering
0 replies
1h46m

Worse, he will agree 95% with guest A's opinions, only for guest B to come on the next episode and also get 95% agreement. It would've been better for those opposing guests to just debate each other. Like, I don't want to see Lex and Yuval Noah Harari, then Lex and Bibi Netanyahu; I'd rather see Yuval and Bibi. I don't want to see Lex and Sama, then Lex and Eliezer; I'd rather see Sama and Eliezer.

shuckles
3 replies
23h24m

I'm not sure how this is legal. My employer certainly could not clawback paid salary or bonuses if I violated a surprise NDA they sprung on me when leaving on good terms. Why can they clawback vested stock compensation?

orionsbelt
1 replies
22h54m

My guess is they agreed to it upfront.

_delirium
0 replies
20h4m

That appears to be the case, although the wording of what they agree to up front is considerably more vague than the agreement they're reportedly presented to sign post-departure. Link to a thread from the author of the Vox article: https://x.com/KelseyTuoc/status/1791584341669396560

gwern
0 replies
21h31m

These aren't real stock, they are "profit participation units" or PPUs; in addition, the fact that there is a NDA and a NDA about the NDA, means no one can warn you before you sign your employment papers about the implications of 'PPUs' and the tender-offer restriction and the future NDA. So it's possible that there's some loophole or simple omission somewhere which enables this, which would never work for regular RSUs or stock options, which no one is allowed to warn you about on pain of their PPUs being clawed back, and which you find out about only when you leave (and who would want to leave a rocketship like OA?).

photochemsyn
3 replies
19h11m

I refused to sign all these secrecy non-disclosure contracts years ago. You know what? It was the right decision. Even though, as a result, my current economic condition is what most would describe as 'disastrous', at least my mind is my own. All your classified BS, it's not so much. Any competent thinker could have figured it out on their own.

Fucking monkeys.

worik
1 replies
18h51m

You know what? It was the right decision. Even though, as a result, my current economic condition is what most would describe as 'disastrous', at least my mind is my own.

Individualistic

Nobody depends on you, I hope

serf
0 replies
18h14m

you can still provide for your family without signing deals with the devil, it's just harder.

moral stands are never free, but they are freeing.

istjohn
0 replies
18h27m

In most cases there is no free exercise whatever of the judgment or of the moral sense; but they put themselves on a level with wood and earth and stones; and wooden men can perhaps be manufactured that will serve the purpose as well. Such command no more respect than men of straw or a lump of dirt.[0]

0. https://en.wikipedia.org/wiki/Civil_Disobedience_(Thoreau)

olliej
3 replies
18h42m

As I say over and over again: equity compensation from a non-publicly traded company should not be accepted as a surrogate for below-market compensation. If a startup wants to compensate employees via equity, then those employees should have first right to convert equity to cash in funding rounds or a sale, and their shares must be the same class as any other investor's, because the idea that an “early employee” is not an investor making a much more significant investment than any VC is BS.

I feel that this particular case is just another reminder of that, and it would now make me require a preemptive “no equity clawbacks” clause in any contract.

DesiLurker
1 replies
17h27m

I always say that the biggest swindle in the world is that in the great 'labor vs capital' fight, capital has convinced labor that its interests are secondary to capital's. This is so much truer in the modern fiat/fractional-reserve banking world, where any development is rate-limited by either energy or people.

DesiLurker
0 replies
11h3m

why downvote me instead of actually refuting my point?

blackeyeblitzar
0 replies
18h37m

Totally agree. For all this to work there needs to also be transparency. Anyone receiving equity should have access to the cap table and terms covering all equity given to investors. Without this, they can be taken advantage of in so many ways.

nextworddev
3 replies
17h43m

Unfortunately this is actually pretty common on Wall St, where they leverage your multiple years of clawback-able shares to make you sign non-disparagement clauses.

citizen_friend
1 replies
13h54m

Sounds like a deal honestly. I’ll fast forward a few years of equity to mind my own business. I’m not trying to get into journalism

nextworddev
0 replies
2h12m

Yes, the vast vast majority of finance folks just take the money and be quiet

lokar
0 replies
17h24m

But that is all very clear when you join

jstummbillig
3 replies
7h26m

I am confused about the source of the outrage. A situation where nobody is very clear about what the claim is but everyone is very upset, makes me suspicious.

Are employees being misled about the contract terms at the time of signing the contract? Because, obviously, the original contract needs to have some clause regarding the equity situation, right? We can not just make that up at the end. So... are we claiming fraud?

What I suspect is happening, is that we are confusing an option to forgo equity for an option to talk openly about OpenAI stuff (an option that does not even have to exist in the initial agreement, I would assume).

Is this overreach? Is this whole thing necessary? That seems besides the point. Two parties agreed to the terms when signing the contract. I have a hard time thinking of top AI researchers as coerced to take a job at OpenAI or unable to understand a contract, or understand that they should pay someone to explain it to them – so if that's not a free decision, I don't know what is.

Which leads me to: If we think the whole deal is pretty shady – well, it took two.

ghusbands
2 replies
7h5m

If the two parties are equal, sure. If it's a person vs a corporation of significant size, then no, it's not safe to assume that people have free choice. That's also ignoring motivations apart from business ones, like them actually wanting to be at the leading edge of AI research or wanting to work with particular other individuals.

It's a common mistake on here to assume that for every decision there are equally good other options. Also, the fact that they feel the need to enforce silence so strongly implies at least a little that they have something to hide.

jstummbillig
0 replies
4h41m

If it's a person vs a corporation of significant size, then no, it's not safe to assume that people have free choice

We understand this as a market dynamic, surely? More companies are looking for capable AI people, than capable AI people exist (as in: on the entire planet). I don't see any magic trick a "corporation of significant size" can pull, to make the "free choice" aspect go away. But, of course, individual people can continue to CHOOSE certain corps, because they actually kind of like the outsized benefits that brings. Complaining about certain trade-offs afterwards is fairly disingenuous.

That's also ignoring motivations apart from business ones, like them actually wanting to be at the leading edge of AI research or wanting to work with particular other individuals.

I don't understand what you are saying. Is the wish to work on leading AI research sensible, but offering the opportunity to work on leading AI research not a value proposition? How does that make sense?

hanspeter
0 replies
6h46m

AI researchers and engineers surely have the free choice to sign with another employer than OpenAI?

dakial1
3 replies
19h15m

What if I sell my equity? Can I criticize them then?

apsec112
1 replies
19h14m

()

dekhn
0 replies
19h5m

Right, but once you sell the shares, OpenAI isn't going to claw back the cash proceeds, is what I think was asked here.

saalweachter
0 replies
14h43m

Once there's a liquidity event and the people making you sign this contract can sell, they stop caring what you say.

MBlume
3 replies
22h54m

Submission title mentions NDA but the article also mentions a non disparagement agreement. "You can't give away our trade secrets" is one thing but it sounds like they're being told they can't say anything critical of the company at all.

reducesuffering
2 replies
22h45m

They can't even mention the NDA exists!

danielmarkbruce
1 replies
14h8m

This is common, and there is nothing wrong with it.

Chinjut
0 replies
4h32m

There is absolutely something wrong with it. Just because a thing is common doesn't make it good.

31337Logic
3 replies
18h16m

This is how you know you're dealing with an evil tyrant.

downrightmike
1 replies
17h45m

And he claims to have made his fortune by just helping people and not expecting anything in return. Well, the reality here is that was a lie.

api
0 replies
17h31m

Anyone who constantly toots their own horn about how altruistic and pure they are should have cadaver dogs led through their house.

0xDEAFBEAD
0 replies
15h19m

Saw this comment suddenly move way down in the comment rankings. Somehow I only notice this happening on OpenAI threads:

https://news.ycombinator.com/item?id=38342850

My guess would be that YC founders like sama have some sort of special power to slap down comments that they feel are violating HN discussion guidelines.

zombiwoof
2 replies
15h15m

Sam and Mira. greedy as fuck since they are con artists and neither could get a job at that level anywhere legitimate.

Now it’s a money grab.

Sad because some amazing tech and people now getting corrupted into a toxic culture that didn’t have to be that way

romanovcode
1 replies
10h39m

Sam and Mira. greedy as fuck since they are con artists and neither could get a job at that level anywhere legitimate.

Hey hey hey! Sam founded the 4th most popular social networking site in 2005, called Loopt. Don't you forget that! (After that he joined YC and has founded nothing since)

null0pointer
0 replies
8h26m

He’s spent all those years conducting field research for his stealth-mode social engineering startup.

subroutine
2 replies
6h58m

This is an interesting update to the article...

After publication, an OpenAI spokesperson sent me this statement: “We have never canceled any current or former employee’s vested equity nor will we if people do not sign a release or nondisparagement agreement when they exit.”

- Updated May 17, 2024, 11:20pm EDT

jiggawatts
1 replies
5h58m

Neither of those statements negate the key point of the article.

I've noticed that both Sam Altman personally, and official statements from OpenAI sound like they've been written by Aes Sedai: Not a single untrue word while simultaneously thoroughly deceptive.[1]

Let's try translating some statements, as if we were listening to an evil person that can only make true statements:

"We have never canceled any current or former employee’s vested equity" => "But we can and will if we want to. We just haven't yet."

"...if people do not sign a release or nondisparagement agreement when they exit." => "But we're making everyone sign the agreement."

[1] I've wondered if they use a not-for-public-use version of GPT for this purpose. You know, a model that's not quite as aligned as the chat bots, with more "flexible" morals.

twobitshifter
0 replies
3h54m

Could also be that they have a unique definition of vesting when they say specifically “vested equity”

nsoonhui
2 replies
18h16m

But what's stopping the ex-staffers from criticizing once they sold off the equity?

danielmarkbruce
0 replies
14h4m

The threat of a lawsuit.

You can't just sign a contract and then not uphold your end of the bargain after you've got the benefit you want. You'll (rightfully) get sued.

EA-3167
0 replies
18h9m

Nothing, these don't seem like legally enforceable contracts in any case. What they do appear to be is a massive admission that this is a hype train which can be derailed by people who know how the sausage is made.

It reeks of a scammer's mentality.

ecjhdnc2025
2 replies
17h5m

Totally normal, nothing to see here.

Keep building your disruptive, game-changing, YC-applicant startup on the APIs of this sociopathic corporation whose products are destined to destroy all trust humans have in other humans so that everyone can be replaced by chatbots.

It's all fine. Everything's fine.

jay-barronville
1 replies
17h1m

You don’t think the claim that “everyone can be replaced by chatbots” is a bit outrageous?

Do you really believe this or is it just hyperbole?

ecjhdnc2025
0 replies
16h54m

Almost every part of the story that has made OpenAI a dystopian unicorn is hyperbole. And now this -- a company whose employees can't tell the truth or they lose access to remuneration. Everyone's Allen Weisselberg.

What's one more hyperbole?

Edit to add, provocatively but not sarcastically: next time you hear some AI-proponent-who-used-to-be-a-crypto-proponent roll out the "but aren't we all just LLMs, in essence?" justification for their belief that ChatGPT may have broad understanding, ask yourself: are they not just self-soothing over their part in mass job losses with a nice faux-scientific-inevitability bedtime story?

autonomousErwin
2 replies
19h32m

Is it criticism if a claim is true? There is so much legal jargon that I'm willing to bet most people won't want the headache (and those who don't care about the equity are likely already fairly wealthy).

cma
0 replies
19h28m

Yes, if it isn't true it is libel or slander (sometimes depending on intent), not just criticism, and already not permissible without any contract covering it.

tonyhart7
1 replies
17h3m

"Even acknowledging that the NDA exists is a violation of it." now its not so much more open anymore right

ecjhdnc2025
0 replies
17h1m

The scriptwriters are in such a hurry -- even they know this show isn't getting renewed.

swat535
1 replies
16h49m

I mean, why anyone would be surprised by this is beyond me.

I know many people on this site will not like what I am about to write, since Sam is worshiped here, but let's face it: the head of this company is a master scammer who will do everything under the sun and the moon to earn a buck, including, if necessary, destroying himself along with his entire fortune in his quest to make sure other people don't get a dime.

So far he has done it all: attempted regulatory capture, a hostile takeover as CEO, throwing out all the other top engineers and partners, and ensuring the company remains closed despite its "open" name.

Now he is simply tying up all the loose ends and ensuring his employees remain loyal and are kept on a tight leash. It's a brilliant strategy, preventing any insider from blowing the whistle should OpenAI ever decide to do anything questionable, such as selling AI capabilities to hostile governments.

I simply hope that open source wins this battle so that we are not all completely reliant on OpenAI for the future, despite Sam's attempt.

jeltz
0 replies
10h34m

Since I do not follow OpenAI or Ycombinator, I first learned that he was a scammer when he released his cryptocurrency. But I am surprised that so many did not catch on to it then. It is not like he has really tried to hide that he is a grifter.

rvz
1 replies
19h26m

So that explains the cult-like behaviour months ago when the company was under siege.

Diamond multi-million-dollar handcuffs, to which OpenAI has bound lifetime, secret-service-level NDAs: yet another unusual arrangement for a company, after their so-called "non-profit" founding and their contradictory name.

Even an ex-employee saying 'ClosedAI' could see their PPUs evaporate to zero in front of them, or never be allowed to sell them and have them taken away.

timmg
0 replies
17h51m

I don’t have any idea what goes on inside OAI. But I have this strange feeling that they were right to oust sama. They didn’t have the leverage to pull it off, though.

mrweasel
1 replies
11h27m

When companies create rules like this, it tells me that they are very unsure of their product. Either it doesn't work as they claim, or it's incredibly simple to replicate. It could also be that their entire business plan is insane. In any case, something is fundamentally wrong internally at OpenAI for them to feel the need for this kind of rule.

If OpenAI and ChatGPT are so far ahead of everyone else, and their product is so complex, then it doesn't matter what a few disgruntled employees do or say, and the rule is not required.

underdeserver
0 replies
11h11m

Forget their product, they're shady as employers. Intentionally doing something borderline legal when they have all the negotiating power.

mise_en_place
1 replies
12h7m

Why indeed? But that’s nobody’s business except OpenAI and its former employees. Doesn’t matter if it’s not legally enforceable, or in bad taste. When you enter into a contract with another party, it is between you and the other party.

If there is something unenforceable about these contracts, we have the court system to settle these disputes. I’m tired of living in a society where everyone’s dirty laundry is aired out for everyone to judge. If there is a crime committed, then sure, it should become a matter of public record.

Otherwise, it really isn’t your business.

0xDEAFBEAD
0 replies
11h19m

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

...

We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.

From OpenAI's charter: https://openai.com/charter/

Now read Jan Leike's departure statement: https://news.ycombinator.com/item?id=40391412

That's why this is everyone's business.

smhx
0 replies
51m

that's a direct implication that they're waiting for a liquidity event before they speak

bambax
1 replies
10h32m

All of this is highly ironic for a company that initially advertised itself as OpenAI

Well... I know firsthand that many well-informed, tech-literate people still think that all products from OpenAI are open source. Lying works, even in its most egregious form.

SXX
0 replies
10h13m

This is just Propaganda 101. Call yourself anti-fascist on TV enough times for a decade, and then you can go indiscriminately kill everyone you call a fascist.

Unfortunately, Orwellian propaganda works.

baggiponte
1 replies
3h26m

Not a US rights expert. Isn't "you can't ever criticize the company or you'll lose your vested equity" a violation of the First Amendment?

strstr
0 replies
2h46m

Corporations aren’t the government.

User23
1 replies
18h47m

What is criticism anyhow? Feels like you could black knight this hard with clever phrasing. “The company does a fabulous job keeping its employees loyal regardless of circumstances!” “Yes they have the best and toughest employment lawyers in the business! They do a great job using all available leverage to force favorable outcomes from their human resources!” “I have no regrets working there. Their exit agreement has really improved my work life balance!” “Management never lets externalities get in the way of maximizing shareholder value!”

singleshot_
0 replies
18h26m

If a contract barred me from providing criticism I would not imagine that I could sidestep it by uttering positive criticism unless my counterparty was illiterate and poor at drafting contracts.

RomanPushkin
1 replies
14h24m

They don't talk publicly, but they're almost always OK with talking if you're friends with them. I have two ex-OpenAI friends, and there is a lot of shit going on in there. Of course, I won't reveal their identities, even in a court. And they will deny they said anything to me. But the info, if needed, might get leaked through trusted friends. And nobody can do anything about that.

benreesman
0 replies
9h31m

I’ve worked (for years) with easily a dozen people who either are there or spent meaningful time there.

I also work hard not to print gossip and hearsay (I try not to even mention so much as a first name; I think I might have slipped once or twice on that, though not in connection with an accusation of wrongdoing). There's more than enough credible journalism to paint a picture: any person whose bias (and I have my own, but it's not over being snubbed for a job or something; it's a philosophical/ethical/political agenda) has not utterly robbed them of objectivity can acknowledge that "this looks really bad, and worse all the time" on the basis of purely public primary sources and credible journalism.

I think some of the inside baseball I try very hard not to put in writing might be what cranks it up to “people are doing time”.

I’ve caught more than a little “less than a great time” over being a vocal critic, but I’m curious why, having gone pretty far down the road of saying something is rotten, you’d declare a willingness to defy a grand jury or a judge.

I’ve never been in court, let alone held in contempt, but I gather openly defying a judge earns you fairly hard time.

I have friends I’d go to jail for, but not very many and none who work at OpenAI.

Melatonic
1 replies
19h9m

So much for the "Open" in OpenAI

a_wild_dandan
0 replies
18h46m

We should call them ClopenAI to acknowledge their almost comical level of backstabbing/rug-pulling.

Barrin92
1 replies
16h51m

We're apparently at the Scientology stage of the AI hype cycle. One funny observation: if you ostensibly believe that you're about to invent the AGI godhead that will render the economic system obsolete in < ~5 years or so, how do stock-clawback, no-criticism lawsuits fit into that kind of worldview?

mavbo
0 replies
13h26m

An AGI-led utopia will be pretty easy if we're all under contractual obligation not to criticize any aspect of it, lest we be banished back to "work".

topspin
0 replies
12h59m

"making former employees sign extremely restrictive NDAs doesn’t exactly follow."

Once again, we see the difference between the public narrative and the actions in a legal context.

throwaway5959
0 replies
16h56m

Definitely the stable geniuses I want building AGI.

strstr
0 replies
2h44m

This really kills my desire to trust startups and YC. Hopefully paulg makes some kind of statement or commitment on non-disparagement and the like.

rich_sasha
0 replies
21h52m

So what's open about it these days?

photochemsyn
0 replies
14h3m

OpenAI's military-industrial contracting options seem to be making some folks quite nervous.

olalonde
0 replies
6h19m

A bit unexpected coming from a non-profit organisation that supposedly has an altruistic mission. It's almost as if there were actually a profit-making agenda... I'm shocked.

krick
0 replies
14h55m

I'm well aware of being ignorant about US law, and it isn't news to me that it encompasses a lot of ridiculous stuff, but it still somehow amazes me that a "lifetime no-criticism contract" is possible.

It's quite natural that a co-founder being forced out of the company wouldn't exactly be willing to forfeit his equity. So, what, now he cannot… talk? That has some Mexican cartel vibes.

koolala
0 replies
13h35m

They all can combine their testimony into 1 document, give it to an AI, and lol

jgalt212
0 replies
17h19m

I really don't get how lawyers can knowingly put unenforceable crap, for lack of a better word, in contracts. It's like, why did you even go to law school?

jameshart
0 replies
18h49m

The Basilisk's deal turned out to be far more banal than expected.

ggm
0 replies
17h56m

I am not a lawyer.

doubloon
0 replies
16h37m

deleting my OpenAI account.

diebeforei485
0 replies
12h3m

For workers at startups like OpenAI, equity is a vital form of compensation, one that can dwarf the salary they make. Threatening that potentially life-changing money is a very effective way to keep former employees quiet.

Yes, but:

(1) OpenAI salaries are not low like early stage startup salaries. Essentially these are highly paid jobs (high salary and high equity) that require an NDA.

(2) Apple has also clawed back equity from employees who violated their NDAs. So this isn't all that unusual.

dbuser99
0 replies
14h34m

Man. No wonder OpenAI is nothing without its people.

danielmarkbruce
0 replies
14h17m

This seems like a nonsense article.

As for 'invalid because no consideration' - there is practically zero probability OpenAI lawyers are dumb enough to not give any consideration. There is a very large probability this reporter misunderstood the contract. OpenAI have likely just given some non-vested equity, which in some cases is worth a lot of money. So yeah, some (former) employees are getting paid a lot to shut up. That's the least unique contract ever and there is nothing morally or legally wrong with it.

dandanua
0 replies
13h54m

With how things are unfolding, I wouldn't be surprised if, after the creation of an AGI, the owners just kill anyone who took part in building it. Singularity is real.

croes
0 replies
9h48m

I guess OpenAI is making the hero-to-villain switch faster than Google did when it dropped "don't be evil".

bradleyjg
0 replies
18h1m

For as high-profile an issue as AI is right now, and as prominent as the people recently let go are, I bet they could arrange to be subpoenaed to testify before a congressional subcommittee.

blackeyeblitzar
0 replies
18h40m

They are far from the only company to do this but they deserve to be skewered for it. The FTC and NLRB should come down hard on them to make an example. Jail time for executives.

andrewstuart
0 replies
17h10m

I would like people to sign a lifetime contract to not criticize me.

almost_usual
0 replies
12h58m

This is what a dying company does.

StarterPro
0 replies
13h35m

Glad to see that all giant companies are just evil rich white dudes racing each other to taking over the world.

RockRobotRock
0 replies
13h39m

so much money stuffed in their mouth it’s physically impossible

Madmallard
0 replies
14h8m

I'm really sick of seeing people wholeheartedly jump in and accelerate the demise of society out of greed.

I_am_tiberius
0 replies
9h36m

I get Theranos / David Boies vibes.

Delmololo
0 replies
11h3m

Why should they?

It's absolutely normal not to spill internals.