
Leaked OpenAI documents reveal aggressive tactics toward former employees

tedivm
143 replies
19h49m

If this really was a mistake, the easiest way to deal with it would be to release people from the non-disparagement agreements that departing employees only signed under the duress of losing their vested equity.

It's really easy to make people whole for this, so whether that happens or not is the difference between the apologies being real or just them backpedaling because employees got upset.

Edit: Looks like they're doing the right thing here:

Altman’s initial statement was criticized for doing too little to make things right for former employees, but in an emailed statement, OpenAI told me that “we are identifying and reaching out to former employees who signed a standard exit agreement to make it clear that OpenAI has not and will not cancel their vested equity and releases them from nondisparagement obligations” — which goes much further toward fixing their mistake.
NotSammyHagar
94 replies
19h35m

This reads like more than standard restrictions. I hate those like everyone does; in my opinion they are just intended to chill complaints, with enough uncertainty to scare average people without legal expertise (like me, like most devs), just like non-competes seemingly used to be used primarily to discourage looking at other jobs, separate from whether they were enforceable - note the recent FTC decision to end non-competes.

About 5 months ago I had a chance to join a company that had what looked like an extreme non-compete to me: you couldn't work for the next two years after leaving for any company that had been a customer of theirs.

I pointed out to them that I wouldn't have been able to join their company if my previous job had had that non-compete clause; it seemed excessive. Eventually I was in meetings with a lawyer at the company who told me it's probably not enforceable, don't worry about it, and the FTC is about to end non-competes. I said great, strike it from the contract and I'll sign it right now. He said he couldn't do that, no one-off contracts. So then I said I'm not working there.

tedivm
67 replies
19h31m

I have worked for multiple startups (Malwarebytes, Vicarious, Rad AI, Explosion AI, Aptible, Kenna Security). Not once have I seen an exit agreement that stated they would steal back my vested equity if I didn't sign. This is definitely not "standard restrictions".

danielmarkbruce
33 replies
19h22m

Comp clawbacks are quite common in finance, at least contractually. It's rare for them to actually be exercised, but it happens. It isn't some especially weird thing.

tedivm
8 replies
19h4m

Comp clawbacks in exit agreements, that weren't part of the employment agreement?

I've seen equity clawbacks in employment agreements. Specifically, some of the contracts I've signed have said that if I'm fired for cause (and they were a bit more specific, like financial fraud or something) then I'd lose my vested equity. That isn't uncommon, but it's not typically used to silence people, and it's part of the agreement people review and approve before becoming an employee. It's not a surprise they learn about as they try to leave.

danielmarkbruce
7 replies
18h58m

It must have been part of the original employment document package that the equity was cancellable, in the details of the equity grant or similar, somewhere.

ecjhdnc2025
5 replies
18h3m

Must it?

Not clear what you mean.

Do you mean it is generic to do that in contracts? (Been a while since I was offered equity.)

Or do you mean that even OpenAI would not try it without having set it up in the original contract? Because I hate to be the guy with the square brackets ;-)

danielmarkbruce
2 replies
17h25m

It must.

Joke aside - I'm saying "it must" the same way someone might say "surely".

ecjhdnc2025
1 replies
17h13m

Wise. Stops people saying "and don't call me Shirley!"

dmvdoug
0 replies
16h37m

Don’t call me Shirley.

ajb
1 replies
17h4m

If it wasn't in the original contracts for the equity, they wouldn't be able to claw it back. Fairly obviously, the mechanism can't be in the exit agreement because you haven't signed that yet.

Normally a company has to give you new "consideration" (which is the legal term for something of value) for you to want to sign an exit agreement - otherwise you can just not bother to sign. Usually this is extra compensation. In this case they are saying that they won't exercise some clause in an existing agreement that allows them to claw back.

tsimionescu
0 replies
15h49m

Per the Vox article, it's not directly in the contract you sign for the equity; it's basically part of the definition of the equity itself (the articles of incorporation of the for-profit company) that OpenAI remains in full control of the equity in this way.

tsimionescu
0 replies
15h52m

According to the Vox article, it's much more complicated legally. It's not a provision in each employee's contract that allows this; it's part of the articles of incorporation of the for-profit part of OpenAI.

JumpCrisscross
5 replies
18h59m

Comp clawbacks are quite common in finance, at least contractually

Never negotiated on exit.

danielmarkbruce
4 replies
18h36m

I don't think it was negotiated on exit. It was threatened on exit. The ability to do it was almost certainly already in place.

JumpCrisscross
2 replies
17h2m

The ability to do it was almost certainly already in place

Why? OpenAI is a shitshow. Their legal structure is a mess. Yanking vested equity on the basis of a post-purchase agreement signed under duress sounds closer to securities fraud than anything thought out.

danielmarkbruce
1 replies
16h50m

I'm not saying it was thought out, I'm saying it was in place. My understanding is that the shareholders agreement had something which enabled the canceling of the shares (not sure if it was all shares, shares granted to employees, or what). I have not seen the document, so you may be right, but that's my understanding.

JumpCrisscross
0 replies
11h50m

the shareholders agreement had something which enabled the canceling of the shares

OpenAI doesn't have shares per se, since they're not a corporation but some newfangled chimeric entity. Given the man who signed the documents allegedly didn't read them, I'm not sure why one would believe everything else is buttoned up.

interactivecode
0 replies
3h51m

If it's not negotiated on exit, why are they requesting additional documents to be signed when leaving? Clearly nothing like this was agreed at the start of employment.

minhazm
4 replies
19h2m

Can you find any specific examples? I've only seen that apply to severance agreements where you're being paid some additional sum for that non-disparagement clause.

Never seen anything that says money or equity you've already earned could be clawed back.

danielmarkbruce
2 replies
18h45m

Wells Fargo clawed back compensation from the CEO (and a couple of others, if I remember correctly) over the fake account scandals.

ecjhdnc2025
1 replies
18h0m

Right, but would that have been achieved with a clause open-ended enough to allow this additional paperwork on exit?

Or would that have been an "if you break the law" thing?

Seems unlikely that OpenAI are legally in the clear here with nice clear precedent. Why? Because they are backflipping to deny it's something they'd ever do.

tsimionescu
0 replies
5h20m

I think they are backpedaling rapidly to avoid major discontent among their workers. By the definition of their stock as laid out in their articles of incorporation, they have the right to reduce any former employee's stock to 0, or to prevent them from ever selling it, which is basically the same thing. This makes their stock offers to employees much less valuable than they appear at face value, so their current and future employees may very well start demanding actual dollars instead.

Bluecobra
0 replies
18h4m

I negotiated a starting bonus with my employer and signed a contract that I would need to pay it back if I quit within a year.

voxic11
3 replies
19h10m

Is OpenAI a finance company? I guess that would explain a lot.

danielmarkbruce
1 replies
18h33m

They pay like one.

srockets
0 replies
17h38m

Finance firms put more cash and deferred cash (bonuses) in their packages. OpenAI still puts a lot of the pay in restricted equity.

spongebobism
0 replies
2h49m

Would it though? Presumably a finance company's clawback clause is there to protect it from you taking its trade secrets with you to its competitors, not from you tweeting "looks trashy lol" in response to a product launch of theirs, or mentioning to a friend that your old boss was kind of a dick.

herval
3 replies
17h31m

IANAL but isn’t it illegal to execute something in the event of a document not being signed?

dllthomas
2 replies
17h15m

I expect not... provided it's a thing you could do anyway (and it isn't extortion or something).

herval
1 replies
15h52m

You could claim you gave someone a contract and they didn't sign it, so now they owe you a million bucks.

dllthomas
0 replies
12h59m

I think you missed my proviso.

If you can do X in the first place, I don't think there's any general rule that you can't condition X on someone not signing a contract.

adastra22
3 replies
18h46m

What is the structure of those compensation packages, and the mechanism for the clawbacks? Equity is taxed when it becomes the full, unrestricted property of the employee, so depending on the structure these threatened clawbacks could either (1) have been very illegal [essentially theft], or (2) have had drastic and very bad tax consequences for all employees, current and former.

I'm not surprised that they're rapidly backpedaling.

londons_explore
1 replies
18h30m

taxed when it becomes the full, unrestricted property of the employee

I guess these agreements mean that the property isn't full unrestricted property of the employee... and therefore income tax isn't payable when they vest.

The tax isn't avoided - it would just be paid when you sell the shares instead - which for most people would be a worse deal because you'll probably sell them at a higher price than the vest price.

semi-extrinsic
0 replies
13h42m

which for most people would be a worse deal

It's a worse deal in retrospect for a successful company. But there and then, it's not very attractive to pay an up-front tax on something that you can sell at an unknown price in the relatively far future.

danielmarkbruce
0 replies
18h37m

Not sure how they deal with the tax. Ping John Stumpf (former Wells CEO) and ask, he probably has time on his hands and scar tissue and can explain it.

throwaway2037
0 replies
17h42m

   > Comp clawbacks are quite common in finance
Common? Absolutely not. It might be common for a tiny fraction of investment bank staff who are considered (1) material risk takers, (2) revenue generators, or (3) senior management.

ungreased0675
14 replies
15h3m

Anytime someone tried to get me to sign a terrible contract, they always said “This is just standard stuff.”

ornornor
12 replies
12h18m

Same. And in the same breath they also added “this is never used anyway, it’s just the template”. But “no, it can’t be removed from the contract”.

mpweiher
11 replies
11h50m

I always respond with "if it's never enforced, then you'll be fine with me taking it out"

Then I strike the offending passage out on both copies of the contract, sign and hand it back to them.

Your move.

¯\_(ツ)_/¯

hollowpython
9 replies
10h53m

Do you really do this, and is striking out a line of a contract binding?

darkwater
2 replies
9h58m

Why not? A labor contract is a two-way street. If the company doesn't like the new version, they will not sign it and not hire you.

mpweiher
1 replies
8h49m

Exactly. And just like I have to be fine with not getting the job if my conditions are not acceptable to them, they have to be fine with not getting me if their conditions are not acceptable to me.

Given the considerable effort that has gone into this by the time you are negotiating a contract, letting it fail over something that "is not important" and "is never enforced" would be very stupid of them.

So if they are unwilling to budge, that either means they were lying all along and the thing that's "never enforced" and is "not important" actually is very important to them and definitely will be enforced, or that they are a company that will enforce arbitrary and pointless rules on employees as long as they think they can.

Neither of which is a great advertisement for the company as an employer.

darkwater
0 replies
8h28m

So if they are unwilling to budge, that either means they were lying all along and the thing that's "never enforced" and is "not important" actually is very important to them and definitely will be enforced, or that they are a company that will enforce arbitrary and pointless rules on employees as long as they think they can.

Most of the time it's basically just FUD, to coerce people into following the rule-that-is-never-enforced.

alfiedotwtf
2 replies
7h9m

IANAL but I've seen strikes throughout contracts, and then an initial+date from both parties. Weird how in 2024 an initial that's so easily forgeable can be legally binding

retrac
0 replies
6h14m

A verbal contract, which has no record at all, can also be legally binding.

cubefox
0 replies
6h49m

I would guess that the initial is not the important thing, but that the strike is present on both copies of the contract.

nielsole
0 replies
10h3m

I've seen legal departments redlining drafts of a contract repeatedly until an agreement had been reached. The final contract still contained the red lines.

mpweiher
0 replies
9h27m

Yes, I really do this. Have done since I started working.

At one of my first jobs as a student employee they offered me a salary X. In the contract there was some lower number Y. When I pointed this out, they said "X includes the bonus. It's not in the contract but we've never not paid it". OK, if this is really guaranteed, you can make that the salary and put it in writing. They did, my salary was X and that year was the first time they didn't pay the optional bonus. Didn't affect me, because I had my salary X.

IANAL and I don't know how binding this is. I'd think it's crucial for it to be in both copies of the contract, otherwise you could have just crossed it out after the fact, which would of course not be legally binding at all and probably fraud (?)

In practice, it doesn't really come up, because the legal department will produce a modified contract or start negotiating the point. The key is that the ball is now in their court. You've done your part, are ready and rearin' to go, and they are the ones holding things up and being difficult, for something that according to them isn't important.

UPDATE:

I think it's important to note that I am also perfectly fine with a verbal agreement.

A working relationship depends on mutual trust, so a contract is there for putting in a drawer and never looking at it again...and conversely if you are looking at it again after signing, both the trust and the working relationship are most likely over.

But it has to be consistent: if you insist on a binding written agreement, then I will make sure what is written is acceptable to me. You don't get to pick and choose.

WhrRTheBaboons
0 replies
5h19m

(EU perspective) It is binding. You just add both parties' initials/signature in the margin next to each line that was changed.

DonHopkins
0 replies
11h20m

Don't forget to initial the crossed-out section and draw a passive aggressive happy face!

cyrillite
0 replies
6h14m

“This is just standard stuff” belongs in a category of phrases like “this is perfectly legal”.

benreesman
5 replies
17h12m

I’ve heard of some pretty aggressive non-competes in finance, but AFAIU (never worked in Connecticut myself), it’s both the carrot and the stick: you get both paid and a stiff contract if you leave with proprietary alpha between the ears.

In tech I’ve never even heard a rumor of something like this.

whitej125
4 replies
17h1m

It’s got a term, “garden leave”, and yeah, it was prevalent in finance. I say “was” because I think some states are changing laws wrt non-competes, and that's calling this practice into question.

kelnos
0 replies
10h49m

I don't recall where I saw it, but I believe the FTC clarified and said that garden-leave type arrangements aren't covered under their ban.

d4mi3n
0 replies
15h20m

I think this still leaves garden leave on the table. The thing that can no longer happen is an employer ending its relationship with an employee and preventing them from continuing their career after the fact. Garden leave was in fact one of the least bad outcomes of a non-compete, as I understand it.

tomp
0 replies
6h37m

No, you're confusing stuff.

First of all, taking any code with you is theft, and you go to jail, like this poor Goldman Sachs programmer [1]. This will happen even if the code has no alpha.

However, no one can prevent you from taking knowledge (i.e. your memories), so reimplementing alpha elsewhere is fine. Of course, the best alpha is that which cannot simply be replicated, e.g. it depends on proprietary datasets, proprietary hardware (e.g. fast links between exchanges), access to cheap capital, etc.

What hedge funds used to do is give you lengthy non-competes: 6 months for junior staff, 1-2 years for traders, 3+ years in the case of Renaissance Technologies.

In the US, that's now illegal and unenforceable. So what hedge funds do now is lengthy garden(ing) leaves. This means you still work for the company, you still earn a salary, and in some (many? all?) cases also the bonus. But you don't go to the office, you can't access any code, you don't see any trades. The company "moves on" (develops/refines its alpha, including your alpha - alpha you created) and you don't.

These lengthy garden leaves replaced non-competes, so they're now 1y+. AFAIK they are enforceable, just as non-competes while being employed always have been.

[1] https://nypost.com/2018/10/23/ex-goldman-programmer-sentence...

bertil
3 replies
17h54m

I’ve seen that for a well-known large tech company, and I wasn’t even employed in the US, which made those terms seem even stranger. Friends and former colleagues pushed back against that (very publicly, and for obvious reasons in one case) and didn’t get to keep their vested options: they had to exercise what they had before leaving.

There was one thing that I cared about (anti-competitive behavior; things could technically be illegal, but what counts is policy, so it really depends on what the local authority wants to enforce), so I asked a lawyer, and they said: No way this agreement prevents you from answering that kind of questioning.

srockets
2 replies
17h41m

A 90-day exercise window is standard (and there are tax implications in play as well).

OpenAI is different: they don’t grant options, but “Units” that are more like RSUs.

blackeyeblitzar
1 replies
5h42m

Don’t those come with bad tax implications then? The point of options is to give ownership without immediate financial burden for the employee.

dartos
0 replies
4h35m

You pay normal tax on them when you sell after holding for 1 year, but an increased tax if you sell within that year.

Der_Einzige
3 replies
14h40m

How is Malwarebytes a startup? They were a thing when I was a baby!

fcarraldo
1 replies
13h44m

People on this site have been working in this industry longer than you. Some longer than you have been alive, it sounds like.

Der_Einzige
0 replies
12h21m

You take my statement far too literally. I thought it came out in the late 90's. Turns out, it was 2006. I was in middle school at the time.

hluska
0 replies
12h59m

Well that’s depressing. I was 27 years old when Malwarebytes was released.

Fuck, I’m old.

saghm
1 replies
13h40m

The closest thing I've heard of is having to sign anti-disparagement clauses as part of severance when laid off; still pretty shitty, but taking back already vested equity would be on another level.

dragonwriter
0 replies
13h37m

My understanding is that it's an explicit condition of the equity grant, not something technically first revealed at exit (which would probably be illegal), but probably under the expectation that no one is carefully studying the terms of the agreement that would be required at exit when they are accepting compensation terms that include equity.

j0hnyl
0 replies
17h59m

I work in ad tech and have had to sign this when laid off.

influx
0 replies
15h41m

I worked at pre-IPO Uber with TK as CEO, and they were bro-af and had nothing like this.

squigz
7 replies
19h7m

If it's unenforceable, but you signed it, wouldn't that make the contract void?

I suppose there's probably a bunch of legalese to prevent that though...

reaperman
1 replies
13h48m

At most it would just make that part of the contract void. Almost all contracts with stuff like this have a “severability” clause, which states that if one part of the contract is invalid, the rest is still valid.

But even without that, judges have huge amounts of leeway to “create” an ex post facto contract and say “here’s the version of that contract you would have agreed to; this is now the contract you signed”. A sort of “fixed” version of the contract.

dragonwriter
0 replies
13h46m

At most it would just make that part of the contract void. Almost all contracts with stuff like this have a “severability” clause, which states that if one part of the contract is invalid, the rest is still valid.

Severability clauses themselves are not necessarily valid; whether provisions can be severed and how without voiding the contract is itself a legal question that depends on the specific terms and circumstances.

owenmarshall
1 replies
18h32m

"Probably not enforceable" != unenforceable. Are you worth suing or does everyone sign? Are your state laws and jurisprudence going to back you up?

If you are ever going to sign an employee agreement that binds you, consult with an employment attorney first. I did this with a past noncompete and it was the best few hundred I ever spent: my attorney talked with me for an hour about the particulars of my noncompete, pointed out areas to negotiate, and sent back redlines to make the contract more equitable.

hluska
0 replies
15h41m

The single best professional decision I ever made was to get a business degree. The degree itself wasn’t worth a damn, but the network was invaluable. I have very close friends who are the exact kind of attorney who you would expect to have an undergraduate business degree. They’re greedy, combative people who absolutely relish these sorts of opportunities. And as a bonus, they are MY greedy, combative people who relish these sorts of opportunities.

They’re great partners when confronted with this kind of contract. And fundamentally, if my adversary/future employer retains counsel, I should too. Why be at a disadvantage when it’s so easy to pay money and be on even footing?

There are some areas where my ethics don’t mesh with theirs, but at the end of the day this is my work and I do it for pay. And when I look at results, lawyers are the best investment I have ever made.

indymike
1 replies
16h12m

There is usually a severability clause that basically says if a clause is illegal, it voids that clause, not the whole contract… this is pretty standard practice.

Bognar
0 replies
14h17m

I think I've seen that in every contract I've ever signed.

MyFedora
5 replies
19h14m

Yeah, totally legit. Don't worry about it, it's not enforceable anyway. What, remove it from the contract? God no! Oh, I mean sorry, no one-off contracts.

ryandrake
3 replies
18h17m

I'd be surprised if anyone fell for that. "Oh, thanks, opposing counsel, I totally trust you to represent my interests over your employer's!"

8372049
2 replies
17h25m

Well, you can be surprised. It's surprisingly common, in my experience, to believe people who pretend they are on your side. One interesting and typical case that is documented through countless online videos is police interrogations, where the interrogator is usually an expert in making it seem he (or she) is on your side, despite how obvious it should be that they're not. "Can I get you a meal?", friendly tone, various manipulations and before you know it you've said things that can and will be used against you whether you are guilty or not.

And you don't get the meal, either.

istjohn
1 replies
13h4m

We can also mention the case of psychiatrists running the "Presence francaise" groups who, appointed to examine the prisoner, started off boasting they were great friends with the defense lawyer and claiming both of them (the lawyer and the psychiatrist) would get the prisoner out. All the prisoners examined by this method were guillotined. These psychiatrists boasted in front of us of this neat method of overcoming "resistance."

- The Wretched of the Earth, Frantz Fanon

8372049
0 replies
7h21m

I read some of the pages before and after that footnote. Highly disturbing, to say the least.

Spooky23
0 replies
17h51m

Attorneys are like any other profession. The average attorney is just like the average person, except he passed a difficult test.

Exceptions require sign-off and thinking. The optimal answer is to go with the flow. In an employment situation, these sorts of terms require regulatory intervention or litigation to make them go away, so it’s a good bet that most employees will take no action.

joedevon
2 replies
19h30m

No one-off contracts. lol. What nonsense. Then why bother to have a meeting? You handled that one like a boss.

fuzztester
0 replies
17h9m

yup.

companies say that all the time.

another way they do it is to say, it is company policy, sorry, we can't help it.

thereby trying to avoid individual responsibility for the iniquity they are about to perpetrate on you.

bertil
0 replies
17h52m

Then why bother to have a meeting?

Because lawyers are in the business of managing risk, and knowing what OC was unhappy about was very much relevant to knowing if he presented a risk.

xwolfi
1 replies
15h35m

You did well: there is never a rule against one-off contracts. I can assure you the CEO has a one-off contract, and that lawyer has a one-off contract, at the very least :D

NotSammyHagar
0 replies
1h20m

Those are great points. I didn't think about it at the time. Since they were pushing me hard to sign a contract that literally blocked most of the companies in the CS speciality area I've mostly worked in, it gave me enough courage to say no. Barely ;-)

justinclift
1 replies
17h2m

He said he couldn't do that, no one-off contracts.

There was still potential to engage there:

  "That's alright, as you said it's not enforceable anyway just remove it from everyone's
   contract.  It'll just be the new version of the contract for everyone."
Doubt it would have made any difference though, as the lawyer was super likely bullshitting.

hluska
0 replies
15h50m

This is one of those magical times where having your own counsel is worth the upfront cost.

fuzztester
0 replies
17h13m

That lawyer was probably lying, bro, since he could not put his money where his mouth was.

eterevsky
0 replies
11h34m

Non-competes like this are often not enforceable, but it depends on the jurisdiction.

ecjhdnc2025
0 replies
19h9m

This reads like more than standard restrictions.

It reads like omertà.

I wonder if I'll still get downvoted for saying this. A lot can change in 24 hours.

Edit: haha :-P

OtomotO
0 replies
7h58m

Not standard where I come from.

And standard doesn't mean shit... Every regime in the history of mankind had standards!

JohnFen
0 replies
2h21m

You did the right thing here.

I was in meetings with a lawyer at the company who told me it's probably not enforceable, don't worry about it

Life rule: if the party you're negotiating a contract with says anything like "don't worry about that, it's not enforceable" or "it's just boilerplate, we never enforce that" but refuses to strike it from the contract, then run, don't walk, away from the table. Whoever you're dealing with is not operating in good faith.

madeofpalk
12 replies
19h41m

For whatever it's worth (not much), Sam Altman did say they would do that

if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too. very sorry about this.

https://x.com/sama/status/1791936857594581428

tedivm
3 replies
19h33m

What Sam is saying is very different from what I'm saying. I'm saying he should be proactive and just do it; he's saying that if people explicitly reach out to him then he'll do it specifically for them.

super256
1 replies
19h11m

Turkeys don't vote for an early Christmas.

johnbellone
0 replies
3h44m

Bingo.

Symmetry
0 replies
4h21m

Importantly, if he just said publicly that he wouldn't enforce the non-disparagement agreements, that could be legally binding[1]. But if he just says he'll release people who contact him, he's, legally speaking, free to just not do that.

[1] The keywords are promissory estoppel. I'm not a lawyer but this looks at least like a borderline case worth worrying about.

pempem
3 replies
19h13m

Sure and anyone who has worked in a toxic workplace knows exactly what it means to require a direct path to leadership to resolve an issue instead of just resolving it.

Terr_
2 replies
18h57m

I also notice he conditions it on "any former employee." What about current employees who may be affected by the same legalese?

Either way, I can imagine a subtext of "step forward and get a target on your back."

tedivm
1 replies
15h58m

Current employees rarely sign exit agreements, since by exiting they stop being employees.

Terr_
0 replies
15h9m

True, they can't renegotiate agreements that don't yet exist.

However the fact that the corporate leadership could even make those threats to not-yet-departed employees indicates that something is already broken or missing in the legal relationship with current ones.

A simple example might be for the company to clearly state in their handbook, for all current employees, that vested shares cannot be clawed back.

jay-barronville
1 replies
18h55m

This looks like proper accountability and righting your wrongs to me. Much respect to Sam. I hope this isn’t just a performance for the public.

baq
0 replies
13h4m

Hope is not a process. Look at what he does, not what he says. Actually, you should go deaf whenever you see him opening his mouth.

drcode
0 replies
17h34m

don't go public

don't contact OpenAI legal, which leaves an unsavory paper trail

contact me directly, so we can talk privately on the phone and I can give you a little $$$ to shut you up

DaiPlusPlus
0 replies
19h34m

Given what OpenAI's been in the press for the past few weeks, I can't help but feel this is a trap; even if it isn't, Sam is certainly making it look like it is...

polack
8 replies
19h42m

”we are identifying and reaching out to former employees who signed a standard exit agreement to make it clear that OpenAI has not and will not cancel their vested equity and releases them from nondisparagement obligations”

Looks like they’re doing that.

dragonwriter
6 replies
19h36m

Well, they say they are. But the nondisparagement agreement repeatedly forbids revealing the agreement itself, so if it wasn't cancelled, those subject to it would be forbidden to point out that the public claim they were going to release people from it was a lie (or done only for people from whom OpenAI was not particularly concerned about potential disparagement).

brookst
4 replies
19h7m

“I have not been released from any exit agreement” is not disparagement.

dragonwriter
2 replies
18h59m

“disparagement" is whatever is defined in the agreement, which reportedly (from one of the people who declined to sign it) includes discussing the existence of the agreement.

neglesaks
1 replies
18h51m

Dear heavens, being a corporate employee is paranoia- and depression-inducing. It's literally like walking into a legal minefield.

Symmetry
0 replies
4h15m

This is not normal for being a corporate employee. This was certainly going to come out eventually and cause big problems, but to the extent Sam thinks AGI is around the corner he might not be playing the long game.

ecjhdnc2025
0 replies
18h27m

OpenCanAIry

lelandfe
0 replies
19h20m

If your concern is validating that they've done so, the article author can at least vet that with her anonymous sources.

taylorfinley
0 replies
19h22m

Note that the statement says nothing about whether they will be allowed to participate in liquidity events.

belter
7 replies
19h36m

Not a mistake...

"...But there's a problem with those apologies from company leadership. Company documents obtained by Vox with signatures from Altman and Kwon complicate their claim that the clawback provisions were something they hadn’t known about..."

tedivm
4 replies
19h30m

Honestly I'm willing to give the benefit of the doubt on that, depending on their actions, because I'm sure they sign so many documents they just rely on their legal teams to ensure they're good.

neglesaks
0 replies
18h53m

I'm not.

callalex
0 replies
15h51m

Then why are they paid such obscene amounts of money?

acjohnson55
0 replies
17h40m

There's absolutely no way that the officers of the company would be unaware of this.

First of all, it beggars belief that this whole thing could be the work of HR people or lawyers or something, operating under their own initiative. The only way I could believe that is if they deliberately set up a firewall to let people be bad cops while giving the C-suite plausible deniability. Which is no excuse.

But...you don't think they'd have heard about it from at least one departing employee, attempting to appeal the onerous terms of their separation to the highest authority in the company?

Terr_
0 replies
18h50m

Hold up... Do you really think that a C-suite including career venture-capitalists who happen to be leading+owning stock in a private startup which has hit an estimated billion+ valuation are too naive/distracted to be involved in how that stock is used to retain employees?

In other words, I'm pretty sure the Ed Dillingers are already in charge, not Walter Gibbs garage-tinkerers. [0]

[0] https://www.youtube.com/watch?v=atmQjQjoZCQ

doctorpangloss
1 replies
19h30m

Mistake? The clawback provisions were the executives' idea!

KennyBlanken
0 replies
18h56m

"We are sorry...that we got caught."

"...and that our PR firm wasn't good enough to squash the story."

They will follow the standard corporate 'disaster recovery' - say something to make it look like they're addressing it, then do nothing and just wait for it to fall out of the news cycle.

whaleofatw2022
3 replies
17h9m

Edit: Looks like they're doing the right thing here:

Altman’s initial statement was criticized for doing too little to make things right for former employees, but in an emailed statement, OpenAI told me that “we are identifying and reaching out to former employees who signed a standard exit agreement to make it clear that OpenAI has not and will not cancel their vested equity and releases them from nondisparagement obligations” — which goes much further toward fixing their mistake.

1. Get cash infusion from Microsoft.

2. Do the Microsoft playbook of 'oh, I didn't mean to be shady, we will correct' when caught.

3. In the meantime there are uncaught cases, as well as the general hand-waving away of repeated bad behavior.

4. What sama did would get him banned from some -fetish- circles, if that says something about how his version of 'EA' deals with consent concerns.

comp_throw7
2 replies
16h20m

Plenty of legitimate things to criticize EA for, no need to smear them by association with someone who's never claimed to be an EA and hasn't obviously behaved like one either.

whaleofatw2022
1 replies
14h58m

To be clear, I'm saying he's an effective accelerationist, not an effective altruist.

gcr
0 replies
4h26m

I think E/Acc may be the preferred abbreviation.

croes
3 replies
19h14m

Edit: Looks like they're doing the right thing here

That's like P.Diddy saying I'm sorry.

That's damage control for being caught doing something bad ... again.

ixaxaar
1 replies
17h52m

Extreme pinky swear.

hughesjj
0 replies
15h6m

"Trust me bro, if it weren't up to me you wouldn't even have to sign that contract. I mean it is up to me, but, like, I won't enforce the thing I made you sign. What? No I won't terminate the contract why don't you trust me bro? I thought we were a family?"

samspot
0 replies
1h51m

Yeah, agree, but they don't have to cancel the disparagement clause. They could just eat the PR hit. Allowing former employees to talk freely seems risky to me (if I were them). I think we can give them back 5 points for this move but still leave them at -995 overall.

cyanydeez
1 replies
18h44m

An LLM cares not whether something is said or whether the action is described.

No doubt, OpenAI is as vacuous as their product is effective. GIGO.

ecjhdnc2025
0 replies
17h31m

Form follows function, art imitates life, dog owners grow to look like their dogs...

zaptheimpaler
0 replies
14h59m

That really is not enough. Now that they have been publicly embarrassed and the clause is common knowledge, they really have to undo the mistake. If they didn't, they would look like a horrible employer, employees would start valuing their stock at $0, dropping their effective compensation by a ton, and then people would leave. Given the situation, undoing the agreement is an act of basic self-preservation at this point.

The documents show this really was not a mistake, and "I didn't know what the legal documents I signed meant, which specifically had a weird clause that standard agreements don't" isn't much of a defence either. The whole thing is just one more point in favor of how duplicitous the whole org is; there are many more.

pdonis
0 replies
17h1m

> Looks like they're doing the right thing here

Even if that's true (and I'm not saying it is, or it isn't, I don't think anyone on the outside knows enough to say for sure), is it because they genuinely agree they did something egregiously wrong and they will really change their behavior in the future? Or is it just because they got caught this time so they have to fix this particular mistake, but they'll keep on using similar tactics whenever they think they can get away with it?

The impact of such uncertainty on our confidence in their stewardship of AI is left as an exercise for the reader.

mvdtnz
0 replies
17h1m

You surely don't actually believe Altman when he says they're doing this? Like Elon Musk, Altman is a known liar and should not be trusted. It's truly unbelievable to me that people take statements like this at face value after having been lied to again and again and again. I think I'm starting to understand how crypto scams work.

marcinzm
0 replies
19h44m

Assuming that’s the only clause that they can use to cause people trouble. The article indicates it’s not.

lyu07282
0 replies
13h44m

Looks like they're doing the right thing here:

Well, no:

We're removing nondisparagement clauses from our standard departure paperwork, and we're releasing former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual. We'll communicate this message to former employees.

So the former employees who were successfully blackmailed stay blackmailed.

bambax
0 replies
8h14m

If this really was a mistake

The article makes it clear that it wasn't a mistake at all. It's a lie. They were playing hardball, and when it became public they switched to PR crisis management to try and save their "image", or what's left of it.

They're not the good guys. I'd say they're more of a caricature of bad guys, since they get caught every time. Something between a classic Bond villain and Wile E. Coyote.

asveikau
0 replies
15h44m

The right thing would be to not try to put that clause in there to begin with, not release employees from it when they get caught.

_jab
0 replies
12h21m

It shouldn't take a Vox article to ensure employees basic security over their compensation. The fact that this provision existed at all is exceptionally anti-employee.

mateus1
70 replies
19h58m

What surprises me about these stories surrounding OpenAI is how they apologize while lying and downplaying any blame. Do they expect anybody to believe they didn’t know about the clawback clauses?

5e92cb50239222b
50 replies
19h42m

Do they care? The mob will shout for a week or two and then turn their attention somewhere else. spez (the reddit chief) said something like that about their users, and he was absolutely right. A few days ago I was re-reading some of those threads about reddit API changes from ten months back where so many users claimed it was their last message and they were leaving for good. Almost none of them did. I checked two dozen profiles and all but one of them had fresh comments posted within that same day.

dullcrisp
8 replies
19h24m

I stopped browsing Reddit. I imagine the people who posted comments to Reddit saying they’re going to leave Reddit aren’t a representative sample.

lobsterthief
7 replies
18h55m

Same. Redditor for 15 years and the API thing was the last straw.

I didn’t post about not engaging with or using the platform anymore. Nor did I delete my account, since it still holds some value to me. But I slinked away into the darkness and now HN is my social media tool.

warcher
4 replies
17h37m

Delete early, delete often. Never keep an old Reddit account around. I torch mine and build anew every year just out of principle.

hughesjj
0 replies
14h59m

For me it's been every six months. I've even given some creds for burned accounts to the void for the heck of it.

That said, I think you could easily correlate my HN activity with my Reddit usage (inverse proportionality). Loving it tbh, higher quality content overall and better than Slashdot ever was.

al_borland
0 replies
17h0m

I didn’t have a schedule, but probably had 5 or 6 accounts over the years… purging, deleting, and a few weeks later rejoining. The last time I deleted everything was before the API changes, and it was the last straw. I haven’t attempted to create a new account and don’t browse at all. I used to spend hours per day there. Now the only time I end up there is if a search engine directs me there for an answer to a specific question I have.

EMM_386
0 replies
12h35m

I honestly have given up on this battle.

I'm curious: what do you think deleting accounts and starting anew is going to do?

They'll just link it all together another way.

8372049
0 replies
17h14m

Same! Except I've basically stopped using reddit. It used to be that if I got a "happy cake day" then I knew nuking the account was overdue.

speff
0 replies
17h6m

15y account here too - also quit. Tried lemmy for a while and didn't like it. At least it helped me kick the reddit habit. Don't even go there anymore

https://old.reddit.com/u/speff

andrethegiant
0 replies
17h38m

Same here on everything you mentioned

kyleblarson
5 replies
19h38m

The mob that Vox represents these days is minuscule.

hashtag-til
4 replies
19h26m

It’s remarkable to see the hoi polloi stand by CEOs and big corporations, rather than defending the few parts of the media that stand for regular workers.

parineum
2 replies
15h56m

It's not about taking sides, it's about not caring. Everyone is tired of getting worked up over super rich CEOs being "aggressive" to their very rich employees, and of your 'if you're not with us, you're against us' attitude.

optimalsolver
0 replies
11h54m

How do you think those same CEOs would treat their not-so-rich employees?

ecjhdnc2025
0 replies
15h50m

First they came for the Sheldon Coopers, and I did not speak out

ecjhdnc2025
0 replies
15h52m

This is how anything political (big or small P) works.

Aspirations keep people voting against their interests.

I personally worry that the way fans of OpenAI and Stability AI are lining up to criticise artists for demanding to be compensated, or accusing them of “gatekeeping”, could be folded into a wider populism, the way 4chan shitposting became a political position. When populism turns on artists it’s usually a bad sign.

jsnell
5 replies
19h34m

This is them fucking over their employees though, not the public, and in a very concrete manner. Threats to rob them of millions - maybe tens of millions - are going to hurt more than losing access to a third-party Reddit client.

And the employees also have way more leverage than Reddit users; at this point they should still be OpenAI's greatest asset. Even once this is fixed (which they obviously will do, given they got caught), it's still going to cause a major loss of trust in the entire leadership.

grepfru_it
4 replies
19h9m

Employees are replaceable. Outside of a very specific few, they have very little leverage. If an employee loses trust and leaves or “quiet quits”, they will simply be replaced with one of the hundreds of people clamoring to work for them. This is why unionization is so great.

Just as Reddit users stay on Reddit because there is nowhere else to go, the reality is that everyone worships leadership because they keep their paychecks flowing.

squigz
1 replies
19h5m

Employees are replaceable, sure, but that doesn't mean that you can't squander your good will with competent employees and end up only being able to hire sub-par employees.

grepfru_it
0 replies
6h25m

Yes. OpenAI will only attract sub-par employees. And even if it’s not OpenAI, you simply raise your offered salary and suddenly the subpar employees vanish.

comp_throw7
1 replies
19h5m

Yes, that "very little leverage" is why engineers & researchers near the bottom of OpenAI's career ladder are getting paid 900k/year (2/3rds funny money, admittedly, though in practice many people _have_ cashed out at very large multiples).

grepfru_it
0 replies
6h26m

Your salary is not leverage.

devbent
5 replies
18h56m

I went from very active on multiple subreddits to barely posting once every few months. Instead of answering programming questions or helping people get in shape I'm on other sites doing other things.

Changes like that are hard to measure.

gundmc
4 replies
18h7m

I went from very active on multiple subreddits to barely posting once every few months. Instead of answering programming questions or helping people get in shape I'm on other sites doing other things. Changes like that are hard to measure.

Changes in sentiment can be hard to measure, but changes in posting behavior seems incredibly easy to measure.

warcher
2 replies
17h38m

It’s the rule of ten (I made that up): 1 in 10 upvote, 1 in 10 of those comment, 1 in 10 of those post.

The people barking are actually the least worrisome, they’re highly engaged. The meat of your users say nothing and are only visible in-house.

That said, they also don’t give a shit about most of this. They want their content and they want it now. I am very confident spez knows exactly what he’s talking about.

Tao3300
1 replies
16h51m

Some salty downvotes going on in here!

warcher
0 replies
16h37m

Imagine how many people I actually pissed off to get those downvotes!

singron
0 replies
17h43m

How do you measure without the API?

beeeeerp
3 replies
17h40m

I actively quit producing content and deleted my account.

Maybe it’s confirmation bias, but I do feel like the quality of discourse has taken a nose dive.

Tao3300
2 replies
17h2m

The discourse is about the same; trouble is, the only mods left are the truly batshit ones.

martin_
1 replies
16h11m

If that's true, wouldn't that imply that the mods aren't very effective?

jachee
0 replies
14h16m

You get what you pay for. ;)

tsunamifury
2 replies
18h26m

Sam Altman has stated over and over again, publicly: "I don't care what other people think." And I'm not paraphrasing.

over_bridge
1 replies
16h16m

Once you learn that online outrage doesn't actually impact your life that much, it's easy to ignore. Gone are the days of public apologies; now we just sweep criticism under the rug and carry on.

tsunamifury
0 replies
12h40m

I think Trump taught us that very few people will stop you physically if you just ignore what they have to say.

solidasparagus
2 replies
19h14m

It is hard to compete for high-end AI research and AI engineering talent. This definitely matters and they definitely should care. Their equity situation was already a bit of a barrier by being so unusual; now it's going to be a harder sell.

I know extremely desirable researchers who refuse to work for Elon because of how he has historically treated employees. Repeated issues like this will slowly add OpenAI to that list for more people.

hehdhdjehehegwv
1 replies
15h40m

Meanwhile the stock Google pays you can be cashed out same day. Really dumb move for OpenAI.

solidasparagus
0 replies
12h27m

I think it might just be a consequence of an approach to business that, in aggregate, has been very effective.

8372049
1 replies
17h16m

When the changes went through I nuked all my comments and then my account. I don't know if many others did the same, but if so it would mean that you wouldn't see our "I'm leaving" comments anymore, i.e. that we wouldn't be included in your samples.

Tao3300
0 replies
17h4m

Yeah, reading old threads is weird. The majority of everything is intact, but there are enough deleted or mangled comments that it's effectively a minor inconvenience.

vb234
0 replies
17h12m

My activity on Reddit has gone way down since they stopped supporting .compact view on mobile. I definitely miss it and want to go back but it’s incredibly hard to engage with the content on mobile browsers now.

uddiygug
0 replies
19h21m

They probably care more about the effect on potential hires, who are going to think twice given the fact that part of their pay may be cancelled due to some disagreement.

tivert
0 replies
14h1m

A few days ago I was re-reading some of those threads about reddit API changes from ten months back where so many users claimed it was their last message and they were leaving for good. Almost none of them did. I checked two dozen profiles and all but one of them had fresh comments posted within that same day.

Lots of people have pointed out problems with your determination, but here's another one: can you really tell none of those people are posting to subvert reddit? I'm not going to go into details for privacy reasons, but I've "quit" websites in protest while continuing to post subversive content afterwards. Even after I "quit," I'm sure my activity looked good in the site's internal metrics, even though it was 100% focused on discouraging other users.

ruszki
0 replies
11h44m

I didn't stop using it immediately, but it definitely added to the growing list of problems. I don't use that site anymore, except when a search result directs me there. Even then it's a second choice of mine, because I need to disable my VPN to access it, and I won't log in.

olalonde
0 replies
11h58m

The risk is not users boycotting them. The risk is OpenAI having trouble recruiting and retaining top talent, which will cause them to eventually fall behind the competition, leading users to naturally leave.

mrtksn
0 replies
11h40m

I actually find myself using Reddit much less. It’s not that I’m protesting, but it feels like the community changed into something more like the Facebook crowd. It doesn’t feel cutting edge anymore; it’s much more tame and stale. The fresh stuff isn’t on Reddit anymore.

jrflowers
0 replies
16h47m

I checked two dozen profiles and all but one of them had fresh comments posted within that same day.

I also remember when the internet was talking about the twenty-four Reddit accounts that threatened to quit the site. It’s enlightening to see that a protest the size of Jethro Tull didn’t impact the site.

jprete
0 replies
19h2m

I'm guessing the ones who actually left Reddit did what I did - they disengaged from the site and then deleted all their content and accounts. It's pointless to complain without any actual power.

The relevant stakeholders here are the potential future employees, who are seeing in public exactly how OpenAI treats its employees.

hehdhdjehehegwv
0 replies
15h49m

Personally I only use Lemmy now. I never made a goodbye/fuck spez post; I just stopped using Reddit.

I think your sample frame is off; they did themselves unforced damage in the long run.

discordance
0 replies
18h42m

It’s not easy to get out of an abusive relationship

LocutusOfBorges
0 replies
15h2m

Honestly, from a moderation perspective, the dropoff has been stark - the quality of work behind the scenes has dropped off a cliff on most larger subreddits, and the quality of the content those subreddits facilitate has declined in turn.

It's definitely had a very real impact - but since it's not one that's likely to hit the bottom line in the short term, it's not like it matters in any way beyond the user experience.

refulgentis
5 replies
19h42m

It's been stultifying the older I get to see how easy it is for people to lie to themselves and others, everywhere.

You have to be really attuned to "is this actually rational, or does it just sound right, or am I adding in an implicit 'but we're good people, so...'"

ecjhdnc2025
4 replies
19h31m

Right. The big change is bad faith argument developing into unapologetic bad faith developing into weaponised bad faith.

It accelerated rapidly with some trends like the Tea Party, Gamergate, Brexit, Andrew Wakefield, covid antivax, and the Ukraine situation, and is in evidence on both sides of the trans rights debate, in doxxing, in almost every single argument on X that goes past ten tweets, etc.

It's something many on the left have generally identified as worse from the right wing or alt.right.

But this is just because it's easier to categorise it when it's pointing at you. It's actually the primary toxicity of all argument in the 21st century.

And the reason is that weaponised bad faith is addictive fun for the operator.

Basically everyone gets to be Lee Atwater or Roger Stone for a bit, and everyone loves it.

ants_everywhere
3 replies
18h42m

It's something many on the left have generally identified as worse from the right wing or alt.right.

It depends a bit on what you mean by left and right, but something like Marxism was always 100% a propaganda effort created by people who owned newspapers, and the pervasiveness of propaganda has been a through line, e.g. in the Soviet Union, agitprop, etc. A big part of Marxist theory is that there is no reality, that social experience completely determines everything, and that sort of ideology naturally lends itself to the belief that blankets of bad faith arguments for "good causes" are a positive good.

This sort of thinking was unpopular on the left for many years, but it's become more hip no doubt thanks to countries like Russia and China trying to re-popularize communism in the West.

ecjhdnc2025
2 replies
18h36m

Propaganda at a national level, it's always been that, and I take your point for sure.

I think perhaps I didn't really make it totally clear that what I'm mostly talking about is a bit closer to the personal level -- the way people fight their corners, the way twitter level debate works, the way local politicians behave. The individual, ghastly shamelessness of it, more than the organised wall of lies.

Everyone getting to play Roger Stone.

Not so much broadcast bad faith as narrowcast.

I get the impression Stalinism was more like this -- you know, you have your petty level of power and you _lie_ to your superiors to maintain it, but you use weaponised bad faith on those you have power over.

It's a kind of emotional cruelty, to lie to people in ways they know are lies, that make them do things they know are wrong, and to make it obvious you don't care. And we see this everywhere now.

ants_everywhere
1 replies
18h17m

Well, I was referring to Marx and Engels. That's sort of how the whole movement got started. The post-Hegelians who turned away from logic-based philosophical debate to a sort of anti-logical emotional debate where facts mattered less than the arc of history. That got nationalized and industrialized with Lenin and Stalin etc, but that trend precedes them and was more personal. It was hashed out in coffee houses and drinking clubs.

You see the same pattern with social media accounts who claim to be on the Marxist-influenced left. Their tactics are very frequently emotionally abusive or manipulative. It's basically indistinguishable in style from how people on the fringe right behave.

Personally I don't think it's a right vs left thing. It's more about authoritarianism and the desire to crush the people you feel are violating the rules, especially if it seems like they're getting away with violating the rules. There are just some differences about what people think the rules are.

ecjhdnc2025
0 replies
18h6m

Personally I don't think it's a right vs left thing. It's more about authoritarianism and the desire to crush the people you feel are violating the rules, especially if it seems like they're getting away with violating the rules. There are just some differences about what people think the rules are.

Oh I agree. I wasn't making it a right-vs-left thing, but rather neutering the idea that people perceive it to be.

I would not place myself on the political right at all -- even in the UK -- but I see this idea that bad-faith is an alt.right thing and I'm inclined to push back, because it's an oversimplification.

swat535
1 replies
14h26m

I mean it's not like anything is going to happen to them anyway.

People will continue to defend and worship Altman until their last drop of blood on HN and elsewhere, consumers will continue using GPT, businesses will keep hyping it up and rivers of cash will flow per status quo to his pockets like no tomorrow.

If one truly wants to make a change, one should support alternative open source models to remove our dependency on Altman and co; I fear a day where such powerful technology is tightly controlled by OpenAI. We have already given up so much of our computing freedom to a handful of companies; let's make sure AI doesn't follow.

Honestly, I wonder if we would ever have access to Linux if it were invented today?

0xDEAFBEAD
0 replies
13h38m

People will continue to defend and worship Altman until their last drop of blood on HN and elsewhere

The percentage of HN users defending Altman has dropped massively since the board scandal ~6 months ago.

consumers will continue using GPT, businesses will keep hyping it up

Customers will use the best model. If OpenAI loses investors and talent, their models may not be in the lead.

IMO the best approach is to build your app so it's agnostic to the choice of model, and take corporate ethics into consideration when choosing a model, in addition to performance.
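
To make that concrete, here is a minimal sketch of such a model-agnostic layer in Python; the names (ModelProvider, make_app) are made up for illustration and are not any particular vendor's API:

  from dataclasses import dataclass
  from typing import Callable

  # Each vendor is reduced to a single callable: prompt text in, completion text out.
  @dataclass
  class ModelProvider:
      name: str
      complete: Callable[[str], str]

  def make_app(provider: ModelProvider):
      # The app only ever talks to the thin interface, so swapping vendors is a one-line change.
      def summarize(text: str) -> str:
          return provider.complete("Summarize in one sentence:\n" + text)
      return summarize

  # Stand-in provider for testing; wire whichever vendor SDK you prefer behind the callable.
  echo = ModelProvider(name="echo-test", complete=lambda prompt: prompt[:80])
  summarize = make_app(echo)
  print(summarize("Keeping the model behind a thin interface avoids vendor lock-in."))

Ethics or performance concerns then become a matter of swapping which provider you construct, not rewriting the app.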

nabla9
1 replies
19h37m

Every legal clause that affects company ownership is accepted by the CEO and the board. It's not something a VP or general counsel can put there on their own. Lo and behold, signatures from Altman and Kwon are there.

Company documents obtained by Vox with signatures from Altman and Kwon complicate their claim that the clawback provisions were something they hadn’t known about.

OpenAI's incorporation documents contain multiple passages with language that gives the company near-arbitrary authority to claw back equity from former employees or — just as importantly — block them from selling it.

Those incorporation documents were signed on April 10, 2023, by Sam Altman in his capacity as CEO of OpenAI.

GianFabien
0 replies
16h15m

The public statements would suggest either that Sam Altman is lying or he signs anything that is put in front of him without reading it. I'm inclined to believe that whatever is said is PR (aka BS). In a court of law it is the written and signed contracts that are upheld.

jsnell
1 replies
19h41m

Yes, I've definitely seen people believe that in various discussions. Combine "Altman said they'd totally never done this" with "the ex-employee who first wrote about this didn't write with absolute 100% clarity that this applied to vested equity", and there's enough cover to continue believing what one wants to believe. And if the news cycle dies down before the lie is exposed, then that's a win.

Obviously that should not be possible any more with these leaked documents, given they prove both the existence of the scheme and Altman and other senior leadership knowing about it. Maybe they thought that since they'd already gagged the ex-employees, nobody would dare leak the evidence?

sanxiyn
0 replies
18h26m

People are so gullible. Sam Altman deserves zero benefit of the doubt. His words should be ignored; his words do not prove anything whatsoever.

gchamonlive
1 replies
19h35m

Not trying to play the devil's advocate here, but I am thinking about how this would play out if I ever opened a spinoff...

Let's say I find a profitable niche while working on a project and we decide to open a separate spin-off startup to handle that idea. I'd expect the legal side to be handled for me, inherited from the parent company.

Now let's also say the company turns out to be disproportionately successful. I'd say I would have a lot on my plate to worry about, the least of which being the legal part that the company inherited.

In this scenario it is probable that hostile clauses in contracts would be dug up. I surely would be legally responsible for them, but how much would I be to blame for them, truly?

And if the company handles the incident well, how important should assigning that blame be?

jprete
0 replies
18h51m

I'd expect the legal side to be handled for me, inherited from the parent company.

That sounds like a really bad idea for many many reasons. Lawyers are cheap compared to losing control, or even your stake, to legal shenanigans.

mirekrusin
0 replies
13h57m

Yep, it's more like he reviewed and/or requested those clauses to be there than anything else.

latexr
0 replies
18h15m

Do they expect anybody to believe they didn’t know about clawback clauses?

Why wouldn’t they? I’m sure you can think of a couple of politicians and CEOs who in recent years have clearly demonstrated that no matter what they do or say, they will have a strong core of rabid fans eating their every word and defending them.

darth_avocado
0 replies
19h29m

They’re only apologetic because they got caught in a PR shitstorm. They would not be otherwise. Being a sh*tbag company that claws back equity is a huge red flag and can drive away the critical people who make up the company. They started an arms race, but with companies with much deeper pockets. Meta will be more than happy to gobble up any and every OpenAI employee who no longer wants to work there.

achrono
0 replies
13h8m

It has become a hallmark of Western civilization to first of all Cover Your Ass, and where it gets exposed, to Pretend It's Covered, but when its photos get published, to Sincerely Apologize, and when pressed even more, to come out afresh with a Bold Pro-Coverage Vision and Commitment.

But maybe there's a further step that someone like OpenAI seems uniquely capable of evolving.

0x5f3759df-i
66 replies
19h59m

but sam said he was sorry in all lowercase so it must be okay

minimaxir
62 replies
19h30m

Semi off-topic, but the trend of AI influencers adopting the all-lowercase communication style in professional conversation has been very annoying. It makes them appear completely unserious.

I recently received an job recruitment email for an AI role in all-lowercase and I was baffled how to interpret it.

ra7
27 replies
19h12m

I don’t think it’s only AI influencers that do this. I’ve noticed people use lowercase as some sort of power move. Like they’re so busy and important they don’t care about conventions.

achiandet
6 replies
18h28m

It's certainly a modern status symbol that can be seen as a power move.

a_bonobo
5 replies
16h49m

So common in academia! The student writes a detailed, error-checked, well-edited long email only to receive an answer like

'yes plaese

Sent from my iPhone'

Definitely an 'I'm very busy, look at me' power move

hehdhdjehehegwv
4 replies
15h29m

Actually they probably are that busy and aren’t trying to impress students, they’re trying to grade a bunch of exams while reviewing a paper while writing up research while writing a grant and also having a personal life.

somenameforme
2 replies
14h9m

And if they treat them all with this degree of attention, then they're probably failing at all of them.

hehdhdjehehegwv
0 replies
12h38m

Many, perhaps most, successful academics are horrible mentors and parents/spouses. That isn’t pertinent to the fact that all faculty are, in fact, quite busy.

dragonwriter
0 replies
13h41m

The grant writing and (supervising the work of the people performing and writing up) the research funded by the grants that will also be the basis on which they will secure future grants get more attention, obviously.

huygens6363
0 replies
13h42m

nah i dontt hnk so

Sent from my iPhone

spot5010
5 replies
17h29m

Ironically, if you just keep the default keyboard settings on your phone, it will capitalize words for you. So these people changed the default settings to create this impression?

a_wild_dandan
2 replies
15h43m

I disabled it for being annoying and frequently wrong. The idea that someone might interpret my laziness as a flex (or vice versa) is hilarious to me.

torginus
0 replies
12h56m

Haha, we're here arguing whether LLMs constitute AGI, and we can't even get phone autocomplete right

kzrdude
0 replies
13h0m

It's annoying that it insists on "proper" capitalization for github and so on ("GitHub"), I do revolt at this trademark injection into my autocompleter.

parineum
0 replies
15h52m

I turn that stuff off because I don't like my keyboard automatically doing things for me.

nickzelei
0 replies
15h49m

They might be sending messages on a laptop which doesn’t have that turned on by default.

HaZeust
3 replies
16h39m

It also appears softer and borderline condescending. I always thought it was feminine because I've only seen women do it before very recently.

dvaun
2 replies
15h29m

It’s pretty common in chatrooms, forums, etc. Been commonplace for at least two decades.

HaZeust
1 replies
14h28m

Eh, I mean in a post-smart keyboard era that will do it for you, and you either have to disable it OR purposely backspace and re-write it.

dvaun
0 replies
14h25m

For anything on a phone, agreed. I have mine disabled so I can switch easily depending on where or who I’m talking with.

elevatedastalt
1 replies
18h6m

No, this is a status symbol because it's a signal that these people are above norms of conventional society.

What's worse is that there's a ready line of journalists talking about how capital letters promote inequality or shit like that, providing covering fire for them.

flembat
0 replies
13h46m

I think it is because people are often typing onto a sheet of glass on a packed train..

fragmede
2 replies
19h2m

it's because the stupidphone's stupid keyboard capitalizes for you in the wrong places, and you have to take extra effort to fix it when they get it wrong, or you turn it off, and then get lazy about fixing it when you do need the capitalization.

ra7
1 replies
18h39m

Does it? When I start a message, the keyboard automatically capitalizes. It also does it when a sentence starts after a period. I rarely ever have to fix capitalization.

(I typed this from my phone)

HaZeust
0 replies
16h37m

I get an incorrect capitalization maybe one in every 250 times the keyboard is used, I'd imagine.

dustincoates
1 replies
14h20m

Something similar I've noticed--there's a certain level people reach within a company where they're now too busy to type out "thank you" or even "thanks." Now "thx" is all they have time for.

tivert
0 replies
13h50m

Something similar I've noticed--there's a certain level people reach within a company where they're now too busy to type out "thank you" or even "thanks." Now "thx" is all they have time for.

"thx" is way to verbose for anyone but a plebs, the real power brokers use "ty." Or they don't thank anyone at all, because they know just bothering to read the message they got is thanks enough.

hehdhdjehehegwv
0 replies
15h26m

…or capitalization is not really that important for conveying meaning.

filleduchaos
0 replies
15h47m

Lowercase typing is broadly informal and (historically) faster (autocorrect especially on mobile devices was terrible for longer than it has been any good).

Anything else people read into it is very often just projection.

IvyMike
9 replies
19h0m

Following convention and using standard capitalization rules makes things easier on the reader.

Going to all-lowercase is harder on the reader, and thus is disrespectful of the reader. I will die on this hill.

lobsterthief
7 replies
18h51m

THIS IS WHY I PREFER TO COMMUNICATE ONLY IN CAPITAL LETTERS. IT REMOVES ANY AMBIGUITY AS TO WHETHER OR NOT I’M ANGRY SINCE THE READER CAN ASSUME I’M ALWAYS SCREAMING IN THEIR FACE.

I HOPE YOU ARE HAVING A NICE DAY.

cpeterso
3 replies
17h24m

I’ve worked with people who write in all lowercase, but I’ve never worked with someone who writes in ALL CAPS.

How long could someone write in ALL CAPS before they get fired?

whatshisface
1 replies
16h54m

YOUR JOB ADVERTISEMENT SPECIFIED VERY MANY YEARS OF EXPERIENCE STOP

HOW DO I WORK THIS DIFFERENCE ENGINE STOP

dmvdoug
0 replies
16h26m

I guffawed (briefly), with a diminishing chortle thereafter. Well played and cheerio.

dmvdoug
0 replies
16h28m

One of the nurses at the high school I teach at only emails SHOUTY-STYLE. She’s been there 26 years.

neglesaks
1 replies
18h45m

Caps Lock is Cruise Control for Cool, right? ;o)

joquarky
0 replies
12h44m

I miss Usenet taglines

HaZeust
0 replies
16h35m

Gives off "if I sound pleased about this, it's because my programmers made this my default tone of voice! I'm actually quite depressed! :D" [1] vibes

1 - https://www.youtube.com/watch?v=oGnwMre07vQ

corinroyal
0 replies
17h24m

This. Reading is hard enough, especially on a screen. Flouting readability conventions shows such contempt for one's readers, which I suppose is the point here.

wayeq
5 replies
18h4m

i've been mostly using all lowercase for decades.. am i the asshole?

downloadram
1 replies
17h16m

not at all, its a stylistic preference for some but its also easier and faster. i've been typing lowercase on PC [faster] and mobile [preference] for as long as i can remember; only using capitalization and punctuation where it feels necessary and for emphasis ..and to the best of my knowledge no one i chat with thinks it's strange, and no one on any forum has said anything about it either. the hate in this thread is just directed at Sam and since he does this, also at this

since i've always typed like this i've joked with my mother that if i ever send her a message with proper capitalization and punctuation, its a secret signal that i've been kidnapped!

opdahl
0 replies
12h51m

Can you please explain to me how it is faster? You click shift and the character at the same time.

internetter
0 replies
17h40m

why?

1123581321
0 replies
15h47m

Same. It’s something you learned if you did a lot of chat (irc, icq, battle.net etc.) before smartphones. It makes sense young people wouldn’t know it’d been a default, faster way to type.

01100011
0 replies
15h14m

right? as i said in my other comment, this has been a thing since at least the 80s when i got behind a keyboard.

i don't normally do it anymore, but for this post i've gone sans-caps. kickin it old school. (yaimadork)

simondotau
3 replies
18h31m

Writing in all lowercase is an aesthetic akin to an executive wearing jeans and a T-shirt. It is supposed to impart an air of self-confidence, that you don’t need to signal your seriousness in order to be taken seriously.

However such signalling is harder to pull off than it seems, and most who try do it poorly because they don’t realise that the casual aesthetic isn't just a lack of care. Steve Jobs famously eschewed the suit for jeans and mock turtleneck. But those weren’t really casual clothes, those mock turtlenecks were bespoke, tailored garments made by a revered Japanese fashion designer. That is a world apart from throwing on whatever brand of T-shirt happens to feel comfortable to the wearer.

lucianbr
1 replies
12h58m

They have no clue which signals work and which don't, they just throw shit at the wall and see what sticks. Another "Sam" signaled his superiority by playing games during meetings with investors, and it seemed to work for him, until it didn't.

Also, how much is there to customize in a turtleneck? Seems like the same signal as a very expensive suit, "I have a lot of money", nothing more.

simondotau
0 replies
8h20m

Unless you are very fit and have a perfect body shape, a very well tailored shirt/turtleneck can look significantly more flattering than an off-the-rack item. It'll sit well when you're in a neutral pose and stretch or pull appropriately when you gesticulate.

You correctly interpreted the point I was making — Steve Jobs treated his casual look as seriously as others treat an expensive tailored suit. And the result means he's still signalling importance and success, without also signalling conformity and "old world" corporate vibes.

voganmother42
0 replies
15h16m

I think it is more like wearing two polo shirts…

xawxaw
2 replies
13h56m

I recently received an job recruitment email

Yet you use "an" for a vowel that's miles away, so I don't like the way you type either.

minimaxir
1 replies
13h52m

did you create a HN account just to point out a typo

xawxaw
0 replies
3h29m

Conversion works in mysterious ways!

dankwizard
1 replies
19h25m

youre not worth moving my pinky to the shift key, and you think that missed apostrophe was a mistake?

im busy running a billion dollar company i dont have time for this

fragmede
0 replies
18h55m

who's using a hardware keyboard these days?

danielmarkbruce
1 replies
19h25m

Give it a year; it will become very uncool as people realize that only people not using AI to make their writing better would send syntactically or grammatically incorrect wording.

jareklupinski
0 replies
19h13m

bet

joquarky
0 replies
12h47m

Postmodernism is consuming everything

harrison_clarke
0 replies
17h1m

i read an article about bauhaus typography in high school and mostly dropped capitalization since then (~2005)

whether i use them or not is basically a function of how much i think there will be consequences for not using them. if i do use them without coercion, it's for Emphasis, or acronyms (like AI), or maybe sPoNgEbOb CaSe

i'm not sure where AI CEOs, or younger generations picked it up. but the "only use capitals when coerced" part seems similar

dontupvoteme
0 replies
19h12m

Throw it at an LLM with the simple command "fix", highlight every character that has a delta and send it back to them.

Add a grade in red at the top if you're feeling extra cheeky
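
A minimal sketch of the "highlight every character that has a delta" step in Python, assuming you already have the corrected text back from the LLM; the bracket markup and helper name are just for illustration:

  import difflib

  def highlight_deltas(original: str, fixed: str) -> str:
      # Wrap every span the correction changed in brackets so it stands out when sent back.
      out = []
      for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, original, fixed).get_opcodes():
          out.append(fixed[j1:j2] if op == "equal" else "[" + fixed[j1:j2] + "]")
      return "".join(out)

  print(highlight_deltas("im busy running a billion dollar company",
                         "I'm busy running a billion-dollar company."))
  # the changed spans (capitalization, hyphen, final period) come back bracketed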

atonse
0 replies
19h27m

You're so right, it's utterly distracting to read a statement written in all lower case.

It looks like it was written in a sloppy way and nobody actually proofread it.

I think Sergey Brin used to do the same thing (or maybe it was Larry Page). I remember reading that in some google court case emails and thinking, the show Silicon Valley wasn't even remotely exaggerating.

InfiniteVortex
0 replies
18h17m

If I received a legitimate job recruitment email for an AI role in all-lowercase I would put it in the spam folder lol. sam altman typing in all lowercase letters shouldn't influence people to do the same in semi-professional environments & situations. I think Altman is trying to appear casual and friendly and to attract the zoomer market by typing in all lowercase; it's just my speculation and perhaps I'm over-reading into it.

Cadwhisker
0 replies
17h8m

If I see a job application with spelling or grammar mistakes in it, then it's a huge red flag; it tells me that this person does not care about accuracy or they don't check their work very well. These are very important attributes to have in engineering.

If you see it in a job advert, I'd assume the same for the people who are doing the hiring.

01100011
0 replies
15h16m

lowercase has been used by propeller heads and hackers since.. idk.. the 80s? some of us just liked it better that way.

tdeck
0 replies
13h48m

It looks especially odd when the text contains other "formalities" like semicolons, or writing the phrase "full stop".

Aside: "full stop" is the Commonwealth English way of saying "period" so it seems like an affectation to see an American using it.

atleastoptimal
0 replies
18h11m

it’s the same as tech's lax dress code, super high salaries, benefits, and the overall relaxed “we can get away with it because we’re both high status and enlightened” attitude that operates as a signal of their assumed superiority.

3abiton
0 replies
19h45m

He's on the board of AI safety too. Now I feel protected.

user_7832
11 replies
18h26m

I really wish there were some simple calculations that could be shown for how posts are ranked. For example, post A has x upvotes, y comments, is z minutes old and is therefore rank 2. Post B has these values, while C is here. Hence this post went down the front page quickly.

It's not that I don't trust the mods explicitly, it's just that showing such numbers (if they exist) would be helpful for transparency.
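
For what it's worth, the old published Arc source for HN had roughly score = (points - 1) / (age_in_hours + 2)^1.8 at its core, with moderation penalties, flags, and other unpublished adjustments layered on top. A minimal sketch of just that public part (the function name and numbers are only illustrative):

  def rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
      # Classic published gravity formula: votes push a post up, age pulls it down.
      return (points - 1) / (age_hours + 2) ** gravity

  # Post A: 120 points, 3 hours old; Post B: 300 points, 9 hours old.
  print(rank_score(120, 3))  # ~6.6
  print(rank_score(300, 9))  # ~4.0 -- the older post ranks lower despite having more points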

casefields
5 replies
18h12m

People are always interested in and fascinated by the algorithm whenever it comes up. Dang makes the (correct) assertion that people will much more easily game it if they know the intricacies. PG always churlishly jumps in to say there’s nothing interesting about it and any discussion of it is boring.

Pretty asinine response, because I work in Hollywood and each studio lot has public tours giving anyone who wants one a glimpse behind the curtain. On my shows, we’ve even allowed those people to get off the studio golf cart to peek inside at our active set, even answering questions they have about what they see, which sometimes reveals Hollywood trickery.

I’m sure there’s tons of young programmers that would love to see and understand how such a long-lasting great community like this one persists.

heavyset_go
1 replies
12h39m

Dang makes the (correct) assertion that people will much more easily game it if they know the intricacies.

Which is interesting, because it's sacrilege to insinuate that it's being gamed at all.

dang
0 replies
12h23m

It's not sacrilege, it's just that people rarely have any basis for saying this beyond just it kind of feels that way based on one or maybe two datapoints, and feeliness really doesn't count. We take real abuse seriously and I've personally put hundreds (feels like thousands) of hours into that problem over many years - but there has to be some sort of data to go on.

yojo
0 replies
16h19m

I dunno. This is standard practice for things like SEO algos to try to slow down spammers, or risk algos to slow down scammers.

HN drives a boatload of traffic, so getting on the front page has economic value. That means there are 100% people out there who will abuse a published ranking system to spam us.

serf
0 replies
17h12m

wait long enough and the other product will be able to expose the secrets.

future gpt prompt : "Take 200000 random comments and threads from hacker news, look at how they rank over time and make assumptions about how the moderation staff may be affecting what you consume. Precisely consider the threads or comments which have risque topics regarding politics or society or projects that are closely related to Hacker News moderation staff or Y Combinator affiliates."

pvg
0 replies
16h16m

There's a public tour of HN stuff pretty much every day in the moderator comments. The story ranking and moderation gets covered frequently.

loceng
1 replies
18h11m

IMHO HN data should be transparent.

The innovation in detecting patterns would be incredible, and in reality I think it would be best to evolve into allowing user-decided algorithms that they personally subscribe to.

mvdtnz
0 replies
16h57m

The main component of the HN ranking algorithm is sentiment divided by YC holdings in the company in question. We've all seen it.

beepbooptheory
0 replies
17h12m

I really don't care about the "algorithm" here. I think this place is distinguished nicely by the fact that I almost never know how much karma a post or user has. If it was in fact a total dictatorship of a few, posing as some democratic reddit thing, who cares? I'm OK as it is, and these things don't last forever anyway.

All you can really do on the internet is ride the waves of synchronicity where the community and moderation are in harmony, and jump ship when they aren't! Any conceit that some algorithm or innovation or particular transparency will be a cure-all for <whatever it is we want> never seems to pan out; the boring truth is that we are all soft squishy people.

Show me a message board that is ultimately more harmonious and diverse and big as this one!

nwoli
5 replies
17h54m

They’re definitely doing this on comments too. I’ve had comments critical of Altman drop from the top to below the downvoted ones in the past.

suroot
3 replies
17h51m

Wait til you hear what happened to Michael O’Church.

suroot
0 replies
9h43m

Lol.

Long story short MO worked for and knew the circle of VCs that pg knew. MO didn’t like them, they didn’t like him calling them out. pg didn’t like him posting about them on HN.

The rest is history.

Hopefully you can see how an egregious conflict of interest can occur here with sama.

dang
0 replies
12h6m

It's standard moderation on HN to downweight subthreads where the root comment is snarky, unsubstantive, or predictable. Most especially when it is unsubstantive + indignant. This is the most important thing we've figured out about improving thread quality in the last 10 years.

But it doesn't vary based on specific persons (not Sam or anyone else). Substantive criticism is fine, but predictable one-liners and that sort of thing are not what we want here—especially since they evoke even worse from others.

The idea of HN is to have an internet forum—to the extent possible—where discussion remains intellectually interesting. The kind of comments we're talking about tend to choke all of that out, so downweighting them is very much in HN's critical path.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...

treme
0 replies
15h39m

PG is Altman's godfather, more or less. I am disappointed by these OpenAI news stories as of late.

5. Sam Altman

I was told I shouldn't mention founders of YC-funded companies in this list. But Sam Altman can't be stopped by such flimsy rules. If he wants to be on this list, he's going to be.

Honestly, Sam is, along with Steve Jobs, the founder I refer to most when I'm advising startups. On questions of design, I ask "What would Steve do?" but on questions of strategy or ambition I ask "What would Sama do?"

What I learned from meeting Sama is that the doctrine of the elect applies to startups. It applies way less than most people think: startup investing does not consist of trying to pick winners the way you might in a horse race. But there are a few people with such force of will that they're going to get whatever they want.

https://paulgraham.com/5founders.html

tgv
0 replies
11h25m

And today: a post about Johansson's voice was on the front page with quite a high score, and then disappeared. This is not the place to discuss OpenAI.

jsnell
0 replies
15h43m

That post was, as far as I can tell, basically an opinion piece repeating/summarizing stories that had been on the HN frontpage dozens of times. This post is investigative journalism with significant new information.

It should not be surprising that the outcomes are different.

dang
0 replies
12h29m

I didn't see that comment but I did post https://news.ycombinator.com/item?id=40437018 elsewhere in that thread, which addresses the same concerns. If anyone reads that and still has a concern, I'd be happy to take a crack at answering further.

The short version is that users flagged that one plus it set off the flamewar detector, and we didn't turn the penalties off because the post didn't contain significant new information (SNI), which is the test we apply (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...). Current post does contain SNI so it's still high on HN's front page.

Why do we do it this way? Not to protect any organization (including YC itself, and certainly including OpenAI or any other BigCo), but simply to avoid repetition. Repetition is the opposite of intellectual curiosity (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...), which is what we're hoping to optimize for (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...).

I hesitate to say "it's as simple as that" because HN is a complicated beast and there are always other factors, but...it's kind of as simple as that.

redbell
13 replies
10h23m

The amount [and scale] of practices, chaos, and controversies caused by OpenAI since ChatGPT was released is "on par" with the powerful products it has built since... in a negative way!

These are the hottest controversies so far, in chronological order:

  OpenAI's deviation from its original mission (https://news.ycombinator.com/item?id=34979981).
  The Altman's Saga (https://news.ycombinator.com/item?id=38309611).
  The return of Altman (within a week) (https://news.ycombinator.com/item?id=38375239).
  Musk vs. OpenAI (https://news.ycombinator.com/item?id=39559966). 
  The departure of high-profile employees (Karpathy: https://news.ycombinator.com/item?id=39365935 ,Sutskever: https://news.ycombinator.com/item?id=40361128).
  "Why can’t former OpenAI employees talk?" (https://news.ycombinator.com/item?id=40393121).

gdiamos
9 replies
10h0m

Why is AI so dramatic? I just watched mean girls and this is worse.

jnsaff2
1 replies
9h34m

Money. The hype is really strong, the hype might even be justified, insane amounts of money flow in. There is a land grab going on. Blood is in the water, all the sharks are circling.

nicce
0 replies
8h40m

After all that money, nobody can even think of saying that it was wasted. To keep the investment value high and justifiable, they all agree and go along with the hype. Until the end.

ActionHank
1 replies
4h28m

They are just marketing for Microsoft's AI now.

It's all just drama to draw attention and investor money, that's it.

When the inventors leave there is nothing left to do but sell more.

siegecraft
0 replies
3h54m

people are starting to realize they don't have any significant technical advantages over other AI companies. so all they have left is hype or trying to build a boring enterprise business where they sell a bunch of AI services to other large companies, and their lack of experience in that area is showing.

wouldbecouldbe
0 replies
9h7m

It's probably the perceived value & power it has.

They think they are about to change the entire world. And a very large part of the world agrees. (I personally think it's a great tool, but exaggerated.)

But that created a very big power play where people don't act normal anymore and the most power-hungry people come out to play.

viraptor
0 replies
9h18m

Thank you for making me laugh. Seriously, I think working for openai already selected for people who are ok with playing in the grey area. They know they ignore copyright and a few other rules. It's not surprising to me that they would also not be very nice to each other internally.

pjc50
0 replies
9h21m

The best case business pitch is total replacement of all white collar jobs. It's even more a "take over the world" pitch than regular tech companies. Now, quite a lot of that is unrealistic and will never be delivered, but which bit?

AI raises all sorts of extremely non-tech questions about power, which causes all the drama.

Edit: also, they've selected for people who won't ask ethical questions. Thus running into the classic villain problem of building an organization out of opportunistic traitors.

l0b0
0 replies
9h40m

I would like to answer that, but OpenAI could probably spend $100,000 per detractor to crush them and still laugh all the way to the bank.

cheq
0 replies
3h16m

I work for a tech startup doing communication and marketing. I'm sick of engineers sucking Sam's dick and believing every demo they watch, even after seeing what Altman is capable of performing on stage. I'm also sick of the scene trying to sell "AI" (whatever that means) next to everything. We even go out of our way to promise stupid, impossible stuff when we talk about multimodal or generative... It doesn't matter, it needs to sell.

Just to give a sickening example, I was approached by the CEO to fix a very bad deepfake video that some "AI" engineer made with available tools. They asked me to use After Effects and editing to make the lips sync...

On top of that, this industry is driving billions of investment into something that is probably a death sentence for a lot of workers, cultures, and society, and that isn't fixing or helping with our current world problems in ANY other way.

ddalex
0 replies
2h59m

"If you're not letting me play, I shall not play!"

Workaccount2
0 replies
4h24m

The whole media is _clearly_ threatened by AI. Both subjectively, just thinking about it, and objectively, seeing things like Google already rolling out AI summaries of internet content (saving consumers the need to scroll through a PowerPoint's worth of auto-play ads to read a single paragraph's worth of information).

With the breakneck progress of AI over the last year, there has been a clear shift in the media from "Wow, this is amazing (and a little scary)" to "AI is an illegal dumpster fire and needs to be killed; you should stop using it and companies should stop making it."

JCM9
13 replies
18h35m

There’s a recurring pattern here of OpenAI getting caught red handed doing bad things and then being all like “Oh it was just a misunderstanding, nothing more, we’ll get on to fixing that ASAP… nothing to see here…”

It’s becoming too much to just be honest oversights.

rachofsunshine
5 replies
17h18m

It's the correct counter-strategy to people who believe that you shouldn't attribute to malice what could be attributed to stupidity (and who don't update that prior for their history with a particular actor).

And it works in part because things often are accidents - enough to give plausible deniability and room to interpret things favorably if you want to. I've seen this from the inside. Here are two HN threads about times my previous company was exposing (or was planning to expose) data users didn't want us to: [1] [2]

Without reading our responses in the comments, can you tell which one was deliberate and which one wasn't? It's not easy to tell with the information you have available from the outside. The comments and eventual resolutions might tell you, but the initial apparent act won't. (For the record, [1] was deliberate and [2] was not.)

[1] https://news.ycombinator.com/item?id=23279837

[2] https://news.ycombinator.com/item?id=31769601

skybrian
1 replies
16h27m

Maybe I'm misunderstanding, but this seems straightforward: the first link goes to an email that went out announcing a change, which seems pretty deliberate; nobody writes an announcement that they're introducing a bug. The second change doesn't seem to have been announced, which leaves open the possibility that it's accidental.

Although I suppose someone could claim the email was sent by mistake, and some deliberate changes aren't announced.

rachofsunshine
0 replies
15h26m

The people in [2] got an email too. (It just turned out to be an automated one that hadn't been intended.)

dmvdoug
1 replies
16h32m

Well, in this case, you have the CEO saying basically they didn’t know about it until about a month ago and then Vox brings receipts with docs signed by Altman and Friends showing he and others signed off on the policy originally (or at least as of the date of the doc, which is about a year ago for one of them). And we have several layers of evidence from several different directions accumulating and indicating that Altman is (and this is a considered choice of words) a malicious shitbag. That seems to qualify as a pretty solid exception to the general rule that you cite of not attributing to malice etc.

rachofsunshine
0 replies
15h22m

Yeah, but keep in mind he's been in the public eye now for 10-15 years (he started his first company in 2005, joined YC in '11, and became president in '14). If you're sufficiently high profile AND do it for long enough AND get brazen enough about it, it starts to stick, but the bar for that is really high (and by nature that only occurs after you've achieved massive success).

abrichr
0 replies
16h4m

you shouldn't attribute to malice what could be attributed to stupidity

It's worth noting that Hanlon’s razor was not originally intended to be interpreted as a philosophical aphorism in the same way as Occam’s:

The term ‘Hanlon’s Razor’ and its accompanying phrase originally came from an individual named Robert. J. Hanlon from Scranton, Pennsylvania as a submission for a book of jokes and aphorisms, published in 1980 by Arthur Bloch.

https://thedecisionlab.com/reference-guide/philosophy/hanlon...

Hopefully we can collectively begin to put this notion to rest.

throwaway115
2 replies
18h12m

It doesn't matter because they hold all of the cards. It's the nature of power: you can get away with things that you normally couldn't. If you really want OpenAI to behave, you'll support their competitors and/or open source initiatives.

doubloon
0 replies
16h35m

already cancelled my OpenAI account and installed llama3 on my local machine, and have a paid Copilot membership

benreesman
0 replies
17h4m

But their product isn’t really differentiated anymore and has really low switching costs: Opus is better at almost anything than the 4-series (training on MMLU isn’t a capability increase), Mistral is competitive and vastly more operator-aligned, both are cheaper and non-scandal plagued.

Mistral even has Azure distribution.

FAIR is flat open-sourcing competitive models and has a more persuasive high-level representation learning agenda.

What cards? Brand recognition?

ur-whale
0 replies
14h27m

There’s a recurring pattern here of OpenAI getting caught red handed doing bad things and then being all like “Oh it was just a misunderstanding

This is a very standard psychopathic behavior.

They (psychopaths) typically milk the willingness of their victims to accept the apology and move on to the very last drop.

Altman is a high-iq manipulative psychopath, there is a trail of breadcrumb evidence 10 miles long at this point.

Google "what does paul graham think of Sam Altman" if you want additional evidence.

a_wild_dandan
0 replies
17h39m

Seems this Altman fella isn't being consistently candid with us.

StargazyPi
0 replies
17h35m

Yeah, not radiating "consistent candidness" is he?

Aurornis
0 replies
17h8m

This is what “Better to ask forgiveness than for permission” looks like when people start catching on.

It’s one of the startup catchphrases that brings people a lot of success when they’re small and people aren’t paying attention, but starts catching up when the company is big and under the microscope.

ramesh31
8 replies
19h2m

They could have been cool. They could have been 2001 Google. They could have been the number one place any new PhD wanted to work.

But no. The MBAs saw dollar signs, and everything went out the window. They fumbled the early mover advantage, and will be hollowed out by the competition and commodified by the PaaS giants. What a shame.

nicklecompte
3 replies
18h39m

Instead of taking 15 years to drop the "don't be evil" act like Google, the new AI companies did it in two! e/acc, baby!

Always42
2 replies
13h59m

what does e/acc mean?

ecjhdnc2025
0 replies
5h52m

Sheldon Cooper can afford a nicer apartment.

timmg
0 replies
14h24m

It’s not just the MBAs that saw dollar signs. A lot of the engineers and researchers did, too.

ecjhdnc2025
0 replies
6h11m

As much as I love blaming things on MBAs, the culture of OpenAI looks at least as much like what happens when you make a room full of real-life Big Bang Theory nerds very rich by rewarding their fantasies of a world free of balanced human interactions.

We have to look at the reality that the worst excesses of the new Silicon Valley culture aren’t stemming from the adults sent to run the ship anymore, and they aren’t stemming from the nerds those adults co-opt anymore either.

The worst excesses of the new Silicon Valley culture are coming from nerds who are empowered and rewarded for their superpower of being unable to empathise.

And I say that as someone who is back to being almost a hermit. We got here by paying people like us and not insisting we try to stop saying what we think without pausing first to think about how it will be received by people not like us.

It’s not a them-vs-us thing now. It’s us-vs-us.

dcreater
0 replies
14h44m

The sad thing is that this isn't even MBAs. The Bay Area has gotten so infested with hustlers and win-at-all-costs, winner-takes-all YC-mantra tech bros that it seems even a high-minded company like OpenAI isn't immune.

MaxHoppersGhost
0 replies
13h35m

Who are the MBAs that did this? Altman is not an MBA.

zniturah
7 replies
15h7m

Looking forward to a document leak about OpenAI using YouTube data for training their models. When asked if they use it, Murali (CTO) said she doesn't know, which makes you believe with 99% certainty that they are using it.

Dr_Birdbrain
2 replies
11h54m

I would say 100%, simply because there is no other reasonable source of video data

iLoveOncall
1 replies
11h11m

I use multiple websites that have hundreds of thousands of free stock videos that are much easier to label than YouTube videos.

_diyar
0 replies
7h47m

Number of videos are less relevant than the total duration of high-quality videos (quality can be approximated on YouTube with metrics such as view and subscriber count). Also, while YouTube videos are not labelled directly, you can extract signal from the title, the captions, and perhaps even the comments. Lastly, many sources online use YouTube to host videos and embed them on their pages, which probably contains more text data that can be used as labels.

blackeyeblitzar
1 replies
5h36m

To be fair I don’t think Google deserves exclusive rights to contents created by others, just because they own a monopolistic video platform. However I do think it should be the content owner’s right to decide if anyone, including Google, gets to use their content for AI.

Workaccount2
0 replies
4h20m

Any other company can start a video platform. In fact a few have and failed.

Nobody has to use youtube either.

If you want change in the video platform space, either be willing to pay a subscription or watch ads.

Consumers don't want to do either, and hence no one wants to enter the space.

pompino
0 replies
9h47m

I am surprised to see a pro-copyright take on HN :)

optimalsolver
0 replies
12h18m

*Murati

ssklash
7 replies
17h28m

Why anyone trusts Altman or OpenAI with something as societally consequential as AI is beyond me.

ergocoder
1 replies
13h6m

Sam is in a tough position.

OpenAI is worth $100B. At this level, a founder would have been worth $20B at least.

But Sam isn't getting any of that net worth, yet he gets all the bad rep that comes with running a $100B company.

akaru
0 replies
12h51m

[ croc tears ]

Fomite
1 replies
17h14m

I've reached the point where I wouldn't trust Altman with anything more consequential than a lemonade stand.

jojobas
0 replies
14h11m

The thing is, he didn't ask you. He put himself into a position where anyone not giving him money would feel they're missing out. It's very unfortunate that he managed to pull it off.

ycombinator_acc
0 replies
12h29m

It's neither consequential nor AI (which we won't see, our children won't see, their children won't see, etc.), so it seems fine to trust Altman with chatbots.

samaltmanfried
0 replies
12h8m

I can only think of one other tech CEO that managed to become as universally loathed as Altman in so quick a time, and that's Mark Zuckerberg. However even Zuckerberg somehow manages to seem more trustworthy than Altman.

_heimdall
0 replies
13h57m

Well that's the problem, isn't it? The incentives align with trusting Altman to make a return on your investment, nothing more.

Investors don't really care about consequences that don't hit the bottom line prior to an exit. Consumers are largely driven by hype. Throw a shiny object out there and induce FOMO, you'll get customers.

What we don't have are incentives for companies to give a damn. While that can easily lead to a call for even more government powers and regulation, in my opinion we won't get anywhere until we have an educated populace. If the average person either (a) understood the potential risks of actual AI or (b) knew that they didn't understand the risks, we wouldn't have nearly as much money being pumped into the industry.

neglesaks
7 replies
19h0m

I'll make a prediction here: OpenAI will in the coming years turn out just as ruthless and socially damaging as Facebook did.

webdoodle
1 replies
17h47m

Don't forget about Reddit and Twitter. Although they like to call themselves social networks, they are really corporate psyop networks for hire.

neglesaks
0 replies
12h11m

Well put. I agree.

tomcam
1 replies
18h52m

Less of a prediction and more of a descriptor of its current state

neglesaks
0 replies
18h44m

You ain't seen nothin' yet...

web3-is-a-scam
0 replies
17h10m

In the coming years? It pretty much already is

ramesh31
0 replies
18h58m

OpenAI will in the coming years turn out just as ruthless and socially damaging as Facebook did.

They wish. Napster is a more apt analogy.

elevatedastalt
0 replies
18h5m

I know shitting on FB is de rigueur. But honestly Facebook at its peak was really very useful in many ways that OpenAI hasn't ever been.

dontupvoteme
7 replies
19h25m

It's really maddening just how right the board was.

dehrmann
3 replies
15h14m

While true, it doesn't mean they were offering a better alternative.

_heimdall
0 replies
13h56m

I wish more people didn't expect an alternative before getting rid of a bad situation. Sometimes subtraction, rather than replacement, is still the right answer.

Barrin92
0 replies
14h16m

Well, appropriately enough, it was an AI movie that taught us a valuable lesson, namely that sometimes the only correct move is not to play.

It's such an insidious idea that we ought to accept that you can just give up on promises you explicitly made once those rules get in the way of you doing exactly what they were supposed to prevent. That's not anyone else's problem; that was the point! The people who can't do that are supposed to align AI? They can't even align themselves.

0xDEAFBEAD
0 replies
13h35m

I'll bet Emmett Shear would've been a fine CEO.

hehdhdjehehegwv
1 replies
15h35m

The majority of employees didn’t care about a lying CEO or alignment research: they wanted the stock payoffs, and Sam was the person offering them. At the end of the day, that's what the coup came down to.

Now Sam is seen as fucking with said stock, so maybe that isn’t panning out. Amazing surprise.

lucianbr
0 replies
13h7m

It's funny to me to read now about employees of OpenAI being coerced or tricked or whatever. Didn't they threaten to resign en masse a few months ago, in total unquestioned support of Sam Altman? They pretty much walked into it, in my opinion.

That's not saying anything OpenAI or Altman do is excusable, no way. I just feel like there's almost no good guys in this story.

ssnistfajen
0 replies
11h54m

Doesn't really matter at this point because they gave literally zero info to the public or most of the company employees when they fired Altman. Almost no one sided with them because they never attempted to explain anything.

tsunamifury
6 replies
19h28m

Why has the advent of semi-intelligent agents suddenly turned Silicon Valley into a place that hates its own workers? Why has a place that once believed in the mutual benefit between an intelligent worker and a company turned into one that brutalizes or even hates the very creators of this technology?

Where is all this hatred coming from?

tsunamifury
0 replies
18h28m

Yea this has always been a market dynamics issue that it tried to manipulate... but it didn't have a flavor of hatred to it.

tsunamifury
0 replies
18h28m

The notion here is that the ownership class hates that it cannot own the creators of work. And now a new class of intelligence exists that can be fully owned.

This is dark.

davidcbc
1 replies
19h20m

It always hated its workers, it just didn't think it had other options for a long time.

tsunamifury
0 replies
18h30m

Did it hate the market dynamics? I.e., Peter Thiel's thesis that competition is for losers because it drives up prices... therefore the shortage of talent was creating competition, which should be hated?

tomcam
6 replies
18h58m

From Sam Altman:

this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have.

Bullshit. Presumably Sam Altman has 20 IQ points on me. He obviously knows better. I was a CEO for 25 years and no contract was issued without my knowing every element in it. In fact, I had them all written by lawyers in plain English, resorting to all caps and legal boilerplate only when it was deemed necessary.

For every house, business, or other major asset I sold if there were 1 or more legal documents associated with the transaction I read them all, every time. When I go to the doctor and they have a privacy or HIPAA form, I read those too. Everything the kids' schools sent to me for signing--read those as well.

He lies. And if he doesn't... then he is being libeled right and left by his sister.

https://twitter.com/anniealtman108

devbent
2 replies
18h52m

Presumably Sam Altman has 20 IQ points on me.

I've read your posts for years on HN, don't undersell yourself.

Many CEOs don't know what is in their company's contracts, nor do they think about it. While it is laudable that you paid such close attention, the fact is I've met many leaders who have no clue what is in their company's employment paperwork.

cawlfy
1 replies
18h28m

While I agree that there's probably a varying degree of attention paid...

I think this clause is so non-standard for tech that it almost certainly got flagged or was explicitly discussed before being added, so claiming that he didn't know it was there strains credulity badly.

devbent
0 replies
16h16m

I just talked to a neighbor, he said his startup has the exact same clause in their employment contracts!

Huh I should read mine.

jay-barronville
1 replies
18h34m

He lies. And if he doesn't... then he is being libeled right and left by his sister.

https://twitter.com/anniealtman108

You know, it’s always heartbreaking to me seeing family issues spill out in public, especially on the internet. If the things Sam’s sister says about him are all true, then he’s, at the very minimum, an awful brother, but honestly, a lot of it comes across as a bitter or jealous sibling…really sad though.

user_7832
0 replies
18h14m

I think someone mentioned possible mental health conditions that she might have. But in either case it is pure speculation and we're random people on the internet, not legal investigators, for better or worse.

Always42
0 replies
14h2m

Maybe he was too busy being kicked out of the company... /s

kashyapc
6 replies
12h26m

Great, if these documents are credible, this is exactly what I was implying[1] yesterday. Here, listen to Altman say how he is "genuinely embarrassed":

"this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have."

The first thing the above conjures up is the other disgraced Sam (Bankman-Fried) saying "this is on me" when FTX went bust. I bet euros-to-croissants I'm not the only one to notice this.

Some amount of corporate ruthlessness is part of the game, whether we like it or not. But these SV robber barons really crank it up to something else.

[1] https://news.ycombinator.com/item?id=40425735

lexapro
1 replies
8h40m

"this is on me" --> "look at what a great leader I am, taking responsibility for other people's mistakes"

"i've been genuinely embarrassed" --> "yep, totally not my fault actually"

"I should have known" --> "other people fucked this up, and they didn't even inform me"

tejohnso
0 replies
7h8m

Kind of like a humblebrag but for accountability.

aswegs8
1 replies
11h10m

Do you really believe he was genuinely embarrassed? All his public statements lately have been just PR BS. Nothing genuine there.

kashyapc
0 replies
10h52m

No, I don't. That's why I put it in "scare quotes". You wouldn't get that impression had you read my comment I linked above :) — https://news.ycombinator.com/item?id=40425735

I was trying to be a bit restrained in my criticism; otherwise, it gets too repetitive.

gen220
0 replies
3h58m

Patrick Collison interviewed Sam Altman in May 2023 [1]

In the intro, Patrick goes off-script to make a joke about how last year he'd interviewed SBF, which was "clearly the wrong Sam".

I'm eagerly waiting for 2025, when he interviews some new Sam and is able to recycle the joke. :)

[1]: https://www.youtube.com/watch?v=1egAKCKPKCk

fisheuler
0 replies
12h18m

smells like game of thrones.

tptacek
3 replies
18h0m

I'm not following this very closely, but agreements that block employees from selling (private) vested equity are a market term, not something uniquely aggressive OpenAI does. The Vox article calls this "just as important" as the clawback terms, but, obviously, no.

comp_throw7
2 replies
17h35m

agreements that block employees from selling (private) vested equity are a market term

They threatened to block the employee who pushed back on the non-disparagement from participating in tender offers, while allowing other employees to sell their equity (which is what the tender offers are for). This is not a "market term".

tptacek
1 replies
16h29m

Sure. Selectively preventing sales isn't. But it's not uncommon to have blanket prohibitions. You're right, though.

comp_throw7
0 replies
16h16m

Yeah, my impression is that a lot of non-public startups have "secondary market transactions allowed with board approval" clauses, but many of them just default-deny those requests and never have coordinated tender offers pre-IPO.

thereal_tron
3 replies
18h33m

Does someone know why the employees wanted him back so badly? Must be very few employees actually upset with him and his way of doing things.

ssnistfajen
0 replies
11h51m

If Sam didn't get hired back after the firing, there was a good chance OpenAI would implode and that would be bad news for employee equity. Plus, the board didn't give out any information that could've convinced anyone to side with them. The drama about exit documents and superalignment research appears to have been contained in relatively small circles and did not circulate company-wide until they became public.

keyle
0 replies
18h8m

They want to get rich. They believe it will lead them to it.

eschaton
0 replies
16h7m

I recall that only some wanted him back, and the split was product/research—the “let’s get rich!” types wanted him back, the “let’s do AI!” types adamantly didn’t.

jobs_throwaway
3 replies
19h37m

I bet similar claw-back clauses are waaay more common than many on this thread would imagine at private co's. I've always been under the impression that 'vested equity' doesn't mean ~anything until you actually see liquidity. The company can generally fuck you before that point if they choose to. Hope I'm being overly cynical with this take.

solidasparagus
0 replies
18h52m

I have never seen it in quite a few equity agreements I've looked at. What is common is a very short post-termination exercise window that in practice acts as a clawback unless you are financially able and willing to pay the cost/taxes of exercising within (often) 90 days.

And a bunch of not-well-informed employees who didn't understand the consequences of this clause when they originally signed

aledalgrande
0 replies
18h53m

They don't need clawback. They can just dilute the f out of you. Which is what happens most of the time anyways

NotSammyHagar
0 replies
19h32m

It sure means a lot more after liquidity, but big successful companies like SpaceX do have a market for selling pre-IPO options or shares or whatever.

brcmthrowaway
3 replies
17h42m

What is up with the allegations of Annie Altman?

Something doesn't smell right

newnwme
2 replies
14h24m

Well, it is wrong to disparage someone that is innocent until proven otherwise. Even if you disagree with them and think they are a snake-oil salesman.

That does not mean you should not hear someone out. As far as I am aware Annie said Sam and their brother molested her as a kid. He claims otherwise, and deflects with “she is a drug addict” (heavily paraphrasing here). Lots of talk of how her trust was broken, and it is impossible to get justice against someone so rich and powerful, etc. where sama’s camp claim it is a money grab and there is zero proof. A sticky wicket.

Now, whether all these “new” revelations (honestly never thought Sam was honest) help support her claims is up to you. Just wanted to add some context for those unaware. Not accusing anyone.

brcmthrowaway
0 replies
12h10m

Those are shocking allegations. The real question is why Sam, as a wealthy man, isn't able to support his family.

Uptrenda
0 replies
11h0m

Not gonna lie, I think it's shady as fuck that a new account registers to post this one comment...

af3d
3 replies
14h26m

It just seems petty, refusing to sign a simple document agreeing not to trash your former employer (with whom you intend to continue to benefit from a shared interest in said company). It wasn't as if Altman was threatening to take back equity. Little more than a "just be nice, OK," and yet somehow that is asking too much?

TechDebtDevin
2 replies
13h57m

Damn, is this Sammy's Sock or do you just have elementary level reading comprehension?

af3d
1 replies
11h55m

The latter, apparently. TBH I answered after hearing about the story elsewhere and it just struck me as fairly benign for an employer to ask as much from a (soon-to-be) former employee. I am not a lawyer anyway so I honestly have no idea what the proper legal interpretation might be. I was just commenting on what seems to be the unfortunate state of human affairs these days. It just feels like people are so much more prone to go on the offensive over what amounts to a simple request for civility. Of course down here we still honor the humble handshake. Maybe that is the difference?

TechDebtDevin
0 replies
5h39m

I see why you'd assume, I deal with annoying young people who bitch about working 4 hours a day, annoying I know. There's still no excuse for being almost the same type of annoying. Get good. Idgaf if you're not a lwayer I just have greater than 5th grade reading level. You clearly displayed an annoying level of person. The type of annoying that encourages the type of genocides I sympathize with. The aware but dumb. Read.

If you're autistic, have an extra chromosome, or will admit they are genuinely dumb. I'll apologize. But otherwise, nah.

Havoc
3 replies
18h44m

I find it hard to believe that Sam didn’t know about something that draconian in something as sensitive as NDAs that affect equity.

He’s not exactly new to this whole startup thing and getting equity right is not a small part of that

WiSaGaN
2 replies
15h54m

He was obviously lying, and he probably also knew people would not believe it. I just don't know why he still chose to do it.

mirekrusin
0 replies
13h53m

He's not candid.

gdiamos
0 replies
9h50m

Founders/CEOs don't lose track of equity or contracts around it. Every 1/10 of a percent is tracked and debated with investors.

manlobster
2 replies
14h27m

Do OpenAI employees actually get equity in the company (e.g. options or RSUs)? I was under the impression that the company awards "profit units" of some kind, and that many employees aren't sure how they work.

notachatbot1234
0 replies
11h34m

many employees aren't sure how they work.

Why aren't they simply asking their product?

jiggawatts
2 replies
17h59m

I've learned to interpret anything Sam Altman says as if an Aes Sedai said it. That is: every word is true, but it leads the listener into making false assumptions.

Even if in this specific instance he means well, it's still quite entertaining to interpret his statements this way:

"we have never clawed back anyone's vested equity"

=> But we can and will, if we decide to.

"nor will we do that if people do not sign a separation agreement"

=> But we made everyone sign the separation agreement.

"vested equity is vested equity, full stop."

=> Our employees don't have vested equity, they have something else we tricked them into.

"there was a provision about potential equity cancellation in our previous exit docs;"

=> And also in our current docs.

"although we never clawed anything back"

=> Not yet, anyway.

"the team was already in the process of fixing the standard exit paperwork over the past month or so."

=> By "fixing", I don't mean removing the non-disparagement clause, I mean make it ironclad while making the language less controversial and harder to argue with.

"if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too."

=> We'll fix the employee, not the problem.

"very sorry about this."

=> Very sorry we got caught.

dataflow
1 replies
14h56m

We're removing nondisparagement clauses from our standard departure paperwork

How would you interpret this part?

and we're releasing former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual.

This is interesting - was it mutual for most people?

jiggawatts
0 replies
10h50m

> We're removing nondisparagement clauses from our standard departure paperwork

"We're replacing them with even more draconian terms that are not technically nondisparagement clauses"

> and we're releasing former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual.

"We offered some employees $1 in exchange for signing up to the nondisparagement clause, which technically makes it a binding contract because there was an exchange of value."

ein0p
2 replies
18h49m

Protip: you can’t negotiate terms after you agree to them.

solidasparagus
0 replies
17h50m

You absolutely can, you just better have the leverage necessary

rrr_oh_man
0 replies
9h3m

You’d be surprised

andy_ppp
2 replies
8h54m

I think it’s time to cancel that Chat GPT subscription and move to something else. I am tired of the arrogance of these companies and particularly their narcissistic leaders who constantly want to make themselves the centre of the piece. It’s absolutely ridiculous to run a company as if you’re the lead in a contemporary drama.

Symmetry
1 replies
4h8m

Anthropic was founded by ex-OpenAI employees who were concerned with the way it was being run, and their language models are comparable: better for some things, worse for others. I also canceled my ChatGPT subscription, and I will say I'll miss the GPT-4o multi-modal features.

andy_ppp
0 replies
4h0m

I was thinking of giving Gemini a try; one thing I’m pretty certain of is that Demis Hassabis is consistently candid.

aagha
2 replies
17h33m

I don't understand why, whenever you read about something like this, the head of HR at a company like this (just google (head of people|hr|"human resources" openai linkedin) and see the first result) doesn't end up on a public blacklist of bad actors who are knowingly aggressive toward employees!

brown9-2
1 replies
17h17m

Because this isn’t something instituted by the head of HR alone.

aagha
0 replies
2h20m

But it's 100% approved by them.

treme
1 replies
14h24m

PG is Altman's godfather, more or less. I am disappointed by the OpenAI news as of late.

5. Sam Altman

I was told I shouldn't mention founders of YC-funded companies in this list. But Sam Altman can't be stopped by such flimsy rules. If he wants to be on this list, he's going to be.

Honestly, Sam is, along with Steve Jobs, the founder I refer to most when I'm advising startups. On questions of design, I ask "What would Steve do?" but on questions of strategy or ambition I ask "What would Sama do?"

What I learned from meeting Sama is that the doctrine of the elect applies to startups. It applies way less than most people think: startup investing does not consist of trying to pick winners the way you might in a horse race. But there are a few people with such force of will that they're going to get whatever they want.

https://p@ulgraham.com/5founders.html *edited link due to first post getting deleted

brap
0 replies
8h51m

This relationship feels like Michael and Ryan from The Office.

One is a well meaning but very naive older person who desperately wants to be liked by the cool kids, the other is a pretentious young conman who soars to the top by selling his “vision”. Michael is a huge simp for Ryan and thinks of himself as Ryan’s mentor, but is ultimately backstabbed by him just like everyone else.

tivert
1 replies
13h57m

Can't we all just go back to being positive and amazed with OpenAI and its technology? Why does everyone have to be so negative about tech?

flembat
0 replies
13h50m

I also welcome our new AI overlords.

It's not really the tech that is negative it is the humans manipulating it for profit and power, and behaving obnoxiously. The tech is very useful.

tangentstar
1 replies
19h39m

Nothing quite like a contract’s consideration consisting solely of a pre-existing obligation. I wonder what they were thinking with that?

heavyset_go
0 replies
12h49m

I wonder what they were thinking with that?

"Fuck you, poors."

rehitman
1 replies
18h8m

The company that fails at even a simple good-faith gesture in its employee agreements claims it is the only one that can handle AGI, while the government creates regulation to lock out open source.

sschueller
0 replies
12h20m

A company that was honest wouldn't lobby the government to lock out others.

notshift
1 replies
11h37m

So what happened to Daniel Kokotajlo, the ex-OAI employee who made a comment saying that his equity was clawed back? Was it a miscommunication and he was referring to unvested equity, or is Sama just lying?

In the original context, it sounded very much like he was referring to clawed-back equity. I’m trying to find the link.

kurts_mustache
1 replies
19h59m

Another day, another article where Vox is hating on OpenAI.

mateus1
0 replies
19h56m

You’re just shooting the messenger.

dmitrygr
1 replies
19h12m

Going to be hard to keep claiming you didn’t know something, if your signature is on it. I don’t really think a CEO gets to say he didn’t read what he was signing.

dexwiz
0 replies
16h45m

There is another very famous leader on trial for exactly that right now.

cambaceres
1 replies
12h16m

this is on me and one of the few times i've been genuinely embarrassed running openai

This statement seems to suggest that feeling embarrassed by one's actions is a normal part of running a company. In reality, the expectation is that a CEO should strive to lead with integrity and foresight to avoid situations that lead to embarrassment.

pompino
0 replies
9h43m

This statement seems to suggest that feeling embarrassed by one's actions is a normal part of running a company.

It suggests humans make mistakes and sometimes own up to them, which is a good thing.

CEO should strive to lead with integrity and foresight to avoid situations that lead to embarrassment.

There is no human who does this, or are you saying we should turn the CEO role over to AI? :)

blackeyeblitzar
1 replies
18h18m

Everyone is out for Sam Altman, and there are reasons to scrutinize him. But on this issue, it is common for a company's Legal and HR teams to make decisions on language in docs like these (exit docs) entirely on their own. So it is plausible that Sam Altman had no idea that this aggressive language existed. One reason to think the same is true here is that I recall Sam spoke up for employee-friendly equity plans when he was running YC.

jhatemyjob
0 replies
14h27m

Plus there's a million ways you can get screwed out of your equity

arnklint
1 replies
13h39m

I thought freedom of speech was a foundational thing in the US.

But I guess anyone could be silenced with enough economic incentive?

ssnistfajen
0 replies
11h59m

Freedom of speech only applies in regard to public institutions. For private entities/individuals, any contract you willingly sign will reign supreme over whatever this "freedom of speech" thing is, unless there's written law that explicitly forbids the forms of retaliation described within said contracts.

GianFabien
1 replies
16h11m

Equity in an unlisted, non-public company is an IOU scribbled onto a piece of loo paper.

blast
0 replies
12h11m

I'd pay to get me one of them particular IOUs.

wouldbecouldbe
0 replies
9h4m

Who is bullish or bearish on OpenAI?

Now that LLM alternatives are getting better and better, and well-funded competitors are in the race, they don't yet seem to have developed a new, more advanced technology. What's their long-term moat?

thereal_tron
0 replies
18h18m

Imagine if these people, obviously narrow-minded and greedy, gain access to AGI. It really would be a threat to mankind.

surume
0 replies
12h32m

yikes... turns out that lily is actually a venus fly trap...

surfingdino
0 replies
19h14m

It's for the good of humanity... that part of humanity that may not want bad PR.

souvenir
0 replies
18h48m

So disappointing of OpenAI. I hope they'll make things right with all their former employees.

skepticATX
0 replies
19h16m

I don’t believe in the AGI claims, or in X-Risk. But I do think it’s apparent that AI will only become more powerful and ubiquitous. Very concerning that someone like Sam, with a history of dishonesty and narcissism that is only becoming more obvious time, may stand to control a large chunk of this technology.

He can’t be trusted, and as a result OpenAI cannot be trusted.

senderista
0 replies
16h31m

we're deeply sorry we got caught, we need to do better. i take full responsibility for this mistake, i should have ensured all incriminating documents were destroyed.

ps "responsibility" means "zero consequences"

seffect
0 replies
13h21m

Streisand Effect at work

redbell
0 replies
8h52m

..or agreeing not to criticize the company, with no end date

Oh! Free speech is up for trade! We used to hear statements like the above coming from certain political regimes, but this is the first time I've read it in the tech world. Will we live to witness more variations of this behavior on a larger scale?!

High-pressure tactics at OpenAI

That meant the former employees had a week to decide whether to accept OpenAI’s muzzle or risk forfeiting what could be millions of dollars

When ex-employees asked for more time to seek legal aid and review the documents, they faced significant pushback from OpenAI.

“We want to make sure you understand that if you don't sign, it could impact your equity. That's true for everyone, and we're just doing things by the book,”

Although they've been able to build the most capable AI models that could replace a lot of human jobs, they struggle to humanely manage the people behind these models!!

outside1234
0 replies
15h59m

Scam Altman strikes again. And if you don’t believe he knew about this, then you are the fool.

naveen99
0 replies
14h27m

Are there more than 2 former OpenAI employees?

m3kw9
0 replies
4h50m

I feel there is a smear campaign going on to tarnish OpenAI

lhnz
0 replies
6h48m

I'm surprised that an executive or lawyer didn't realise the reputational damage adding these clauses would eventually cause the leadership team.

Were they really stupid enough to think that the amount of money being offered would bend some of the most principled people in the world?

Whoever allowed those clauses to be added and let them remain has done more damage to the public face of OpenAI than any aggravated ex-employee ever could.

lenerdenator
0 replies
5h4m

It's okay everyone. Silicon Valley will save us. Pay no mind to the "mistakes" they've made over the last 60 years.

frednoodle
0 replies
17h41m

AI-native companies seem to bring a new form of working culture. It could be different from the usual tech industry environment.

ecjhdnc2025
0 replies
19h4m

This really is OpenAI's Downing Street Christmas Party week, isn't it.

davidQ123
0 replies
18h14m

It's better to check whether it's true

coahn
0 replies
10h56m

In my third-world country, when they do something unethical they say "everything is in accordance with the law"; here it's "this is on me". Both are very cynical. From the time they went private, it was apparent that this company is unethical, to say the least. Given what it is building, this can be very dangerous, but I think they are more proficient at creating hype than at actually coming up with something meaningful.

boh
0 replies
5h6m

It's funny how finding out about corporate wrongdoing has almost a common ritual attached to it. First, shock and dismay are expressed at the findings; then the company leadership has to say it was a mistake (rather than an obvious strategy they literally signed off on); then we bring up the contradiction. Does this display of ignorance from every side really need to take place? Why bother asking for an explanation? They obviously did the thing they obviously did, and will obviously keep doing as much of it as they can get away with.

blueyes
0 replies
3h56m

Kelsey is an EA and AI doomer who holds a partisan position on this topic. She should recuse herself from the coverage. Previous work that should go on her wall of shame includes having a conversation with her friend Sam Bankman-Fried and then publishing it without having revealed that she was conversing as a reporter. And guess what, FTX creditors will not lose a dime... She engages in panics, and vilifies people, as part of what she sees as her job.

animanoir
0 replies
3h17m

Mr. Altman seems like a quite pedantic and evil person to work with. An absolute psychopath.

ambicapter
0 replies
15h27m

“The team did catch this ~month ago. The fact that it went this long before the catch is on me.”

I love this bullshit sentence formulation that claims both to have known this already (as in, don't worry, we're ALREADY on the case) and to be simultaneously embarrassed that they "just" caught it, a.k.a. "wow, we JUST heard about this, how outRAGEOUS".

almog
0 replies
17h26m

Unfortunately it is unlikely to result in Altman's dismissal but imagine being fired from the same company, twice, in less than 12 months.

aAaaArrRgH
0 replies
12h56m

I tried to delete my ChatGPT account but the confirmation button remained locked. Anyone else have the same issue?

_jab
0 replies
12h27m

OpenAI's terrible, horrible, no good, very bad month only continues to worsen.

It's pretty established now that they had some exceptionally anti-employee provisions in their exit policies to protect their fragile reputation. Sam Altman is bluntly a liar, and his credibility is gone.

Their stance as a pro-artist platform is a joke after the ScarJo fiasco, which clearly illustrates that creative consent was an afterthought. Litigation is assumed, and ScarJo is directly advocating for legislation to prevent this sort of thing in the future. Sam Altman's involvement is again evident from his trite "her" tweet.

And then they fired their "superalignment" safety team for good measure. As if to shred any last measure of doubt that this company is somehow more ethical than any other big tech company in their pursuit of AI.

Frankly, at this point, the board should fire Sam Altman again, this time for good. This is not the company that can, or should, usher humanity into the artificial intelligence era.

Uptrenda
0 replies
17h6m

HN hates cryptocurrencies, but 'equity' to me is even worse than the worst shitcoins. It's an IOU that the company controls (and one people think is a tangible part of their 'compensation'). Just imagine if a company thinks you're about to jump ship and you have equity close to vesting. The company almost has a perverse incentive to fire you to nullify that equity. This gets much, much worse when you know the typical vesting schedules that startups like to use. I know some of you working at the top companies might get your equity vested every month. But in my experience it's much more common to be talking about yearly vesting schedules at startups, where you have to stay for many years to get anything (toy comparison at the end of this comment).

So think about that. They offer you an average-to-low base salary but sweeten the deal with some 'equity', saying that it gives you a stake in the company. Neglecting to mention, of course, how many different ways equity can be invalidated; how a year in tech is basically a lifetime; and how the whole thing is kind of structured to prevent autonomy as an employee. Often founders will use these kinds of offers to gauge 'interest', because surely the people who are willing to take an offer that's backed more by magic-bean equity money (over real money) are truly the ones most dedicated to the company's mission. So not being grateful for such amazing offers would be taken as a sign of offence by most founders (who would prefer to pay in hopes and dreams if they could).

Now... with a shitcoin... even though the price may tank to zero you'll at least end up with a goofy item you own at the end of the day. Equity... not so much.
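
For the vesting-schedule point above, a toy comparison with made-up numbers (the grant size, schedule, and departure date are all assumptions) shows why a yearly schedule leaves so much more on the table:

    # Toy comparison: monthly vesting after a 1-year cliff vs. pure yearly vesting.
    # Hypothetical 4-year grant of 48,000 units, employee leaves after 30 months.
    total_units = 48_000
    months_worked = 30

    # Monthly vesting after a 1-year cliff (common at larger companies)
    monthly_vested = 0 if months_worked < 12 else total_units * months_worked // 48

    # Yearly vesting: units only vest on each work anniversary
    yearly_vested = total_units // 4 * (months_worked // 12)

    print(monthly_vested)  # 30000 units
    print(yearly_vested)   # 24000 units -- the extra 6 months count for nothing

Under the assumed yearly schedule, the hypothetical employee walks away with 20% fewer units despite working the same 30 months.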