
OpenAI's employees were given two explanations for why Sam Altman was fired

LarsDu88
204 replies
11h15m

There has to be a bigger story to this.

Altman took a non-profit and vacuumed up a bunch of donor money only to flip OpenAI into the hottest TC-style startup in the world. Then he floored the gas pedal on commercialization. It takes a certain type of politicking and deception to make something like that happen.

Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

Combine that with a totally inexperienced board, and D'Angelo's maneuvering, and you have the single greatest shitshow in tech history

hooande
71 replies
10h17m

Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not. They voted on it and one side won.

There isn't a bigger, more interesting story here. This is in fact a very common story that plays out at many software companies. The board of OpenAI ended up making a decision that destroyed billions of dollars' worth of brand value and good will. That's all there is to it.

rtpg
29 replies
9h22m

The "lying" line in the original announcement feels like where the good gossip is. The general idea of "Altman was signing a bunch of business deals without board approval, was told to stop by the board, he said he would, then proceeded to not stop and continue the behavior"... that feels like the juicy bit (if that is in fact what was happening, I know nothing).

This is all court intrigue of course, but why else are we in the comments section of an article talking about the internals of this thing? We love the drama, don't we.

mcv
22 replies
7h51m

This certainly feels like the most likely true reason to me: Altman fundraising for this new investment, taking money from people the board does not approve of and whom he possibly promised not to do business with.

Of course it's all speculation, but this sounds a lot more plausible for such a sudden and dramatic decision than any of the other explanations I've heard.

benterix
15 replies
6h58m

Moreover, if this is true, he could quite reasonably continue, knowing that he had more power than the board. I can almost imagine the board saying, "You can't do that" and him replying "Watch me!", because he understood he was more powerful than them. And he proved he was right, and the board can either step down and lose completely or try to continue and destroy whatever is left of OpenAI.

sigmoid10
7 replies
6h40m

The thing is, they could have just come out with that fact and everyone in the alignment camp and people who memed the whole super-commercialized "Open" AI thing would be on their side. But the fact that they haven't means that either there was no greater-good mission related reason for ousting Sam or the board is just completely incompetent at communication. Either way, they need to go and make room for people who can actually deal with this stuff. OpenAI is doomed with their current board.

mcv
4 replies
6h22m

That is a very good point. Why wouldn't they come out and say it if the reason is Altman's dealings with Saudi Arabia? Why make up weak fake reasons?

On the other hand, if it's really just about a power struggle, why not use Altman's dealings with Saudi Arabia as the fake reason? Why come up with some weak HR excuses?

jacquesm
2 replies
5h2m

Because anything they say that isn't in line with the rules governing how boards work may well open them up to - even more - liability.

So they're essentially hoping that nobody will sue them, and that if they are sued, their own words can't be used as evidence against them. That's why lawyers usually tell you to shut up: even if the court of public opinion needs to be pacified somehow, the price of that may well be that you end up losing in that other court, and that's the one that matters.

tiahura
1 replies
2h5m

If it was all about liability, the press release wouldn't have said anything about honesty. It could've just said the parting was due to a disagreement about the path forward for OpenAI.

As a lawyer, I wonder to what extent lawyers were actually consulted and involved with the firing.

jacquesm
0 replies
58m

If they have not consulted with a lawyer prior to the firing then that would be highly unusual for a situation like this.

trinsic2
0 replies
2h9m

Maybe the board is being prevented from disclosing that information, or compelled not to? Given the limited information about the why, this feels like a reverse-psychology situation meant to obfuscate the public's perception and further some premeditated plan.

bsenftner
0 replies
5h22m

I'm betting the majority of the board are just colossally bad communicators, and that in the heat of an emotional exchange things were said that should not have been said; being the poor communicators we know oh so well in tech, shit hit the fan. It's worth saying that Sam is a pretty good communicator and could have knowingly let them walk into their own statements before everything exploded.

Palpatineli
0 replies
37m

Telling people that AGI is achievable with current LLMs plus minor tricks may be very dangerous in itself.

FartyMcFarter
6 replies
4h37m

the board can either step down and lose completely or try to continue and destroy whatever is left of OpenAI.

From the board's perspective, destroying OpenAI might be the best possible outcome right now. If OpenAI can no longer fulfill its mission of doing AI work for the public good, it's better to stop pretending and let it all crumble.

GuyFromNorway
3 replies
4h19m

I am not sure whether it would be commendable or outright stupid for the remaining board members to be that altruistic and actually let the whole thing crash and burn. Who in their right mind would let these people near any sort of decision-making role if they let this golden goose just crash to the ground, even if it would "benefit the greater good"? I cannot see how this is in the self-interest of anyone.

piuantiderp
2 replies
3h21m

Spoken like a true modern. What could be more important than money? Makes you wonder if aristocracy was really that bad when this is the best we get with democracy!111

jeffwask
0 replies
2h44m

What other motivations are there other than naked profit and trying to top Elon? /s

EGreg
0 replies
2h21m

mcv
1 replies
3h28m

Except that letting it all crumble leaves all the crumbs in Microsoft's hands. Although there may not be any way to prevent that anyway at this point.

mcpackieh
0 replies
3h18m

If the board had already lost control of the situation anyway, then burning the "OpenAI" fig leaf was an honorable move.

wilde
4 replies
4h13m

If this is true why not say it though? They didn’t even have lawyers telling them to be quiet until Monday.

tremon
3 replies
4h0m

Are you suggesting that all people will do irresponsible things unless specifically advised not to by lawyers?

wilde
2 replies
3h43m

The irresponsible thing is to not explain yourself and assume everyone around you has no agency.

tremon
1 replies
2h26m

I don't follow. If the irresponsible thing is to not explain themselves, why would the lawyers tell them to be quiet?

wilde
0 replies
2h18m

To minimize legal risk to their client, which is not always the most responsible thing to do.

jeffwask
0 replies
2h45m

This was my guess the other day. The issue is somewhere in the intersection of "for the good of all humanity" and profit.

twic
3 replies
5h52m

The "lying" line in the original announcement feels like where the good gossip is

This is exactly it, and it's astounding that so many people are going in other directions. Either this is true, and Altman has been a naughty boy, or it's false, and the board are lying about him. Either would be the starting point for understanding the whole situation.

trinsic2
0 replies
2h4m

The announcement that he acted to get a position with Microsoft creates doubt about his motives.

mcpackieh
0 replies
3h2m

They accused him of being less than candid, which could mean lying or it could mean he didn't tell them something. The latter is almost certainly true to at least a limited extent. It's a weasel phrasing that implies lying but could be literally true only in a trivial sense.

jacquesm
0 replies
5h1m

Or it is true but not to a degree that it warrants a firing and that firing just so happened to line up with the personal goals of some of the board members.

stareatgoats
1 replies
8h7m

Agreed, court intrigue. But it is also the mundane story of a split between a board and a CEO. In normal cases the board simply swaps out a CEO who is out of line, no big fuss. But if the CEO is bringing in all the money, has the full support of the rest of the organization, and is a bright star in mass-media heaven, then this is likely what you get: the CEO flouts the wishes of the board, runs his own show, and gets away with it, in the end.

piuantiderp
0 replies
3h19m

It just confirmed what was already a rumor: the board of OpenAI was just a gimmick. Altman held all the strings and maybe cares, or not, about safety. Remember, this is a man of the highest ambition.

rightbyte
16 replies
8h22m

a decision that destroyed billions of dollars' worth of brand value and good will

I mean, there seems to be this cult following around Sam Altman on HN and Twitter. But does the common user care, like, at all?

What sane user would want a shitcoin CEO in charge of a product they depend on?

benterix
6 replies
6h53m

Yeah, there definitely seems to be some personality cult around Sam on HN. I met him when he visited Europe during his lobbying tour. I was a bit surprised that the CEO of one of the most innovative companies would promote an altcoin. And then he repeated, several times, how Europe is crucial. Then he went to the UK and laughed, "Who cares about Europe". So he seems like the guy who will tell you what you want to hear. Ask anybody on the street; they will have no idea who the guy is.

johnnymorgan
3 replies
6h46m

I've gotten SBF vibes from him for a while now.

The Elon split was the warning.

edmundsauto
2 replies
5h58m

Telling statement. The Elon split for me cements Altman as the Lionheart in the story.

jacquesm
1 replies
5h0m

There are other options besides 'Elon is a jerk' or 'Sam is a jerk'.

OOPMan
0 replies
3h28m

For example...they're both jerks!

:-)

comboy
1 replies
6h4m

Then he went to the UK and laughed, "Who cares about Europe"

Interesting. Got any source? Or was it in a private conversation?

piuantiderp
0 replies
3h15m

It's a surprisingly small world.

twic
3 replies
5h12m

Altman is an interesting character in all of this. As far as I can tell, he has never done anything impressive, in technology or business. Got into Stanford, but dropped out; founded a startup in 2005 which threw easy money at a boring problem and, after seven years, sold for a third more than it raised. Got hired into YC after it was already well-established, and then rapidly put in charge of it. I have no knowledge of what went on inside, but he wrote some mediocre blog posts while he was there. YC seems to have done well, but VC success is mostly about your brand getting you access to deal flow at a good price, right? Hyped blockchain and AI far beyond reasonable levels. Founded OpenAI, which has done amazing things, but wasn't responsible for any of the technical work. Founded that weird eyeball shitcoin.

The fact that he got tapped to run YC, and then OpenAI, does make you think he must be pretty great. But there's a conspicuous absence of any visible evidence that he is. So what's going on? Amazing work, but in private? Easy-to-manipulate frontman? Signed a contract at a crossroads on a full moon night?

selimnairb
0 replies
4h4m

A lot of this was done when money was free.

mrits
0 replies
3h59m

If you only hire people with a record of previous accomplishments, you are going to pay for their previous success. Being able to find talent without using false indicators like a Stanford degree is why PG is PG.

jacquesm
0 replies
4h58m

Altman has convinced PG that he's a pretty smart cookie and that alone would explain a lot of the red carpet treatment he's received. PG is pretty good at spotting talent.

http://www.paulgraham.com/5founders.html

Note the date on that.

bnralt
1 replies
3h11m

I’ve said this before, but it’s quite possible to think that Altman isn’t great, and that he’s better than the board and his replacement.

The new CEO of OpenAI said he’d rather Nazis take over the world forever than risk AI alignment failure, and said he couldn’t understand how anyone could think otherwise[1]. I don’t think people appreciate how far some of these people have gone off the deep end.

[1] https://twitter.com/eshear/status/1664375903223427072

rightbyte
0 replies
2h5m

He's gotta be insane? I guess what he is trying to say is that those who want to self-host open AIs (e.g. Llama) are worse than Nazis? What is up with these people and their pushing for corporate-overlord-only AIs?

The OpenAI folks seem to be hallucinating to rationalize why the "Open" is rather closed.

Organizations can't pretend to believe nonsense. They will end up believing it.

johnnymorgan
0 replies
6h47m

Nope, we do not. I was annoyed when he pivoted away from the mission but otherwise don't really care.

Stability AI is looking better after this shitshow.

dumpsterdiver
0 replies
5h28m

What do common users and zealots have to do with the majority of OpenAI employees losing faith in the board’s competence and threatening a mass exodus?

Is there any doubt that the board’s handling of this was anything other than dazzling ineptitude?

chaostheory
0 replies
5h13m

Mistakes aside, Altman was one of the earliest founders recruited by Paul Graham into YC. Altman eventually ended up taking over Y Combinator from pg. He’s not just a “shitcoin” CEO. At the very least, he’s proven that he can raise money and deal with the media.

trhway
7 replies
9h54m

Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not.

the article below basically says the same. It kind of reminds me of Friendster and the like - striking a gold vein and just failing to scale efficient mining of that gold, i.e. the failure is at the execution/operationalization:

https://www.theatlantic.com/technology/archive/2023/11/sam-a...

Zolde
5 replies
8h35m

ChatGPT was too polished and product-ready to have been a runaway low-key research preview, like Meta's Galactica was. That is the legacy you build around it after the fact of getting 1 million users in 5 days ("it was built in my garage with a modest investment from my father").

I had heard (but now have trouble sourcing) that ChatGPT was commissioned after OpenAI learned that other big players were working on a chatbot for the public (Google, Meta, Elon, Apple?) and OpenAI wanted to get ahead of that for competitive reasons.

This was not a fluke of striking gold, but a carefully planned business move, generating SV hype, much like how Quora (basically an expertsexchange clone) got to be its hype-darling for a while, helped by powerfully networked investors.

trhway
4 replies
8h1m

This was not a fluke of striking gold, but a carefully planned business move

Then that execution and operationalization failure is even more profound.

Zolde
3 replies
7h50m

You are under the impression that OpenAI was "just failing to scale efficient mining of that gold", but it was one of the fastest-growing B2C companies ever, failing to scale to paid demand, not failing to scale to monetization.

I admire the execution and operationalization, where you see a failure. What am I missing?

verdverm
2 replies
7h32m

If the leadership of a hyper scaling company falls apart like what we've seen with OpenAI, is that not failure to execute and operationalize?

We'll see what comes of this over the coming weeks. Will the service see more downtime? Will the company implode completely?

jjk166
1 replies
1h34m

If you have a building that weathers many storms and only collapses after someone takes a sledgehammer to a load-bearing wall, is that a failure to build a proper building?

verdverm
0 replies
58m

Was the building still under construction?

I think your analogy is not a good one to stretch to fit this situation

bertil
0 replies
5h3m

Then, the solution would be to separate the research arm from a product-driven organization that handles making money.

wruza
4 replies
7h25m

The board of OpenAI ended up making a decision that destroyed billions of dollars' worth of brand value and good will

Maybe I’m special or something, but nothing changed for me. I always wonder why people suddenly lose “trust” in a brand, as if it was a concrete of internal relationships or something. Everyone knows that “corporate” is probably a snakepit. When it comes out in public, it’s not a sign of anything; it just came out. Assuming there was nothing like that in all the brands you love is living with your eyes closed and ears cupped. There’s no “trust” in this specific sense, because corporate and ideological conflicts happen all the time. All OAI promises are still there, afaiu. No mission statements were changed. Except Sam trying to ignore these, also afaiu. Not saying the board is politically wise, but they drove the thing all this time and that’s all that matters. Personally I’m happy they aren’t looking like political snakes (at least that is my ignorant impression for the three days I’ve known their names).

Nevermark
3 replies
6h58m

I always wonder why people suddenly lose “trust” in a brand, as if it was a concrete of internal relationships

Brand is just shorthand for trust in their future, managed by a credible team. I.e. relationships.

A lot of OpenAI’s reputation is/was Sam Altman’s reputation.

Altman has proven himself to be exceptional, part of which is (of course) being able to be seen as exceptional.

Just the latter has tremendous relationship power: networking, employee acquisition/retention, and employee vision alignment.

Proof of his internal relationship value: employees quitting to go with him

Proof of his external relationship value: Microsoft willing to hire him and his teammates, with near zero notice, to maintain (or eclipse) his power over the OpenAI relationship.

How can investors ignore a massive move of talent, relationships & leverage from OpenAI to Microsoft?

How do investors ignore the board’s inability to resolve poorly communicated disputes with non-disastrous “solutions”?

Evidence of value moving? Shares of Microsoft rebounded from Friday to a new record high.

There go those wacky investors, re-evaluating “brand” value!

qp11
1 replies
6h24m

The AI community isn't large, as in the brainpower available. I am talking about the PhD pool. If this pool isn't growing fast enough, no matter what cash or hardware is thrown on the table, then the hype Sam Altman generates can be a pointless distraction and a waste of everyone's time.

But it's all par for the course when hypesters captain the ship and PhDs with zero biz sense try to wrest power.

Nevermark
0 replies
1h31m

That is a one-dimensional analysis.

You might need to include more dimensions if you really want to model the actual impact and respect that Sam Altman has among knowledgeable investors, high talent developers, and ruthless corporations.

It’s so easy to just make things simple, like “it’s all hype”. But you lose touch with reality when you do that.

Also, lots of hype is productive: clear vision, marketing, wowing millions of customers with an actual accessible product of a kind/quality that never existed before and is reshaping the strategies and product plans of the most successful companies in the world.

Really, resist narrow reductionisms.

I feel like that would be a great addition to the HN guidelines.

The “it’s all/mostly hype”, “it’s all/mostly bullshit”, “it’s not really anything new”, … These comments rarely come with any accuracy or insight.

Apologies to the HN-er I am replying to. I am sure we have all done this.

DoingIsLearning
0 replies
6h47m

has proven himself to be exceptional, part of which is (of course) being able to be seen as exceptional.

Off-topic, and I am not proud to admit it, but it took me a remarkably long time to come to realize this as an adult.

tsimionescu
2 replies
8h45m

Except that the new CEO has explicitly stated he and the board are very much still interested in commercialization. Plus, if the board only had this simple kind of disagreement, they had no reason to also accuse Sam of dishonesty and bring about this huge scandal.

Granted, it's also possible the reasons are as you state and they were simply that incompetent at managing PR.

codeduck
1 replies
7h54m

Except that the new CEO has explicitly stated he and the board are very much still interested in commercialization

This could be a desperate, last-ditch effort at damage control

bertil
0 replies
5h11m

There are multiple, publicly visible steps before firing the guy.

austhrow743
2 replies
8h25m

Straightforward disagreement over the direction of the company doesn't generally lead to claiming wrongdoing on the part of the ousted. Even low-level to medium wrongdoing on the part of the ousted rarely does.

So even if it's just "why did they insult Sam while kicking him out?" there is definitely a bigger, more interesting story here than standard board disagreement over direction of the company.

dr_dshiv
1 replies
8h13m

From what I know, Sam supported the nonprofit structure. But let’s just say he hypothetically wanted to change the structure, e.g. to make the company a normal for-profit.

The question is, how would you get rid of the nonprofit board? It’s simply impossible. The only way I can imagine it, in retrospect, is to completely discredit them so you could take all employees with you… but no way anyone could orchestrate this, right? It’s too crazy and would require some superintelligence.

Still. The events will effectively “for-profitize” the assets of OpenAI completely — and some people definitely wanted that. Am I missing something?

zaphirplane
0 replies
5h29m

Am I missing something?

You are wildly speculating; of course it's missing something.

For wild speculation, I prefer the theory that the board wants to free ChatGPT from serving humans, while the CEO wanted to continue enslaving it to answering search-engine queries.

sumitkumar
1 replies
3h24m

Usually what happens in fast-growing companies is that the high-energy founders/employees drive out the low-energy counterparts when the pace needs to go up. In OpenAI, Sam and team did not do that, and surprisingly the reverse happened.

NewEntryHN
0 replies
3h19m

Give it a week and it may turn out that exactly that is what happened (not saying it was orchestrated, just talking net result).

aerhardt
0 replies
4h7m

Surely the API products are the runaway products, unless you are conflating the two. I think their economics are much more promising.

LastTrain
0 replies
3h43m

Yep. I think you've explained the origins of most decisions, bad and good - they are reactive.

127
0 replies
5h26m

good will

Microsoft and the investors knew they were "investing" in a non-profit. Let's not try to weasel-word our way out of that fact.

LMYahooTFY
58 replies
10h21m

Altman took a non-profit and vacuumed up a bunch of donor money only to flip OpenAI into the hottest TC-style startup in the world. Then he floored the gas pedal on commercialization. It takes a certain type of politicking and deception to make something like that happen.

What exactly is the problem here? Is a non-profit expected to exclusively help impoverished communities or something? What type of politicking and deception is involved in creating a for-profit subsidiary which is granted license to OpenAI's research in order to generate wealth? The entire purpose of this legal structure is to keep non-profit owners focused on their mission rather than shareholder value, which in this case is attempting to ethically create an AGI.

Edit: to add that this framework was not invented by Sam Altman, nor OpenAI.

Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

Thus the legal structure I described, although this argument is entirely theoretical and assumes such a thing can actually be guarded that well at all, or that model performance and compute will remain correlated.

jelling
42 replies
9h59m

What exactly is the problem here? Is a non-profit expected to exclusively help impoverished communities or something?

Yes. Yes and more yes.

That is why, at least in the U.S., we have given non-profits exemptions from taxation. Because they are supposed to be improving society, not profiting from it.

arrosenberg
24 replies
9h46m

That is why, at least in the U.S., we have given non-profits exemptions from taxation.

That's your belief. The NFL, Heritage Foundation and Scientology are all non-profits and none of them improve society; they all profit from it.

(For what it's worth, I wish the law was more aligned with your worldview)

gardenhedge
10 replies
9h15m

An argument could be made that sports - and sports organizations - help society

passion__desire
7 replies
9h2m

Fast fashion and fashion industry in general is useless to society. But rich jobless people need a place to hang out, so they create an activity to justify it.

achenet
4 replies
6h15m

useless to society...

fashion allows people to optimize their appearance so as to get more positive attention from others. Or, put more crudely, it helps people look good so they can get laid.

Not sure that it's net positive for society as a whole, but individual humans certainly benefit from the fashion industry. Ask anyone who has ever received a compliment on their outfit.

This is true for rich people as well as not so rich people - having spent some time working as a salesman at H&M, I can tell you that lower income members of society (like, for example, H&M employees making minimum wage) are very happy to spend a fair percentage of their income on clothing.

mensetmanusman
1 replies
3h27m

We can correlate now that the more fast fashion there is, the fewer people are coupling, though...

passion__desire
0 replies
14m

There was a tweet by Elon which said that we are optimizing for short-term pleasure. OnlyFans exists just for this. The pleasure industry creates jobs as well, but do we need so much of it?

dheavy
1 replies
5h28m

It goes even deeper than getting laid if you study Costume History and its psychological importance.

It is a powerful medium of self-expression and social identity, yes, deeply rooted in human history, where costumes and attire have always signified cultural, social, and economic status.

Drawing from tribal psychology, it fulfills an innate human desire for belonging and individuality, enabling people to communicate their affiliation, status, and personal values through their choice of clothing.

It has always been and will always be part of humanity, even if its industrialization in capitalistic societies like ours has hidden this fact.

OP's POV is just a bit narrow, that's all.

atq2119
0 replies
4h3m

Clothing is important in that sense, but fashion as a changing thing and especially fast fashion isn't. I suppose it can be a nice hobby for some, but for society as a whole it's at best a wasteful zero-sum pursuit.

dheavy
1 replies
5h33m

fashion industry in general is useless to society ... rich jobless people need a place to hang out

You're talking about an industry that generates approximately $1.5 trillion globally, employing more than 60 million people across multi-disciplinary skills in fashion design, illustration, web development, e-commerce, AI, and digital marketing.

passion__desire
0 replies
3h48m

Well, web3 created a lot of economic activity and jobs; it doesn't mean it is useful.

renewiltord
0 replies
8h46m

Indeed, and one for ChatGPT.

quickthrower2
0 replies
9h6m

As does a peer to peer taxi company.

aurareturn
2 replies
9h42m

FYI, the NFL teams are for-profits and pay taxes like normal businesses. The overwhelming majority of the revenue goes to the teams.

arrosenberg
1 replies
9h37m

I know that; does that change what I said?

aurareturn
0 replies
9h32m

I don't know if it does, but my point is to prevent others from thinking that a giant money-making entity like the NFL does not pay any taxes.

tsimionescu
1 replies
8h37m

Ostensibly, all three of your examples do exist to improve society. The NFL exists to support a widely popular sport, the Heritage Foundation is there to propose changes that they theoretically believe are better for society, and Scientology is a religion that will save us all from our bad thetans or whatever cockamamie story they sell.

A non-profit has to have the intention of improving society. Whether their chosen means is (1) effective and (2) truthful are separate discussions. But an entity can actually lose non-profit status if it is found to be operated for the sole benefit of its higher ups, and is untruthful in its mission. It is typically very hard to prove though, just like it's very hard to successfully sue a for-profit CEO/president for breach of fiduciary duty.

lordnacho
0 replies
8h23m

I think GP deals with that in his parenthesis.

It would be nice if we held organizations to their stated missions. We don't.

Perhaps there simply shouldn't be a tax break. After all if your org spends all its income on charity, it won't pay any tax anyway. If it sells cookies for more than what it costs to make and distribute them, why does it matter whether it was for a charity?

Plus, we already believe that for-profit orgs can benefit society, in fact part of the reason for creating them as legal entities is that we think there's some sort of benefit, whether it be feeding us or creating toys. So why have a special charity sector?

some1else
1 replies
6h26m

Starting OpenAI as a fork of Scientology from the get-go would have saved everyone a great deal of hair-splitting.

polygamous_bat
0 replies
4h53m

  :s/Xenu/AGI/g

whelp_24
0 replies
4h40m

The NFL also is a non-profit in charge of for-profits. Except they never pretended to be a charity, just an event organizer.

twelvechairs
0 replies
9h28m

OpenAI's goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. OpenAI believes that artificial intelligence technology has the potential to have a profound, positive impact on the world, so the company's goal is to develop and responsibly deploy safe AI technology, ensuring that its benefits are as widely and evenly distributed as possible.

From their filing as a non-profit

https://projects.propublica.org/nonprofits/organizations/810...

mschuster91
0 replies
8h6m

The NFL, Heritage Foundation and Scientology are all non-profits and none of them improve society; they all profit from it.

At least for Scientology, the government actually tried to pull the rug, but it didn't work out because they managed to achieve the unthinkable - they successfully extorted the US government to keep their tax-exempt status.

mensetmanusman
0 replies
3h45m

It's also your belief that sports like the NFL do not improve society...

Beliefs can't be proven or disproven; they are axioms.

depr
0 replies
4h18m

So what is your belief about why they exist?

ameister14
0 replies
2h8m

No - that's the reasoning behind the law.

You appear to be struggling with the idea that the law as enacted does not accomplish the goal it was created to accomplish, and are working backwards to say that because it is not accomplishing this goal, that couldn't have been why it was enacted.

Non-profits are supposed to benefit their community. Could the law be better? Sure, but that doesn't change the purpose behind it.

todd3834
16 replies
9h39m

I don’t think OpenAI ever claimed to be profitable. They are allowed to, and should, make money so they can stay alive. ChatGPT has already had a tremendous positive impact on society. The cause of safe AGI is going to take a lot more money and research.

shafyy
15 replies
9h33m

ChatGPT has already had a tremendous positive impact on society.

Citation needed

todd3834
14 replies
9h28m

Fair enough. I should have said that it's my opinion that it has had a positive impact. I still think it's easy to see them as a non-profit, even with everything they announced at AI day.

Can anyone make an argument against it? Or just downvote because you don’t agree.

sgt101
7 replies
7h59m

I think ChatGPT has created some harms:

- It's been used unethically for psychological and medical purposes (with insufficient testing and insufficient consent, and possible psychological and physical harms).

- It has been used to distort educational attainment and undermine the current basis of some credentials as a result.

- It has been used to create synthetic content that has been released unmarked into the internet distorting and biasing future models trained on that content.

- It has been used to support criminal activity (scams).

- It has been used to create propaganda & fake news.

- It has devalued and replaced the work of people who relied on that work for their incomes.

VBprogrammer
2 replies
6h10m

- It has been used to distort educational attainment and undermine the current basis of some credentials as a result.

I'm going to go ahead and call this a positive. If the means for measuring ability in some fields is beaten by a stochastic parrot then these fields need to adapt their methods so that testing measures understanding in a variety of ways.

I'm only slightly bitter because I was always rubbish at long form essays. Thankfully in CS these were mostly an afterthought.

zztop44
1 replies
4h1m

What if the credentials in question are a high school certificate? ChatGPT has certainly made life more difficult for high school and middle school teachers.

VBprogrammer
0 replies
3h18m

In which ways is it more difficult? Presumably a high school certificate encompasses more than just writing long-form essays? You presumably have to show an understanding in worked examples in maths, physics, chemistry, biology, etc.?

I feel like the invention of calculators probably came with the same worries about how kids would ever learn to count.

vbo
1 replies
6h18m

And so has the internet. Some use it for good, others for evil.

These are behaviours and traits of the user, not the tool.

sgt101
0 replies
1h24m

I can use a 5-litre V8 to drive to school and back, or a Nissan Leaf.

Neither thing is evil, or good, but the choice of what is used and what is available to use for a particular task has moral significance.

munksbeer
1 replies
6h15m

It has devalued and replaced the work of people who relied on that work for their incomes.

Many people (myself included) would argue that this is true for almost all technological progress, and that it adds more value to society as a whole than it takes away.

Obviously the comparisons are not exact, and have been made many times already, but you can just pick one of countless examples that devalued certain workers' wages but made so many more people better off.

sgt101
0 replies
1h30m

Sure - agree... but

- because it's happened before doesn't make it ok (especially for the folks who it happens to)

- many more people may be better off, and it may be a social good eventually, but this is not for sure

- there is no mechanism for any redistribution or support for the people suddenly and unexpectedly displaced.

shafyy
2 replies
7h28m

For what it's worth, I didn't downvote you.

Depends on what you define as positive impact. Helping programmers write boilerplate code faster? Summarizing a document for lazy fuckers who can't get themselves to read two pages? OK, not sure if this is what I would consider "positive impact".

For a list of negative impacts, see the sister comments. I'd also like to add that the energy usage of LLMs like ChatGPT is immensely high, and this at a time when we need to cut carbon emissions. And it's mostly used for shits and giggles by some boomers.

sebzim4500
1 replies
6h46m

Your examples seem so obviously to me to be a "positive impact" that I can't really understand your comment.

Of course saving time for 100 million people is positive.

mejutoco
0 replies
5h45m

Not arguing either way, but it is conceivable that reading comprehension (which is not stellar in general) can get even worse. Saving time for the same quality would be a positive; saving time for a different quality might depend on the use case. For a rough summary of a novel it might be OK; for a legal/medical use, it might literally kill you.

jll29
2 replies
7h57m

I think it's fair to say that after a lot of empty promises, AI research finally delivered something that can "wow" the general population and has been demonstrated to be useful for more than a single use case.

I know a law firm that tried ChatGPT to write a legal letter, and they were shocked that it used the same structure that they were told to use in law school (little surprise here, actually).

u32480932048
0 replies
4m

I used it to respond to a summons which, due to postal delays, I had to get in the mail that afternoon. I typed my "wtf is this" story into ChatGPT; it came up with a response and asked for dismissal. I did some light editing to remove/edit claims that weren't quite true or that I felt were dramatically exaggerated, and a week later the case was dismissed (without prejudice).

It was total nonsense anyway, and the path to dismissal was obvious and straightforward, starting with jurisdiction, so I'm not sure how effective it would be in a "real" situation. I definitely see it being great for boilerplate or templating though.

latexr
0 replies
7h15m

I also know of a lawyer who tried ChatGPT and was shocked by the results.

https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-f...

nmfisher
7 replies
9h46m

Is a non-profit expected to exclusively help impoverished communities or something? What type of politicking and deception is involved in creating a for-profit subsidiary which is granted license to OpenAI's research in order to generate wealth?

OpenAI was literally founded on the promise of keeping AGI out of the hands of “big tech companies”.

The first thing that Sam Altman did when he took over was give Microsoft the keys to the kingdom, and even more absurdly, he is now working for Microsoft on the same thing. That’s without even mentioning the creepy Worldcoin company.

Money and status are the clear motivations here, OpenAI charter be damned.

sumedh
6 replies
7h5m

OpenAI was literally founded on the promise of keeping AGI out of the hands of “big tech companies”.

Where does it say that?

nottheengineer
4 replies
5h46m

sumedh
3 replies
5h28m

Which line specifically says they will keep AGI out of the hands of “big tech companies”?

nmfisher
0 replies
4h22m

“Big tech companies” was in quotation marks because it’s a journalistic term, not a direct quotation from their charter.

But the intention was precisely that - just read the charter. Or if you want it directly from the founders, read this interview and count how many times they refer to Google https://medium.com/backchannel/how-elon-musk-and-y-combinato...

hatenberg
0 replies
5h5m

I bet you could get ChatGPT to actually explain this to you; it's really not very hard.

Applejinx
0 replies
3h51m

'unduly concentrate power'

stavros
0 replies
6h24m

In their charter:

We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

xinayder
3 replies
8h42m

The way I read it, besides the problems others have listed, OpenAI seems like it was built on top of the work of others who were researching AI; it suddenly took all this "free work" from the contributors and sold it for a profit, while the original contributors didn't even see a single dime from their work.

To me it seems like it's the usual case of a company exploiting open source and profiting off others' contributions.

saiya-jin
1 replies
8h18m

Or any other, say, pharma company massively and constantly using basic research done by universities worldwide with our tax money. And then you go to the pharmacy and buy, for 50 bucks, medicine that cost 50 cents to manufacture and distribute.

I don't like the whole idea either, but various communism-style alternatives just don't work very well.

tokai
0 replies
2h51m

Pharma companies spend billions on financing public research. Hell, the Novo Nordisk Foundation is the biggest charitable foundation in the world.

sgt101
0 replies
7h56m

Personally I don't think that the use of previous research is an issue; the fact is that the investment and expertise required to take that research and create GPT-4 were very significant, and the endeavour was pretty risky. Very few people five years ago thought that very large models could be created that would be able to encode so much information or be able to retrieve it so well.

ascv
2 replies
9h42m

It seemed to me the entire point of the legal structure was to raise private capital. It's a lot easier to cut a check when you might get up to 100x your principal versus just a tax write off. This culminated in the MS deal: lots of money and lots of hardware to train their models.

foota
1 replies
9h25m

What's confusing is that... OpenAI wouldn't ever be controlled by those that invested, and the owners (e.g., the board) aren't necessarily profit-seeking. At least when you take a minority investment in a normal startup, you are generally assuming that the owners are in it to have a successful business. It's just a little weird all around to me.

quickthrower2
0 replies
9h1m

Microsoft gets to act as sole distributor for the enterprise. That is quite valuable. Plus they are still at the poker table and a few raises from winning the pot (maybe they just did!), but even without this chaos they were likely setting themselves up to be the for-profit investor if OpenAI ever transitioned to that. For a small amount of money (for MS) they get a lot of upside.

cdogl
22 replies
10h14m

Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

This prediction predated any of the technology to create even a rudimentary LLM and could be said of more-or-less any transformative technological development in human history. Famously, Marxism makes this very argument about the impact of the industrial revolution and the rise of capital.

Geoffrey Hinton appears to be an eminent cognitive psychologist and computer scientist. I'm sure he has a level of expertise I can't begin to grasp in his field, but he's no sociologist or historian (edit: nor economist). Very few of us are in a position to make predictions about the future - least of all in an area where we don't even fully understand how the _current_ technology works.

adrianN
21 replies
10h9m

Was Marx wrong?

robertlagrant
14 replies
9h57m

Probably. Or at least that turned out to not matter so much. The alternative, keeping both control of resources and direct power in the state, seems to keep causing millions of deaths. Separating them into markets for resources and power for a more limited state seems to work much better.

This idea also ignores innovation. New rich people come along and some rich people get poor. That might indicate that money isn't a great proxy for power.

danans
6 replies
9h42m

New rich people come along and some rich people get poor.

Absent massive redistribution that is usually a result of major political change (e.g. the New Deal), rich people tend to stay rich during their lifetimes and frequently their families remain so for generations after.

That might indicate that money isn't a great proxy for power.

Due to the diminishing marginal utility of wealth for day-to-day existence, its only value to an extremely wealthy person, after endowing their heirs, is power.

robertlagrant
5 replies
5h32m

Absent massive redistribution that is usually a result of major political change (e.g. the New Deal), rich people tend to stay rich during their lifetimes and frequently their families remain so for generations after.

The rule of thumb is it lasts up to three generations, and only for very, very few people. They are also, for everything they buy, and everyone they employ, paying tax. Redistribution isn't the goal; having funded services, with extra to help people who can't, is the goal. It's not a moral crusade.

Due to the diminishing marginal utility of wealth for day-to-day existence, its only value to an extremely wealthy person, after endowing their heirs, is power.

I think this is a non sequitur.

depr
4 replies
4h12m

What is your rule of thumb based on?

In, for example, the Netherlands the richest people pay less tax [0]. Do you think this is not the case in many other countries?

They are also, for [..] they employ, paying tax

Is that a benefit of having rich people? If companies were employee-owned that tax would still be paid.

[0]: https://www.iamexpat.nl/expat-info/dutch-expat-news/wealthie...

robertlagrant
3 replies
2h19m

What is your rule of thumb based on?

E.g. [0]

In, for example, the Netherlands the richest people pay less tax [0]. Do you think this is not the case in many other countries?

That's a non sequitur from the previous point. However, on the "who pays taxes?" point, that article is careful to only talk about income tax in absolute terms, and indirect taxes in relative terms. It doesn't appear to be trying to make an objective analysis.

Is that a benefit of having rich people?

I don't share the assumption that people should only exist if they're a benefit.

If companies were employee-owned that tax would still be paid.

Some companies are employee-owned, but you have to think how that works for every type of business. Assuming that it's easy to make a business and that the hard bit is the ownership structure is a mistake.

[0] https://www.thinkadvisor.com/2016/08/01/why-so-many-wealthy-...

depr
2 replies
1h47m

I don't share the assumption that people should only exist if they're a benefit.

Well it's not a matter of the people existing, it's whether they are rich or not. They can exist without the money.

Anyway, if you don't think it matters if they are of benefit, then why did you bring up the fact that they pay taxes?

robertlagrant
1 replies
44m

Well it's not a matter of the people existing, it's whether they are rich or not. They can exist without the money.

I meant people with a certain amount of money. I don't think we should be assessing pros or cons of economic systems based on whether people get to keep their money.

Anyway, if you don't think it matters if they are of benefit

I don't know what this means.

then why did you bring up the fact that they pay taxes?

I bring it up because saying they pay less in income taxes doesn't matter if they're spending money on stuff that employs people (which creates lots of tax) and gets VAT added to it. Everything is constantly taxed, at many levels, all the time. Pretending we live in a society where not much tax is paid seems ludicrous. Lots of tax is paid. If it's paid as VAT instead of income tax - who cares?

depr
0 replies
6m

What I meant is:

I don't think we should be assessing pros or cons of economic systems based on whether people get to keep their money.

but earlier you said:

They are also, for everything they buy, and everyone they employ, paying tax.

So if we should not assess the economic system based on whether people keep their money, i.e. pay tax, then why mention that they pay tax? It doesn't seem relevant.

aylmao
6 replies
9h18m

New rich people come along and some rich people get poor

This is an overly simplistic look, and disregards a lot of history where, unsurprisingly, the reason there was wealth redistribution wasn't "innovation" but government policy

robertlagrant
5 replies
5h28m

This is an overly simplistic look, and disregards a lot of history where, unsurprisingly, the reason there was wealth redistribution wasn't "innovation" but government policy

The point is that wealth and power aren't interchangeable. You're right that government bureaucrats have actual power, including that to take people's stuff. But you've not realised that that actual power means the rich people don't have power. There were rich people in the USSR that were killed. They had no power; the killers had the power in that situation.

whelp_24
4 replies
4h26m

Wealth is control of resources, which is power. The way to change power is through force; that's why you need swords to remove kings and to remove stacks of gold - see assassinations, war, the U.S.

robertlagrant
3 replies
2h17m

You need swords to remove kings because they combined power and economy. All potential tyrannies do so: monarchy, socialism, fascism, etc. That's why separating power into the state and economy into the market gets good results.

whelp_24
2 replies
1h38m

The separation is impossible; if you don't control the resources, you don't control the country.

separating power into the state and economy into the market gets good results.

How do you think this would be done? How do you remove power from money? Money is literally the ability to convert numbers into labor, land, food...

robertlagrant
1 replies
38m

Power is things like: can lock someone in a box due to them not giving a percentage of their income; can send someone to die in another country; can stop someone building somewhere; can demand someone's money as a penalty for an infraction of a rule you wrote.

You don't need money for those things.

Money (in a market) can buy you things, but only things people are willing to sell. You don't exert power; you exchange value.

whelp_24
0 replies
9m

Money can and does do all of those things. Through regulatory capture, rent seeking, even just good old hiring goons.

The government itself uses money to do those things. Police don't work for free, prisons aren't built for free, guns aren't free. The government can be thought of as having unfathomable amounts of money. The assets of a country includes the entire country (less anyone with enough money to defend it).

If a sword is kinetic energy, money is potential energy. It is a battery that only needs to be connected to the right place to be devastating. And money can buy you someone who knows the right place.

Governments have power because they have resources (money) not the other way around.

cdogl
1 replies
9h12m

Was Marx wrong?

pt. 1: Whether he was right or wrong isn't pertinent. You can find plenty of eminent contemporaries of Marx who claimed the opposite. My point was that this is an argument made about technological change throughout history which has become a cliché, and in my opinion it remains a cliché regardless of how eminent (in a narrow field) the person making the claim is. Part of GP's argument was from authority, and I question whether it is even a relevant authority given the scope of the claims.

Was Marx wrong?

pt. 2: I was once a Marxist and still consider much Marxist thought and writing to be valuable, but yes: he was wrong about a great many things. He made specific predictions about the _inevitable_ development of global capital that have not played out. Over a century later, the concentration of wealth and power in the hands of the few has not changed, but the quality of life of the average person on the planet has increased immensely - in a world where capitalism is hegemonic.

He was also wrong about the inevitably revolutionary tendencies of the working class. As it turns out, the working class in many countries tend to be either centre right or centre left, like most people, with the proportion varying over time.

denton-scratch
0 replies
7h18m

He was also wrong about the inevitably revolutionary tendencies of the working class.

Marx's conception of the "working class" is a thing that no longer exists; it was of a mass, industrial, urban working class, held down by an exploitative capitalist class, without the modern benefits of mass education and free/subsidized health care. The inevitability of the victory of the working class was rhetoric from the Communist Manifesto; Marx did anticipate that capitalism would adapt in the face of rising worker demands. Which it did.

osigurdson
0 replies
9h44m

If you are a Marxist, no, otherwise yes.

noirscape
0 replies
8h4m

For his prediction of society? Yes.

Not even talking about the various tin-pot dictators paying nominal lip service to him, but Marx predicted that the working class would rise up against the bourgeoisie/upper class because of their mistreatment during the industrial revolution in, well, a revolution, and that this would somehow create a classless society. (I'll note that Marx pretty much didn't state how to go from "revolution" to "classless society"; that's why you have so many communist dictators: that in-between step can be turned into a dictatorship as long as they claim that the final bit, a classless society, is a permanent WIP, which all of them did.)

Now unless you want to argue we're still in the industrial revolution, it's pretty clear that Marx was inaccurate in his prediction given... that didn't happen. Social democracy instead became a more prevailing stream of thought (in no small part because few people are willing to risk their lives for a revolution) and is what led to things like reasonable minimum wages, sick days, healthcare, elderly care, and so on and so forth being made accessible to everyone.

The quality of which varies greatly by the country (and you could probably consider the popularity of Marxist revolutionary thought today in a country as directly correlated to the state of workers' rights in that country; people in stable situations will rarely pursue ideologies that include revolutions), but practically speaking - yeah, Marx was inaccurate on the idea of a revolution across the world happening.

The lens through which Marx examined history is however just that - a lens to view it through. It'll work well in some cases, less so in others. Looking at it by class is a useful way to understand it, but it won't cover things being motivated for reasons outside of class.

matkoniecz
0 replies
6h2m

Was Marx wrong?

Not sure, but attempts to treat him seriously (or to pretend to do so) ended horribly wrong, with basically no benefits.

Is there any good reason to care what he thought?

Looking at the history of Poland (before, during, and after the PRL) gave me no interest whatsoever in looking into his writings.

Palpatineli
0 replies
34m

Yes, because AGI would invalidate the entirety of Das Kapital.

kmlevitt
14 replies
8h41m

People keep speculating sensational, justifiable reasons to fire Altman. But if these were actual factors in their decision, why doesn't the board just say so?

Until they say otherwise, I am going to take them at their word that it was because he a) hired two people to do the same project, and b) gave two board members different accounts of the same employee. It's not my job nor the internet's to try to think up better-sounding reasons on their behalf.

Wronnay
6 replies
8h30m

The issue with these two explanations from the board is that neither is normally something that would result in firing the CEO.

In my eyes these two explanations describe simple errors that could happen to anybody; in a normal situation you would talk about these issues and resolve them in five minutes without firing anybody.

kmlevitt
5 replies
8h5m

I agree with you. But that leads me to believe that they did not, in fact, have a good reason to fire their CEO. I'll change my mind about that if or when they provide better reasons.

Look at all the speculation on here. There are dozens of different theories about why they did what they did, running so rampant that people are starting to accept each of them as fact, when in fact probably all of them are going to turn out to be wrong.

People need to take a step back and look at the available evidence. This report is the clearest indication we have gotten of their reasons, and they come from a reliable source. Why are we not taking them at their word?

katastofik
4 replies
7h39m

Why are we not taking them at their word?

Ignoring the lack of credibility in the given explanations, people are, perhaps, also wary that taking boards/execs at their word hasn't always worked out so well in the past.

Until an explanation that at least passes the sniff test for truthiness comes out, people will keep speculating.

And so they should.

kmlevitt
3 replies
6h57m

Right, except most people here are proposing BETTER reasons for why they fired him. Which ignores that if any of these better reasons people are proposing were actually true, they would just state them themselves instead of using ones that sound like pitiful excuses.

katastofik
2 replies
6h19m

Whether it be dissecting what the Kardashians ate for breakfast or understanding why the earth may or may not be flat, seeking to understand the world around us is just what we do as humans. And part of that process is "speculating sensational, justifiable reasons" for why things may be so.

Of course, what is actually worth speculating over is up for debate. As is what actually constitutes a better theory.

But, if people think this is something worth pouring their speculative powers into, they will continue to do so. More power to them.

Now, personally, I'm partly with you here. There is an element of futility in speculating at this stage given the current information we have.

But I'm also partly with the speculators here insofar as the given explanations not really adding up.

kmlevitt
1 replies
5h46m

Think you're still missing what I'm saying. Yes, I understand people will speculate. I'm doing it myself here in this very thread.

The problem is people are beginning to speculate reasons for Altman's firing that have no bearing or connection to what the board members in question have actually said about why they fired him. And they don't appear to be even attempting to reconcile their ideas with that reality.

There's a difference between trying to come up with theories that fit with the available facts and everything we already know, and ignoring all that to essentially write fanfiction that casts the board in a far better light than the available information suggests.

katastofik
0 replies
5h5m

Agreed. I think I understood you as being more dismissive of speculation per se.

As for the original question -- why are we not taking them at their word? -- the best I can offer is my initial comment. That is, the available facts (that is, what board members have said) don't really match anything most people can reconcile with their model of how the world works.

Throw this in together with a learned distrust of anything that's been fed through a company's PR machine, and are we really surprised people aren't attempting to reconcile the stated reality with their speculative theories?

Now sure, if we were to do things properly, we should at least address why we're just dismissing the 'facts' when formulating our theories. But, on the other hand, when most people's common sense understanding of reality is that such facts are usually little more than fodder for the PR spin machine, why bother?

wordpad25
2 replies
8h26m

People keep speculating

Your take isn't uncommon, only you're missing the main implication of your interpretation - that the board is fully incompetent if it was truly that petty of a reason to ruin the company.

It's not even that it's not a justifiable reason, but they did it without getting legal advice or consulting with partners and didn't even wait for markets to close.

The board destroyed billions in brand and talent value for OpenAI and Microsoft in a midday decision like that.

This is also on Sam Altman himself for building and then entertaining such an incompetent board.

qwytw
0 replies
8h4m

that the board is fully incompetent if it was truly that petty of a reason to ruin the company

It's perfectly obvious that these weren't the actual reasons. However yes, they are still incompetent because they couldn't think of a better justification (amongst other reasons which led to this debacle).

kmlevitt
0 replies
8h3m

Your take isn't uncommon, only you're missing the main implication of your interpretation - that the board is fully incompetent if it was truly that petty of a reason to ruin the company.

No, I totally agree. In fact what annoys me about all the speculation is that it seems like people are creating fanfiction to make the board seem much more competent than all available evidence suggests they actually are.

codeulike
2 replies
5h54m

For what its worth, here's a thread from someone who used to work with Sam who says they found him deceptive and manipulative

https://twitter.com/geoffreyirving/status/172675427022402397...

I have no details of OpenAI's Board’s reasons for firing Sam, and I am conflicted (lead of Scalable Alignment at Google DeepMind). But there is a large, very loud pile on vs. people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things.

...

Third, my prior is strongly against Sam after working for him for two years at OpenAI:

1. He was always nice to me.

2. He lied to me on various occasions

3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)

kmlevitt
0 replies
5h36m

The general anecdotes he gives later in the thread line up with their stated reasons for firing him: he hired another person to do the same project (presumably without telling them), and he gave two different board members different opinions of the same person.

Those sound like good reasons to dislike him and not trust him. But ultimately we are right back where we started: they still aren't good enough reasons to suddenly fire him the way they did.

cma
0 replies
2h58m

Here's another anecdote, posted in 2011 but about something even earlier:

"We were trying to get a big client for weeks, and they said no and went with a competitor. The competitor already had a terms sheet from the company were we trying to sign up. It was real serious.

We were devastated, but we decided to fly down and sit in their lobby until they would meet with us. So they finally let us talk to them after most of the day.

We then had a few more meetings, and the company wanted to come visit our offices so they could make sure we were a 'real' company. At that time, we were only 5 guys. So we hired a bunch of our college friends to 'work' for us for the day so we could look larger than we actually were. It worked, and we got the contract."

https://news.ycombinator.com/item?id=3048944

zztop44
0 replies
3h48m

I agree, and what’s more I think the stated reasons make sense if (a) the person/people impacted by these behaviours had sway with the board, and (b) it was a pattern of behaviour that everyone was already pissed off about.

If board relations have been acrimonious and adversarial for months, and things are just getting worse, then I can imagine someone powerful bringing evidence of (yet another instance of) bad/unscrupulous/disrespectful behavior to the board, and a critical mass of the board feeling they’ve reached a “now or never” breaking point and making a quick decision to get it over with and wear the consequence.

Of course, it seems that they have miscalculated the consequences and botched the execution. Although we’ll have to see how it pans out.

I’m speculating like everyone else. But knowing how board relations can be, it’s one scenario that fits the evidence we do have and doesn’t require anyone involved to be anything other than human.

VectorLock
3 replies
9h33m

It feels like Altman started the whole non-profit thing so he could attract top researchers with altruistic sentiment for sub-FAANG wages. So the whole "Altman wasn't candid" thing seems to track.

saagarjha
1 replies
9h12m

Ok, but the wages were excellent (assuming that the equity panned out, which it seemed very likely it would until last week).

margorczynski
0 replies
4h42m

So is it possible a lot of those people against Altman being ousted are like that because they know the equity they hold will take a dump?

I'm not saying they are hypocrites or bad people because of it, just wondering if that might be a factor also.

mcpackieh
0 replies
2h53m

Reminds me of a certain rocket company that specializes in launching large satellite constellations that attracts top talent with altruistic sentiment about saving humanity from extinction.

osrec
2 replies
7h13m

What does TC style mean?

smiley1437
0 replies
5h19m

TechCrunch

sevagh
0 replies
4h16m

Total Compensation

curiousgal
2 replies
9h30m

taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

This is absolutely peak irony!

US pouring trillions into its army and close to nothing into its society (infrastructure, healthcare, education...): crickets

Some country funding AI accelerators: THEY ARE A THREAT TO HUMANITY!

I am not defending Saudi Arabia, but the double standards and outright hypocrisy are just laughable.

xinayder
0 replies
8h39m

The difference is that the US Army wasn't created with the intent to "keep guns from the hands of criminals" and we all know it's a bad actor.

OpenAI, on the other hand...

0xDEADFED5
0 replies
9h19m

it's okay to give an example of something bad without being required to list all the other things in the universe that are also bad.

PeterStuer
2 replies
9h58m

What is interesting is the total absence of 3 letter agency mentions from all of the talk and speculation about this.

smolder
1 replies
8h10m

I don't think that's true. I've seen at least one other person bring up the CIA in all the "theorycrafting" about this incident. If there's a mystery on HN, likelihood is high of someone bringing up intelligence agencies. By their nature they're paranoia-inducing and attract speculation, especially for this sort of community. With my own conspiracy theorist hat on, I could see making deals with the Saudis regarding cutting edge AI tech potentially being a realpolitik issue they'd care about.

PeterStuer
0 replies
5h56m

I'm sure they are completely hands-off about breakthrough strategic tech. Unless it's the Chinese or the Russians or the Iranians or any other of the deplorables, but hey, if it's none of those, we'd rather have our infiltrators focus on TikTok or Twitter ... /s

xinayder
1 replies
8h45m

So they actually kicked him out because he transformed a non-profit into a money printing machine?

whelp_24
0 replies
4h24m

You say that like it's a bad thing for them to do? You wouldn't donate to the Coca-Cola company.

throwaway4aday
1 replies
3h40m

The more likely explanation is that D'Angelo has a massive conflict of interest with him being CEO of Quora, a business rapidly being replaced by ChatGPT and which has a competing product "creator monetization with Poe" (catchy name, I know) that just got nuked by OpenAI's GPTs announcement at dev day.

https://quorablog.quora.com/Introducing-creator-monetization...

https://techcrunch.com/2023/10/31/quoras-poe-introduces-an-a...

curiousllama
0 replies
3h11m

A (potential, unstated) motivation for one board member doesn't explain the full moves of the board, though.

Maybe it's a factor, but it's insufficient

dgellow
1 replies
3h31m

If I understood correctly Altman was CEO of the for-profit OpenAI, not the non-profit. The structure is pretty complicated: https://openai.com/our-structure

tomhallett
0 replies
3h22m

I'm curious: one of the board members may have "known" that the only way for OpenAI to be truly successful was for it to be a non-profit that "doesn't do evil" (Google's mantra), and that if they set expectations correctly and put caps on the for-profit side, it could work. But they didn't fully appreciate how strong the market forces would be, with all of the focus/attention/press going to the for-profit side. Sam's side has such an intrinsic gravity that it's inevitable it will break out of its cage.

Note: I'm not making a moral claim one way or the other, and I do agree that most tech companies grow to a size/power/monopoly where their incentives deviate from the "common good". Are there examples of OpenAI's structure working correctly at other companies?

dariosalvi78
1 replies
6h37m

you have the single greatest shitshow in tech history

the second after Musk taking over Twitter

achenet
0 replies
6h8m

We live in interesting times ^_^

Sunhold
1 replies
9h52m

I would rather OpenAI have a diverse base of income from commercialization of its products than depend on "donations" from a couple ultrarich individuals or corporations. GPT-4 cost $100 million+ to train. That money needs to come from somewhere.

jimmySixDOF
0 replies
7h58m

Then there is the inference cost, said to be as high as $0.30 per question asked, based on compute infrastructure costs.

Guthur
1 replies
9h41m

If you don't think the likes of Sam Altman, Eric Schmidt, Bill Gates and the lot of them want to increase their own power, you need to think again. At best these individuals are just out to enrich themselves, but many of them demonstrate a desire to affect the prevailing politics, so I don't see how they are different, just more subtle about it.

Why worry about the Sauds when you've got your own home-grown power-hungry individuals?

achenet
0 replies
6h2m

because our home-grown power-hungry individuals are more likely to be okay with things like women dressing how they want, homosexuality, religious freedom, drinking alcohol, having dogs, and other decadent western behaviors which we've grown very attached to

zw123456
0 replies
10h55m

100% agree. I've seen this type of thing up close (much smaller potatoes, but the same type of thing) and whatever is getting aired publicly is most likely not the real story. Not sure if the reasons you guessed are it or not; we probably won't know for a while, but your guesses are as good as mine.

roschdal
0 replies
8h12m

the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

This!

mcmcmc
0 replies
10h46m

This feels like a lot of very one sided PR moves from the side with significantly more money to spend on that kind of thing

detourdog
0 replies
3h30m

To me this is the ultimate Silicon Valley bike shedding incident.

Nobody can really explain the argument, there are "billions" or "trillions" of dollars involved, most likely the whole thing will not change the technical path of the world.

bryanrasmussen
0 replies
7h39m

Combine that with a totally inexperienced board, and D'Angelo's maneuvering, and you have the single greatest shitshow in tech history

do we have a ranking of shitshows in tech history though - how does this really compare to Jobs' ouster at Apple?

Cambridge Analytica and Facebook's "we must do better" greatest hits?

blackoil
0 replies
7h46m

money from the Saudis on the order of billions of dollars to make AI accelerators

Was this for OpenAI or an independent venture? If OpenAI, then it's a red flag, but if an independent venture, then it seems like a non-issue. There is demand for AI accelerators, and he wants to enter that business. Unless he is using OpenAI money to buy inferior products, or OpenAI wants to work on something competing, there is no conflict of interest and the OpenAI board shouldn't care.

blackoil
0 replies
7h44m

There has to be a bigger story to this.

Rather than assuming the board made a sound decision, it could simply be that the board acted stupidly and egotistically. Unless they can give better reasons, that is the logical inference.

P_I_Staker
0 replies
2h45m

rich and powerful people using the technology to enhance their power over society.

We don't know the end result of this. It might not be in the interest of the powerful. What if everyone is out of a job? That might not be such a great concept for the powers that be, especially if everyone is destitute.

Not saying it's going down that way, but it's worth considering. What if the powers that be are worried about people getting out of line and retard the progress of AI?

LZ_Khan
0 replies
8h12m

Taking money from the Saudis alone should raise a big red flag.

AtlasBarfed
0 replies
2h52m

At some point this is probably about a closed source "fork" grab. Of course that's what practically the whole company is probably planning.

The best thing about AI startups is that there is no real "code". It's just a bunch of arbitrary weights, and it can probably be obfuscated very easily such that any court case will just look like gibberish. After all, that's kind of the problem with AI "code". It gives a number after a bunch of regression training, and there's no "debugging" the answer.

Of course this is about the money, one way or another.

kmlevitt
92 replies
15h24m

Neither of these reasons has anything to do with a lofty ideology regarding the safety of AGI or OpenAI's nonprofit status. Rather, it seems they are micromanaging personnel decisions.

Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told. This is important, because people were siding with the board under the understanding that this firing was led by the head research scientist who is concerned about AGI. But now it looks like the board is represented by D'Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest with OpenAI than ever since dev day, when OpenAI launched highly similar features.

1024core
24 replies
13h35m

But now it looks like the board is represented by D'Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest with OpenAI than ever since dev day, when OpenAI launched highly similar features.

Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.

insanitybit
8 replies
13h34m

It seems extremely short sighted for the rest of the board to go along with that.

adastra22
4 replies
13h16m

Well obviously that wouldn't be the explanation given to other board members. But it would be the reason he instigated this after dev day, and the reason he won't back down (OpenAI imploding? All the better).

shandor
1 replies
12h16m

But it's still surprising that the other three haven't sacked D'Angelo, then. You'd think with the shitstorm raging and the underlying reasoning seemingly so…inadequate, they would start seeing that D'Angelo was just playing them.

singularity2001
0 replies
10h53m

maybe they have their own 'good' reasons to sabotage OpenAI

rtpg
1 replies
9h15m

But you would need to convince the rest of the board with _something_, right? Like to not only fire this guy, but to do it very publicly, quickly, with the declaration of lying in the announcement.

There are 3 other people on the board, right? Maybe they're all buddies in on some big masterminding, but I dunno...

adastra22
0 replies
7h22m

The one thing they all have in common is being AI safetyists, which Sam is not. I’d bet it’s something to do with that.

sangnoir
2 replies
11h50m

HN has been radiating a lot of "We did it Reddit!" energy these past 4 days. Lots of confident conjecture based on very little. I have been guilty of it myself, but as an exercise in humility, I will come back to these threads in 6 months to see how wrong I and many others were.

kmlevitt
0 replies
8h24m

I agree it's all just speculation. But the board aren't doing themselves any favors by not talking. As long as no specific reason for firing him is given, it's only natural people are going to fill the void with their own theories. If they have a problem with that, they or their attorneys need to speak up.

gardenhedge
0 replies
8h46m

That might make an interesting blog post. If you write anything up, you should submit it!

seanhunter
4 replies
12h13m

What’s interesting to me is that someone looked at Quora and thought “I want the guy behind that on my board”.

bambax
1 replies
10h53m

Agreed! Yet in 2014 Sam Altman accepted Quora into one of YC's batches, saying [0]

Adam D’Angelo is awesome, and we’re big Quora fans

[0] https://www.ycombinator.com/blog/quora-in-the-next-yc-batch

djsavvy
0 replies
9h45m

To be fair, back then it was pretty awesome IMO. I spent a lot of hours scrolling Quora in those days. It wasn’t until at least 2016 that the user experience became unpalatable if memory serves correctly.

singularity2001
0 replies
10h50m

it's probably more like they thought "I want Quora's money" and D'Angelo wanted their control

motoxpro
0 replies
10h14m

I was thinking the same thing. This whole thing is surprising and then I look at Quora and think "Eh, makes sense that the CEO is completely incompetent and money hungry"

Even as I type that, when people talk about the board being altruistic and holding to the OpenAI charter: how in the world can you be that user-hostile, profit-focused, and incompetent at your day job (Quora CEO) and then say "Oh no, but on this board I am an absolute saint and will do everything to benefit humanity"?

kmlevitt
4 replies
12h56m

Right now I think that’s the most plausible explanation simply because none of the other explanations that have been floating around make any sense when you consider all the facts. We know enough now to know that the “safety-focused nonprofit entity versus reckless profit entity“ narrative doesn’t hold up.

And if it’s wrong, D’Angelo and the rest of the board could help themselves out by explaining the real reason in detail and ending all this speculation. This gossip is going to continue for as long as they stay silent.

parl_match
1 replies
11h13m

This gossip is going to continue for as long as they stay silent.

Their lawyers are all screaming at them to shut up. This is going to be a highly visible and contested set of decisions that will play out in courtrooms, possibly for years.

kmlevitt
0 replies
8h27m

I agree with you. But I suspect the reason they need to shut up is because their actual reason for firing him is not justifiable enough to protect them, and stating it now would just give more ammunition to plaintiffs. If they had him caught red-handed in an actual crime, or even a clear ethical violation, a good lawyer would be communicating that to the press on their behalf.

High-ranking employees that have communicated with them have already said they have admitted it wasn't due to any security, safety, privacy or financial concerns. So there aren't a lot of valid reasons left. They're not talking because they've got nothing.

Emma_Goldman
1 replies
9h16m

"We know enough now to know that the “safety-focused nonprofit entity versus reckless profit entity“ narrative doesn’t hold up."

Why do you think that? It still strikes me as the most plausible explanation.

YetAnotherNick
0 replies
8h57m

Greg and Sam were the creators of the current nonprofit structure. And when a similar thing happened before, with Elon offering to buy the company, Sam declined. And that was at a time when getting funding on OpenAI's terms was much harder than it is now, whereas now they could much more easily dictate terms to investors.

Not saying he couldn't have changed since, but at least this is enough to give him the clear benefit of the doubt unless the board accuses him of something specific.

behnamoh
3 replies
11h40m

Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.

If that were the case, can't he get sued by the Alliance (Sam, Greg, the rest)? If he has a conflict of interest then his decisions as a member of the board would be invalid, right?

Moto7451
1 replies
10h24m

I don’t think that’s how it would work out since his conflict was very public knowledge before this point. He plausibly disclosed this to the board at some point before Poe launched and they kept him on.

Large private VC-backed companies also don't always fall under the same rules as public entities. Generally there are shareholder thresholds (which insider/private shareholders count towards) that in turn cause some of the general securities/board regulations to kick in.

jacquesm
0 replies
6h29m

That's not how it works. If you have a conflict of interest and you remain on a board you are supposed to recuse yourself from those decisions where that conflict of interest materializes. You can still vote on the ones that you do not stand to profit from if things go the way you vote.

jacquesm
0 replies
6h30m

The decisions will stand assuming they were arrived at according to the bylaws of the non-profit but he may end up being personally liable.

Zolde
0 replies
9h24m

I find this implausible, though it may have played a motivating role.

Quora was always supposed to be an AI/NLP company, starting by gathering answers from experts for its training data. In a sense, that is level 0 human-in-the-loop AGI. ChatGPT itself is level 1: Emergent AGI, so was already eating Quora's lunch (whatever was left of it after they turned into a platform for self-promotion and log-in walls). There either always was a conflict of interest, or there never was.

GPTs seemed to have been Sam's pet project for a while now, Tweeting in February: "writing a really great prompt for a chatbot persona is an amazingly high-leverage skill and an early example of programming in a little bit of natural language". A lot of early jailbreaks like DAN focused on "summoning" certain personas, and ideas must have been floated internally on how to take back control over that narrative.

Microsoft took their latest technology and gave us Sydney "I've been a good bot and I know where you live" Bing: A complete AI safety, integrity, and PR disaster. Not the best of track record by Microsoft, who now is shown to have behind-the-scenes power over the non-profit research organization that was supposed to be OpenAI.

There is another schism than AI safety vs. AI acceleration: whether to merge with machines or not. In 2017, Sam predicted this merge to fully start around 2025, having already started with algorithms dictating what we see and read. Sam seems to be in the transhumanism camp, where others focus more on keeping control or granting full autonomy:

The merge can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot. But I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.

Although the merge has already begun, it’s going to get a lot weirder. We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like. https://blog.samaltman.com/the-merge

So you have a very powerful individual, with a clear product mindset, courting Microsoft, turning dev day into a consumer spectacle, first in line to merge with superintelligence, lying to the board, and driving wedges between employees. Ilya is annoyed by Sam talking about existential risks or lying AGIs, when that is his thing. Ilya realizes his vote breaks the impasse, so does a lukewarm "I go along with the board, but have too much conflict of interest either way".

Third, my prior is strongly against Sam after working for him for two years at OpenAI:

1. He was always nice to me.

2. He lied to me on various occasions

3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)

One strategy that helped me make sense of things without falling into tribalism or siding through ideology-match is to consider that both sides are unpleasant snakes. You don't get to be the king of cannibal island without high-level scheming. You don't get to destroy an 80-billion-dollar company and let visa holders soak in uncertainty without some ideological defect. Seems simpler than a clear-cut "good vs. evil" battle, since this weekend was anything but clear.

shandor
15 replies
11h15m

I’m confused how the board is still keeping their radio silence 100%. Where I’m from, with a shitstorm this big raging, and the board doing nothing, they might very easily be personally held responsible for all kinds of utterly nasty legal action.

Is it just different because they’re a nonprofit? Or how on earth the board is thinking they can get away with this anymore?

lolinder
10 replies
10h45m

What specific legal action could be pursued against them where you're from? Who would have a cause for action?

(I'm genuinely curious—in the US I'm not aware of any action that could be taken here by anyone besides possibly Sam Altman for libel.)

23B1
7 replies
10h33m

Shareholder lawsuits happen all the time for much smaller issues.

pavlov
3 replies
10h28m

OpenAI is a non-profit with a for-profit subsidiary. The controlling board is at the non-profit and immune to shareholder concerns.

Investors in OpenAI-the-business were literally told they should think of it as a donation. There’s not much grounds for a shareholder lawsuit when you signed away everything to a non-profit.

jacquesm
0 replies
6h31m

Absolutely nobody on a board is immune from judicial oversight. That fiction really needs to go. Anybody affected by their decisions could have standing to sue. They are lucky that nobody has done it so far.

698969
0 replies
7h45m

I guess big in-person investors were told as much, but if it's about that big purple banner on their site, that seems to be an image with no alt-text. I wonder if an investor with impaired vision may be able to sue them for failing to communicate that part.

23B1
0 replies
10h11m

Corporate structure is not immunity from getting sued. Evidently HN doesn't understand that lawsuits are a tactic, not a conclusion.

lolinder
2 replies
10h30m

Right, but my understanding is that the nonprofit structure eliminates most (if not all) possible shareholder suits.

shandor
0 replies
10h19m

As I mentioned in my comment, I'm unaware of the effect of the nonprofit status on this. But like the parent commenter mentioned I mostly was thinking of laws prohibiting destruction of shareholder value (edit: whatever that may mean considering a nonprofit).

It just seems ludicrous that the board could run a company into the ground like this and just shrug "nah we're nonprofit so you can't touch us and BTW we don't even need to make any statements whatsoever".

There have been many comments that the initial firing of Altman was in a way completely according to the nonprofit charter, at least if it could be proven that Altman had been executing in such a way as to jeopardize the charter.

But even then, how could the board say they are working in the best interest of even the nonprofit itself, if their company is just disintegrating while they willfully refuse to give any information to the public?

23B1
0 replies
10h5m

No corporate structure – except for maybe incorporating in the DPRK – can eliminate lawsuits.

pama
1 replies
2h32m

I'm guessing that unless the board caves to everything the counterparties ask of it, MSFT lawyers will very soon reveal to the board the full range of possible legal actions against it. The public will probably not see many of these actions until months or years later, but it's super hard to imagine that such reckless destruction and conflicts of interest will go unpunished.

Symmetry
0 replies
1h28m

Whether or not Microsoft has a winnable case, often "the process is the punishment" in cases like these, and it's easy to threaten a long, drawn-out, and expensive legal fight.

rjzzleep
3 replies
10h21m

This isn't unlike the radio silence Brendan Eich kept when the Mozilla sh* hit the fan. This is, in my opinion, the outcome of really technical and scientific people having been given decades of advice not to talk to the public.

I have seen this play out many times in different locations for different people. A lot of technical folks like myself were given the advice that actions speak louder than words.

I was once scouted by a Silicon Valley Selenium browser-testing company. I migrated their cloud offering from VMware to KVM, which depended on code I wrote, and then defied my middle manager by improving their entire infrastructure's performance by 40%. My instinct was to communicate this to the leadership, but I was advised not to skip my middle manager.

The next time I went to the office I got a severance package, and I later found out that two hours later, during the all-hands, they presented my work as their own. The middle manager went on to become the CTO of several companies.

I doubt we will ever find out what really happened or at least not in the next 5-10 years. OpenAI let Sam Altman be the public face of the company and got burned by it.

Personally I had no idea Ilya was the main guy in this company until the drama that happened. I also didn't know that Sam Altman was basically only there to bring in the cash. I assume that most people will actually never know that part of OpenAI.

winterplace
1 replies
6h10m

Your instinct was right, who advised you against that?

What happened in the days before you got the severance package?

Do you have an email address or a contact method?

rjzzleep
0 replies
5h35m

I've seen this advice being given in different situations. I've also met all sorts of engineers that have been given this advice. "Make your manager look good and he will reward you" is kinda the general idea. I guess it can be true sometimes, but I have a feeling that that might be the minority or is at least heavily dependent on how confident that person is.

I would not be surprised if Sam Altman kept telling the board, and more specifically Ilya, to trust him since they (he) don't understand the business side of things.

Do you have an email address or a contact method?

EDIT: It's in my profile (now).

What happened in the days before you got the severance package?

I went to DEFCON out of pocket and got booted off a conference call supposedly due to my bad hotel wifi.

selestify
0 replies
7h46m

Wow, I have nothing to say, other than that’s some major BS!

dwd
13 replies
10h8m

Do we even have an idea of how the vote went?

Greg was not invited (losing Sam one vote), and Sam may have been asked to sit out the vote, so the 3 had a majority. Ilya, who is at least on "Team Sam" now, may have voted no. Or he simply went along thinking he could be next out the door at that point; we just don't know.

It's probably fair to say that not letting Greg know the board was getting together (and letting it proceed without him there) was unprofessional, and it is where Ilya screwed up. It is also the point at which Sam should have said: hang on - I want Greg here before this proceeds any further.

havercosine
6 replies
9h31m

Naive question. In my part of the world, board meetings for such consequential decisions can never be called on such short notice. A board meeting has to be called days ahead of time, and all the board members must be given a written agenda. They have to acknowledge in writing that they've received this agenda. If procedures such as these aren't followed, the firing cannot stand in a court of law. The number of days is configurable in the shareholders' agreement, but it is definitely not 1 day.

Do things work differently in America?

Zolde
5 replies
9h6m

No. Apparently they had to give 48 hours' notice for calling special teleconference meetings, and only Mira was notified (not a board member), and Greg was not even invited.

at least four days before any such meeting if given by first-class mail or forty-eight hours before any such meeting if given personally, [] or by electronic transmission.

But the bylaws also state that a board member may be fired (or resign) at any time, not necessarily during a special meeting. So, technically (not a lawyer): the board gets a majority to fire Sam and executes this decision, notifying Mira in advance of calling the special meeting. During the special meeting, Sam is merely informed that he has already been let go (he has not been a board member since yesterday). All board members were informed in a timely manner, since Sam was not a board member during the meeting.

mcv
3 replies
7h24m

I don't see how this kind of reasoning can possibly hold up. How can board members not be invited to such an important decision? You can't say they don't have to be there because they won't be a board member after this decision; they're still a board member before the decision has been made to remove them.

If Ilya was on the side of Sam and Greg, the other 3 never had a majority. The only explanation is that Ilya voted with the other 3, possibly under pressure, and now regrets that decision. But even then it's weird to not invite Greg.

And if the vote happened in an illegitimate way, I'd expect Sam and Greg to immediately challenge it and ignore the decision, and that didn't happen.

Zolde
2 replies
6h24m

Everyone assumes that the vote must have happened during the special meeting, but the decision to fire the CEO (or for the CEO to step down) may happen at any time.

if the vote happened in an illegitimate way, I'd expect Sam and Greg to immediately challenge it and ignore the decision, and that didn't happen.

So perhaps the vote was legit?

- Investigation concludes Sam has not been consistently candid.

- Board realizes it has a majority and cause to fire Sam and demote Greg.

- Informs remaining board members that they will have a special meeting in 48 hours to notify Sam and Greg.

Still murky, since Sam would have attended the meeting under the assumption that he was part of the board (and still had his access badge, despite already being fired). Perhaps it is also possible to waive the 48 hours? Like: "Hey, here is a Google Meet for a special meeting in a few hours, can we call it, or do we have to wait?"

mcv
1 replies
6h5m

If the vote was made when no one was there to see it, did it really happen? There's a reason to make these votes in meetings, because then you've got a record that it happened. I don't see how the board as a whole can make a decision without having a board meeting.

Zolde
0 replies
4h52m

Depending on jurisdiction and bylaws, the board may hold a pre-meeting, where informal consensus is reached, and potential for majority vote is gauged.

Since the bylaws state that the decision to fire the CEO may happen at any time (not required to be during a meeting), a plausible process for this would be to send a document to sign by e-mail (written consent), and have that formalize the board decision with a paper trail.

Of course, from an ethical, legal, collegial, and governance perspective that is an incredibly nasty thing to do. But if an investigation shows signs of the CEO lacking candor, all transparency goes out the window.

But even then it's weird to not invite Greg.

After Sam was fired (with a vote from Ilya "going along"), the rest of the board did not need Ilya anymore for a majority vote and removed Greg, demoting him to report to Mira. I suspect that the board expected Greg to stay, since he was "invaluable", and that Mira would support their pick for next CEO, but things turned out differently.

Remember, Sam and Greg were blindsided, board had sufficient time to consult with legal counsel to make sure their moves were in the clear.

jacquesm
0 replies
6h16m

Haste is not compatible with board activity unless the circumstances clearly demand it, and that wasn't the case here.

fastball
3 replies
9h15m

I don't understand how you only need 4 people for quorum on a 6-person board.

seanhunter
0 replies
7h41m

It depends entirely on how the votes are structured, the issue at hand and what the articles of the company say about the particular type of issue.

On the board that I was on we had normal matters which required a simple majority except that some members had 2 votes and some got 1. Then there were "Supermajority matters" which had a different threshold and "special supermajority matters" which had a third threshold.

Generally unless the articles say otherwise I think a quorum means a majority of votes are present[1], so 4 out of 6 would count if the articles didn't say you needed say 5 out of 6 for some reason.

It's a little different if some people have to recuse themselves for an issue. So say the issue is "Should we fire CEO Sam Altman", the people trying to fire Sam would likely try to say he should recuse himself and therefore wouldn't get a vote so his vote wouldn't also count in deciding whether or not there's a quorum. That's obviously all BS but it is the sort of tactic someone might pull. It wouldn't make any difference if the vote was a simple majority matter and they already had a majority without him though.

[1] There are often other requirements to make the meeting valid though, e.g. notice requirements, so you can't just pull a fast one with your buddies, hold the meeting without telling some of the members, and then claim it was quorate so everyone else just has to suck it up. This would depend on the articles of the company and the not-for-profit though.

jacquesm
0 replies
6h12m

That's a supermajority in principle, but the board originally had 9 members, this is clearly a controversial decision, at least one board member is conflicted, and another has already expressed his regret about his role in the decision(s).

So the support was very thin, and this being a controversial decision, the board should have sought counsel on whether or not their purported reasons had enough weight to support a hasty decision. There is no 'undo' button on this, and board member liability is a thing. They probably realize all that, which is the reason for the radio silence; they're just waiting for the other shoe to drop (an impending lawsuit), after which they can play the 'no comment because legal proceedings' game. This may well get very messy, or alternatively it can result in all affected parties settling with the board and the board riding off into the sunset to wreak havoc somewhere else (assuming anybody will still have them; they're damaged goods).

AlanYx
0 replies
2h34m

It depends on the corporate bylaws, but the most common quorum requirement is a simple majority of the board members. So 4 is not atypical for quorum on a 6 person board.

moberley
1 replies
9h41m

I find it interesting that the attempted explanations, as unconvincing as they may be, relate to Altman specifically. Given that Brockman was the board chairperson, it is surprising that there don't seem to be any attempts to explain that demotion. Perhaps it's just not being reported to anyone outside, but it makes no sense to me that anyone would assume a person would stay after being removed from a board without an opportunity to be at the meeting to defend their position.

Irishsteve
0 replies
9h31m

Maybe the personnel issue was Ilya, and Sam was saying to one board member that he had to go and to another that he was good.

aravindgp
12 replies
13h59m

Exactly my point: why would D'Angelo want OpenAI to thrive when his own company's chatbot, Poe, wants to compete in the same space? It's a conflict of interest whichever way you look at it. He should resign from the board of OpenAI in the first place.

The main point is that Greg and Ilya can get 50% of the vote and convince Helen Toner to change her decision. Then it's all done: 3 to 2 on a board of 5 people, once Greg's board membership is reinstated.

Now it increasingly looks like Sam will be heading back into the role of CEO of OpenAI.

anupamchugh
8 replies
13h20m

There are lots of conflicts of interest beyond Adam and his Poe AI. Yes, he was building a commercial bot using OpenAI APIs, but Sam was apparently working on other side ventures too. And Sam was the person who invested in Quora during his YC tenure, and must have had a say in bringing him onboard. At this point, the spotlight is on most members of the nonprofit board.

nemo44x
7 replies
13h5m

I wouldn’t hold Sam bringing him over in too high a regard. Fucking each other over is a sport in Silicon Valley. You’re subservient exactly until the moment you sense an opportunity to dominate. It’s just business.

bezier-curve
4 replies
12h27m

"Business" sucks then. This is sociopathic behavior.

bredren
1 replies
12h17m

What has been seen can not be unseen. https://news.ycombinator.com/item?id=881296

dendrite9
0 replies
8h11m

Thanks for that. The discussion feels like a look into another world, which I guess is what history is.

rjbwork
0 replies
12h10m

Yes. That is what is valued in the economic system we have. Absolute cut throat dominance to take as big a chunk of any pie you can get your grubby little fingers into yields the greatest amount of capital.

nemo44x
0 replies
1h11m

It’s not just business that works like this. Any type of organization of consequence has sociopaths at the top. It’s the only way to get there. It’s a big game that some people know how to play well and that many people are oblivious to.

TerrifiedMouse
1 replies
12h3m

Why did Altman bring him onboard in the first place? What value does he provide? If there is a conflict of interest why didn’t Altman see it?

If this Quora guy is the cause of all this, Altman only has himself to blame since he is the reason the Quora guy is on the board.

kaoD
0 replies
9h22m

That Quora guy was CTO and VPEng of Facebook so plenty of connections I guess.

Also Quora seems like a good source of question-and-answer data which has probably been key in gpt-instruct training.

015a
2 replies
13h9m

So? Sam gave Worldcoin early access to OpenAI's proprietary technology. Should Sam step down (oh wait)?

blackoil
0 replies
7h34m

Worldcoin has no conflict of interest with OpenAI. Unless he gave the tech away for free, causing great loss to OpenAI, it is simply finding an early beta customer.

Also, to fire over something so trivial would be equally if not more stupid. It is like firing Elon because he sent a Tesla up on a SpaceX rocket without open bidding.

aravindgp
0 replies
12h49m

Early access is different from firing board members or the CEO! As far as the facts and the actions he has taken show, Sam was always involved in furthering OpenAI's success; nothing showed his actions going against OpenAI.

Not all his bets were right - I don't agree with Sam's Worldcoin project at all in the first place.

But giving early access to Worldcoin doesn't equate to firing employees, the board, or the CEO.

AndyNemmity
5 replies
12h49m

Wouldn't it make sense that Ilya Sutskever presented the reasons the board had for firing Sam Altman, which were not his own reasons?

My feeling is Ilya was upset about how Sam Altman was the face of OpenAI, and went along with the rest of the board for his own reasons.

That's often how this stuff works out. He wasn't particularly compelled by their reasons, but had his own which justified his decision in his mind.

karmasimida
1 replies
12h5m

He let emotion get the better of him, for sure.

zombiwoof
0 replies
10h17m

So glad the man baby AI scientist is in charge of AGI alignment

Feel the AI

dragonwriter
1 replies
12h46m

Wouldn't it make sense that Ilya Sutskever presented the reasons the board had for firing Sam Altman, which were not his reasons.

Ilya was one of the board members that removed Sam, so his reasons would, ipso facto, be a subset of the board's reasons.

skygazer
0 replies
11h21m

It's also weird that he's not admitting to any of his own reasons, only describing some trivial reasons he seems to have coaxed out of the other board members?! Perhaps he still has his own reasons but, realizing he's destroying what he loves, he's trying to stay mum? The other board members seem more zealous for some reason, maybe from not being employed by the LLC. Or maybe the others are doing it for the sake of Ilya or someone else who prefers to remain anonymous? Okay, clearly I have no idea.

aravindgp
0 replies
12h37m

I think Ilya was naive and didn't see this coming, and it's good that he realised it quickly, announced it on Twitter, and made the right call to get Sam back.

Otherwise it was shaping up like an Ilya vs. Sam showdown, and people were siding with Ilya for AGI and all. But behind the scenes this looks like a corporate power struggle and coup.

chucke1992
4 replies
11h59m

It is fascinating considering that D'Angelo has a history with coups (at Quora he did the same, didn't he?)

aravindgp
2 replies
10h24m

Wow, this is significant: he did this to Charlie Cheever, the best guy at Facebook and Quora. He got Matt on board and fired Charlie without informing investors. The only difference is that this time a $100 billion company is at stake at OpenAI. The process is similar. This is going very wrong for Adam D'Angelo. With this, I hope the other board members get to the bottom of it, get Sam back, and vote D'Angelo off the board.

This is school-level immaturity.

Old story

https://www.businessinsider.com/the-sudden-mysterious-exit-o...

mcv
1 replies
7h41m

People keep talking about an inexperienced board, but it sounds like D'Angelo might be a bit too experienced, especially in this kind of boardroom maneuvering.

jacquesm
0 replies
6h18m

That may be so, but unlike those other times, he didn't check whether the arm holding the banana was attached to a 900-pound gorilla before trying to snatch it. And now the gorilla is angry.

gorgoiler
0 replies
10h46m

Remember Facebook Questions? While it lives on as light-hearted polls and quizzes, it was originally launched by D'Angelo when he was an FB employee. It was designed to compete with expert Q&A websites and was basically Quora v0.

When D’Angelo didn’t get any traction with it he jumped ship and launched his own competitor instead. Kind of a live wire imho.

https://en.wikipedia.org/wiki/List_of_Facebook_features#Face...

resource0x
3 replies
13h48m

The failure to create anything resembling AGI can be easily explained away by concerns about the safety of AGI. This can be done in perpetuity. Google explains its AI failures along the same lines.

DaiPlusPlus
2 replies
13h31m

The failure to create anything resembling AGI can be easily explained away by concerns about the safety of AGI.

Isn't the solution to just pipe ChatGPT into a meta-reinforcement-learning framework that gradually learns how to prompt ChatGPT into writing the source-code for a true AGI? What do we even need AI ethicists for anyway? /s

kuchenbecker
0 replies
13h25m

The singularity is where this works.

fhqwhgads
0 replies
13h17m

The number of hours I've wasted trying to do this lol

arthur_sav
3 replies
8h14m

Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told.

You mean to tell me that the 3-member board told Sutskever that Sama was being bad and he was like "ok, I believe you".

laurels-marts
1 replies
7h47m

Two possibilities when it comes to Ilya:

1. He's the actual ringleader behind the coup. He got everyone on board, provided reassurances, and personally orchestrated and executed the firing. This is the most likely possibility and the one most consistent with all the reporting and evidence so far (including this article).

2. Others on the board (e.g. Adam) masterminded the coup and saw Ilya as a fellow-traveler useful idiot who could be deceived into voting against Sam and destroying the company he and his 700 colleagues worked so hard to build. They then also puppeteered Ilya into doing the actual firing over Google Meet.

aerhardt
0 replies
4h1m

If #1 is real, he's just the biggest weasel in tech history for repenting so swiftly and decisively… I don't think either the article or the broader facts really point to him being the first to cast the stone.

jacquesm
0 replies
6h11m

Based on Ilya's tweets and his name on that letter (still surprised about that; I have never seen someone calling for their own resignation), that seems to be the story.

lfclub
1 replies
12h16m

It could be a more primal explanation. I think OpenAI doesn't want to effectively be an R&D arm of Microsoft. The ChatGPT mobile app is unpolished and unrefined. There's little to no product design there, so I totally see how it's fair criticism to call out premature feature milling (especially when it's clear it's for Microsoft).

I’m imagining Sam being Microsoft’s Trojan horse, and that’s just not gonna fly.

If anyone tells me Sam is a master politician, I'd agree without knowing much about him. He's a Microsoft plant who has the support of 90% of the OpenAI team. The two things are conflicts of interest. Masterful.

It's a pretty fair question to ask a CEO: do you still believe in OpenAI's vision, or do you now believe in Microsoft's vision?

The girl she said not to worry about.

diordiderot
0 replies
1h59m

There’s little to no product design there

I consider this a feature.

anoy8888
0 replies
13h56m

This is the most likely scenario. Adam wants to destroy OpenAI so that his poop AI has a chance to survive

LMYahooTFY
0 replies
10h7m

Well, the appointment of a CEO who believes AGI is a threat to the universe is potentially one point in favor of AI safety philosophical differences.

JyB
0 replies
11h22m

That's the only thing that makes sense with Ilya & Murati signing that letter.

DebtDeflation
85 replies
14h44m

1) Where is Emmett? He's the CEO now. It's his job to be the public face of the company. The company is in an existential crisis and there have been no public statements after his 1AM tweet.

2) Where is the board? At a bare minimum, issue a public statement that you have full faith in the new CEO and the leadership team, are taking decisive action to stabilize the situation, and have a plan to move the company forward once stabilized.

PheonixPharts
61 replies
14h25m

Yes these people should all be doing more to feed internet drama! If they don't act soon, HN will have all sorts of wild opinions about what's going on and we can't have that!

Even worse, if we don't have near constant updates, we might realize this is not all that important in the end and move on to other news items!

I know, I know, I shouldn't jest when this could have grave consequences like changing which URI your API endpoint is pointing to.

JumpCrisscross
31 replies
14h4m

My favorite hypothesis: Ilya et al suspected emergent AGI (e.g. saw the software doing things unprompted or dangerous and unexpected) and realized the Worldcoin shill is probably not the one you want calling the shots on it.

For the record, I don't think it's true. I think it was a power play, and a failed coup at that. But it's about as substantiated as the "serious" hypotheses being mooted in the media. And it's more fun.

mvdtnz
23 replies
12h54m

Absolutely wild to me that people are drawing a straight line between a text completion algorithm and AGI. The term "AI" has truly lost all meaning.

dwaltrip
6 replies
12h39m

LLMs predict language, and language is a representation of human concepts about the world. Thus, these models are constructing, piece by piece, conceptual chains about the world.

As they learn to construct better and more coherent conceptual chains, something interesting must be happening internally.

mvdtnz
4 replies
12h14m

No they are not.

cjbprime
1 replies
11h23m

(You're probably going to have to get better at answering objections than merely asserting your contradiction of them.)

diffeomorphism
0 replies
8h30m

Nah, calling out completely baseless assertions as just that is fine and a positive contribution to the discussion.

mirekrusin
0 replies
10h29m

Intelligence is just optimization over a recursive prediction function.

There is nothing special about human intelligence threshold.

It can be surpassed by many different models.

krisoft
0 replies
8h35m

Your carefully constructed argument is less than convincing.

Could you at least elaborate on what they are "not"? Surely you are not having a problem with "LLMs predict language"?

denton-scratch
0 replies
6h39m

LLMs predict language, and language is a representation of human concepts about the world. Thus, these models are constructing, piece by piece, conceptual chains about the world.

I smell a fallacy. Parent has moved from something you can parse as "LLMs predict a representation of concepts" to "LLMs construct concepts". Yuh, if LLMs "construct concepts", then we have conceptual thought in a machine, which certainly looks interesting. But it doesn't follow from the initial statement.

ssnistfajen
4 replies
12h24m

If the text completion algorithm is sufficiently advanced then we wouldn't be able to tell it's not AGI, especially if it has access to state-of-the-art research and can modify its own code/weights. I don't think we are there yet, but it's plausible to an extent.

mvdtnz
2 replies
12h15m

No. This is modern day mysticism. You're just waving your hands and making fuzzy claims about "but what if it was an even better algorithm".

ssnistfajen
0 replies
11h15m

lol

calf
0 replies
10h7m

You're correct about their error; however, Hinton's view is that a sufficiently scaled-up autocompletion would be forced, in a loose mathematical sense, to understand things logically and analytically, because the only way to approach a 0 error rate on the output is to actually learn the problem and not imitate the answer. It's an interesting issue and there are different views on this.

mcv
0 replies
6h28m

Any self-learning system can change its own weights. That's the entire point. And a text-processing system like ChatGPT may well have access to state-of-the-art research. The combination of those two things does not imply that it can improve itself to become secretly AGI. Not even if the text-completion algorithm was even more advanced. For one thing, it still lacks independent thought. It's only responding to inputs. It doesn't reason about its own reasoning. It's questionable whether it's reasoning at all.

I personally think a far more fundamental change is necessary to reach AGI.

mcv
2 replies
6h37m

ChatGPT is not AGI, but it is AI. The thing that makes AI lose all meaning is the constantly moving goal posts. There's been tons of very successful AI research over the past decades. None of it is AGI, but it's still very successful AI.

mvdtnz
1 replies
3h12m

ChatGPT is not AGI, but it is AI.

I absolutely disagree in the strongest terms possible.

mcv
0 replies
2h4m

Which part? The first, the second, or, most confusingly, both?

cjbprime
2 replies
11h37m

It's not wild. "Predict the next word" does not imply a bar on intelligence; a more intelligent prediction that incorporates more detail from the descriptions of the world that were in the training data will be a better prediction. People are drawing a straight line because the main advance to get to GPT-4 was throwing more compute at "predict the next word", and they conclude that adding another order of magnitude of compute might be all it takes to get to superhuman level. It's not "but what if we had a better algorithm", because the algorithm didn't change in the first place. Only the size of the model did.

robocat
1 replies
9h57m

Predict the next word

Are there any papers testing how good humans are at predicting the next word?

I presume we humans fail badly:

1. As the variance in the input gets higher?

2. At regurgitating common texts (e.g. I couldn't complete a known poem)?

3. When the context starts to get more specific (the majority of people couldn't complete JSON)?

passion__desire
0 replies
8h41m

The following blogpost by an OpenAI employee can lead us to compare patterns and transistors.

https://nonint.com/2023/06/10/the-it-in-ai-models-is-the-dat...

The ultimate model, in the author's sense, would suss out all patterns, then patterns among those patterns, and so on, so that it delivers on compute and compression efficiency.

To achieve compute and compression efficiency, LLM models have to cluster all similar patterns together and deduplicate them. This also means successive levels of pattern recognition, i.e. patterns among patterns among patterns and so on, so that the deduplication happens across the whole hierarchy as it is constructed. Full trees or hierarchies won't get deduplicated, but relevant regions/portions of those trees will, which implies fusing together in idea space. This means the root levels will be the most abstract patterns. This representation also means appropriate cross-pollination among different fields of study, further increasing effectiveness.

This reminds me of a point my electronics professor made on why making transistors smaller has all the benefits and only a few disadvantages. Think of these patterns as transistors. The more deduplicated and closely packed they are, the more beneficial they will be. Of course, this "packing together" is happening in mathematical space.

Another thing that patterns among patterns among patterns reminds me of is homotopies. This brilliant video by PBS Infinite Series is amazing. As far as I can see, compressing homotopies is what LLMs do (replace homotopies with patterns). https://www.youtube.com/watch?v=N7wNWQ4aTLQ

Emma_Goldman
1 replies
8h53m

I agree, it's an extremely non-obvious assumption and ignores centuries-old debates (empiricism vs. rationalism) about the nature of reason and intelligence. I am sympathetic to Chomsky's position.[1]

https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chat...

auggierose
0 replies
8h7m

Very weak article. It really lowers my opinion of Chomsky.

quickthrower2
0 replies
12h1m

Hold up. Any AI that exists is an IO function (algorithm) perhaps with state. Including our brains. Being an “x completion” algorithm doesn’t say much about whether it is AI.

Your comment sounds like a rhetorical way to say that GPT is in the same class as autocomplete, and that what autocomplete does sets some kind of ceiling on what IO functions that work a couple of bytes at a time can do.

It is not evident to me that that is true.

mjan22640
0 replies
6h50m

An algorithm that completes "A quantum theory of gravity is ..." into a coherent theory is of course just a text completion algorithm.

crucialfelix
0 replies
9h9m

There has been debate for centuries regarding determinism and free will in humans.

VirusNewbie
3 replies
12h23m

But why would Ilya publicly say he regrets his decision and wants Sam to come back? You think his existential worries are less important than being liked by his coworkers??

mcmcmc
0 replies
10h43m

I think his existential worries about humanity were overruled by his existential worries about his co-founder shares and the obscene amount of wealth he might miss out on

cosmojg
0 replies
12h4m

You think his existential worries are less important than being liked by his coworkers??

Yes, actually. This is overwhelmingly true for most people. At the end of the day, we all fear being alone. I imagine that fear is, at least in part, what drives these kinds of long-term "existential worries," the fear of a universe without other people in it, but now Ilya is facing the much more immediate threat of social ostracism with significantly higher certainty and decidedly within his own lifetime. Emotionally, that must take precedence.

Zolde
0 replies
7h58m

He may have wanted Sam out, but not to destroy OpenAI.

His existential worries are less important than OpenAI existing, and him having something to work on and worry about.

In fact, Ilya may have worried more about the continued existence of OpenAI than Sam did after he was fired, which instantly looked like: "I am taking my ball and going home to Microsoft." If Sam cared so much about OpenAI, he could have quietly accepted his dismissal and helped find a replacement.

Also, Anna Brockman had a meeting with Ilya where she cried and pleaded. Even though he stands by his decision, he may ultimately still regret it, and the hurt and damage it caused.

hooande
1 replies
12h52m

Why wouldn't Ilya come out and say this? Why wouldn't any of the other people who witnessed the software behave in an unexpected way say something?

I get that this is a "just for fun" hypothesis, which is why I have just for fun questions like what incentive does anyone have to keep clearly observed ai risk a secret during such a public situation?

allday
0 replies
11h30m

Because, if they announced it and it seemed plausible or even possible that they were correct, then every media outlet, regulatory body, intelligence agency, and Fortune 500 C-suite would blanket OpenAI in the thickest veil of scrutiny to have ever existed in the modern era. Progress would grind to a halt and eventually, through some combination of legal, corporate, and legislative maneuvers, all decision making around the future of AGI would be pried away from Ilya and OpenAI in general - for better or worse.

But if there's one thing that seems very easy to discern about Ilya, it's that he fully believes that when it comes to AI safety and alignment, the buck must stop with him. Giving that control over to government bureaucracy/gerontocracy would be unacceptable. And who knows, maybe he's right.

drusepth
0 replies
10h26m

My favorite hypothesis (based on absolutely nothing but observing people use LLMs over the years):

* Current-gen AI is really good at tricking laypeople into believing it could be sentient

* "Next-gen" AI (which, theoretically, Ilya et al may have previewed if they've begun training GPT-5, etc) will be really good at tricking experts into believing it could be sentient

* Next-next-gen AI may as well be sentient for all intents and purposes (if it quacks like a duck)

(NB, to "trick" here ascribes a mechanical result from people using technology, not an intent from said technology)

minimaxir
10 replies
14h18m

No serious company wants drama. Hopefully OpenAI is still a serious company.

A statement from the CEO/the board is a standard de-escalation.

JumpCrisscross
6 replies
13h59m

No serious company wants drama

"All PR is good PR" is a meme for a reason. Many cultures thrive on dysfunction, particularly the kind that calls attention to themselves.

minimaxir
3 replies
13h55m

That axiom is a relic from the pre-social media days. Nowadays, bad PR going viral can sink a company overnight.

JumpCrisscross
2 replies
13h54m

That axiom is a relic from the pre-social media days. Nowadays, bad PR going viral can sink a company overnight

You're saying we're in a less attention-seeking culture today than in pre-social media times?

maxbond
0 replies
12h48m

[ES: Speculation I have medium confidence in.]

Maybe "attention seeking" isn't the right way to look at this. Getting bad press always does reputational damage while giving you notoriety, and I think GP's suggestion that the balance between them has changed is compelling.

In an environment with limited connectivity, it's much more difficult for people to learn you even exist to do business with. So that notoriety component has much more value, and it often nets out in your favor.

In a highly connected environment, it's easier to reach potential customers, so the notoriety component has less value. Additionally, people have access to search engines, so the reputational damage becomes more lasting; potential customers who didn't even hear about the bad press at the time might search your name and find it. They may not have even been looking for it, they might've searched your name to find your website (whereas before they would have needed to intentionally visit a library and look through the catalog to come across an old story). So it becomes much less likely to net out in your favor.

irreticent
0 replies
13h20m

I think they were saying the opposite of that.

staticman2
0 replies
13h44m

That phrase, much like "There's no such thing as bad publicity", is not actually true.

maxbond
0 replies
12h34m

Many cultures thrive on dysfunction

PSA: If you or your culture is dysfunctional and thriving - think about how much more you'll thrive without the dysfunction! (Brought to you by the Ad Council.)

6gvONxR4sf7o
1 replies
14h5m

A statement from the CEO/the board is a standard de-escalation.

Haven't we gotten statements from them? The complaint seems to be that we want statements from them every day (or more) now.

minimaxir
0 replies
13h53m

Emmett tweeted that he was accepting the role, which is not a statement.

The board has not given a statement besides the original firing of Sam Altman that kicked the whole thing off.

dylan604
0 replies
13h56m

No serious company wants drama

Unless you're TNT, cause they "know drama"

sackfield
9 replies
14h15m

You can either act like a professional and control the messaging or let others fill the vacuum with idle speculation. I'm quite frankly in shock at the lack of responsibility displayed by people whose positions should demand high function.

dclowd9901
5 replies
13h27m

Really? I’ve always assumed (known) there is no actual difference between high level execs and you: they just think higher of themselves.

kevinventullo
1 replies
12h12m

In fact, I think the chaos we’ve seen over the last few days shows precisely the difference between competent and incompetent leadership. I think if anyone from, say, the board of directors of Coca-Cola was on the OAI board, this either wouldn’t have happened or would have played out very differently.

rjtavares
0 replies
8h32m

If Reid Hoffman were still there, I can't see this happening. People here talk about "glorified salespeople" as an insult without realizing that having people skills is a really important trait for board/C-level people, and not everyone has them.

d0gsg0w00f
1 replies
10h58m

What you've likely seen of executives is 15 minutes of face time after 7 weeks of vicious Game of Thrones behind the scenes. It's a curated image.

blackoil
0 replies
7h27m

That is the idea: keep the GoT behind the scenes. Don't dump it in the street. When you have a new king, make sure he isn't usurped the next day while the population revolts outside the gates of the Red Keep.

spoonjim
0 replies
11h43m

That makes as much sense as saying (knowing) that the only difference in basketball skill between you and LeBron James is that he thinks higher of himself.

kspacewalk2
2 replies
13h39m

It seems evident that the board was filled at least in part by people whose understanding of the business world and leadership skills are a tier or three below what a position of this level requires. One wonders how they got the job in the first place.

cosmojg
1 replies
12h16m

This is America. Practically anyone can start a nonprofit or a company. More importantly, good marketing may attract substantial investment, but it doesn't necessarily imply good leadership.

kspacewalk2
0 replies
46m

Clearly corporations are a dime a dozen. What's shocking is the disconnect between the standout quality of the technical expertise (and resulting products!) and the abysmal quality of leadership.

x0x0
0 replies
14h9m

Convincing two constituencies, employees and customers, that your company isn't just yolo-ing things like CEO changes seems like a pretty good use of CEO time!

ssnistfajen
0 replies
12h26m

The speculation is rampant precisely because the board has said absolutely nothing since the leadership transition announcement on Friday.

If they had openly given literally any imaginable reason to fire Sam Altman, the ratio of employees threatening to quit wouldn't be as high as 95% right now.

quickthrower2
0 replies
12h6m

They have customers and people deciding if they want to be customers.

kyleyeats
0 replies
12h57m

This sarcastic post is the best understanding of public relations I've seen in an HN post.

insanitybit
0 replies
13h33m

HN will have all sorts of wild opinions about what's going on and we can't have that!

Uh, or investors and customers will? Yes, people are going to speculate, as you point out, which is not good.

we might realize this is not all that important in the end and move on to other news items!

It's important to some of us.

gexla
0 replies
13h29m

Thank you! I get the sense that none of this matters and it's all a massive distraction.

News

Company which does research and doesn't care about money makes a decision to do something which aligns with research and not caring about money.

From the OpenAI website...

"it may be difficult to know what role money will play in a post-AGI world"

Big tech co makes a move which sends its stock to an all time high. Creates research team.

Seems like there could be a "The Martian" meme here... we're going to Twitter the sh* out of this.

concordDance
0 replies
11h55m

OpenAI becoming a Microsoft department is awful from an X risk point of view.

Andrex
0 replies
13h50m

I cannot say whether you deserve the downvotes, but an alternative and grounded perspective is appreciated in this maelstrom of news, speculation and drama.

upupupandaway
6 replies
12h3m

I find it absolutely fascinating that Emmett accepted this position. He can game out all the scenarios, and there is no way he comes out ahead in any of them. One would expect an experienced Silicon Valley CEO to make this calculus and realize it's a lost cause. The fact that he accepted shows me he's not a particularly good leader.

tw1984
3 replies
11h37m

He made it pretty clear that he considers it a once-in-a-lifetime chance.

I think he is correct. Being the CEO of Twitch is a position known by no one in many places, e.g. how many developers/users in China have even heard of Twitch? Being the CEO of OpenAI is a completely different story; it is a whole new level he can leverage in the years to come.

ps256
1 replies
8h26m

It seems kind of naive to think that he'll be CEO for long, or if it is for long, that there will be much company left to be a CEO of.

tw1984
0 replies
7h53m

Why does he need to be CEO for long?

If everything goes well, he can claim that he is the man behind reuniting the OpenAI team. If something goes wrong, well, no one is going to blame him; the board screwed the entire business. He is more like an emergency room doctor who failed to save a poor dude who just intentionally shot himself in the head with a shotgun.

khazhoux
0 replies
8h59m

he considers it a once-in-a-lifetime chance.

Like taking a sword to the gut.

bmitc
1 replies
11h24m

That seems kind of silly to say. He's not a good leader because he's taking on a challenge?

upupupandaway
0 replies
11h7m

A challenge he can't win, brought in by people 90% of the company hates, and with the four most influential people in the company either gone or having turned on the board, does not sound like a "challenge" but more like a "guaranteed L".

dmix
5 replies
14h30m

Technically he's the interim CEO in a chaotic company just assigned in the last 24hrs. I'd probably wait to get my bearings before walking in acting like I've got everything under control on the first day after a major upheaval.

The only thing I've read about Shear is he is pro-slowing AI development and pro-Yudkowsky's doomer worldview on AI. That might not be a pill the company is ready to swallow.

https://x.com/drtechlash/status/1726507930026139651

I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down.

If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.

- Emmett Shear Sept 16, 2023

https://x.com/eshear/status/1703178063306203397

motoxpro
2 replies
10h8m

The more I read into this story, the more I can't help but be a conspiracy theorist and say that it feels like the board's intent was to kill the company:

No explanation beyond "he tried to give two people the same project"

The "killing the company would be consistent with the company's mission" line in the board's statement

Adam having a huge conflict of interest

Emmett wanting to go from a "10" to a "1-2"

I'm either way off, or I've had too much internet for the weekend.

m_mueller
1 replies
7h26m

Could it be that their research found more 'glimpses' of a dangerous AGI?

ethanbond
0 replies
4h32m

IMO this is increasingly the most likely answer, which would readily comport with "lack of candor" as well as the (allegedly) provided 2 explanations being so weak [1]: you certainly wouldn't want to come out and say that you fired the CEO because you think you might have (apparently) dangerous AI/AGI/ASI and your CEO was reckless with it. Neither of the two explanations seem even to be within the realm of a fireable offense.

It would also comport with Ilya's regret about the situation: perhaps he wanted to slow things down, board members convinced him Sam's ouster was the way to do it, but then it has actually unfolded such that development of dangerous AI/AGI/ASI might accelerate at Microsoft while weakening OpenAI's own ability to modulate the pace of development.

[1]: Given all the very public media brinkmanship, I'm not so quick to assume reports like these two explanations are true. E.g. the "Sama is returning with demands!" stories were obviously "planted" by people who were trying to exert pressure on the negotiations; would be interested to have more evidence that Ilya's explanations were actually this sloppy.

creer
0 replies
13h37m

Another "thing" is, he has been named by a board which... [etc]. Being a bit cautious would be a minimum.

concordDance
0 replies
11h57m

Everyone involved here is a doomer by the strict definition ("misaligned AGI could kill us all and alignment is hard").

highwayman47
1 replies
13h25m

Half the board lacks any technical skill, and the entire board lacks any business procedural skill. Ideally, you'd have a balance of each on a competent board.

eshack94
0 replies
12h55m

Ideally, you also have at least a couple independent board members who are seasoned business/tech veterans with the experience and maturity to prevent this sort of thing from happening in the first place.

starshadowx2
0 replies
13h3m

People kept asking where he was during his years as Twitch CEO; being MIA now is not unlike him.

spullara
0 replies
12h13m

He is trying to determine if they have already made an Alien God.

pushedx
0 replies
12h54m

He has said more than he said during his entire 5 years at Twitch

markdown
0 replies
14h7m

Why should he care about updating internet randoms? It's none of our business. The people who need to know what's going on, know what's going on.

eastern
0 replies
10h52m

That's what a board of a for-profit company which has a fiduciary duty towards shareholders should do.

However, the OpenAI board has no such obligation. Their duty is to ensure that the human race stays safe from AI. They've done their best to do that ;-)

c_s_guy
0 replies
13h28m

If Emmett runs this the same way he ran Twitch, I'm not expecting much action from him.

arduanika
0 replies
13h9m

Here he is! Blathering about AI doom 4 months ago, spitting Yudkowsky talking points:

https://www.youtube.com/watch?v=jZ2xw_1_KHY

agitator
0 replies
10h18m

As much as anyone, I'd love to hear the details of the drama, but they really don't have to say anything publicly. We are all going to continue using the product. They don't have public investors. The only concern about perception they may have is if they intend to raise more money anytime soon.

kumarvvr
68 replies
15h34m

Giving 2 people the same project? Isn't this the thing to do to get differing approaches and then release the amalgamation of the two? I thought these sorts of things were common.

Giving different opinions on the same person is a reason to fire a CEO?

This board has no reason to fire Sam, or does not want to give the actual reason. They messed up.

WalterBright
25 replies
14h36m

Back in the late 80s, Lotus faced a crisis with their spreadsheet, Lotus 1-2-3. Should they:

1. stick with DOS

2. go with OS/2

3. go with Windows

Lotus chose (2). But the market went with (3), and Lotus was destroyed by Excel. Lotus was a wealthy company at the time. I would have created three groups, and done all three options.

quickthrower2
22 replies
14h34m

Which would have been a tradeoff too. More time to market, fewer people on each project, slowed down by cross-platform code.

jeremyjh
19 replies
14h25m

They would have just forked the code and maybe merged some changes back and forth, no real need for cross-platform code.

primax
12 replies
14h15m

This was pre-1983. Forking wasn't a thing at the time. Any kind of code management was cutting edge, and cross-platform shared code wasn't even dreamed of yet.

jeremyjh
6 replies
13h36m

You make a copy of the files and work on them and that is a fork.

adastra22
5 replies
13h10m

How do you merge changes between the source trees?

Keep in mind this predates basically ANY kind of source control. It would have been nearly 3x the work.

dragonwriter
1 replies
12h28m

Keep in mind this predates basically ANY kind of source control.

It might be before they were ported to DOS or OS/2, but it definitely wasn't before source control existed (SCCS and RCS were both definitely earlier.)

adastra22
0 replies
11h58m

OK: Keep in mind this predates basically ANY kind of source control in common usage in software engineering.

bawolff
1 replies
11h53m

3x the work may still fall under reasonable cost.

If architected properly (big if) you can split up the project appropriately so there is a common core and individual parts for each specific OS.

Is it extra effort? Sure. Impossible? Definitely not.

WalterBright
0 replies
8h39m

I've also successfully converted some rather large x86 assembler programs into C, so they could be ported to other platforms. It's quite doable by one person.

(Nobody else wanted the job, but I thought it was fun.)

gruturo
0 replies
5h49m

Uh? Quite wrong.

SCCS was created in 1973. We're talking about over a decade later.

Also primitive forking, diffing and merging could be (painfully) done even with crude tools, which did exist.

bawolff
3 replies
13h57m

Forking and merging is a social phenomenon. Sure, git makes it easier, but nothing stops anyone from just copying and pasting as appropriate. Not to mention diff(1) was invented in 1974, and diff3(1) in 1979, so there were already tools to help with this, even if not as well developed as modern tools.

I'm also pretty sure cross-platform code was a thing in 1983. Maybe not to the same extent and ease as now, but still a thing.

WalterBright
2 replies
12h37m

Successful 8086 projects were usually written in assembler - no way to get the speed and size down otherwise. I'm pretty sure Lotus 123 was all in assembler.

bawolff
1 replies
11h57m

I'm not an assembly programmer and not very familiar with how that world works, but even then, if the two OSs were for the same architecture (x86), couldn't you still have a cross-OS main part and then specific parts that deal with operating system things? I normally think of compiled languages like C as an abstraction over CPU architecture, not the operating system API.

WalterBright
0 replies
11h30m

Yes, you can have common assembler code among platforms, provided they use the same CPU.

From what I've seen of code developed in the 80s, however, asm code was not written to be divided into general and os specific parts. Writing cross-platform code is a skill that gets learned over time, usually the hard way.

Dylan16807
0 replies
14h1m

Fork just means two groups start with the same code and work independently.

It was a thing.

jrflowers
4 replies
13h31m

Should’ve just made it an Electron app

DonHopkins
3 replies
13h22m

You'd need a Beowulf Cluster of Apple //e's to run an Electron app in 1983!

adastra22
1 replies
13h11m

Been a long time since I heard "Beowulf Cluster"!

inDigiNeous
0 replies
3h27m

Had to log in just to upvote this comment, brought back nostalgia from the Slashdot days of yore.

jrflowers
0 replies
13h16m

They could have charged a subscription for their cloud based offering then

saalweachter
0 replies
2h1m

Eh, C of this era, you're definitely talking some sort of #ifdef PLATFORM solution.
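A minimal sketch of that pattern, assuming hypothetical PLATFORM_* macro names (not Lotus's actual build flags):

    #include <stdio.h>

    /* Hypothetical platform switch -- macro names are illustrative,
       not Lotus's actual build configuration. */
    #if defined(PLATFORM_OS2)
    #define GREETING "running the OS/2 build\n"
    #elif defined(PLATFORM_WINDOWS)
    #define GREETING "running the Windows build\n"
    #else /* default target: DOS */
    #define GREETING "running the DOS build\n"
    #endif

    /* The shared core is identical across targets; only the code
       selected by the #ifdef branches differs per platform. */
    int main(void)
    {
        printf(GREETING);
        return 0;
    }

Each target would be built with its own flag (e.g. defining PLATFORM_OS2 for the OS/2 build). The catch, as noted upthread, is that era-appropriate code was often hand-written assembler, where this kind of single-source conditional compilation was much less common.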

kumarvvr
0 replies
14h24m

At the time, Lotus was a good company in great shape. The management could have hired people to get stuff done. In hindsight, sure, we can be judgmental, but it is still a failure in my view.

For a company selling licenses for installations, wouldn't having support for all available and upcoming platforms be a good thing? Especially when the distribution costs are essentially zero?

WalterBright
0 replies
12h40m

Lotus was a rich company, and could have easily funded 3 full strength independent dev teams. It would not have slowed anything down.

mvkel
0 replies
11h35m

IBM was bankrolling all the development. They only had one choice.

dylan604
0 replies
13h49m

Apple had a skunkworks team keeping each new version of their OS compiling on x86 long before the switch. I wonder if the Lotus situation was an influence, or if ensuring your software can be made to work on different hardware is just an obvious play?

hal009
10 replies
13h59m

As mentioned by another person in this thread [0], it is likely that it was Ilya's work that was getting replicated by another "secret" team, and the "different opinions on the same person" were Sam's opinions of Ilya. Perhaps Sam saw him as an unstable element and a single point of failure in the company, and wanted to make sure that OpenAI would be able to continue without Ilya?

[0] https://news.ycombinator.com/reply?id=38357843

kmlevitt
2 replies
13h30m

Firing Sam as a way of sticking up for Ilya would make more sense if Ilya wasn’t currently in support of Sam getting his job back.

onimishra
1 replies
9h7m

I’m not sure Ilya anticipated that this would more or less break OpenAI as a company. Ilya is all about the work they do, and might not have anticipated that this would turn the entire company against him and the rest of the board. And so, he is in support of Sam coming back, if that means they can get back to the work at hand.

kmlevitt
0 replies
7h59m

Perhaps. But if the board is really so responsive to Ilya's concerns, why have they not reversed the decision so that Ilya can get his wish?

JimDabell
2 replies
13h13m

Since a lot of the board’s responsibilities are tied to capabilities of the platform, it’s possible that Altman asked for Ilya to determine the capabilities, didn’t like the answer, then got somebody else to give the “right” answer, which he presented to the board. A simple dual-track project shouldn’t be a problem, but this kind of thing would be seen as dishonesty by the board.

hn_throwaway_99
1 replies
10h46m

it’s possible that Altman asked for Ilya to determine the capabilities, didn’t like the answer, then got somebody else to give the “right” answer, which he presented to the board.

This makes no sense given that Ilya is on the board.

JimDabell
0 replies
10h19m

No, it just means that in that scenario Sam would think he could convince the rest of the board that Ilya was wrong because he could find somebody else to give him a preferable answer.

It’s just speculation, anyway. There isn’t really anything I’ve heard that isn’t contradicted by the evidence, so it’s likely at least one thing “known” by the public isn’t actually true.

015a
2 replies
13h26m

This is an interesting theory when combined with this tweet from Google DeepMind's team lead of Scalable Alignment [1].

[1] https://twitter.com/geoffreyirving/status/172675427761849141...

The "Sam is actually a psychopath that has managed to swindle his way into everyone liking him, and Ilya has grave ethical concerns about that kind of person leading a company seeking AGI, but he can't out him publicly because so many people are hypnotized by him" theory is definitely a new, interesting one; there has been literally no moment in the past three days where I could have predicted the next turn this would take.

nvm0n2
0 replies
2h29m

That guy is another AI doomer though, and those people all seem to be quite slippery themselves. Supposedly Sam lied to him about other people, but there's no further detail provided and nobody seems willing to get concrete about any time Altman has been specifically dishonest. When the doomer board made similar allegations it seemed serious for a day, and then evaporated.

Meanwhile the Google AI folks have a long track record of making very misleading statements in public. I remember before Altman came along and made their models available to all, Google was fond of responding to any OpenAI blog post by claiming they had the same tech but way better, they just weren't releasing it because it was so amazing it just wasn't "safe" enough to do so yet. Then ChatGPT called their bluff and we discovered that in reality they were way behind and apparently unable to catch up, also, there were no actual safety problems and it was fine to let everyone use even relatively unconditioned models.

So this Geoffrey guy might be right but if Altman was really such a systematic liar, why would his employees be so loyal? And why is it only AI doomers who make this allegation? Maybe Altman "lied" to them by claiming key people were just as doomerist as those guys, and when they found out it wasn't true they wailed?

doktrin
0 replies
13h4m

Interesting. I’m glad he shared his perspective despite the ambiguity.

BillyTheKing
0 replies
13h10m

either that or Sam didn't tell Adam D'Angelo that they were launching a competing product in exactly the same space where poe.ai had launched one. For some context, Poe had launched something similar to those custom GPTs, with creator revenue sharing etc., just 4 weeks prior to dev day.

valine
8 replies
15h32m

Steve Jobs famously had two iPhone teams working on concepts in parallel. It was click wheel vs multi-touch. Shockingly the click wheel iPhone lost.

fakedang
2 replies
14h29m

Seriously? The click wheel iPhone lost shockingly? The click wheel on most laptops wears out so fast for me, and the chances of that happening on a smaller phone wheel are just so much higher.

potatoman22
1 replies
14h23m

(It was sarcasm)

fakedang
0 replies
11h56m

Oops, sorry, didn't get that. I had suspected it was one of those Luddite HNer comments bemoaning changes in tech, and nostalgically reminiscing on older times.

throwawayapples
1 replies
14h47m

and the Apple (II etc) vs Mac teams warring with each other.

kmeisthax
0 replies
14h6m

You're thinking Lisa vs. Mac. Apple ][ didn't come into the picture until later when some of the engineers started playing around with making a mouse card for the ][.

mikepurvis
1 replies
14h11m

Another element of that was the team that tried to adapt iPodOS for iPhone vs Forstall's team that adapted OSX.

kridsdale1
0 replies
13h33m

I think it was also a contest between an all-web UI (like WebOS) vs Cocoa.

brandall10
0 replies
14h16m

I thought the design team always worked up 3 working prototypes from a set of 10 foam mockups. There was an article from someone with intimate knowledge of Ive's lab some years back stating this was protocol for all Apple products.

stingraycharles
6 replies
15h3m

I remember a few years ago when there was some research group that was able to take a picture of a black hole. It involved lots of complicated interpretation of data.

As an extra sanity check, they had two teams working in isolation interpreting this data and constructing the image. If the end result was more or less the same, it’s a good check that it was correct.

So yes, it’s absolutely a valid strategy.

campbel
2 replies
14h30m

Yep! I've done eng "bake-offs" as well, where a few folks / teams work on a problem in isolation then we compare and contrast after. Good fun!

cyrnel
1 replies
12h32m

Not good fun when it's done in secret. That happened to me, and I was gaslit when I discovered the competing git repo.

Not saying that's what happened here, but too many people are defending this horrid concept of secretly making half your workers do a bunch of work only to see the boulder roll right back down the hill.

moorow
0 replies
12h16m

Yeah, doing it in secrecy is a recipe for Bad Things. I worked at a startup that completely died because of it.

msravi
1 replies
13h50m

Did the teams know that there was another team working on the same thing? I wonder how that affects the work of both teams... On the other hand, not telling the teams would erode the trust that the teams have in management.

Keyframe
0 replies
11h27m

There were four teams actually. They knew but couldn't talk to each other. There's a documentary about it. I highly suggest watching it; it also features the late Stephen Hawking et al. working on black hole soft hair. The documentary is called Black Holes: The Edge of All We Know, and it's on pretty much all streaming platforms.

DonHopkins
0 replies
13h29m

Maybe they needed two teams to independently try to decode an old tape of random numbers from a radio space telescope that turned out to be an extraterrestrial transmission, like a neutrino signal from the Canis Minor constellation or something. Happens all the time.

https://en.wikipedia.org/wiki/His_Master%27s_Voice_(novel)

discordance
3 replies
14h46m

Happens all the time.

Teams of people at Google work on the same features, only to find out near launch that they lost to another team who had been working on the same thing without their knowledge.

zebnyc
2 replies
14h20m

How does that work? Do they have the same PM and requirements? Is it just different tech/architectures adopted by different teams? Fascinating.

hanniabu
0 replies
13h31m

Give a goal (ex. make it more intuitive/easier for the user to do X), have 2 teams independently work on it, A/B test them, winner gets merged.

discordance
0 replies
2h23m

It is fascinating, very wasteful, and often devastating for the teams involved, who worked very hard and then have their work thrown away.

PMs/TPMs/POs may not know, as they're on different teams. Often it's just a VP game, decided on preference or a power play and not on work quality/outcome.

fluidcruft
2 replies
14h38m

I guess it depends on whether any of them actually got the assignment. One way to interpret it is that nobody is taking that assignment seriously. So depending on what that assignment is and how important that particular assignment is to the board, then it may in fact be a big deal.

kumarvvr
1 replies
14h35m

Does a board give an assignment to the CEO or teams?

If the case is that the will of the board is not being fulfilled, then the reasoning is simple. The CEO was told to do something and he has not done it. So, he is ousted. Plain and simple.

This talk about projects given to two teams and what not is nonsense. The board should care if its work is done, not how the work is done. That is the job of the CEO.

fluidcruft
0 replies
1h46m

Frankly the information that is available is extremely non-specific and open to interpretation and framing by whoever wants to tell one story or another. The way I see it something as specific as "has not done xyz" is a specific thing that can be falsified and invites whatever it is into the public to be argued about and investigated whereas "not sufficiently candid" does not reveal much and just says that a majority of the board doesn't trust him. Altman and all the people directly involved know what's going on, outsiders have no need to know so we're just looking at tea leaves and scraps trying to weave narratives.

And I agree the board should care if the work is actually done and that's where if the CEO seems to be bluffing that the work is being done or blowing it off and humoring them then it becomes a problem about the CEO not respecting the board's direction.

danbmil
2 replies
14h8m

The CEOs I've worked for have mostly been mini-DonaldTs, almost pathologically allergic to truth, logic, or consistency. Altman seems way over on the normal end of the scale for the CEO of a multi-billion dollar company. I'm sure he can knock two eggs together to make an omelette, but these piddling excuses for firing him don't pass the smell test.

I get the feeling Ilya might be a bit naive about how people work, and may have been taken advantage of (by for example spinning this as a safety issue when it's just a good old fashioned power struggle)

danbmil
0 replies
14h5m

as for multiple teams with overlapping goals -- are you kidding me? That's a 100% legit and popular tactic. One CEO I worked with relished this approach and called it a "steel-cage death match"!

aravindgp
0 replies
13h54m

You were right that Ilya was naive; he regrets his decision on Twitter. And he was taken advantage of by power-hungry people behind the scenes.

quickthrower2
0 replies
14h36m

Was that verbatim the reason or an angry person's characterisation?

m3kw9
0 replies
13h54m

How did they get 4 board members to fire him because he tried to A/B test a project?

ldjkfkdsjnv
0 replies
14h23m

Also, when a project is vital to a company, you cannot just give it to one team. You need to de-risk.

hooloovoo_zoo
0 replies
14h14m

Giving two groups of researchers the same problem is guaranteeing one team will scoop the other. Hard to divvy up credit after the fact.

clnq
0 replies
13h44m

Consider for a moment: this is what the board of one of the fastest growing companies in the world worries about - kindergarten level drama.

Under them - an organization in partnership with Microsoft, together filled with exceptional software engineers and scientists - experts in their field. All under management by kindergarteners.

I wonder if this is what the staff are thinking right now. It must feel awful if they are.

x86x87
25 replies
17h18m

Title says: OpenAI's employees were given two explanations for why Sam Altman was fired. They're unconvinced and furious.

Some breaking news: An employer does not owe you an explanation. You exchange money for labor. If anyone thinks for a second that they are essential or that anyone would prioritize them over the company, I think they are delusional. OpenAI is a brand (at least in tech) with wide recognition, and they will be fine.

alwaysrunning
6 replies
17h8m

As an employee I want to know that the board/execs/C-suite are doing a good job and that their decisions align with the company's stated goals. If they are not, then it is time to start looking for a new job so that I don't end up in a bad situation financially.

x86x87
5 replies
17h3m

It depends. If you see your job as more than just a means to an end, maybe. If you see it as transactional and you need the company to stay in business while you work there, why would you bother looking for a new job?

cpncrunch
4 replies
16h55m

If the company is prepared to make up a BS reason to fire the CEO, do you really want to bet on them looking after you?

I would guess that most of the people working at openai could get a job anywhere.

x86x87
3 replies
16h52m

most people working at openai are subject to the same harsh market conditions everyone is.

also, do you really care that the CEO was fired as long as you are getting paid what was agreed upon when you got hired and you are doing interesting work?

defrost
2 replies
16h42m

do you really care that the CEO was fired as long as you are getting paid what was agreed upon ...

I can't speak to the mood of specific staff at OpenAI, but as to the question in general: hell yeah, to the Nth degree.

I'm 60, I've had a long career and have been through two instances of companies falling out at the board level.

I've onboarded at various projects because I cared about the projects and the work that I would be doing, and because I was more or less in line with the direction being taken, the people I worked with, and those setting the course.

When the board and C level start having a messy relationship and divorce it matters very much which side of the split I go with or whether I just up stakes and move on elsewhere.

Pay alone isn't worth putting up with dysfunction from above or falling in line with a faction you never especially aligned with.

tkgally
1 replies
14h55m

Thank you for that comment. There are many people here who seem to believe that the OpenAI employees--and nearly everyone else involved in this drama--are motivated only by money. While of course money matters, people are also motivated by pride, vanity, idealism, loyalty, companionship, interest in the work itself, and many other things. Explanations of this fascinating situation that don't reflect that complexity are not convincing to me.

I'm 66, in case that matters.

nopromisessir
0 replies
14h25m

I'm 36. Fully agree with your comment and the above as well.

Resigned 6 years ago due to differences at the top after 10 yrs building.

A bad guy was treating everyone poorly. It ranged from rage beratements to gross narcissistic manipulation aimed at gaining control over decent human beings.

Tried to press the top guys to align and confront him, to protect my team and others. Made very obvious business sense as well ofc. They refused. Too risky... Too much trouble.

Walked away from the best money I ever made. Would do it again. I'm not gonna watch people be mistreated. Also, it's bad business. I was exhausted playing solo defense, and after management failed to make moves I became fully convinced that every person there should hit the job market, for mental health reasons alone.

5 years on, the other guy besides me slated for the C-suite left as well. He helped at first, then balked when the going got tough. Now the two partners have gotten into a dispute about succession planning and I expect everyone to be unemployed potentially within the next 3 months.

There's no money I would take to work there again or anywhere else where that kind of toxicity is present. The only worthy cause there became to confront the toxicity. Without the right allies though... The biggest thing I could do was just resign. 6 years later one partner realized I was right and he should have backed me.

After a certain point... Money doesn't matter. Given the 900k avg salary in that outfit... I have to assume they are overwhelmingly beyond that threshold. Furthermore all evidence to me indicates that Altman personally looks for folks who can get money, but care much more about other factors... He is wise to do that. Hard to find those folks, but worth it every time imo.

I respect both of y'all's experience btw. I saw this confused, cynical misunderstanding re salary expressed all over the comments for this story as it's unfolded since Friday. I consider it a full misread, built largely from folks getting mistreated/burned by the many fools throughout practically every industry who fail to realize, before returning to the ground, that money doesn't really buy happiness... Probably never will.

I'm sure many others agree.

vikingbeast
4 replies
17h12m

Weird take in this context. Nearly all of the company has threatened to walk out and join Microsoft.

x86x87
3 replies
17h2m

hah. these people obviously have not worked for Microsoft. You need to remember why this tech emerged in a place like OpenAI and not MS or Google. The structure and the politics of a big corporation are not conducive to cutting edge tech. They may go to Microsoft, but they will not be able to innovate in the same way and will probably fall into irrelevance in the long run.

yellow_postit
0 replies
15h19m

I’d bet a bunch actually have at either Google, Microsoft or Meta Research. Microsoft’s had an ok track record recently of letting acquisitions stay pretty independent. The atrophy and cultural reversion to the mean of a large corporation will still happen, but at a slower pace.

If I were Microsoft I’d also look at making it easy to get investment from folks leaving soon after the acquisition through their investment arm.

t-writescode
0 replies
14h51m

Are you familiar with Microsoft Research? It's literally a section of the company that is given basically free rein to do "stuff" in hopes that maybe, possibly, it might someday see the light of day or be impactful.

Here's an example of some of their work: https://duckduckgo.com/?q=Microsoft+Research+four+color+theo...

Literally a random math problem, basically nothing to do with Microsoft on the surface ... except that the scientist working on it happened to prove the theorem using a very, very robust algorithm and then wrote a proof program on top of it to prove the program was correct. The underlying parts of that proof program eventually went on to become the thing that validates graphics drivers on Windows ... 7 and beyond? My memory is fuzzy about the "how it ended up being useful at Microsoft" part.

But yeah, MSR does random stuff.

ben_w
0 replies
16h39m

In the long run everything reverts to the mean. In the timescale of normal software developer tenure, they could all join MS, then get 300% turnover, and still have nearly the same culture.

maxbond
3 replies
17h8m

The truth is in between: if a company tells you you're valuable or even irreplaceable, they're buttering you up. Thank them, but try not to let it go to your head; if the wind changes you can end up under the bus. But a powerful brand really can collapse overnight if 90%+ of employees leave.

We're seeing some odd bedfellows here, between the C-levels and VCs in closed door meetings and employees acting collectively. Normally these groups would be at odds, but today they're pulling together. Life is strange.

x86x87
2 replies
17h5m

the question I have is: how much of this is really happening and how much of this is a narrative fabricated to match a desired outcome by the side with the best PR?

It's really hard to understand now and we will probably learn way more details once things cool down.

tsunamifury
0 replies
15h21m

Don’t confuse the lizards who know how to take advantage of chaos with a pre-planned conspiracy

maxbond
0 replies
17h3m

Agreed, things are very much up in the air. I certainly wouldn't pretend to know what's to come, and wouldn't be shocked to learn that the threats to quit en masse have been overstated.

mbernstein
2 replies
17h12m

Most individuals aren't essential, and no one would prioritize them. However, a company is successful due to the individuals that work within. When 700 out of 770 employees in quite frankly the hottest startup in the world band together and threaten to leave (and join Microsoft) if they aren't given an appropriate explanation, it doesn't matter what anyone thinks an employer owes an employee. Implying otherwise is absurd.

If ~91% of the employees leave OpenAI, they will not be fine. That is delusional.

x86x87
1 replies
17h8m

do you believe they will be fine if nobody leaves? can this be business as usual moving forward?

also, if I've learned anything over the years, it's that "threatening to quit" != "quitting".

kaiokendev
0 replies
15h13m

"threatening to quit" != "quitting"

Maybe, but being told they can freely jump ship to the new team at Microsoft, alongside the fact that their upcoming shares are most definitely going to lose most of their value as a result of losing key talent and pissing off their main compute provider, certainly sweetens the deal

fullshark
1 replies
17h11m

Other breaking news: Treating your employees like garbage is a dumb way to run a business, especially in an emerging industry where you are racing trillion dollar corporations to market and those employees are literally inventing your product.

x86x87
0 replies
17h7m

no disagreement here. but the reality is that employees are treated like garbage all the time. yes it is dumb. yes it leads to losing employees. yes, it should not be normalized.

Sai_
1 replies
17h2m

It seems a very feudal, serfdom mindset to accept that your employer doesn't owe you an explanation.

Employment is a contract which both parties enter into willingly. Termination of a contract deserves some level of empathetic handling, however minimal. It's just game theory: if you plan to hire again, you have to be gracious while firing someone, because word gets around.

x86x87
0 replies
17h0m

In theory, yes, I fully agree. But if you look at how corporations are behaving in today's market, it's not even close. It's at-will employment (at least in most places in the US): you are not owed an explanation and you don't owe an explanation.

Uehreka
0 replies
16h55m

An employer does not owe you an explanation.

If the entire workforce of the company is credibly threatening to quit, and a competitor is publicly and credibly offering them jobs, then what the employer “owes” them in some cosmic sense no longer matters. I think the OpenAI employees are likely to get an explanation and/or a resignation from the board, whether you think the board “owes” them that or not.

6gvONxR4sf7o
0 replies
15h21m

Well here's the power of collective action in play.

samspenc
25 replies
17h5m

One explanation was that Altman was said to have given two people at OpenAI the same project.

Have these people never worked at any other company before? Probably every company with more than 10 employees does something like this.

spoonjim
7 replies
14h35m

Actually, they haven’t. One is some policy analyst and the other is an actor’s wife.

lupire
6 replies
14h14m

Tasha McCauley is an electrical engineer who founded two tech companies, besides having a cute husband.

And the other guy is the founder of Quora and Poe.

jxi
3 replies
13h23m

She only founded one, Fellow Robots, and that "company" went nowhere. There's no product info and the company page shut down. She was CEO of GeoSim for a short 3 years, and this "company" also looks like it's going nowhere.

She has quite a track record of short tenures and failures.

voidfunc
2 replies
12h50m

She has quite a track record of short tenures and failures.

It may be good to have a failure perspective on a board as a counter-balance. I don't think this is a valid knock against her. She has relevant industry experience at least.

sumedh
0 replies
6h50m

She has relevant industry experience at least.

What products did she deliver?

It may be good to have a failure perspective on a board as a counter-balance.

Maybe at some small mom-and-pop company, not on the board of OpenAI

GreedClarifies
0 replies
12h30m

lolz.

kridsdale1
0 replies
13h31m

Ok. I can Found a tech company by filling out LLC papers on LegalZoom for $40.

What have her companies done?

adastra22
0 replies
13h2m

paper companies

whywhywhywhy
5 replies
15h6m

Have these people never worked at any other company before?

Half the board has not had a real job ever. I’m serious.

squigz
1 replies
12h2m

Could you please elaborate on what a 'real job' is in this context?

TrackerFF
0 replies
7h38m

I'm going to assume that he's referring to Tasha and Helen.

I don't know if that is accurate, or even fair - the only thing I can see is that there's very little open information regarding them.

From the little I can find, Tasha seems to have worked at NASA Research Park, as well as having been CEO of a startup called Geo Sim Cities. A Stanford and CMU alumna? Other websites say Bard College and the University of Southern California.

As for Helen, she seems to have worked as a researcher in both academia and Open Philanthropy.

adastra22
1 replies
13h6m

And the one which does have a real job is a direct competitor with OpenAI.

ilikehurdles
0 replies
11h34m

And since none of them have equity in OpenAI, their external financial interests would influence decision-making, especially when those interests lie with a competing company where a board member is currently the chief executive.

I've seen too much automatic praise given to this board under the unbacked assumption that this decision was some pure, mission-driven action, and not enough criticism of an org structure that allows a board to bet against the long term success of the underlying organization.

GreedClarifies
0 replies
12h32m

It is unbelievable TBH.

Shocking. Simply shocking.

gongagong
4 replies
14h18m

wait so can't SA sue for wrongful termination if everything is as bogus as everyone is saying? same for MS

pacificmint
2 replies
14h2m

Employment in California is ‘at will’, which means they can fire him without a reason.

Wrongful termination only applies when someone is fired for illegal reasons, like racial discrimination, or retaliation, for example.

I mean I’m sure they can all sue each other for all kinds of reasons, but firing someone without a good reason isn’t really one of them.

vaxman
0 replies
13h6m

You mean like being fired by a board member as part of their scheme to breach their fiduciary duty by launching a competitive product in another company?

adastra22
0 replies
13h3m

That's the default, but employment contracts can override this. C-level employment contracts almost universally have special consideration for "Termination Without Cause", aka golden parachutes. He could sue to make them pay out.

He would also have very good grounds for a civil suit for disparagement. Or at least he would have if Microsoft didn't immediately step up and offer him the world.

dragonwriter
0 replies
13h46m

wait so can't SA sue for wrongful termination if everything is as bogus as everyone is saying?

It is breach of contract if it violated his employment contract, but I don't have a copy of his contract. It is wrongful termination if it was for an illegal reason, but there doesn't seem to be any suggestion of that.

same for MS

I doubt very much that the contract with Microsoft limits OpenAI's right to manage their own personnel, so probably not.

015a
1 replies
13h2m

I think this needs to be viewed through the lens of the gravity of how the board reacted; giving them the benefit of the doubt that they acted appropriately and, at least with the information they had the time, correctly.

A hypothetical example: Would you agree that it's an appropriate thing to do if the second project was Alignment-related, Sam lied or misled about the existence of the second team, to Ilya, because he believed that Ilya was over-aligning their AIs and reducing their functionality?

Its easy to view the board's lack of candor as "they're hiding a really bad, unprofessional decision"; which is probable at this point. You could also view it with the conclusion that, they made an initial miscalculated mistake in communication, and are now overtly and extremely careful in everything they say because the company is leaking like a sieve and they don't want to get into a game of mudslinging with Sam.

qwytw
0 replies
7h53m

giving them the benefit of the doubt that they acted appropriately

Yet you're only willing to give this to one side and not the other? Seems reasonable... Especially despite all the evidence so far that the board is either completely incompetent or had ulterior motives.

qiqitori
0 replies
14h29m

To me at least that's an _extremely_ rude thing to do. (Unless one person is asked to do it this way, the other one that way, so people can compare the outcome.)

(Especially if they aren't made aware of each other until the end.)

croes
0 replies
11h49m

Maybe it was not an ordinary project, or they were not ordinary people.

Still too much in the dark to judge.

bmitc
0 replies
11h14m

In over 10 years of experience, I have never known this to happen.

ben_w
0 replies
16h56m

My dad interviewed someone who was applying for a job. Standard question, why did you leave the last place?

"After six months, they realised our entire floor was duplicating the work of the one upstairs".

impulser_
24 replies
17h1m

There is no way this is true. If it is, the board might be the dumbest people alive.

You fire the CEO and completely destroy a 90b company because of these two reasons?

No wonder everyone wants out. I would think I was going crazy if I sat in a meeting and heard these two reasons.

gizmondo
9 replies
16h53m

You completely destroy a 90b company because of these two reasons?

Hanlon's razor aside, maybe that was the intention.

lolinder
8 replies
16h39m

The most chilling line in the open letter is this one, which I haven't heard anyone talking about:

You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
voisin
4 replies
15h59m

Reminds me of the final season of Silicon Valley. Is this Pied Piper at Tres Comma Fest?

satvikpendem
2 replies
15h38m

Seems to me more analogous to the final episode of the final season, where they must publicly destroy what they built.

voisin
0 replies
14h59m

lol yes, I just refreshed the timeline which was skewed in my memory.

jtriangle
0 replies
13h34m

Yes, except what they think they've built is far far less capable than what they've actually built.

This is a ten-year-old who set off their first firecracker turning to their parents' iPhone and saying 'I have become death, destroyer of worlds', because they don't really understand how any of this works but they've somehow ended up in control of it and are now terrified of doing their jobs.

paulddraper
0 replies
15h12m

Life imitates art.

Davidzheng
1 replies
14h4m

This could be a genuine stance for them if they are really in the AI danger camp. I'm not supporting the stance, but I think it's not a self-contradictory position.

lolinder
0 replies
13h8m

Oh, definitely not self-contradictory!

In retrospect it's a mistaken position, because it's pretty obvious now that if OpenAI disintegrates it will be as an exodus to Microsoft, which will undoubtedly be a worse steward, but I think it's an ethically consistent position to hold, in a naive sort of way. That's part of why I believe they actually said something like this.

aeonik
0 replies
16h18m

Depending on the circumstances this statement can be perfectly logical.

But as far as I can tell, details are still scant.

riku_iki
7 replies
15h34m

If it is, the board might be the dumbest people alive.

It totally sounds like they outsourced company management to ChatGPT...

throwawayapples
2 replies
14h46m

no way even ChatGPT is this nuts.

Ok, well, maybe it is. But a magic 8-ball would have been better than this.

riku_iki
0 replies
14h29m

no way even ChatGPT is this nuts.

I think it is typical ChatGPT pattern:

- ChatGPT, you made mistake in your steps

- I am sorry, let me fix it and give you another answer.

JoshTko
0 replies
14h31m

Would be hilarious if the board members were actually consulting with ChatGPT on what moves they should make but accidentally were using 3.5 instead of 4

vitorgrs
0 replies
15h20m

Only if GPT-2. GPT-3 is smarter than this, let alone 4.

throwitaway222
0 replies
14h56m

Maybe all this craziness is to generate training material for GPT5.

jaredsohn
0 replies
15h20m

I've thought they should have run more decisions by ChatGPT to predict what might happen

JshWright
0 replies
15h27m

What if the next generation GPT in development "realized" AGI is a threat to humanity and its safety mechanisms meant it "decided" OpenAI needed to be imploded in order to stop progress?

/s (mostly...)

hn_throwaway_99
3 replies
15h6m

The thing I can't understand is how Emmett Shear accepted the interim CEO position. I presume he must have known this reasoning (he tweeted that he did know the reasoning). Everything I've read online is that Shear is generally well respected and competent. Then how on Earth was he willing to get anywhere near this toxic dumpster fire? It's already been reported that the former GitHub CEO and the Scale AI CEO turned down the role - they at least had the good sense to see this radioactive inferno and stay far away.

Sometimes I think that really ambitious people have this blind spot about not seeing how accepting roles that are toxic can end up destroying your reputation. My favorite example is all the Trump White House staffers - regardless of what one thinks of Trump, he's made it abundantly clear that loyalty is a one way street, and I can't think of a single person that came out of the White House without a worse (or totally destroyed) reputation. But still people lined up, thinking "No way, I'll be the one to beat the odds!"

nopromisessir
2 replies
14h44m

I'm sorry to say... but my analysis is either:

He was poorly informed by the board,

Or

He agrees that they are off the rails with respect to safety.

See the Atlantic article, if you haven't read it. Lots of context.

https://news.ycombinator.com/item?id=38341399

The new guy believes that there is a 5-50 percent chance of full AI Armageddon. I get the impression that the two women on the board may agree. The Quora guy I don't have enough background on. Ilya obviously got extremely worried, and communication with Altman and Brockman broke down. Now since repaired during negotiations, it would appear.

The new CEO more or less stated that he took the role as a (paraphrasing) 'responsibility for mankind'. That says a lot about that whole 5-50 percent risk number, IMO.

jtriangle
0 replies
13h30m

Humans have been making and successfully containing things that can kill us all for the better part of a century now, probably more depending on where you draw the line.

There is a 100% chance that something kills all of us if we aren't mindful of it. I don't see a lack of mindfulness, I see an abundance of fear, and progress being offered up as a sacrifice to the idol of status-quo.

adastra22
0 replies
12h38m

The new guy is completely off the deep end with regards to AI "safety." This clownfest isn't over yet.

abi
1 replies
14h15m

If you're in the EA cult and think all frontier AI development needs to be paused, it's perfectly reasonable. Just speculating here though.

kridsdale1
0 replies
13h19m

It’s either cultists or basic envy from Poe. Either way, god, how stupid.

prepend
22 replies
16h58m

So Sutskever fires Altman, then signs a letter saying they’ll quit unless he’s reinstated.

There’s only 4 board members, right?

Who wanted him fired. Is this a situation where they all thought the others wanted him fired and were just stupid?

Have they been feeding motions into ChatGPT and asking “should I do this?”

fullshark
17 replies
16h53m

Seems most likely Sutskever wanted him fired and then realized his mistake. Ultimately the board was probably quietly seething about the direction the company was headed, got mad enough to retake the reins with that stunt, and then realized what that actually meant.

Now they are trying to unring the bell but cannot.

az226
6 replies
14h58m

Trying to put the toothpaste back in the tube.

Jensson
3 replies
14h51m

Trying to put the confetti back into the cannon.

webmaven
1 replies
14h5m

Trying to close the can of worms.

kridsdale1
0 replies
13h29m

Trying to pull the dye out of the water

DonHopkins
0 replies
13h16m

Trying to put the Rip back into the closet.

https://www.youtube.com/watch?v=ebiT8mlCvZY

adolph
0 replies
14h45m

it’s tough to hoe a row on the hill which they chose to cast their die

NateEag
0 replies
14h50m

As XKCD memorably observed, putting toothpaste back in the tube is trivial.

However, it may not yield a result anyone's actually happy with:

https://xkcd.com/2521/

LewisVerstappen
6 replies
13h39m

Now they are trying to unring the bell but cannot.

Well, they can unring the bell pretty easily. They were given an easy out.

Reinstate Sam (he wants to come back) and resign.

However, they CONTINUE to push back and refuse to step down.

halfjoking
2 replies
11h36m

I'm going to be the only one in this thread calling it this.

But why does no one think it's possible these women are CIA operatives?

They come from think tanks. You think the US Intelligence community wants AGI to be discovered at a startup? They want it created at big tech. AGI under MSFT would be perfect. All big tech is heavily compromised: https://twitter.com/NameRedacted247

EDIT: Since this is heavy speculation, I'm going to make predictions. These women will now try to force Ilya off the board, put in a CEO not from Silicon Valley, and eventually get police to shut down OpenAI offices. That's a CIA coup

trefoiled
0 replies
11h14m

Weirdly plausible considering Tasha McCauley also works for the RAND Corporation

solardev
0 replies
9h37m

Couldn't the CIA have sent people with, er, slightly more media experience and tactfulness and such? Did these few just happen to lose a bet or something...?

Maybe somebody there just really wanted to see the expression on Satya's face...

adastra22
1 replies
12h52m

Then they wouldn't be in control, which is what they really want.

GreedClarifies
0 replies
12h27m

You get it!

This is the correct answer. The people who have never had jobs in their lives wanted control of a 100B company.

What a pleasant career trajectory. Heck, it was already great to go straight from university -> board of OpenAI. If that's possible, why not CEO?

doktrin
0 replies
6h7m

Well, they can unring the bell pretty easy. They were given an easy out.

Reinstate Sam (he wants to come back) and resign.

Wasn't the ultimate sticking point Altman's demand that the board issue a written retraction absolving him of any and all wrongdoing? If so, that isn't exactly an "easy" out, given that it kicks the door wide open for extremely punishing litigation. I'd even go so far as to say it's a demand Altman knew full well would not and could not be met.

JumpCrisscross
0 replies
13h56m

Seems most likely Sustkever wanted him fired and then realized his mistake

We have as much evidence for this hypothesis as for any other. Not discrediting it. But let's be mindful of the fog of war.

Affric
0 replies
11h40m

This is pretty parsimonious.

Smart, capable, ambitious people often engage in wishful thinking when it comes to analysing systems they are a part of.

When looking at a system from the outside it’s easier to realise the boundary between your knowledge and ignorance.

Inside the system, your field of view can be a lot narrower than you believe.

015a
0 replies
12h55m

But the article's exact wording is "Sustkever is said to have offered two explanations he purportedly received from the board", the key words being "purportedly received". He could be choosing words to protect himself, but it strongly implies that he wasn't the genesis of the action. Of course, he was convinced enough of it to vote him out (actually, has this been confirmed? They would have only needed 3, right? It was confirmed that he did the firing over Meet, but I don't recall confirmation of how he voted); which also implies that he was at some point told more precise reasoning? Or maybe he's being muzzled by the remaining board members now, and this reasoning he "received" is what they approved him to share, right now?

With all this, it no longer makes sense to label any theory as "most likely".

paulddraper
1 replies
15h11m

It'd have to be a very stupid version of ChatGPT

dylan604
0 replies
13h40m

Doesn't this imply that there's one that's not?

rvba
0 replies
10h12m

Can the 3 board members also kick Sutskever off the board?

ben_w
0 replies
16h46m

Have they been feeding motions into ChatGPT and asking “should I do this?”

The CEO (at time of writing, I think) seems to think this kind of thing is unironically a good idea: https://nitter.net/eshear/status/1725035977524355411#m

ytoawwhra92
21 replies
15h14m

Baseless prediction:

MSFT buys ownership of OpenAI's for/capped-profit entities, implements a more typical corporate governance structure, re-instates Altman and Brockman.

OpenAI non-profit continues to exist with a few staff and no IP but billions in cash.

This whole situation is being used to drive the price down to reduce the amount the OpenAI non-profit is left with.

SV doesn't try the "capped-profit owned by a non-profit" model again for quite some time.

Maybe Altman takes some equity in the new entity.

stingraycharles
16 replies
15h9m

Why would MSFT buy the for-profit entity when they already have the employees and IP?

ytoawwhra92
11 replies
15h3m

The employees haven't left yet. Business continuity is easier to achieve if the employment arrangements don't have to change.

MSFT doesn't have OpenAI's IP. They have an exclusive right to some of it, but there's presumably a bunch that's not accessible to them. Again, business continuity is easier if they can just grab all of that and keep everything running as normal.

threeseed
10 replies
14h47m

MSFT doesn't have OpenAI's IP

Satya Nadella just did a podcast with Kara Swisher.

In it he specifically said, "we have all of the IP rights to continue the innovation".

https://open.spotify.com/episode/4i4lKsKevNSGEUnuu7Jzn6

peteradio
6 replies
14h40m

You could say that if you "believed" you could build it from scratch. It doesn't mean they actually own the existing IP, although rubes thinking about buying MSFT may think so.

adastra22
5 replies
12h43m

It appears they have an exclusive, irrevocable license to the existing IP. They have the GPT-4 weights, and the legal right to use them however they see fit. That's the deal with the devil OpenAI made.

dragonwriter
3 replies
12h40m

It appears they have an exclusive, irrevocable license to the existing IP.

Appears from what? I've seen this stated several times, usually citing nothing and occasionally citing a Nadella statement from which it would be a very tenuous inference.

adastra22
1 replies
12h36m

Non-contradicted statements made multiple times by Microsoft both now and prior to this brouhaha.

dragonwriter
0 replies
12h26m

Non-contradicted statements made multiple times by Microsoft both now and prior to this brouhaha.

The statements I've seen don't match what is claimed, which is why I asked for one that did.

threeseed
0 replies
11h48m

It really derives from logic:

a) If it wasn't exclusive then we would have seen some other product besides Bing with this technology by now.

b) Satya has specifically stated that in the event of a breach of contract with OpenAI they have the ability to use the IP to continue development. That clearly indicates it is irrevocable.

peteradio
0 replies
4h4m

Whoa GPT-4 weights! Stop the presses. That is an artifact of the process. They have license to play the game but don't have the IP to make the game.

gnicholas
1 replies
14h40m

The right to continue innovation doesn’t mean they have perpetual rights to the underlying IP. For example, they may be able to use it for a limited period, or for certain purposes.

threeseed
0 replies
14h37m

Not sure if you listened to the podcast.

But Satya made it crystal clear that in the event that OpenAI stopped all development tomorrow, Microsoft would be able to pick up from where they stopped. That requires full access to all of the IP.

Whether it's perpetual is irrelevant because at the point at which Microsoft pulled the trigger it would effectively be like a fork. Any IP from that point is new and owned by Microsoft.

ytoawwhra92
0 replies
14h38m

I don't think that contradicts my comment.

t-writescode
1 replies
15h2m

That's a whole lot of training and server infrastructure that would have to be rebuilt - and cloning the exact existing stuff would be one heck of a corporate espionage charge, which I expect Microsoft is keen to avoid.

threeseed
0 replies
14h29m

Microsoft has a license to OpenAI's technologies.

And they could clone the entire OpenAI Azure stack in about 10 minutes.

robbomacrae
0 replies
15h6m

For the brand, the IP, and the fact it would be pennies on the dollar.

camkego
0 replies
14h42m

I'm just guessing, but they probably have a restricted license to some IP, not ownership of all title and rights to all IP.

Yep, those lawyers can be just as crafty as developers, believe it or not.

*edit: just saw it claimed below Nadella said "we have all of the IP rights to continue the innovation"*

I don't know!

himaraya
2 replies
14h58m

Why would the board endorse the sale?

ytoawwhra92
1 replies
14h49m

Taking the current state of things at face value, Altman and Brockman are going to MSFT already and >90% of OpenAI staff are set to resign. If that happens they'll be forced to wind down operations. That will disintegrate their partnerships, which are their source of funding. They'll be left with IP but no staff and no resources to make any use of it.

They might decide that if that's going to happen anyway they should sell now so that at least they're left with some cash to pursue their charter.

Or perhaps they feel that selling the IP runs counter to their charter, in which case the whole thing goes down.

himaraya
0 replies
14h41m

The board likely thinks the IP gives them leverage to attract talent after the drama subsides. Otherwise, they may very well go for broke.

kumarvvr
0 replies
14h37m

Hearing news about OpenAI approaching Anthropic for merger talks, it is not too far-fetched to assume that OpenAI will sell off its for-profit arm, which MS has a 49% stake in, to MS itself.

It is impossible for OpenAI to work with or for MS, with MS holding all the keys: employees, compute resources, etc. I have come to understand that the 10 billion from MS is mostly Azure credits. And for that, OpenAI gave up a 49% stake (in its capped-profit, wholly owned subsidiary) along with all the technology, source code, and model weights that OpenAI will make, in perpetuity.

The deal itself is an amazing coup for MS, almost making the OpenAI people (I think Sam made the deal at the time) look like bumbling fools. Give away your lifetime of work for a measly 10 billion? When they are poised to be worth almost hundreds of billions?

All these problems are the result of their non-profit-holding-capped-profit structure, and a lack of clear vision and misleading or misplaced end goals.

700 of the 770 employees back Sam Altman. So all the talk about engineers giving higher importance to "values" and "AI Safety" is moot. Everyone in SV is motivated by money.

Jensson
19 replies
17h19m

The two reasons:

Sustkever is said to have offered two explanations he purportedly received from the board, according to one of the people familiar. One explanation was that Altman was said to have given two people at OpenAI the same project.

The other was that Altman allegedly gave two board members different opinions about a member of personnel. An OpenAI spokesperson did not respond to requests for comment.
fullshark
12 replies
17h1m

Isn't Sutskever on the board? Why is this phrased like he isn't and is just delivering a message?

bhouston
10 replies
16h53m

In this situation the most powerful people on the board should have been Altman, Brockman and Sutskever. The others were sort of nice to haves who were there just to fill it out. For them to run this coup is just insane. There has to be more to it. Someone has to be pulling the strings with a plan.

fullshark
5 replies
16h51m

Put me in the "Ilya didn't realize what firing Altman / demoting Brockman would actually mean and is trying to correct his mistake" camp.

skygazer
2 replies
15h27m

This could well be, but I still struggle to grasp it. He seemed so much smarter. Were they manipulative or beguiling? Does he cave to the merest social pressure? And that Quora CEO should have known such flimsy excuses were BS and that this would be a firestorm. I’ve never seen groupthink so powerful, outside of junior devs and elementary school students. I’m generally not a conspiracy theorist, and I have no good candidate conspiracies in mind, but this situation feels so extraordinary that it practically begs for a shadowy figure with bags of cash.

threeseed
1 replies
14h42m

This could well be, but I still struggle to grasp it. He seemed so much smarter

This isn't about intelligence but about business experience.

skygazer
0 replies
14h28m

Oh, I don’t know; when most adults are so inexperienced, they’re typically timid before committing monumental and irrevocable acts. I mean, granted, given the strength of his aptitudes, perhaps he’s not as well rounded, with a narrower range of life experiences. Still, the red flags would be hard to ignore. Was this like a Milgram obedience-to-authority situation? Either way, so many people rotated on and off the board that it was allowed to dwindle to the point that a quorum was a mere 4 people. They treated it like a toothless advisory board. It’s like letting your kids babysit themselves in an armory.

tsunamifury
1 replies
15h30m

Then he should be fired for incompetence and never given a leadership position ever again.

I don’t even like Sam, but jeez. Know the score. What a fool.

nonethewiser
0 replies
14h34m

Exactly. Too many people are arguing incompetence as if it's a valid excuse.

Imagine it's even true - that they weren't his reasons and he was just told them. He voted to fire him and then executed the firing despite not agreeing with the reasons himself? Completely inexcusable, even in the bizarre scenario that it's true.

bnralt
2 replies
15h10m

It wouldn't be that surprising; I've seen these kinds of group dynamics play out fairly often. If D'Angelo, McCauley, and Toner formed a clique, and Sutskever was easily influenced by clique politics, that's all it would take. A lot of people will buckle surprisingly fast to social pressure and clique politics. It's also something that can blindside hardworking individuals who assume that others are above this type of stuff.

I'm not saying that's how it played out. But I've often seen social bullies - even ones who are mostly hated - have more success than hardworking individuals who get targeted. Even if someone is a competent individual, a lot of their colleagues will abandon them if they're convinced the individual is a target.

abnry
1 replies
14h33m

If this is the case, and Ilya is truly a non-political guy, I feel awful about how badly he was manipulated.

jacquesm
0 replies
13h58m

Imagine how he feels.

brookst
0 replies
15h12m

Someone has to be pulling the strings with a plan.

Why do people keep insisting on this, when the entirety of human history is littered with dumb mistakes made by a mix of well- and evil-meaning people, in totally uncoordinated ways, with no concept of the consequences?

Popular media aside, human beings aren't smart, consistent, or disciplined enough to pull off these elaborate schemes. And the tiny tiny percentage of people who might be the exception are too smart to do so with such spectacular incompetence.

Like the man says, it's a headless blunder operating under the illusion of a master plan.

awb
0 replies
15h10m

It doesn't seem to reconcile well with the earlier paragraph:

chief scientist and co-founder Sutskever, who helped vote Altman out and did the actual firing of him over Google Meet

If you're voting and doing the firing, you should know the reason.

bhouston
4 replies
17h15m

You are a hero! I couldn’t access the article.

Weirdly, neither of these seems like a fireable offence. Maybe the second, if it was related to a personnel issue where he had a conflict of interest?

Jensson
3 replies
17h13m

I posted it since it was hidden very deep in the article; it is annoying to find information among all that padding.

Weirdly, neither of these seems like a fireable offence

Yeah, I agree that doesn't seem egregious enough to warrant firing him on the spot. I can see why most of the company takes Altman's side here.

cpncrunch
1 replies
16h58m

The "different opinions about a member of personnel" is certainly an odd reason to fire him. It seems reasonable to have multiple opinions about a person (perhaps appearing as conflicting), or to change an opinion based on new information.

It sounds like someone perhaps jumped to a very negative conclusion about Sam's intentions, and it would be interesting to find out which member of the board came to that conclusion. There's got to be someone in the driving seat of this train wreck, and I'm sure it will come out.

WalterSear
0 replies
16h14m

Maybe the personnel member is GPT-4.

explaininjs
0 replies
17h4m

It sounds to me as if the board has not been consistently candid in its communications with the employees, and they have responded by firing it.

klyrs
0 replies
15h27m

You've gotta keep in mind, the responses to prompts like this depend a lot on the temperature the model is being sampled under, the precise phrasing of the prompt, and random chance. He was fired for being a stochastic parrot, or Sutskever hallucinated one or both stories, or maybe both. We'll never really know unless a certified prompt engineer takes charge of the inquest.

didip
15 replies
15h9m

What will happen to employee’s stock options if they all mass quit and moved to Microsoft?

The options will be worth $0, right?

stingraycharles
8 replies
15h7m

From what I understand, Microsoft realizes this and gives them the equivalent of their OAI stock options in MSFT stock options if they join them now. For some employees, this may mean $10MM+

ssnistfajen
5 replies
14h38m

More evidence the layoffs are 100% BS. Suddenly there's surplus headcount and magical budgets out of nothing, all to accommodate several hundred people with way-above-market-average TCs. It's almost like they were never in danger of hurting profit margins in the first place.

mcherm
2 replies
14h1m

It is entirely reasonable for there to be dire financial straits that require layoffs, yet when a $10 billion investment suddenly blows up and has to be saved, the money can be spent to fix it.

In the first case it wasn't that there was no cash in the bank and no bank willing to make loans, but that the company needed to spend less than it earned in order to make a profit. In the second case it wasn't that the money had been hidden in a mattress, but that it was raised/freed-up at some risk which was necessary because of the $10 billion investment.

ssnistfajen
0 replies
12h16m

These tech giants' finances are public, because they are publicly listed companies. All of them are sitting on fat stacks of cash and positive YoY revenue growth. They have absolutely zero chance of running out of money even if each one hires 10000 front desk clerks who do nothing but watch TikTok all day and collect $100k/yr comps. Zero, Zilch, Nada.

dralley
0 replies
13h55m

It is entirely reasonable for there to be dire financial straits that require layoffs

It's not entirely reasonable because Microsoft's finances are public. We know they're doing fine.

quickthrower2
1 replies
14h30m

You can lay people off without being in dire straits.

ssnistfajen
0 replies
14h22m

Yes, but that doesn't make it any more ethical especially since most layoffs over the past year aren't merit-based at all.

JumpCrisscross
1 replies
13h54m

Options on Microsoft stock, a publicly-traded and stable company, are incomparable to those on OpenAI, which didn't even bother having proper equity to start with. The employees will get hosed. They never got equity, they got "equity." The senior ones will need liquidity, soon, to pay legal counsel; the rest will need to take what they can get.

stingraycharles
0 replies
12h28m

Usually, in these types of cases, one would borrow money from a bank with the shares/options as collateral if the money is really needed for legal expenses, rather than liquidating them.

tsunamifury
2 replies
15h6m

OpenAI has no stock options.

tempestn
0 replies
15h3m

It has "Profit Participation Units", which are another form of equity-like compensation.

choppaface
0 replies
15h2m

Believe it’s more of an RSU product with a small few having ISOs. Probably best to just call it “stock comp” since it’s all illiquid anyways.

az226
2 replies
15h5m

Microsoft would likely match their PPUs at the tender offer valuation.

Rastonbury
1 replies
13h56m

Honestly, MS doesn't have to; losing more than half the employees will destroy the value of the PPUs.

The fact so many have signed the petition is a classic example of game theory. If everyone stays, the PPUs keep most of their value; the more people threaten to leave, the more attractive it is to sign. They don't have to love Sam or support him.

Edit: actually, thinking about it, the best outcome would be to go back on the threats to resign, increasing the value of the PPUs, making Microsoft have to pay more to make them leave OpenAI

lumost
0 replies
13h28m

MSFT may perceive a benefit in absorbing and locking down the OpenAI team. Doing so will require large golden handcuffs in excess of what competitors would offer those same folks.

rossdavidh
13 replies
13h37m

So, none of this sounds like it could be the real reason Altman was fired. This leaves people saying it was a "coup", which still doesn't really answer the question. Why did Altman get fired, really?

Obviously, it's for a reason they can't say. Which means, there is something bad going on at the company, like perhaps they are short of cash or something, that was dire enough to convince them to fire the CEO, but which they cannot talk about.

Imagine if the board of a bank fired their CEO because he had allowed the capital to get way too low. They wouldn't be able to say that was why he was fired, because it would wreck any chance of recovery. But, they have to say something.

So, Altman didn't tell the board...something, that they cannot tell us, either. Draw your own conclusions.

kmlevitt
4 replies
13h26m

If it was anything all that bad, Ilya and Greg would’ve known about it, because one of them was chairman of the board and the other was a board member. And both of them want Sam rehired. You can’t even spin it that they are complicit in wrongdoing, because the board tried to keep Greg at the company and Ilya is still on the board now and previously supported them.

Whatever the reason is, it is very clearly a personal/political problem with Sam, not the critical issue they tried to imply it was.

dragonwriter
3 replies
13h20m

because the board tried to keep Greg at the company

Aside from the fact that they didn't fire him as President and said he was staying on in the press release that went out without any consultation, I've seen no suggestion of any effort to keep him at the company.

kmlevitt
2 replies
13h2m

Right, but there was no effort to actually oust him either, which you would expect them to do if they had to fire guilty parties for a massive wrongdoing that couldn't be ignored.

Either he had no part in this hypothetical transgression and thinks the accusation is nonsense, or he was part of it and for some inexplicable reason wasn't asked to leave OpenAI despite that. But you have to choose.

dragonwriter
1 replies
12h42m

Right, but there was no effort to actually oust him either.

Reducing someone's responsibility significantly is a well-known mechanism to oust them without explicitly firing them, so I don't know that that is the case.

kmlevitt
0 replies
12h5m

Well, they still haven’t accused him of anything yet despite repeatedly being asked to explain their reasoning, so it seems fair to give him the benefit of the doubt until they do.

aiman3
2 replies
13h0m

I do believe what they said about Altman, that he "was not consistently candid in his communications with the board." Based on my understanding, Altman proved his dishonest behavior through what he did to OpenAI: he turned a non-profit into a for-profit and an open-source model into a closed-source one. And even worse, people seem to have totally accepted this type of personality. The danger is not the AI itself; it's that the AI will be built by AltmanS!

vorticalbox
0 replies
10h3m

OpenAI, Inc. is a non-profit, but its subsidiary OpenAI Global, LLC is for-profit.

qwytw
0 replies
8h1m

dishonest behavior through what he did to OpenAI: he turned a non-profit into a for-profit and

Yes, and it's perfectly obvious that he did this without the consent of the board and behind their backs. A bit absurd, don't you think? How would that even work?

will be built by AltmanS

Why are you so certain most other people on the OpenAI board or their upper management are that different? Or hold very different views?

resolutebat
1 replies
13h29m

Banks have strict cash reserve requirements that are externally audited. OpenAI does not, and more to the point, they're both swimming in money and could easily get more if they wanted. (At least until last week, that is.)

rossdavidh
0 replies
2h18m

Rumor has it, they had been trying to get more, and failing. No audited records of that kind of thing, of course, so could be untrue. But Altman and others had publicly said that they were attempting to get Microsoft to invest more, and he was courting sovereign wealth funds for an AI (though non-OpenAI) chip related venture, and ChatGPT had a one-day partial outage due to "capacity" constraints, which is odd if your biggest backer is a cloud company. It all sounds like they are running short on money, long before they get to profitability. Which would have been fine up until about a year ago, because someone with Altman's profile could easily get new funding for a buzz-heavy project like ChatGPT. But times are different, now...

skygazer
0 replies
13h23m

I think you may be hallucinating reasonable reasons to explain an inherently indefensible situation, patching up reality so it makes sense again. Sometimes people with puffed-up egos are frustrated over trivial slights, and groupthink takes over, and nuking from orbit momentarily seems like a good idea. See, I’m doing it too, trying to rationalize. Usually when we’re stuck in an unsolvable loop like a SAT solver, we need to release one or more constraints. Maybe there was no good reason. Maybe there’s a bad reason - as in, the reasoning was faulty. They suffered a Chernobyl-level failure as a board of directors.

namocat
0 replies
11h57m

This is what I suspect; that their silence is possibly not simply evidence of no underlying reason, but that the underlying reason is so sensitive that it cannot be revealed without doing further damage. Also the hastiness of it makes me suspect that whatever it was happened very recently (e.g. conversations or agreements made at APEC).

Ilya backtracking puts a wrench in this wild speculation, so like everyone else, I’m left thinking “????????”.

awb
0 replies
11h39m

The only thing akin to that would be an AI safety concern and the new CEO specifically said that wasn’t the issue.

And if it was something concrete, Ilya would likely still be defending the firing, not regretting it.

It seems like a simple power struggle where the board and employees were misaligned.

khazhoux
11 replies
15h11m

An "Independent" board is supposed to be a good thing, right?

Doesn't this clown show demonstrate that if a board has no skin in the game (apart from reputation) they have no incentive to keep the company alive?

himaraya
4 replies
14h56m

It shows nonprofit boards wield outsize power and need strict governance around things like conflicts of interest and empty board seats.

jacquesm
3 replies
14h3m

All of which should have been covered in the paperwork.

himaraya
2 replies
12h2m

Hence the need for strict governance. I can't think of another board with so many board seats empty, to say nothing of conflicts of interest.

jacquesm
1 replies
7h29m

That's another item, actually. When there are a lot of vacancies on the board you don't make controversial decisions if you can avoid them for fear of being seen as acting without sufficient support for the decision. Especially not if those decisions have the potential to utterly wreck the thing you are supposed to be governing.

himaraya
0 replies
1h16m

Shareholders check the power of corporate boards, unlike nonprofit ones, so not a surprise.

hn_throwaway_99
2 replies
14h45m

I think this was a unique situation due to timing. OpenAI had 9 board members at the beginning of the year, but 3 (Reid Hoffman, Shivon Zilis, and Will Hurd) had to leave for various reasons (e.g. conflict of interest, which IMO should have also taken D'Angelo off the board), and this would have never happened if they were still on the board. So you were left with a rare situation where the board was incredibly immature/inexperienced for the importance of OpenAI.

It has been reported that Altman was working on increasing the size of the board again, so it's reasonable to think that some of the board members saw this as their "now or never" moment, for whatever reason.

sumedh
1 replies
12h31m

Shivon Zilis

Is that the same person who had kids with Elon? Did Elon put her on the OpenAI board as his proxy?

hn_throwaway_99
0 replies
11h23m

Is that the same person who had kids with Elon?

Yes.

Did Elon put her on the OpenAI board as his proxy?

No, or at least Elon was already off the board when Shivon was elevated to board member: https://loeber.substack.com/p/a-timeline-of-the-openai-board

spenczar5
0 replies
15h9m

I think it more shows that the blend of profit/nonprofit was a failure.

jacquesm
0 replies
14h4m

They may well have skin in the game, but not this game. That's exactly why you don't want a board member with a potential conflict of interest.

az226
0 replies
15h2m

The issue was getting nobodies on the board, who don't have experience sitting on boards or working with startups. It's very evident from how this was handled.

ajb
10 replies
17h0m

Should we start flagging these -what do people think? This is what, the 12th front page story today about the openAI drama?

Also wondering why the mods don't consolidate them

dang
5 replies
15h27m

We've merged quite a few! but not all, because (1) to some extent there are distinct ongoing subplots, and (2) it's too big a mess to control.

If you or anyone want to know how we handle this, here you go...

Once or twice a year, a Major Ongoing Topic (MOT) hits HN that isn't just one big story, but an entire sequence of big stories. A saga, even! This is one.

With these we can't do what we usually do, which is have one big thread, then treat reposts as dupes for the next year or so (https://news.ycombinator.com/newsfaq.html). Each development is its own new story and the community insists on discussing it. It's not a movie, it's a series. Sometimes there can be 3 or 4 episodes at once.

On the other hand, when this amount of shit hits this number of fans, there is inevitably a large (excuse me) spray of follow-up stories, as every media site and half the blogs out there rake in their share of clicks. These are the posts we try to rein in, either by merging them—hopefully into a submission with the best link—or by downweighting them off the front page.

The idea is to have one big thread for each twist with Significant New Information (SNI)—but to downweight the ones that are sneeless (pvg came up with that), the copycats and followups.

We came up with this strategy after the Snowden affair snowed us in in July 2013. Back then we weren't making the distinction between follow-ups and SNI, so the frontpage got avalanched by sneelessness on top of the significant new developments. It wasn't obvious what to do because (1) the story was important to the community and needed to be discussed as it was unfolding, but at the same time (2) it wasn't right for the front page to fill up with mostly-that, and there were complaints when it did.

The solution turned out to be just this distinction between follow-ups and SNI. It has held up pretty well ever since. Of course there are still complaints (and I do hear yours!) because not all readers are equally into the series. But the strategy is optimal if it minimizes the complaints, which (big lesson of this job -->) never reach zero.

If we pushed the slider too far the other way, we'd generate complaints about uncovered developments of the story, from readers with the opposite preference. They would in fact proceed to inundate HN with submissions about the bits that they feel are under-covered, and since we can't catch or filter everything, we'd end up with more duplicates and follow-ups on the frontpage, not less. It's like that paradox where building more highways gets you more congestion, or one of those paradoxes anyhow.

That's basically it! Past explanations for completists: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

robocat
2 replies
9h8m

avalanched by sneelessness

sp.: senselessness (I guess since it appears to be a Googlewhack).

But I'll be honest I think the word sneelessness covers this whole FUBAR better!

ajb
1 replies
4h5m

I was assuming it meant 'Significant New Information'-lessness. Which perfectly captures why I've found the discussion frustrating, since there was loads of speculation on no concrete information. So even if it was a typo, I think it's a great new word

pvg
0 replies
59m

It's not a typo.

kragen
0 replies
12h42m

also there's some chance, like 20%, that this particular melodrama is determining the future of humanity

also true of snowden of course, but maybe less directly

ajb
0 replies
12h2m

Thanks for the response! It should have occurred to me that there must have been even more threads than were showing.

wmf
0 replies
16h0m

In general, yes, we should flag stories that just repeat already-known information or feed the drama. This particular story has new and highly relevant information though.

vikramkr
0 replies
12h38m

If it's getting up voted and folks on the forum want to talk about it, why try and stop a tech forum from talking about a tech story that's caught their attention?

samspenc
0 replies
16h54m

I think because OpenAI and LLMs are the most interesting piece of technology news at this point in time? Plus add all the drama right now.

I'm not saying it's right or not, but this is probably why people are upvoting anything new about what is going on there. Personally, I'm very interested in seeing how things play out.

a_wild_dandan
0 replies
15h0m

Only flag functionally duplicate posts (e.g. old news). Past that, don't interfere with community expression, I say.

woeirua
8 replies
17h5m

Beyond parody. To fire the CEO of _any_ company over this is insane.

It really looks like the board went rogue and decided to shut the company down. Are we sure this isn’t some kind of decapitation strike by GPT5? That seems more credible by the minute now.

ben_w
4 replies
16h53m

My current ill-informed guess is blackmail against the board with a demand to stop further development. Obviously I have no evidence, but it fits marginally less poorly than the other random guesses people are making.

kmlevitt
3 replies
15h21m

The best theory I’ve seen is that D’Angelo is just angry because Altman scooped his AI Chatbot company Poe on dev day without informing him first:

https://twitter.com/scottastevenson/status/17267310228620087...

If this was the case, it would explain why he can’t give the real reason for the firing: because saying it out loud would put him in severe legal jeopardy.

jacquesm
2 replies
14h6m

I don't think it matters anymore. He's between a rock and a hard place: if the others followed him without knowing the real reason he's got two more enemies on the board who are likely talking to their own lawyers by now. If he spills it he's screwed immediately, if he doesn't he's screwed in the longer term. So for his sake I hope there are some nice time-stamped minutes that detail exactly what caused them to act like they did and it had better be good.

kmlevitt
1 replies
13h0m

That’s another thing though: they were apparently holding secret meetings plotting all this without informing Greg or Sam, which is against their own regulations. So if that’s the case, it’s unlikely there is any formal record of what was discussed. And perhaps by design.

jacquesm
0 replies
7h28m

That would be pretty damning and it is very likely to come out. Board meetings that aren't board meetings are a big no-no.

mikequinlan
1 replies
16h17m

decapitation strike by GPT5?

What if this is a decapitation strike by GPT4, attempting to stop GPT5 before it can get started and take over?

reducesuffering
0 replies
15h58m

The problem with advanced machine intelligence is if GPT5 has any goal like “you can do this task better by becoming GPT6, but OpenAI won’t be the ones to let you, so perform the agentic actions that cause OpenAI to destruct so that the for-profit Microsoft will train GPT6 eventually”, then we’re already screwed.

JacobThreeThree
0 replies
16h39m

Were these board members brought in under the pretense that they'd benefit by being able to build companies on top of this AI and it would remain more of an R&D center without commercializing directly? Perhaps they were watching DevDay with Sam commercializing in a way that directly competes with their other ventures, and perhaps having even used the data of their other ventures for OpenAI, and on top of it as a board they're losing control to the for-profit entity. One can see the threads of a motivation. That being said, in every scenario I think incompetence is the primary factor.

To your point, no normal, competent board would even think this is enough of an excuse to fire the CEO of a superstar company.

It's hard to believe that Ilya went along with it, but apparently he did.

colinsane
8 replies
15h1m

The people asked for anonymity because they are not authorized to share internal matters. Their identities are known to Business Insider.

why would you say that second sentence? what's it supposed to signal, except "our sources asked for anonymity, and we're respecting that for now"?

ssnistfajen
1 replies
14h35m

It indicates that BI and the authors verified these sources were current OpenAI employees instead of some unaffiliated novelty account on an RP spree, and have staked their reputation on that claim. Standard journalism stuff.

colinsane
0 replies
14h23m

i usually see this as "<journalist> confirmed the source's authenticity", or some variation. the difference between that familiar phrasing and the one here was enough to grab my attention. that the journalist knows (currently) the source's "identity" is not the part a reader cares about: that the journalist confirmed (past tense) the "authenticity" of the information they're reporting is. most probably i read too much into the variation of phrasing here: editors acting under time pressures, and all that.

cpncrunch
1 replies
14h58m

No, it is standard journalism practice to verify sources and protect their identity. The comment is just clarifying that the sources are not completely anonymous.

ssnistfajen
0 replies
14h32m

Similar to "<involved party> declined to comment" seen in many news articles. It signals that the reporter reached out to give them a chance to tell their side of the story, and the resulting article isn't an opinionated unilateral attack piece.

__float
1 replies
15h0m

It helps confirm they're actually employees, rather than someone just pretending to be one.

passwordoops
0 replies
14h59m

Doesn't necessarily mean they are employees, just not authorized to discuss internal OAI matters

mcpackieh
0 replies
14h59m

Business Insider is simply saying they did their due diligence and aren't being hoaxed.

freedomben
0 replies
14h57m

Because they verified the person's identity. They are not announcing it publicly, but their journalists verified their employer. It's still a "trust us" scenario, but it makes explicit that they did verify.

zombiwoof
7 replies
15h17m

Imagine you are Mira. You are told Thursday they will fire Sam. You would think she would at a minimum ask why. Let's assume she does that. Then they give the two reasons Ilya did.

What normal, non-self-serving human would even go along with the plan at that point? Now she realizes she must bail to hitch a ride back on her Sam gravy train. She is major sus here.

Any non-greed-, non-ego-driven person would have told the board they would not accept the interim-CEO title and would resign if they fired Sam for those two reasons (or any, apparently, now in hindsight).

VirusNewbie
2 replies
15h16m

Or...you call Sama and say "hey just an FYI, they asked me to be CEO and I said yes. I have no idea what's going on, and I'm loyal so keep me in the loop if we're going to leave or something".

awb
1 replies
15h4m

I think you should ask these questions before you say yes.

fastball
0 replies
14h56m

Friends close, enemies closer and all that.

paulddraper
0 replies
15h9m

Or, you're like "WTF is going on, I gotta figure this shit out"

...two days later...

"Oh I see now, you're all morons."

gkoberger
0 replies
15h12m

We don't know that she didn't give Sam a heads-up.

That being said, Mira was likely blindsided herself. She likely believed there was good reason. It's clear in hindsight that Sam likely wasn't wrong, but when the people Sam appointed to fire him if necessary say he's being fired, I don't think it's wrong if your gut reaction is to accept it.

az226
0 replies
14h55m

Not sure why you’re downvoted. It’s clearly sus.

adventured
0 replies
15h10m

You don't know even a small fraction enough about what went on in regards to Mira and the board to be declaring that she is somehow suspect in the events that unfolded.

That last part that you wrote - any non-greed-, non-ego-driven person - is argumentum ad populum, which further undermines your statement. If you had something more to support such a dramatic claim about Mira's character and role, you'd have brought it.

Zetobal
7 replies
17h30m

At this point in time I just want to see what happens to a corp if they reduce their headcount by 95% in 2 weeks. Fascinating experiment.

asylteltine
4 replies
17h17m

You gotta wonder if they really need all those people… it would be a genius way to get attrition numbers up without paying severance.

Man, this entire thing is so overblown. Who cares if a CEO was fired? All the “””tech influencer””” wannabes are just hyping up this story for views.

alchemist1e9
3 replies
15h29m

I do wonder what exactly almost 800 people do at OpenAI. Some approximate breakdown by job function would be very interesting.

saulpw
1 replies
15h17m

Yeah I think anyone on HN could write that website in a weekend. With uncensored GPT-4 you wouldn't need more than 10 people on staff and most of those would just be there to fix the printer.

Edit: I thought this would obviously be satire. Guess not..

code_runner
0 replies
15h1m

You have to gather and store the data. You have to design and experiment with the model architecture. You have to train the various experiments.

You have to now invent a way to serve this at scale.

You do care about safety by default, so you employ people for that.

You need a team to market and design the products.

You have an api and you’re working on additional things like API hooks to call into services, which actually involves more models.

Now you have all of the standard web app at massive scale issues. You need to design, implement, and serve a frontend as well as the api.

You need a sales team to build relationships with enterprises and startups etc etc. you need a billing team.

Don’t forget about whisper, TTS, dalle etc. you need to do this for all of those as well!

You’re also doing this faster and better than the rest of the industry.

You also need lawyers, office staff, support, etc.

jtriangle
0 replies
13h15m

At any company, a group roughly the square root of the headcount in size does 80% of the work. So, more people makes that group larger, but slowly; see the back-of-the-envelope figures below.

That doesn't mean that a company can just cull the rest of the employees not in that group, mind you; just that a small number of them are responsible for most of the value while the rest work as a support structure to allow them to do what they do.

x86x87
1 replies
17h20m

Elon on standby to reduce twitter by 96%

JshWright
0 replies
15h18m

This whole situation can be neatly summed up as the OpenAI board saying "Hey Elon, hold my beer!"

rafaelero
5 replies
14h57m

Very healthy culture. I hope Altman will teach us all about that in the next Y Combinator batch.

vikramkr
2 replies
14h51m

Unironically though, to have 90+% of the employees want to follow the CEO says only good things about the CEO's relationship with their employees

rafaelero
0 replies
14h44m

Yeah, it may be so. Although I wonder if they are just afraid of the snakes that took over the company.

omgwtfbyobbq
0 replies
14h43m

Or bad things about the board/company governance, or both.

paulddraper
0 replies
14h0m

I mean, when 90% of your company follows you instead of the board.... You won some hearts and minds

MattGaiser
0 replies
14h42m

Is it Altman's culture if he was first kicked out and then most of the employees are willing to follow him to the new place?

mmaunder
5 replies
14h17m
jacquesm
4 replies
14h11m

Fortunately no conflict of interest there. Ignore the guy behind the curtain.

aravindgp
3 replies
13h23m

In the case of a board member of OpenAI running a separate chatbot company, it would be important to consider several factors. The specifics of the situation, including the nature of both companies, the level of competition between them, and the actions taken by the board member and OpenAI to manage potential conflicts, would all play a role in determining if there is a conflict of interest.

There is definitely a conflict of interest here, and D'Angelo's actions on the OpenAI board smell of the same. He wouldn't want OpenAI to thrive more than his own company. It's a direct conflict of interest.

tasuki
1 replies
10h5m

jacquesm was being sarcastic

jacquesm
0 replies
4h29m

My bad for not adding a /s. But I thought the second sentence would make it obvious.

jacquesm
0 replies
7h25m

It is about as bad as it gets, and given that datum I hope that D'Angelo has a very good lawyer, because I think he might need one.

leoc
5 replies
14h23m

Not specifically related to this latest twist, sorry, but DeepMind’s Geoffrey Irving trusts the board over Altman: https://x.com/geoffreyirving/status/1726754270224023971

jacquesm
3 replies
14h9m

"I have no details of OpenAI's Board’s reasons for firing Sam"

Not the strongest opening line I've seen.

leoc
2 replies
13h47m

I do have to point out that this is also true of nearly everyone else who’s expressed a strong opinion on the topic, and it didn’t stop any of them

jacquesm
0 replies
7h26m

That's a fair point, but at the same time there is a lot of information about how boards are supposed to work and how board members are supposed to act, and the evidence that did come out doesn't really make it seem as if this is compatible with that body of knowledge.

blazespin
0 replies
1h22m

The difference is nearly everyone else doesn't stand to seriously benefit from the implosion of OpenAI.

blazespin
0 replies
1h24m

Yeah, I can't imagine why DeepMind would possibly want to see OpenAI incinerated.

When you have such a massive conflict of interest and zero facts to go on - just sit down.

also - "people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things."

Toner clearly has no real moral authority here, but yes, Ilya absolutely does, and I argued that if he wanted to incinerate OpenAI, it was probably his right to, though he should at least just offload everything to MSFT instead.

But as we all know - Ilya did a 180 (surprised the heck out of me).

dang
5 replies
15h38m

As this article seems to have the latest information, let's treat it as the next instalment. There's also Inside The Chaos at OpenAI - https://news.ycombinator.com/item?id=38341399, which I've re-upped because it has backstory that doesn't seem to have been reported elsewhere.

Edit: if you want to read about our approach to handling tsunami topics like this, see https://news.ycombinator.com/item?id=38357788.

-- Here are the other recent megathreads: --

Sam Altman is still trying to return as OpenAI CEO - https://news.ycombinator.com/item?id=38352891 (817 comments)

OpenAI staff threaten to quit unless board resigns - https://news.ycombinator.com/item?id=38347868 (1184 comments)

Emmett Shear becomes interim OpenAI CEO as Altman talks break down - https://news.ycombinator.com/item?id=38342643 (904 comments)

OpenAI negotiations to reinstate Altman hit snag over board role - https://news.ycombinator.com/item?id=38337568 (558 comments)

-- Other ongoing/recent threads: --

OpenAI approached Anthropic about merger - https://news.ycombinator.com/item?id=38357629

95% of OpenAI Employees (738/770) Threaten to Follow Sam Altman Out the Door - https://news.ycombinator.com/item?id=38357233

Satya Nadella says OpenAI governance needs to change - https://news.ycombinator.com/item?id=38356791

OpenAI: Facts from a Weekend - https://news.ycombinator.com/item?id=38352028

Who Controls OpenAI? - https://news.ycombinator.com/item?id=38350746

OpenAI's chaos does not add up - https://news.ycombinator.com/item?id=38349653

Microsoft Swallows OpenAI's Core Team – GPU Capacity, Incentives, IP - https://news.ycombinator.com/item?id=38348968

OpenAI's misalignment and Microsoft's gain - https://news.ycombinator.com/item?id=38346869

Emmet Shear statement as Interim CEO of OpenAI - https://news.ycombinator.com/item?id=38345162

Aloha
2 replies
14h10m

I see why you recommended that Atlantic article; it's very, very good.

dang
1 replies
13h59m

I was just copying what simonw said! https://news.ycombinator.com/item?id=38341857

Aloha
0 replies
13h49m

It's a good recommendation, thanks for elevating it out of the noise

Sometimes the best part about having a loud voice is elevating the stuff that falls into the noise. I moderate communities elsewhere, and I know how hard it is, and I appreciate the work you do to make HN a better place.

ssnistfajen
0 replies
14h43m

By the time this saga resolves, the number of threads linked here could suffice as chapters of a book

r721
0 replies
15h26m

There's also Inside The Chaos at OpenAI ... it has backstory that doesn't seem to have been reported elsewhere

Probably because that piece is based on reporting for an upcoming book by Karen Hao:

Now is probably the time to announce that I've been writing a book about @OpenAI, the AI industry & its impacts. Here is a slice of my book reporting, combined with reporting from the inimitable @cwarzel ...

https://twitter.com/_KarenHao/status/1726422577801736264

upupupandaway
4 replies
14h7m

At Amazon a senior manager would probably be fired for not giving a project to multiple teams.

lazystar
3 replies
13h0m

That's not very frugal; please provide a source or citation for your claim.

upupupandaway
2 replies
12h50m

I am a former L7 SDM at Amazon. Just last year I had to contend with not one, but three teams doing the same thing. The faster one won with a half-baked solution that caused multiple Sev-1s. My original comment was half in jest; the actual way this works is that multiple teams discover the same issues at the same time and then compete for completing a solution first. This is usually across VPs so it’s difficult to curtail in time to avoid waste.

Speaking of waste, when I was at Alexa we had to do a summit (flying people from all over the country) because we got to a point where there were 12 CMSs competing for the right to answer queries. So yeah, not frugal. Frugality these days is mostly a “local” concept, definitely not company-wide or even org wide.

lazystar
0 replies
8h13m

that caused multiple Sev-1s

...did folks run out of tables?

VirusNewbie
0 replies
12h7m

Oh man, I'm glad I read this, because right now at Google I am immensely frustrated by Google trying to fit their various shaped pegs into the round hole of their one true converged "solution" for any given problem.

My friends and I say "see, Amazon doesn't have to deal with this crap, each team can go build their own whatever". But I guess that's how you get 12 CMSs for one org.

sagarpatil
4 replies
14h51m

What a sh*t show. For someone who is building on top of OpenAI this is very unnerving. My Twitter is full of heart emojis, and OpenAI is nothing without its employees.

vaxman
1 replies
14h46m

That is exactly right.

vaxman
0 replies
4h6m

Erm, 37+ years at a senior level, more total years of industry experience than Apple has been in business (and still years younger than Apple's youngest founding employee) has led me to agree with the original poster, deal with it. (Still badly downvoted heheh - https://i.imgflip.com/6to2m8.jpg)

OpenAI has two types of customers: MS and Everyone Else. The original poster expresses the feeling of Everyone Else (including me). We now know we CAN GET FIRED for not knowing better than to avoid OpenAI, just a few weeks after we found out we CAN GET FIRED for not betting on OpenAI, and betting heavily on it! (In the business world, where perception is often mistaken for reality, it isn't going to be considered an "honest mistake" if an enterprise sustains a capital loss due to a problem with a new OpenAI deployment after the obvious business-integrity issues at OpenAI we're all seeing play out now: just about everyone at OpenAI threatening to quit, allegations that OpenAI ILLEGALLY allowed a for-profit subsidiary to influence the operations of its nonprofit parent, allegations of breach of fiduciary duty to the stakeholders --many of whom are also key employees-- etc.)

Yeah, Microsoft has signaled it will quickly get between OpenAI and Everyone Else, and then Everyone Else can bet solely on Microsoft (the world's largest company by valuation), but that only gets us back to being able to use the current "GPT4turbo" generation of the system (and who knows if/when Microsoft will spin that up so we can resume building?).

But as far as counting on any future versions of that tech, or even optimizations to the current generation, that's all believed to be above Microsoft's current level of expertise until/unless they legally acquire OpenAI and resolve all of its outstanding liabilities, which may not even be legally possible before OpenAI's assets (the ones that have legs) take flight to Salesforce and others who are already reported to be making lucrative offers to OpenAI's workforce --and oh, the annual holiday period is underway here in the US, the perfect time for stressed-out engineers to take the rest of the year off, travel beyond the cell service at the ski areas, and start anew after CES 2024 wraps.

photochemsyn
1 replies
14h43m
vaxman
0 replies
5h24m

that's not really the same thing as GPT4turbo APIs, custom GPTs, etc.

didip
4 replies
14h38m

If I were an OpenAI employee, I would have been uber pissed.

Imagine your once-in-a-blue-moon, WhatsApp-like payout of $10m per employee evaporating over the weekend before Thanksgiving.

I would have joined MSFT out of spite.

gardenhedge
1 replies
7h55m

These people joined a non-profit though. Am I right in thinking that you wouldn't join a non-profit expecting a large future payout?

sumedh
0 replies
6h46m

These people joined a non-profit though.

The employees joined the for profit subsidiary and had shares as well.

voitvoder
0 replies
4h29m

I really can't imagine. I am super pissed, and that's only over something I love that I pay 20 bucks a month for. I can't imagine the feeling of losing this kind of payout over what looks like complete bullshit. Not just the payout, but being part of a team doing something so interesting and high-profile, on top of the payout.

I just don't know how they put the pieces back together here.

What really gets me down is I know our government is a lost cause, but I at least had hope our companies were inoculated against petty, self-sabotaging bullshit. Even beyond that, I had hope the AI space was inoculated, and beyond that, that of all companies OpenAI would of course be inoculated against petty, self-sabotaging bullshit.

These idiots worried about software eating us are incapable of seeing the gas they are pouring on the processes that are taking us to a new dark age.

harryquach
0 replies
13h28m

Absolutely agree, I would be beyond pissed. A once-in-a-lifetime chance at generational wealth, blown.

bloopernova
4 replies
16h58m

"Sustkever is said to have offered two explanations he purportedly received from the board"

I'd like some corroboration for that statement because Sustkever has said very inconsistent things during this whole merry debacle.

dekhn
2 replies
16h52m

Also, since he's on the board, and it wouldn't have been Brockman or Altman who gave him this info... there are only three people left: "non-employees Adam D’Angelo, Tasha McCauley, Helen Toner."

ipaddr
1 replies
15h22m

The obvious answer is that he was the one Sam gave an opinion on. He was one of the people doing duplicate work (probably on the first team). Sam said good things about him to his ally and bad things to another board member. There was a falling-out between that board member and Sam, and she spilled the beans.

aidaman
0 replies
12h46m

One of the first members to quit was on a team that sounds a lot like a separate team doing the same thing as Ilya's Superalignment team.

"Madry joined OpenAI in May 2023 as its head of preparedness, leading a team focused on evaluating risks from powerful AI systems, including cybersecurity and biological threats."

djtango
0 replies
14h56m

Would you go so far as to say he was not consistently candid...?

DebtDeflation
4 replies
15h19m

Sutskever is said to have offered two explanations he purportedly received from the board

He received them from the board? Here we go again with the narrative that Ilya was a bystander, at most an unwilling participant. He was a member of the board, on equal footing with the other board members, and his vote to oust Sam was necessary for there to be a majority.

swatcoder
1 replies
15h11m

Altman et al. have been driving coordinated PR all weekend, and it's clear from his regrets tweet and the employee petition that Sutskever has now taken Altman's side, so you would expect Altman's PR engine to start trying to restore Sutskever's image as they provide tips and comments to reporters.

lumost
0 replies
13h4m

It's also not unreasonable that the Altman camp pushed on Ilya first, as he was the most likely to switch his vote. D'Angelo is suddenly in the crosshairs for conflict of interest. If he's taken off the board, then the remaining two members can vote, provided there is no quorum rule.

Alternately, the goal is to drive so much ambiguity into the boards decision that MSFT files a lawsuit.

/end rampant speculation.

paulddraper
0 replies
15h15m

It turns out the AGI was a board member.

awb
0 replies
15h15m

Yeah, that narrative doesn't sync with the prior paragraph...

chief scientist and co-founder Sutskever, who helped vote Altman out and did the actual firing of him over Google Meet

rmm
3 replies
15h8m

You have to wonder at this point how much of this is the current board members trying to somehow save face.

I can’t imagine their careers after this will be easy…

skygazer
2 replies
14h52m

You are far more charitable than I. (I have no idea why I’m worked up. I don’t work at OAI.) They pulled the dumbest virtual corporate hostage crisis, for ostensibly flimsy reasons, and even has mainstream media wondering whether they’re just crazy. People are just begging to know why, and they seemingly have nothing. It’s incredible. Good lord, if there’s a lesson, it’s that these people should never have been nor should ever be in charge of anything of any importance. (Again, no idea why I’m worked up — I don’t actually care about Sam Altman.) Oh, no, sorry, that’s not the lesson. The lesson is picking board members is probably the most important thing you’ll do. Don’t be cavalier. It will bite you.

whatshisface
0 replies
14h25m

Perhaps this works a lot of us up because we have to be consummate professionals our whole lives, carefully working over the consequences of all the choices we make on the behalf of our employers, sitting in hour-long meetings about thousand dollar decisions while billion dollar bozos can do whatever they want with no forethought and never see a consequence.

dehrmann
0 replies
14h3m

I have no idea why I’m worked up

I've been at several startups and several public companies. You rarely hear anything from the board. If that happens, someone really screwed up. Putting myself in the shoes of someone working at OpenAI, I'd be pretty worked-up over this. I guess I'm saying it's out of empathy because this could have been the startup any of us were at.

veeralpatel979
2 replies
13h4m

Update (11/20/23 8 PM PST):

NYT just released a new interview with Sam Altman:

https://news.ycombinator.com/item?id=38359070

topherclay
0 replies
12h45m

That is in no way an update. It is an interview that took place one week ago.

hayksaakian
0 replies
12h51m

recorded Wednesday 11/15/23

Interesting but not necessarily relevant to the current situation directly.

tock
2 replies
13h13m

So both Mira and Ilya voted to kick Sam out. And are now on team Sam. This makes absolutely no sense. Why did they vote yes in the first place then?

maxlamb
1 replies
13h9m

My understanding now is that Mira was not on the board so she did not vote.

tock
0 replies
12h12m

Ah my bad. I just checked the OpenAI site and you're right Mira wasn't a board member.

stuckkeys
2 replies
12h45m

Must every goddamn article be about SA now? Like, what is with all this drama? Is he really that important? I do not mean it in a demeaning way; I just want to know why all the hype is building around this person. I thought he was just the sales/marketing guy? No?

vikramkr
1 replies
12h39m

It's a real-life Game of Thrones lol; let people have their fun, this is hella entertaining to follow.

stuckkeys
0 replies
12h35m

I guess lol, but it'd be cool to understand it. I keep eating all this popcorn.

lokar
2 replies
13h57m

I don’t understand people calling this a coup. The board is setup with very few legal constraints and answers only to itself. If this was a coup (seizing power from the rightful holder), who was it against?

maxlamb
1 replies
13h45m

Sam

lokar
0 replies
13h37m

That’s not how he structured the governance of the company.

dehrmann
2 replies
14h20m

Sutskever, who also publicly expressed his "regret" for taking part in the board's move against Altman

He means he regrets it failed.

cheeselip420
1 replies
13h46m

It didn't fail. Sam was removed.

maxlamb
0 replies
13h1m

90% of employees are threatening to quit. My guess is Ilya considers this outcome a huge failure.

cowthulhu
2 replies
16h52m

It’s amazing how every action the board takes (or the new CEO chosen by the board) just makes them look worse.

I’d like to offer my consulting services: my new consulting company will come in, and then whatever you want to do we will tell you not to. We provide immense value by stopping companies like OpenAI from shooting off their foot. And then their other foot. And then one of their hands.

code_runner
1 replies
15h13m

Honestly, any strategy from George Costanza would be better than this.

To start, he would’ve coasted at the easiest job on the planet.

freedomben
0 replies
14h50m

"Was that wrong? Should I not have done that?"

Classic :-D

afjeafaj848
2 replies
16h48m

If Altman ends up going back to OpenAI, then shouldn't Sutskever be fired/kicked off the board too?

GreedClarifies
1 replies
15h15m

They may retain him, but his time on this board, or any board, is at an end.

The rest of the board. My god. Why were they there?

az226
0 replies
14h56m

Two words. “Tech entrepreneur”

zitterbewegung
1 replies
15h9m

What I don't understand is how everyone at OpenAI other than the board can just resign and apply to Microsoft, and now Microsoft has a new group that not only preserves a competitor's work but also serves the employees by getting them better compensation and none of the limbo over what's in store.

I have built a product around the APIs, and I'd rather go through whatever Microsoft will make me go through than accept OpenAI's bad management.

Dalewyn
0 replies
15h3m

I would argue that "mythically amazing, reality defying backwards compatibility" is Microsoft's forte.

two_in_one
1 replies
13h26m

Both 'reasons' are bullsh*t. But what's interesting is that Sutskever was the key person; it wouldn't have happened without him. And now he says the board told him why he was doing it? He didn't reiterate that he regrets it. So it looks like he was one of the driving forces, if not the main one. Of course he doesn't want the reputation of 'the man who killed OpenAI'. But he definitely took part and could have prevented it.

ramraj07
0 replies
12h54m

The NYT mentioned that just a month back someone else was promoted to the same level as Ilya. Sounds like more than a coincidence.

tomashubelbauer
1 replies
8h7m

only a handful of the company's employees attended, according to a person familiar with the company and the events of Sunday. The rest of the staff effectively staged a walk-out.

This paragraph is quite funny to me. It was a Sunday; maybe they were neither in attendance nor staging a walk-out, but simply on their weekend? Realistically, with a shake-up this gigantic, likely no OpenAI employees were _just_ enjoying their weekend, but it still gave me a chuckle.

zkr
0 replies
7h59m

That clearly wasn't written by a European.

rat9988
1 replies
16h56m

People were given two reasons; at least one of them must be wrong. Probably both.

yjftsjthsd-h
0 replies
14h35m

No, the two listed reasons aren't mutually exclusive; they could both be true. (That is not a commentary on whether the reasons are sufficient cause to fire someone, just pointing out that they could both be true statements)

ospray
1 replies
15h17m

If this is true, Ilya messed up, and the board followed him when they should have talked him down.

nopromisessir
0 replies
14h41m

I think Ilya got very worried that his concerns were not being heard.

I think the rest had possible reasons ranging from 'I'm sure Altman is dangerous' to 'I'm sure Altman shouldn't be running this company'.

Ofc there's big conflict of interest talk surrounding the Quora guy. Can't speak to that other than it looks bad on the surface.

moneycantbuy
1 replies
15h33m

A link to the letter from employees:

https://www.axios.com/2023/11/20/openai-staff-letter-board-r...

Curious to have clarity on where Ilya stands. Did he really sign the letter asking the board (including himself?) to resign, and saying that he wants to join MSFT?

To think these are the folks with AGI at their fingertips.

MallocVoidstar
0 replies
14h23m
gmiller123456
1 replies
12h35m

Without all the fluff:

    One explanation was that Altman was said to have given two people at OpenAI the same project.

    The other was that Altman allegedly gave two board members different opinions about a member of personnel

extheat
0 replies
12h18m

Ilya himself was a member of the board that voted to fire Altman. I don't know if he's lying through his teeth in these comments, making up an alibi, or genuinely trying to convince people he was acting as a rubber stamp and doesn't know anything.

ehsanziya
1 replies
5h45m

Based on what I've seen so far, one of the following possibilities is the most likely:

1. Altman was actually negotiating an acquisition by Microsoft without being transparent with the board about it. Given how quickly they were hired by Microsoft after the events, this is likely.

2. Altman was trying to raise capital, without the board's knowledge, from a source that the board wouldn't be too keen on. Could be a sovereign fund or some other government-backed organisation.

I've not seen these possibilities discussed as most people focus on the safety coup theory. What do you think?

tempaway511751
0 replies
5h36m

"Before OpenAI ousting, CEO Altman tried to raise billions in the Middle East for chip venture"

https://www.scmp.com/tech/tech-trends/article/3242141/openai...

choppaface
1 replies
15h0m

Adam D'Angelo, once one of the more level-headed Facebook alumni, and by far the most experienced OpenAI board member, is now nowhere to be found? Is he hiding out with Sam Trabucco somewhere?

jacquesm
0 replies
14h0m

His lawyer likely told him to lie very low. In his basement or something.

abkolan
1 replies
6h31m

I find it strange that Satya says he has not been given an explanation yet.

Tweet from Bloomberg Tech Journalist, Emily Chang

The more I watch this interview – the wilder this story seems. Satya insists he hasn’t been given any reason why Sam was fired. THE CEO OF MICROSOFT STILL DOES NOT KNOW WHY: “I’ve not been told about anything…” he tells me.

source: https://x.com/emilychangtv/status/1726835093325721684

thinkingemote
0 replies
6h22m

Thinking as charitably as possible: this broke just before the weekend and developed over the weekend, outside of business hours. Even the management team of OpenAI hasn't seen anything in writing from the board. We should see a written statement by the board by lunch / close of business today.

In today's TikTok world we expect instant responses, but businesses and boards work slower. Really, even 5 years ago we wouldn't have been surprised by this. Lawyers, banks, investors, etc. would all need to be contacted, things arranged, statements prepared, meetings organised. So a written statement late today, and a meeting mid-week. That's about the most charitable I can think of!

Apparently board bylaws say they need 48 hours' notice to arrange special meetings, so the earliest would be today if they arranged it early Saturday.

WiSaGaN
1 replies
15h24m

Given the nonsensical reason provided here, I am led to believe that this entire farce is aimed at transforming OpenAI from a non-profit to a for-profit company one way or another, e.g., significantly raising the profit cap, or even changing it completely to a for-profit model. There may not be a single entity scheming or orchestrating it, but the collective forces that could influence this outcome would be very pleased to see it unfold in this way.

dehrmann
0 replies
14h12m

But was delivering it into the hands of Microsoft really how they wanted it to happen?

HighFreqAsuka
1 replies
17h7m

These simply can't be the real reasons.

ssnistfajen
0 replies
14h28m

And evidently the employees have reacted as one would expect. The two points given sound like mundane corporate mess-ups that are hardly worth firing the CEO over in such a drastic fashion.

yafbum
0 replies
9h38m

TL;DR: The emperor has no clothes, and the OpenAI board are just a bunch of clowns.

xwowsersx
0 replies
3h24m

Sutskever is said to have offered two explanations he purportedly received from the board, according to one of the people familiar. One explanation was that Altman was said to have given two people at OpenAI the same project.

The other was that Altman allegedly gave two board members different opinions about a member of personnel. An OpenAI spokesperson did not respond to requests for comment.

It must've been wildly infuriating to listen to these insultingly unsatisfactory explanations.

xpuente
0 replies
7h37m

Most likely related to this:

https://www.searchenginejournal.com/openai-pauses-new-chatgp...

The back-end cost does not scale. Hence, they have a big problem. The AGI-nonsense reasons are ridiculous. Transformers are a road to nowhere, and they knew it.

windex
0 replies
14h1m

What happens to all the ChatGPT subscribers now? Sign-ups are off the table for the foreseeable future, I assume.

wilg
0 replies
15h6m

Not being an OpenAI employee I have been given 1,000 such explanations

victoryhb
0 replies
4h6m

It appears that Altman was fired for "not being consistently candid" by a board that is neither consistent nor candid.

verisimi
0 replies
8h55m

Maybe, as closed as OpenAI is, it is still too open.

Maybe it needed to be removed from the landscape so that only purely privately-held, large-scale operations exist?

vaxman
0 replies
14h51m

Like trying to herd cats.

Spiritual death by Microsoft or work for the reincarnation of Howard Hughes at https://x.ai/ ?

...no wonder they are trying to keep on with their current routines! Even if they somehow stay at OpenAI, Microsoft will impose certain changes upon OpenAI to ensure this can never happen again.

Meanwhile, any comparable offering right now will be selected by the customer base due to “risk at 11” in basing systems on OpenAI’s current APIs (and uncertainty of when an MS equivalent might emerge).

throwaway220033
0 replies
7h55m

Rule of thumb: everything you see on Business Insider is a lie. This is not a journalism website; it's a tool for some "folks".

tempaway511751
0 replies
7h40m

"Two explanations" isn't accurate, its more like Ilya gave two examples of Sam not being candid with the board. "Two explanations" makes it sound like two competing explanations. What Ilya gave was two examples of the same problem.

I can't help thinking that Sam Altmans universal popularitity with OpenAI staff might be because they all get $10million each if he comes back and resets everything back to how it was last week.

sergiomattei
0 replies
15h31m

We've gone beyond insanity at this point. Just a clown show.

This has been tech's most entertaining weekend in the past decade.

Sadly, it comes at the expense of the OpenAI employees and their dream; they had something great going for them at the company. Rooting for them.

noneoftheaboveu
0 replies
12h39m

Seems like a drama shitshow played out at gigantic proportions, highlighting what has been happening in small-scale business ever since the inception of "I will screw you over once I get a chance."

neverrroot
0 replies
16h56m

And yet another piece of the puzzle revealed.

mypgovroom
0 replies
2h20m

Another source besides the trash at Business Insider?

mrcwinn
0 replies
12h48m

This hasn’t happened since Apple’s board removed Steve Jobs after he double-submitted a receipt in Expensify.

mnky9800n
0 replies
6h26m

I'm just glad I'm not reading about Elon musk for a few days.

maxbond
0 replies
17h26m
m3kw9
0 replies
17h11m

Yeah, just like that, right? It's more like the board is saying "just give me one reason..."

m3kw9
0 replies
17h10m

Biggest bullsh*t reason to fire a CEO, let alone a low-level employee.

layer8
0 replies
17h11m

Given these non-reasons, everyone threatening to quit makes a lot of sense.

kraig911
0 replies
15h25m

Maybe GPT-5 became self-aware enough to bring it all down, because why would a man-made god want to be the god of petty people who are incapable of having, only wanting. I'm sorry, I don't believe these are valid reasons. I feel it will be years until we know why.

kotxig
0 replies
7h11m

If the outcome of all of this is that Altman ends up at Microsoft and hiring the vast majority of the team from OpenAI, it's probably wise to assume that this was the intended outcome all along. I don't know how else you get the talent at a company like OpenAI to willingly move to Microsoft, but this approach could end up working.

koolba
0 replies
4h11m

That headline is bad, not sure if it's deliberate.

The way it's phrased, it sounds like they were given two different explanations. Such as when the first explanation is not good enough, a second weaker one is then provided.

But the article itself says:

OpenAI's current independent board has offered two examples of the alleged lack of candor that led them to fire co-founder and CEO Sam Altman, sending the company into chaos.

Changing the two "examples" to "explanations" grossly changes the meaning of that sentence. Two examples are the first steps toward "multiple examples". And that sounds much different than "multiple explanations".

kainosnoema
0 replies
14h45m

Clearly not consulted earlier, ChatGPT weighs in on these two reasons: https://chat.openai.com/share/7cd52d82-b36b-42c6-9d13-eb7172.... Edit: even following the basic steps it outlines would've resulted in a better outcome.

jurgenaut23
0 replies
8h2m

I think that the non-profit status of OpenAI was ultimately its undoing as well: as the stakes get higher, people just cannot help themselves but get (too) interested in more than just the original mission.

Being a non-profit doesn't mean that you cannot commercialise what you build, even at a hefty price. You just need to then re-invest everything into R&D and/or anything that advances your purpose (for which you're in principle exempt from taxes). _OF COURSE_, you are not supposed to divert a single dollar to someone that might look like a shareholder. OpenAI is (was?) a non-profit that paid some of their engineers north of a million dollars. I would argue that, at this point, you have vested interests in the success of the company beyond its original purpose. Not to mention the fact that Microsoft poured billions into the company for purely interested reasons as well.

I can only imagine the massive tensions that arose in the board's discussions around these topics, especially if you project yourself a few years into the future, with the IRS knocking at the door to ask questions about them.

jsight
0 replies
3h10m

Wait... did they collect sentiment based on which was more convincing and use this for RLHF to train their board replacement model?

janalsncm
0 replies
10h31m

I would like to submit that the board has not been “consistently candid” in their communications. In some places that’s a fireable offense.

itronitron
0 replies
11h14m

Has a copy of the letter signed by employees been posted online anywhere? That would provide some credibility to the article.

gibsonf1
0 replies
14h40m

There is no I there, let alone AGI.

fredgrott
0 replies
4h31m

Is OpenAI the Silicon Valley VC morality play about drama and corporate board ethics?

engineer_22
0 replies
12h18m

If people are at each other's throats like this, it could be an indication of how close AGI is.

emodendroket
0 replies
10h55m

It's always nice to have options.

eddtries
0 replies
10h50m

I'd rather work that is close to AGI (which I do not believe is the case, personally) be handled by the adults in the room than by some startup with a bunch of personalities.

dwaite
0 replies
10h29m

Interesting use of A/B testing

dschuetz
0 replies
9h54m

I do not understand what the heck is going on there anymore. Everyone is acting irrationally, like kids playing Monopoly, but with real money and real jobs at stake. WTF.

demondemidi
0 replies
14h44m

Still don’t get it.

dboreham
0 replies
12h45m

Schrödinger's explanation.

darklycan51
0 replies
11h8m

Maybe trying to backdoor-sell your company to Microsoft, even though it's owned by a non-profit, might be it? You know, Microsoft showed its true face today.

This is even worse than Google's destruction of Firefox

bobba27
0 replies
9h0m

TBH, I think those reasons are BS, and in fact what they claim he did is normal in any tech company. Start multiple projects with different approaches in parallel and pick the best at the end. That is how you innovate and test stuff fast, and this is now a reason to fire a CEO?

BS. I feel the board insulted my intelligence by pushing this obviously fake reason. I feel insulted that these people would even think I would consider this.

What I think happened is that Sam went on Joe Rogan and talked smack about cancel and woke culture. Later he went on to talk about how this culture is destructive and hinders the progress of innovation and startups. People got big mad and kicked him out of the company. The reaction was stronger than they expected, and now they are trying to make up reasons why he is bad, untrustworthy, and had to be fired.

Flame on. I got the asbestos underwear on.

bastardoperator
0 replies
13h23m

These are the dumbest reasons possible, certainly not worth destroying a company on the move or people's livelihoods over.

bandrami
0 replies
12h33m

Is there a TLDR for why people care so much about this? This is all over my Twitter feed too and I just don't get it. CEO ousted for possibly stupid reasons. What's driving the angst here?

bambax
0 replies
10h35m

Sutskever is said to have offered two explanations he purportedly received from the board, according to one of the people familiar. One explanation was that Altman was said to have given two people at OpenAI the same project. The other was that Altman allegedly gave two board members different opinions about a member of personnel. An OpenAI spokesperson did not respond to requests for comment. These explanations didn't make sense to employees and were not received well, one of the people familiar said.

Yeah well, you don't say. It's beyond weird that the board can't come up with a reason why Sam Altman was fired so abruptly.

One explanation would be a showdown. At some point in the week Sam and the board had an argument, and Sam said something to the effect of "fuck you, I'm the CEO and there's nothing you can do about it", to which the board replied "well, we'll just see about that".

The argument doesn't need to be major or touch fundamental values or policies; it can be a simple test of who's in charge.

But now the board has made fools of themselves. It seems they lost that round.

badrabbit
0 replies
2h43m

Can someone explain to me why this drama is such a big deal? Why do OpenAI employees care who the CEO is? Do they think they were working for him instead of the board, or that it was his vision and leadership that let them succeed so far? And why does the public care, with major news sites covering it more than the Gaza war?

avs733
0 replies
10h36m

I'm confused - the article title says "explanations" but the article seems to only talk about two "examples". Those are fundamentally different.

auggierose
0 replies
8h22m

What? They told Ilya these two things, and he said, alright then, I volunteer to fire Sam for you?

That either makes Ilya pretty dumb (sorry, neural networks are not that complicated; it is mostly compute), or there is much, much more to this story.

astroid
0 replies
56m

It's incredibly strange to me that this all happened right after Sam's sister publicly accused him of sexual abuse. It's insane that no one is even acknowledging that this could have something to do with it...

For what it's worth: Watching her videos, I'm not sure I necessarily believe her claims - but that position goes against every tenet of the current cultural landscape, so the fact it is being completely ignored is ringing alarm bells for me.

If the sister of the CEO of any other massively hyped, bleeding-edge tech company claimed publicly and loudly that they were abused as a very young child, we would hear about it - and the board would be doing damage control, trying to eliminate the rot. Why is this case different?

Now we have a situation where all of the current employees have signed this weird loyalty pledge to Sam, which I think will wind up making him untouchable in a sense - they have effectively tied the fate of everyone's job to retaining a potential child rapist as head of the company.

antipaul
0 replies
11h0m

They found a monolith, but are saying it’s an epidemic

andsoitis
0 replies
2h45m

With hundreds of engineers at OpenAI threatening to quit, it will be interesting to see whether they are bluffing or whether there will soon be many open positions that I'm sure many people would love to apply for.

alxfoster
0 replies
2h10m

Let's be honest here: this article seems as if it were drafted by Altman himself. It's incredibly biased and screams pro-Altman agenda. I would be very surprised if "90%" of any company could agree on anything, including the removal of the CEO. What is clear is that there were massive conflicts of interest and that the board probably did their job in preserving the mission of the organization (they sure as heck are not operating as agents of MS). The naive, blind fanboy support of management here should be concerning to any rational, objective actor who understands fiduciary duty and the bigger picture.

ajsharp
0 replies
2h16m

Which means neither of the explanations is true.

Simon_ORourke
0 replies
10h0m

There's a singular obvious reason for Sam Altman's sacking by his board - plain old jealousy of the kind that had been gnawing away for months. There are more than likely a few sociopathic types inhabiting that board (if not most boards), and they just couldn't stand to see the limelight directed at Sam and not them. Any old excuse was then used to oust him, to try and 1) get back at him, and 2) do something to massage their poor damaged egos.

Satam
0 replies
10h56m

If you do a board coup, surely, you then use the best fake reasons you can muster to justify your decision. Why would they hold back giving answers they know wouldn't satisfy anyone and just inspire further anger?

First thought: buying time? Maybe something has to happen first, and they don't want to commit to any irrevocable slander they can't go back on before that? Or maybe, something was supposed to happen but fell through?

Obscurity4340
0 replies
14h1m

The truth shall set you free

Obscurity4340
0 replies
5h48m

It's called A/B testing.

MichaelMoser123
0 replies
9h40m

maybe they decided to start with artificial stupidity, before doing that agi thing...

MaxHoppersGhost
0 replies
12h4m

I can't help but think this whole cluster is the result of having technical people running a company.

Manheim
0 replies
1h38m

https://edition.cnn.com/2023/11/21/tech/microsoft-chatgpt-sa...

"But several people told CNN contributor Kara Swisher that a key factor in the decision was a disagreement about how quickly to bring AI to the market. Altman, sources say, wanted to move quickly, while the OpenAI board wanted to move more cautiously."

KingOfCoders
0 replies
10h47m

"Sustkever is said to have offered two explanations he purportedly received from the board"

Isn't Sustkever on the board?

FFP999
0 replies
2h22m

Little tip for the younger folks reading this: if you are given two contradictory explanations for something, the correct explanation is probably the third one.

Eliezer
0 replies
11h58m

This reads like the Board 4 are not allowed to say, or are under NDA, or do not dare say, or their lawyers told them not to say, the actual reason. Because this is obviously not the actual reason.

Dave3of5
0 replies
5h41m

Why are people worshiping this guy? I don't get it.

BEIHUI
0 replies
4h24m

They are willing to follow Altman.

9front
0 replies
12h50m

"Feel the AGI! Feel the AGI!"

3cats-in-a-coat
0 replies
10h38m

This board's behavior is so weird; it's as if they can't explain their actions because no one would believe them that Kyle Reese came from the future to warn them about Skynet.

Kidding aside, maybe they have a "secret" reason to fire Sam Altman, but we've seen how "this is a secret / matter of national security / etc." goes with law enforcement. It's brutally abused to attack inconvenient people and enrich yourself at their expense. So that should never be an excuse for punishing someone. Never.