
OpenAI board in discussions with Sam Altman to return as CEO

shmatt
160 replies
15h2m

This makes no sense. People are calling what the board did a coup, but Altman is trying (failing?) to stage a coup.

The board was Altman's boss - this is pretty much their only job. Altman knew this and most likely ignored any questions or concerns of theirs, thinking he was the unfireable superstar.

Imagine if your boss fired you - and your response was - I'll come back if you quit! Yeah, no. People might confuse his status with that of actual CEO shareholders like Zuck, Bezos, or Musk. But Altman is just another employee

The shareholders can fire the board, but that’s not what he’s asking for. And so far we haven’t heard anything about them getting fired. So mostly this just seems like an egomaniac employee who thinks he is the company (while appropriating the work of some really really smart data scientists)

sigmar
84 replies
14h56m

> People are calling what the board did a coup, but Altman is trying (failing?) to stage a coup.

The board removed the board's chairman and fired the CEO. That's why it was called a coup.

> The shareholders can fire the board, but that's not what he's asking for. And so far we haven't heard anything about them getting fired

nonprofits don't have shareholders (or shares).

x86x87
40 replies
14h2m

nope. a coup implies something that is outside of normal operation. the board removing the CEO can and will happen.

tsunamifury
39 replies
13h51m

The fact that HN engineering grunts have no idea what table stakes are vs titles and authority shows how they aren’t cut out for executive brinksmanship.

Sam has superior table stakes.

djur
16 replies
13h29m

What does any of that have to do with whether it's a "coup" or not? "Coup" has an implication of illegitimacy, but by all accounts the board acted within its authority. It doesn't matter if it was an ill-advised action or if Altman has more leverage here.

chatmasta
8 replies
12h54m

There's a distinction between what's technically allowed and what's politically allowed. The board has every right to vote Sam and Greg off the island with 4/6 voting in favor. That doesn't mean they won't see resistance to their decision on other fronts, especially those where Sam and Greg have enough soft power that the rest of the board would be obviously ill-advised to contradict them. If the entire media apparatus is on their side, for example (soft power), then the rest of the board needs to consider that before making a decision that they're technically empowered to make (hard power).

IMO, there are basically two justifiably rational moves here: (1) ignore the noise; accept that Sam and Greg have the soft power, but they don't have the votes so they can fuck off; (2) lean into the noise; accept that you made a mistake in firing Sam and Greg and bring them back in a show of magnanimity.

Anything in between these two options is hedging their bets and will lead to them getting eaten alive.

tsunamifury
7 replies
12h27m

Except you are discounting the major player with all the hard power, who can literally call any shot with money.

RandomLensman
5 replies
12h9m

The objective functions might be different enough, and then there is nothing the hard power can do to get what it wants from OpenAI. The non-profit might consider a wind-down more in line with its mission than something else, for example.

chatmasta
4 replies
11h53m

The threat to the hard power is that a new company emerges to compete with them, and it's led by the same people they just fired.

If your objective is to suppress the technology, the emergence of an equally empowered competitor is not a development that helps your cause. In fact there's this weird moral ambiguity where your best move is to pretend to advance the tech while actually sabotaging it. Whereas by attempting to simply excise it from your own organization's roadmap, you push its development outside your control (since Sam's Newco won't be beholden to any of your sanctimonious moral constraints). And the unresolvability of this problem, IMO, is evidence of why the non-profit motive can't work.

As a side-note: it's hilarious that six months ago OpenAI (and thus Sam) was the poster child for the nanny AI that knows what's best for the user, but this controversy has inverted that perception to the point that most people now see Sam as a warrior for user-aligned AGI... the only way he could fuck this up is by framing the creation of Newco as a pursuit of safety.

RandomLensman
3 replies
11h48m

If they cannot fulfill their mission one way or another (because it isn't resolvable in the structure) then dissolution isn't a bad option, I'd say.

chatmasta
2 replies
11h43m

That's certainly a purist way of looking at it, and I don't disagree that it's the most aligned with their charter. But it also seems practically ineffective, even - no, especially - when considered within the context of that charter. Because by shutting it down (or sabotaging it), they're not just making a decision about their own technology; they're also yielding control of it to groups that are not beholden to the same constraints.

RandomLensman
1 replies
11h40m

Given that their control over the technology at large is limited anyway, they are already (somewhat?) ineffective, I would think. Not sure what a really good and attainable position for them would look like in that respect.

chatmasta
0 replies
11h39m

Yeah, agreed. But that's also why I feel the whole moral sanctimony is a pointless pursuit in the first place. The tech is coming, from somewhere, whether you like it or not. Never in history has a technological revolution been stopped.

chatmasta
0 replies
12h20m

You mean Microsoft, who hasn't actually paid them all the money they said they eventually would, and who can change their Azure billing arrangement at any time?

Sure, I guess I didn't consider them, but you can lump them into the same "media campaign" (while accepting that they're applying some additional, non-media related leverage) and you'll come to the same conclusion: the board is incompetent. Really the only argument I see against this is that the legal structure of OpenAI is such that it's actually in the board's best interest to sabotage the development of the underlying technology (i.e. the "contain the AGI" hypothesis, which I don't personally subscribe to - IMO the structure makes such decisions more difficult for purely egotistical reasons; a profit motive would be morally clarifying).

tsunamifury
3 replies
13h18m

Legitimacy is derived from power, not from abstraction. Sorry, that's the reality. Rules are an abstraction. Power lets you do whatever you want, including making new rules.

x86x87
2 replies
13h11m

Yeah no. While you may be onto something that still does not make it a coup.

tsunamifury
1 replies
13h4m

It doesn't matter what you call it.

x86x87
0 replies
12h35m

it sort of does. a coup is usually regarded as a bad thing. firing a ceo? not so much.

pushing to call it a coup is an attempt to control the narrative.

jacquesm
2 replies
13h21m

They acted within their authority, but possibly without the support of those that asked them to join in the first place, possibly without sufficient grounds, and definitely in a way that wasn't in the interest of OpenAI, as far as the story is known today.

hansSjoberg
1 replies
10h24m

You're speaking as if Altman and Brockman did Sutskever a favour by "asking him to join". They were practically begging.

jacquesm
0 replies
6h7m

Doesn't change the fact that this probably wasn't the outcome they were going for.

zeroonetwothree
14 replies
13h37m

I don't think you are using "table stakes" correctly.

tsunamifury
12 replies
13h33m

Really? Aka Sam has the ability to start a new business and take the contracts with him and Ilya doesn’t. Because that’s table stakes. Exactly.

nostrademons
7 replies
13h8m

Everyone on that board is financially independent and can do whatever they want. If Sam & Ilya can't get along that basically means there are 2 companies where previously there was OpenAI. (4 if you add Google and Anthropic into the mix; remember that OpenAI was founded because Ilya left Google, and then Anthropic was founded when a bunch of top OpenAI researchers left and started their own company).

Ultimately this is good for competition and the gen-AI ecosystem, even if it's catastrophic for OpenAI.

tsunamifury
6 replies
13h5m

Anyone can do whatever they want; it doesn't mean it will work out the way they want it to.

nostrademons
5 replies
12h58m

I'm curious what you're inferring to be "the way they want it to"?

From my read, Ilya's goal is to not work with Sam anymore, and relatedly, to focus OpenAI on more pure AGI research without needing to answer to commercial pressures. There is every indication that he will succeed in that. It's also entirely possible that that may mean less investment from Microsoft etc, less commercial success, and a narrower reach and impact. But that's the point.

Sam's always been about having a big impact and huge commercial success, so he's probably going to form a new company that poaches some top OpenAI researchers, and aggressively go after things like commercial partnerships and AI stores. But that's also the point.

Both board members are smart enough that they will probably get what they want, they just want different things.

stdgy
4 replies
12h36m

You need to remember that most people on this site subscribe to the ideology that growth is the only thing that matters. They're Michael Douglas 'greed is good' type of people wrapped up in a spiffy technological veneer.

Any decision that doesn't make the 'line go up' is considered a dumb decision. So to most people on this site, kicking Sam out of the company was a bad idea because it meant the company's future earning potential had cratered.

tsunamifury
1 replies
11h50m

I’m sorry, how is OpenAI going to pay for itself then? On goodwill and hopes?

Please get real.

stdgy
0 replies
11h26m

My best guess is they turn off the commercial operations that are costing them the most money (and that they didn't want Sam to push in the first place), pump up the prices on the ones they can actually earn a profit from, and then try to coast for a while.

Or they'll do something hilarious like sell VCs on a world wide cryptocurrency that is uniquely joined to an individual by their biometrics and somehow involves AI. I'm sure they could wrangle a few hundred million out of the VC class with a braindead scheme like that.

peyton
0 replies
12h30m

That’s unfair. The issue is poor governance. Why would anybody outside OpenAI care how much money they make? The fact is a lot of people now rely in one way or another on OpenAI’s services. Arbitrary and capricious decisions affect them.

int_19h
0 replies
11h26m

> You need to remember that most people on this site subscribe to the ideology that growth is the only thing that matters

I'm not sure that's actually true anymore. Look at any story about "growth", and you'll see plenty of skeptical comments. I'd say the audience has skewed pretty far from all the VC stuff.

cthalupa
1 replies
13h28m

Are you saying that Sam has the ability to generate new contracts when you say take contracts with him, or do you think that somehow the existing contracts with Microsoft and other investors are tied to where he is?

tsunamifury
0 replies
13h17m

I'd say so. Or bring Satya with him.

resolutebat
0 replies
11h11m

No, to continue the poker metaphors, that's taking your chips and going home, perhaps to create your own casino with blackjack and hookers (h/t to Bender).

"Table stakes" simply means having enough money to sit at the table and play, nothing more. "Having a big pile of GPUs is table stakes to contest in the AI market."

RandomLensman
0 replies
12h44m

But by its structure it isn't a business at heart. Commercially I agree that Sam's position is superior, but purely focusing on the non-profit's mission (not even the non-profit itself) - not so sure.

x86x87
0 replies
11h53m

I second that this is an unusual use of table stakes.

Here is what I understand by table stakes: https://brandmarketingblog.com/articles/branding-definitions...

gtirloni
5 replies
13h45m

Such as?

tsunamifury
4 replies
13h36m

Talent following

Financial backing to make a competitor

Internal knowledge of roadmap

Media focus

Alignment with the 2nd most valuable company on the planet.

I could go on. I strongly dislike the guy but you need to recognize table stakes even in your enemy. Or you'll be like Ilya: a naive fool who is gonna get wrecked thinking that doing the "right" thing in his own mind automatically means you win.

cthalupa
2 replies
13h27m

From everything we can see Ilya appears to be a true believer.

A true believer is going to act along the axis of their beliefs even if it ultimately results in failure. That doesn't necessarily make them naive or fools - many times they will fully understand that their actions have little or no chance of success. They've just prioritized different values than you.

tsunamifury
0 replies
13h22m

Agree but I see that as potato potahto. Failure by a different name with imaginary wins by the delusional ethicist.

jacquesm
0 replies
13h20m

That's fair, but by messing this up OpenAI may well end up without any oversight at all. Which isn't the optimum outcome by a long shot and that's what you get for going off half-cocked about a thing like this.

hansSjoberg
0 replies
10h18m

Ilya IS the talent. They were desperate to hire him.

juped
0 replies
12h19m

lmao

hskalin
15 replies
14h31m

So who governs the board? Or who "owns" the company?

milkshakes
12 replies
14h26m

> First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.

> Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit's principal beneficiary is humanity, not OpenAI investors.

> Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI's CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.

https://openai.com/our-structure

echelon
7 replies
13h57m

This is the weirdest company equity structure I've ever heard of.

No wonder this is causing drama.

satvikpendem
4 replies
13h51m

Mozilla is somewhat similar, it's a non-profit that owns a for-profit entity.

jacquesm
3 replies
13h17m

And given that Firefox isn't exactly gaining market share, you can see how well that works for them.

desert_rue
2 replies
12h49m

A non-profit likely doesn't prioritize growth.

jacquesm
0 replies
12h16m

That may be so, but it also probably shouldn't let its flagship fundraising entity wither.

echelon
0 replies
12h32m

At least Mozilla didn't try to abuse Congress to achieve regulatory capture.

Keyframe
1 replies
12h16m

And then there's IKEA

astrange
0 replies
11h35m

And Novo Nordisk, Rolex, Heineken, Bose, and the NFL.

fulladder
3 replies
14h8m

Can you explain the third point a little more? If Altman has no meaningful economic interest in the company, what was his motivation for being involved at all? Why did he choose to spend his time this way?

I'm aware that Altman has made the same claim (close to zero equity) as you are making, and I don't see any reason why either of you would not be truthful, but it also has always just seemed very odd.

satvikpendem
1 replies
13h51m

> If Altman has no meaningful economic interest in the company, what was his motivation for being involved at all? Why did he choose to spend his time this way?

Not everything is about money. He likely just likes the idea of making AI.

elcritch
0 replies
9h41m

Or be the public face of making the AI along with the power and control from that.

plorg
0 replies
13h58m

Ideology

gnicholas
0 replies
13h6m

No one governs the board of a nonprofit, exactly. In this case, it sounds like Sam and his allies are trying to exert pressure on the board by threatening crippling resignations. This puts the board in the position of choosing between pursuing its mission without certain employees, or pursuing business plans that do not align as well with its mission, but with the full complement of employees.

It's a tricky situation (and this is just with a basic/possibly-incorrect understanding of what is going on). I'm sure it's much more complicated in reality.

chasd00
0 replies
14h29m

In a 501(c)(3) I think the board is the top. From what I understand they're usually funded through grants that have requirements that need to be met for each disbursement. If you fail then the money stops, but there's no "firing" the board; they just stop getting funds.

norsurfit
14 replies
13h17m

Also, the board made a decision without the board's chairman - Greg Brockman - involved. And it looks like the board didn't follow its own internal rules about meetings.

stingraycharles
12 replies
12h52m

Also, the investors were not informed. It's insane their largest investor and partner MSFT was blindsided by this. Anyone with just a little bit of business sense knows this.

MacsHeadroom
9 replies
12h43m

This is the board of the non-profit. It has no investors. The board does not answer to anyone.

BoiledCabbage
8 replies
12h42m

And how does this non-profit pay for its immense server costs?

sotix
2 replies
11h38m

Non-profits still earn money, recorded as net assets. They do not retain earnings at the end of the accounting period to store in shareholders' equity, because there are no shareholders that own the non-profit.

stingraycharles
1 replies
4h35m

You’re interpreting it as a lawyer would, rather than considering the real-world implications of this.

sotix
0 replies
2h22m

I’m interpreting it as a CPA

fakedang
2 replies
12h21m

The point still stands: the board does not have "investors". Microsoft knowingly donated to the for-profit entity of the non-profit. OpenAI isn't a PBC, it's a 501(c)(3) non-profit. So the board can act that way, without the knowledge of the investors.

That being said, this is a case of biting the hand that feeds you. An equivalent would be if a nonprofit humiliated its biggest donor. The donor can always walk away, taking her future donations with her, but whatever she's donated stays at the nonprofit.

lsh123
1 replies
11h44m

I hope the IRS is watching this ;)

dragonwriter
0 replies
11h31m

Watching what? A 501c3 being publicly pressured to make key governance decisions for the commercial benefits of investors in the 501c3's for-profit indirect subsidiary rather than the board's good-faith interpretation of its charitable purpose?

Why would they care about that?

dchichkov
0 replies
11h34m

It seems that OpenAI had switched to pre-paid billing. If anyone is interested in helping, they can go and pre-pay, and support the non-profit.

I'd guess OpenAI without Sam Altman and the YC/VC network is toothless. And Microsoft's/VC/media leverage over them is substantial.

Obscurity4340
0 replies
12h23m

All corporations are basically Russian dolls at this point.

yellow_postit
0 replies
10h20m

These board members are either not serious people or they let their perceived power over a groundbreaking company go to their collective heads. Either way, it has been quite the misplayed checkers move.

interestica
0 replies
12h14m

It's crazy how fast OpenAI put up the blog post

baby
0 replies
10h15m

where did you read about their internal rules?

ummonk
5 replies
14h35m

OpenAI isn’t a nonprofit company, and it has shareholders.

Edit: nvm I missed the point was about firing the board.

sigmar
1 replies
14h27m

{the entity} of which they are the board does not have shareholders and, unless there's something funky in the charter, there's no mechanism to fire members of the board (other than board action). The shareholders of the llc aren't relevant in this context, as they definitely can't fire the nonprofit's board (the whole point of their weird structuring). https://openai.com/our-structure

dragonwriter
0 replies
13h55m

> The shareholders of the llc

Pedantic, but: LLCs have "members", not "shareholders". They are similar, but not identical relations (just as LLC members are similar to, but different from, the partners in a partnership).

bitvoid
1 replies
14h26m

From what I understand, the for-profit OpenAI is owned and governed by the non-profit OpenAI. The board of the latter are the ones who fired him.

dragonwriter
0 replies
13h47m

> From what I understand, the for-profit OpenAI is owned and governed by the non-profit OpenAI.

That's functionally true, but more complicated. The for profit "OpenAI Global LLC" that you buy ChatGPT subscriptions and API access from and in which Microsoft has a large direct investment is majority-owned by a holding company. That holding company is itself majority owned by the nonprofit, but has some other equity owners. A different entity (OpenAI GP LLC) that is wholly owned by the nonprofit controls the holding company on behalf of the nonprofit and does the same thing for the for-profit LLC on behalf of the nonprofit (this LLC seems to me to be the oddest part of the arrangement, but I am assuming that there is some purpose in nonprofit or corporate liability law that having it in this role serves.)

https://openai.com/our-structure and particularly https://images.openai.com/blob/f3e12a69-e4a7-4fe2-a4a5-c63b6...
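
To make the ownership-versus-control distinction easier to follow, here is a minimal toy model of the graph described above (a Python sketch; entity names are paraphrased from the structure page, and the exact ownership percentages, which aren't public, are omitted):

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    owns: list["Entity"] = field(default_factory=list)      # equity, full or partial
    controls: list["Entity"] = field(default_factory=list)  # governance rights

# Entities as described at https://openai.com/our-structure (names paraphrased)
nonprofit  = Entity("OpenAI, Inc. (501(c)(3) nonprofit, governed by the board)")
gp         = Entity("OpenAI GP LLC (manager entity)")
holding    = Entity("holding company (has some outside equity holders)")
for_profit = Entity("OpenAI Global, LLC (capped-profit; Microsoft invested here)")

nonprofit.owns += [gp, holding]       # wholly owns the GP, majority-owns the holding co
holding.owns.append(for_profit)       # holding co majority-owns the for-profit LLC
gp.controls += [holding, for_profit]  # GP exercises control on the nonprofit's behalf
```

The point the thread keeps circling back to: every "controls" edge traces back to the nonprofit and its board, while equity in the lower entities carries no vote on that board.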

skywhopper
0 replies
14h27m

Check again.

shmatt
3 replies
14h46m

Then the board essentially owns the company, if I understand your comment correctly. So it’s like if Yann LeCun says he’ll come back to Meta once Zuck sells all his shares

peyton
2 replies
14h36m

There are no owners. No ownership interest to sell. The board answers to the courts.

eightysixfour
1 replies
14h10m

Sort of. The board also answers to two other groups:

• Employees

• Donors or whoever is paying the bills

In this case, the threat appears to be that employees will leave and the primary partners paying the bills will leave. If this means the non-profit can no longer achieve its mission, the board has failed.

cthalupa
0 replies
14h5m

It's possible that the failure occurred at some point in the past. If the board truly believes keeping Altman is inherently incompatible with achieving their charter, they have to let him go. The fallout from that potentially kills the company, but a small chance of achieving the charter is better than no chance.

If that's the case, then the failing would be in letting it get to this point in the first place.

bmitc
1 replies
11h25m

It would be a coup if the board placed themselves in power, which they didn't. They just did their job description.

baby
0 replies
10h12m

Sam and Greg were part of the board apparently, so definitely a coup (we can debate for hours whether it's a coup or not, but come on, imagine the scene being played in a movie and not being played as a coup).

Another way to think about this is that companies are basically small countries.

Workaccount2
23 replies
14h41m

I think the allure for Altman though would be that OpenAI already has all the pieces in place.

Going off and starting his own thing would be great, but it would take at least a year to get a product out, even if he had all the same players making it. And that's just to catch up to current tech.

boringg
11 replies
14h32m

That ship has sailed for him if he's not on the OpenAI train out of town. He'd be like a third-party political candidate if he tried another run at it, building his own team + product from scratch. Lots of other great things to do for sure, but probably not a similar supercharged role. It just wouldn't be the same - OpenAI is clearly the front runner right now.

aluminum96
10 replies
14h26m

What if OAI's entire research organization follows him? Surely it's one of the best teams working today.

sudosysgen
5 replies
14h1m

Why would the entire org follow Sam instead of Ilya?

comfysocks
4 replies
13h43m

Sounds like wishful thinking on the part of the author's source.

If I worked there, I would keep my job and see how things shake out. If I don’t like it, then I start looking. What I don’t do is risk my well being to take sides in a war between people way richer than me.

jacquesm
3 replies
13h12m

That makes good sense, and I think all those that are not independently wealthy already - except personal friends of either Sam or high-level remainers - are going to do something quite similar. It's just too fluid a situation to make good decisions, especially if your livelihood is at stake; better not to make decisions that can't be easily undone.

sillysaurusx
2 replies
12h48m

Given that the total comp package is $300k base + $600k profit share, I don’t think any of their livelihoods are at stake. https://news.ycombinator.com/item?id=36460082

You’re probably right because people usually don’t have an appetite for risk, but OpenAI is still a startup, and one does not join a startup without an appetite for risk. At least before ChatGPT made the company famous, which was recent.

I’d follow Sam and Greg. But N=1 outsider isn’t too persuasive.

jacquesm
0 replies
12h20m

> I'd follow Sam and Greg.

Once the avalanche has stopped moving that's a free decision; right now it could be costly.

elcritch
0 replies
9h29m

OpenAI isn't a normal startup. It was founded as a research-focused not-for-profit. That $300k+ base comp isn't what I'd consider "risky" either. Career-wise it never seemed risky, as some of the field's top AI researchers were there from almost day one.

cthalupa
3 replies
14h15m

It's still tough. They won't have the data used to train the model, which is an incredibly important part. There are a lot of existing competitors in this space with head starts. There's no guarantee that the entire research organization will follow Sam even if they leave OpenAI - they're going to have a lot of offers and opportunities at other companies that have an advantage.

It's also not clear that this is a realistic scenario - Ilya is the real deal, and there's likely plenty of people that believe in him over Altman.

Of course, the company has also expanded massively under Altman in a more commercial environment, so there are probably quite a few people that believe in Altman over him.

I doubt either side ends up with the entire research organization. I think a very real possibility is both sides end up with less than half of what OpenAI had Friday morning.

wesapien
1 replies
13h59m

Isn't it also because of OpenAI scraping the internet that companies got their walls up? How else is anyone able to gather training data these days?

astrange
0 replies
11h30m

Generally speaking for a base model this isn't nearly as important as it sounds because the specifics of the data don't matter as long as there's enough of it. You may remember this from high school as the central limit theorem.

For specific things like new words and facts this does matter, but I think they're not in real trouble as long as Wikipedia stays up.

smegger001
0 replies
11h25m

The thing is, they can team up with people who probably have that data already. Say Microsoft switches teams to a hypothetical SamCo AI: most of the internet has already been indexed by Bing, and wants to be indexed by Bing, as it's the number 2 search engine. That means they have cached, or have access to, pretty much everything SamCo could want to feed said AI. Reddit or Twitter, for example, would never cut Bing off, as it would cut off users. Microsoft could, though, block OpenAI from further access to things like GitHub and LinkedIn.

j45
3 replies
14h40m

Except building something the second time around is often quicker and with the current gains of hardware capabilities and interest in the space… maybe it wouldn’t be a year behind.

plorg
1 replies
13h41m

There are also a ton of ~first mover advantages you can't benefit from, be they of untapped markets for demand or the exploitation of underpriced labor, capital, or IP. If Sam started a new company he would not get as good a deal on hardware or labor, he would get much more scrutiny on his training sets, and he would have to compete against both OpenAI and its competitors.

j45
0 replies
11h7m

For sure. Getting ahead and staying ahead is one of them.

I'm just not sure it would be totally starting from scratch, since there is more of a playbook and know-how.

mark_l_watson
0 replies
12h48m

I agree. Anthropic and Mistral are good examples. Both companies have key people from OpenAI and they fairly quickly developed good models, though I don't think either is thinking too hard about real AGI; instead they are trying to create useful and ethical tools.

unyttigfjelltol
1 replies
14h29m

If only OpenAI open-sourced its models.....

chasd00
0 replies
14h9m

I would be surprised but not shocked if there are some leaks in the next few weeks.

financypants
1 replies
14h18m

I’m really curious about how the venture investors feel about that

sangnoir
0 replies
13h19m

I'm curious about how the messaging and zeitgeist will evolve. Over the past few months, the sentiment I encountered most frequently is that OpenAI's lead is insurmountable and that it basically has a monopoly on genAI - or even AI in general. While I disagreed with this sentiment because there's no reason to believe LLMs are the final word in AI, I think there will be many more people going back on prior messaging for partisan or astroturfing reasons and saying OpenAI is nothing special.

x86x87
0 replies
13h59m

not only that, but people greatly underestimate how hard it is to replicate the success OpenAI had. you don't just build another one.

ramraj07
0 replies
14h39m

Further, wouldn't they be unable to recreate GPT-x exactly as it was, even though they know how it was built?

berniedurfee
0 replies
14h6m

Maybe much longer. The mass of infrastructure and data housed at OpenAI will be difficult to reproduce from scratch.

Especially considering OpenAI has boosted the value of the masses of data floating around the internet. Getting access to all that juicy data is going to come at a high cost for data-hungry LLM manufacturers from here on out.

comfysocks
19 replies
14h15m

I think this article represents a tactical press release from Sam’s camp. Company in “free fall” without Sam? It’s not even Monday yet.

x86x87
18 replies
14h0m

yeah. this whole thing looks staged. not saying it's not possible, but what kind of board would actually fire the CEO and then take it back and resign?

jacquesm
17 replies
13h14m

The board that has been threatened with being sued, individually and collectively, by some of the most well-known names in IT. They're probably wondering how they can get out of this with their reputations and egos in one piece. You may have the legal authority to do something, but if you don't have the support (or worse: if you haven't checked that you have the support) then it's not exactly the best move.

x86x87
14 replies
13h12m

I like to believe they actually did their homework and thought this through. We also don't have the full story so it's hard to say.

jsolson
8 replies
12h41m

They are a small board, and Microsoft has a very large number of lawyers.

I do not believe it is possible for them to have thought this through. I believe they'll have read the governing documents, and even had some good lawyers read them, but no governance structure is totally unambiguous.

Something I'm immensely curious about is whether they even considered that their opposition might look for ways to make them _criminally_ liable.

jacquesm
7 replies
12h16m

I don't see any ways in which they could be held criminally liable for just voting their conscience, and good luck verifying that. So that angle is not open for exploration as far as I can see. But what would scare the wits out of any board members is to have say the full power of Microsoft's legal department going after them for the perceived damages with respect to either Microsoft's stock price (a publicly traded company, no less) or the value of Microsoft's holdings in OpenAI.

And, incidentally, if there is a criminal angle that's probably the only place you might possibly find it and it would take the SEC to bring suit: they'd have to prove that one or more of the board members profited from this move privately or that someone in their close circle profited from it. Hm. So maybe there is such an angle after all. Even threatening that might be enough to get them to fold, if any of them or their extended family sold any Microsoft stock prior to the announcement they'd be fairly easy to intimidate.

laserlight
4 replies
11h7m

> But what would scare the wits out of any board members

Don't you think the board must have sought legal counsel before acting? It is more likely than not that they checked with a lawyer whether what they were doing is within their legal rights.

I don't think the OpenAI board has any responsibility to care for Microsoft's stock price. Such arguments won't hold water in a court of law. And I don't think the power of Microsoft's legal department would matter when there's no legal basis.

jsolson
2 replies
10h47m

There is a difference between "my lawyers advised me that it was probably ok" and "Microsoft's legal team spent 100,000 billable hours poring over case law to demonstrate that it was not, in fact, ok."

> I don't think the OpenAI board has any responsibility to care for Microsoft's stock price.

They control an entity that accepted $10B from Microsoft. Someone signed that term sheet.

laserlight
1 replies
10h25m

For such a basic action as a board exercising one of the most fundamental of its rights, I don't think it's necessary to spend 100K hours. And I don't think the board consulted random lawyers off the street.

> Someone signed that term sheet.

Do you think that the term sheet holds OpenAI liable for changes in Microsoft's stock price?

peyton
0 replies
10h19m

The board folded.

> Do you think that the term sheet holds OpenAI liable for changes in Microsoft's stock price?

There’s nothing binding on a term sheet.

jacquesm
0 replies
6h13m

> Don't you think the board must have sought legal counsel before acting?

They probably should have, but they may have not.

> It is more likely than not that they checked with a lawyer whether what they were doing is within their legal rights.

It is. But having the legal right to do something and having it stand unopposed are two different things, and when one of the affected parties is the proverbial 900-pound gorilla you tread more than carefully; if you do not, you can expect some backlash. Possibly a lot of backlash.

> I don't think the OpenAI board has any responsibility to care for Microsoft's stock price.

Not formally, no. But that isn't what matters.

> Such arguments won't hold water in a court of law.

I'll withhold comment on that until I've seen the ruling. But what does and does not hold water in a court of law, unless a case is extremely clear cut, isn't something to bet on. Plenty of court cases have been won because someone managed to convince a judge of something that you and I may think should not have happened.

> And I don't think the power of Microsoft's legal department would matter when there's no legal basis.

The idea here is that Microsoft's - immense - legal department has the resources to test your case to destruction if it isn't iron-clad. And it may well not be. Regardless, suing the board members individually is probably threat enough to get them to back down instantly.

jsolson
1 replies
10h53m

I agree that the act of voting itself is too squishy/personal, but the things that led up to it and their handling afterwards?

My curiosity stems from whether the board was involved in signing the contract for Microsoft's investment in the for-profit entity, and where the state might set the bar for fraud or similar crimes. How was the vote organized? Did any of them put anything in writing suggesting they did not intend to honor all of the terms of the agreement? Did the manner in which they conducted this business rise to the level of being criminally negligent in their fiduciary duty?

I feel like there are a lot of exciting possibilities for criminality here that have little to do with the vote itself.

... and also +1 to your whole last paragraph.

jacquesm
0 replies
6h8m

I've had a case in Germany that to an outsider may have looked like we should have lost. In a nutshell: we took on a joint venture to develop a market that we weren't particularly interested in, 51:49 to their advantage. The day after the ink was dry and we had set up their development track to create the product, they took the source code and sold it to another party.

We had the whole thing - including the JV - reversed in court, in spite of them having the legal right to do all this. The reason: the judge was sympathetic to the argument that the JV was apparently a sham created just to gain access to our code. The counterparty was admonished, a notary public that had failed their duty to act as an independent party got the most thorough ear-washing that I've ever seen in a court, and we got awarded damages + legal fees.

What is legal, what you can do, and what will stand up are not always the same thing. Intent matters. And what also really matters is what OpenAI's bylaws really say, and to what extent the non-profit's board members exercised their duty to protect the interests of the parties who weren't consulted and who did not get to vote. This so-called duty of care - here in NL, not sure what the American term is - can weigh quite heavily.

jacquesm
2 replies
13h7m

It could be. But I've yet to see any evidence of that. More likely it wasn't, because short of a massive skeleton in a cupboard in Sam Altman's apartment this was mishandled, and by now I would have expected that to come out.

laserlight
1 replies
11h2m

> I've yet to see any evidence of that.

What evidence were you expecting to find? The board said that Sam wasn't candid with his communication. I've yet to see any evidence that he was candid. Unless the communication has been recorded, and somehow leaks, there won't be any evidence that we can see.

jacquesm
0 replies
6h7m

I suspect that if that evidence existed we'd have seen it by now because without it the board looks like incompetents.

mdekkers
1 replies
12h1m

I read somewhere that the CTO wasn’t at all the best pick for interim CEO, but they couldn’t find anyone else that was in their camp in a hurry. Nothing about this looks like they did their homework and thought this through. If they _had_ done those things, MSFT wouldn’t be as pissed as they are right now.

jacquesm
0 replies
11h57m

Where did you read that? That's interesting and would be one more proof point that they did this completely unprepared.

jcranmer
1 replies
10h54m

It's also worth remembering that Sam Altman is also seeking to get out of this with his reputation and ego in one piece. Definitely in his interest to be able to portray the board as coming crawling back to him after kicking him out the door, even if that is, well, less than candid communication of what has happened.

And the evidence that we've seen so far doesn't refute the idea that the board isn't seriously considering taking him back on. The statements we've seen are entirely consistent with "there was a petition to bring him back sent to the board and nothing happened after that."

jacquesm
0 replies
6h18m

Yes, that is correct.

seanoliver
5 replies
14h57m

I don't see how Sam can return if the board doesn't resign. It's either them or him at this point.

j45
4 replies
14h39m

Hard to reconcile with people who would do something like that.

Differences in interpretations will happen, but the YC rule that founder drama is too often a problem continues to exist, and it shouldn't be a surprise.

pyuser583
3 replies
14h25m

What rule is this?

jacquesm
2 replies
13h10m

I'm not sure what rule the OP is referencing but otherwise reasonably successful start-ups often fail because founders clash on key parts of their vision (or behave in toxic ways towards each other or to other people in the company). This can very handily wreck your company or at a minimum substantially damage it.

j45
1 replies
11h8m

Rule was a typo, I meant observation.

Specifically, cofounder strife is one of the major issues of startups that don’t get where they could.

If I recall, it was Jessica Livingston's observation.

jacquesm
0 replies
6h4m

Rule or observation doesn't matter all that much (it's a shade, after all) and the whole idea lines up with my personal experience.

hughesjj
5 replies
14h29m

> The board was Altman's boss - this is pretty much their only job.

Not at all. Ilya and George are on the board. Ilya is the chief scientist, George resigned with Sam and supposedly works like 80-100hrs a week

lazystar
3 replies
13h39m

> supposedly works like 80-100hrs a week

if they've been doing that for a while, no wonder the board wanted them gone. eventually you cause more work than you put out.

bmitc
1 replies
11h11m

Not to mention 100 hours not even being logistically possible. Working 100 hours a week with just 5 hours of sleep per day leaves only about 4 hours in the day for the other parts of living and getting from A to B. Anyone claiming that, much less for an extended period of time, is either lying or in slavery against their will.
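
A quick back-of-envelope check of that arithmetic (a sketch, nothing OpenAI-specific):

```python
# 100 work hours spread over a 7-day week, with 5 hours of sleep per night
work_per_day = 100 / 7   # ~14.3 hours of work per day
sleep_per_day = 5
left_over = 24 - work_per_day - sleep_per_day
print(f"{work_per_day:.1f}h work, {sleep_per_day}h sleep, {left_over:.1f}h left per day")
# -> 14.3h work, 5h sleep, 4.7h left per day for meals, commuting, everything else
```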

laserlight
0 replies
10h37m

My impression is that people don't measure the time they work, but judge it by feel. First, they think that they work for, let's say, 40 hours per week. They don't consider how much time meals, coffee breaks, mental breaks, off-topic office discussions, checking social media, and visiting the restroom take. Second, when they work overtime, they get tired and overestimate the amount of time they worked. 10 hours of overtime probably feels like 20 hours.

100 hours is equal to 2 full-time jobs and a half-time one. People believing that number should consider how they would live going to their second job after their day ends (the second full-time job) and working on weekends as well (the half-time one).

Under ideal conditions, someone might be doing it. But, people shouldn't be throwing around these numbers without any time-tracking evidence.

ayewo
0 replies
11h1m

He’s putting in crazy hours because he doesn’t have a formal background in ML—his background is software engineering.

He talks about how learning ML made him feel like a beginner again on his blog (which was a way for him to attract talent willing to learn ML to OpenAI) https://blog.gregbrockman.com/its-time-to-become-an-ml-engin...

selimthegrim
0 replies
13h33m

You mean Greg?

spoonyvoid7
2 replies
14h38m

> Imagine if your boss fired you - and your response was - I'll come back if you quit! Yeah, no. People might confuse his status with that of actual CEO shareholders like Zuck, Bezos, or Musk. But Altman is just another employee

Think you're missing the big picture here. Sam Altman isn't an "easily replaceable employee" especially given his fundraising skills.

kijin
1 replies
10h20m

Brilliant as sama is, a star fundraiser is more replaceable than a top engineer.

One can imagine Microsoft, for example, swooping in and acquiring a larger share of the for-profit entity (and an actual seat on the board, dammit) for more billions, eliminating the need for any fundraising for the foreseeable future.

If a lot of top engineers follow sama out, now that's a real problem.

elcritch
0 replies
9h23m

There's probably a lot of behind-the-scenes drama and phone calls occurring among their top researchers. I'd guess Sam Altman is calling them and trying to gain support for a counter-coup, with things like this article giving the appearance that Sam et al. have already won, etc. If the board and new CEO aren't doing that too, they could end up losing.

gngoo
2 replies
14h45m

Doesn’t have to make sense if it’s about this much money to be made/lost by investors.

astrange
1 replies
14h25m

Nonprofits don't have investors; their problem is that too many of their employees are going to leave.

ta988
0 replies
13h30m

To add to that, because it may not be clear to everyone: if they leave, the knowledge will sprout other companies that will be able to compete directly with OpenAI with different flavors. If this happens, it means OpenAI may very well be finished, and that may very well be the reason they are trying desperately to save what they can. Microsoft has a lot to lose here too, both in cloud income and because they would lose the enormous tactical advantage they have had so far.

skwirl
1 replies
13h28m

Making no effort to obtain a grasp on the basic facts of the situation doesn’t seem to stop people here from posting embarrassing rants.

Altman was on the board. He was not “just another employee.” Brockman was also on the board, and was removed. It was a 4 on 2 power play and the 2 removed were ambushed.

You also don’t seem to realize that this is happening in the nonprofit entity and there are no shareholders to fire the board. I thought OpenAI’s weird structure was famous (infamous?) in tech, how did you miss it?

jacquesm
0 replies
13h8m

They even put a nice little page up about it on their site. But that structure is not going to survive this whole ordeal.

jaredklewis
1 replies
13h3m

In a coup, a leader with the support of the people is ousted by force. If we believe the reports that there will be mass resignations, that seems to indicate the founders enjoy the “support of the people.”

Of course you can protest, “but in this country the constitution says that the generals can sack the president anytime they deem it necessary, so not a coup.” Yes, but it’s just a metaphor, so no one expects it to perfectly reflect reality (that’s what reality is for).

I feel we’ll know way more next week, but whatever the justifications of the board, it seems unlikely that OpenAI can succeed if the board “rules with an iron fist.” Leadership needs the support of employees and financial backers.

kijin
0 replies
10h33m

> In a coup, a leader with the support of the people is ousted by force.

Not necessarily. An unpopular leader can be even easier to overthrow, because the faction planning the coup has a higher chance of gaining popular support afterward. Or at least they can expect less resistance.

Of course, in reality, political and/or military leaders are often woefully bad at estimating how many people actually support them.

empath75
1 replies
14h30m

The nature of power relationships at this level is not strictly hierarchical and there's a vast wealth differential here, and Sam is a lot more powerful than any of the board members in many many ways. Everybody who has large amounts of money at stake in this enterprise is going to back Altman. The board has no one.

elcritch
0 replies
9h19m

Not to mention, I'd wager that Altman is a lot higher on the sociopathy scale as well. The board members sound like somewhat normal people trying to stick to their charter and perhaps a genuine belief in the mission. Altman, not so much.

miohtama
0 replies
12h54m

The true boss is who pays your salaries.

Microsoft in this case.

lijok
0 replies
14h49m

What shareholders? OpenAI is a non-profit. Although hectic, it absolutely makes sense in a non-profit.

glitchc
0 replies
14h39m

I'm pretty sure that's what happened.

Sam and Greg were trying to stage a coup, the rest of the board got wind of it and successfully countered in time (got to them first).

What they didn't expect is that a bunch of their own technical staff would be so loyal to Sam (or at least so prone to the cult of personality). Now they're caught in a Catch-22.

firtoz
0 replies
9h28m

The board here are more like advisors.

If Altman takes all of the good engineers and researchers with him, OpenAI is no more.

So the board can be the boss of nothing, sure, without the ability to do anything - leading the organisation, raising funds, and so on

Perhaps they could hire someone who could replace Sam Altman, but that would require a much larger company whose employees are indifferent to the leadership, like EA or something.

OpenAI is much smaller and more close-knit.

chrisfosterelli
0 replies
14h31m

> But Altman is just another employee

Except he is not. He was a cofounder of the company and was on the board. Your metaphor doesn't make any sense -- this is like if your boss fired you but also you were part of your boss and your cofounder who is on your side was the chair of your boss.

achow
0 replies
10h53m

The board is getting pressured like so:

> The playbook, a source told Forbes, would be straightforward: make OpenAI's new management, under acting CEO Mira Murati and the remaining board, accept that their situation was untenable through a combination of mass revolt by senior researchers, withheld cloud computing credits from Microsoft, and a potential lawsuit from investors.

https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...

TerrifiedMouse
0 replies
13h49m

> This makes no sense. People are calling what the board did a coup, but Altman is trying (failing?) to stage a coup.

I think he staged his coup long ago, when he took control of OpenAI, making it "CloseAI" to make himself richer by effectively selling it to Microsoft. This is the people who believe in the original charter fighting back.

> The shareholders can fire the board, but that's not what he's asking for.

There are no shareholders in a non-profit, if I'm right. The board effectively answers to no one. It's a take-it-or-leave-it kind of deal. If you don't believe in OpenAI's mission as stated in their charter, don't engage with them.

meetpateltech
114 replies
15h16m

Update on the OpenAI drama: Altman and the board had till 5pm to reach a truce where the board would resign and he and Brockman would return. The deadline has passed and mass resignations expected if a deal isn’t reached ASAP

https://twitter.com/alexeheath/status/1726055095341875545

adam_arthur
24 replies
14h7m

Pretty incredible incompetence all around if true.

From the board, for not anticipating a backlash and caving immediately... and from Microsoft, for investing in an endeavor that is purportedly chartered as a non-profit and governed by nobodies who can sink it on a whim, while having zero hard influence on the direction despite a large ownership stake.

Why bother with a non-profit model that is surreptitiously for profit? The whole structure of OpenAI is largely a facade at this point.

Just form a new for-profit company and be done with it. Altman's for-profit direction is fine, but it shouldn't have been pursued under the loose premise of a non-profit.

While OpenAI leads currently, there are so many competitors that are within striking distance without the drama. Why keep the baggage?

It's pretty clear that the best engineering will decide the winners, not the popularity of the CEO. OpenAI has first mover advantage, and perhaps better talent, but not by an order of magnitude. There is no special sauce here.

Altman may be charismatic and well connected, but the hero worship put forward on here is really sad and misplaced.

lazystar
10 replies
13h34m

look at the backgrounds of those board members... can't find any evidence that any of them have experience with corporate politics. they're in way over their heads.

robswc
6 replies
13h29m

It is also crazy that the "winning move" was to just do nothing, look like a genius, and coast off that for the rest of their lives. Who in their right mind would consider them for a board position now?

cthalupa
4 replies
13h18m

This is assuming motivations similar to a board for a for-profit company, which the OpenAI board is not.

Insisting, no matter how painful, that the organization stays true to the charter could be considered a desirable trait for the board of a non-profit.

robswc
3 replies
13h9m

Fair. I don't know why they wouldn't just come out and say that though, if that were the case. It would be seen as admirable, instead of snake-ish.

Instead of "Sam has been lying to us" it could have been "Sam had diverged too far from the original goal, when he did X."

cowl
1 replies
7h6m

that is what the press release says. they didn't go into specifics, but it is clear that the conflict is in commercialisation vs. the original purpose.

robswc
0 replies
50m

> that is what the press release says.

In the initial press release, they said Sam was a liar. Doing this without offering a hint of an example or actual specifics gave Sam the clear "win" in the court of public opinion.

If they had said "it is clear Sam and the board will never see eye to eye on alignment, etc. etc." they probably could have made it 50/50 or even come out favored.

cthalupa
0 replies
12h37m

It's hard to say. Lots of things don't really make sense based on the information we have.

They could have meant that Sam had 'not been candid' about his alignment with commercial interests vs. the charter.

hilux
0 replies
7h28m

A strange game. The only winning move is not to play. How about a nice game of chess?

cowl
2 replies
7h9m

that's because it was never supposed to be a corporation. It was a non-profit dedicated to AI research for the benefit of all. This is also why all this happened: they are trying to stay true to the mission and not turn into a corporation.

jkaplowitz
0 replies
3h34m

They don’t have experience with non-profit leadership either, do they? They have some experience leading for-profits, such as the Quora CEO, but not non-profits.

DebtDeflation
0 replies
5h33m

In which case you could say the three non-employee members of the board have no background in AI. Two of them have no real background in tech at all. One seems to have no background in anything other than being married to a famous actor.

If Sam returns, those three have to go. He should offer Ilya the same deal Ilya offered Greg - you can stay with the company but you have to step down from the board.

rurban
6 replies
9h48m

> It's pretty clear that the best engineering will decide the winners, not the popularity of the CEO.

This is ML, not software engineering. Money wins, not engineering. Same as with Google, which won because they invested massively into edge nodes, winning the ping race (fastest results), not the best results.

Ilya can follow Google's Bard by holding it back until they have counter-models trained to remove conflicts ("safety"), but this will not win them any compute contracts, nor keep them the existing GPU hours. It's only mass, not smarts. Ilya lost this one.

rvba
2 replies
4h57m

When Google came out it had the best algorithm backed by good hardware (as far as I understand, often off-the-shelf hardware - anyway, the page simply "just worked"). The difference between Google and its competitors was like night and day when it came out. It gained market share very quickly because once you started using it, you didn't have any incentive to go back.

Now Google search has a lot of problems and much better competition. But seriously, you probably don't understand how it was years ago.

Also, I thought that in ML the best algorithms still win, since all the big companies have money. If someone came along and developed a "PageRank equivalent" for AI that is better than the current algorithms, customers would switch quickly since there is no loyalty.

On a side note: Microsoft is playing the game very smart by adding AI to their products, which makes you stick to them.

rurban
1 replies
2h1m

Oh, the PageRank myth.

Google won against (initially) AltaVista because they had so much money to buy themselves into each country's Interxion to produce faster results. With servers and cheap disks.

The PageRank-and-more-bots approach kept them in front afterwards, until a few years ago, when search went downhill due to SEO hacks in this monoculture.

rvba
0 replies
20m

This is anecdotal evidence, but I was there when Google came out and it was simply much better than the competition. I learned one day about this new website, and it was so much better than the other alternatives that I never went back. Same with Gmail: trying to get that invite for that sweet 1GB mailbox when the ones from your country offered only 20MB and sent you 10 spammy ads per day, every day.

As an anecdote: before Google I was asked to show the internet to my grandmother. So I asked her what she wanted to search for. She asked me about some author, let's say William Shakespeare. Guess what the other search engine found for me and my grandma: porn...

SeanLuke
1 replies
7h55m

Same as it was with Google, which won because they invested massively in edge nodes, winning the ping race (fastest results), not the best results.

What in the world are you talking about? Internet search? I remember Inktomi. Basch's excuses notwithstanding, Google won because PageRank produced so much better results it wasn't even close.

philistine
0 replies
3h11m

The faster results came after they had already won the race for best search results. Initially, Google wasn't faster than the competition in returning a full page. I vividly remember the joy of patiently waiting 2-3 seconds for an answer, and jolting up every time Google Search came back with exactly what I wanted.

foooorsyth
0 replies
2h24m

This is ML, not software engineering. Money wins, not engineering. Same as it was with Google, which won because they invested massively in edge nodes, winning the ping race (fastest results), not the best results.

This is an absurd retcon. Google won because they had the best search. Ask Jeeves and AltaVista and Yahoo had poor search results.

Now Google produces garbage, but not in 2004.

x86x87
3 replies
13h41m

my question is: why not both? why not pursue the profit and use that to fuel the research into AGI. seems like a best of both worlds.

cthalupa
2 replies
13h33m

That's the intent of the arrangement, but there are also limits - when the pursuit of profit begins to interfere with the charter of the non-profit, you end up in this situation.

https://openai.com/charter

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

My interpretation of events is the board believes that Altman's actions have worked against the interest of building an AGI that benefits all of humanity - concentrating access to the AI to businesses could be the issue, or the focus on commercialization of the existing LLMs and chatbot stuff causing conflict with assigning resources to AGI r&d, etc.

Of course no one knows for sure except the people directly involved here.

dabockster
1 replies
12h15m

Of course no one knows for sure except the people directly involved here.

The IRS will know soon enough if they were indeed non-profit.

cthalupa
0 replies
12h11m

I was not implying they were not a non-profit. I am saying that we do not know the exact reason why the board fired Altman.

jstummbillig
1 replies
7h8m

While OpenAI leads currently, there are so many competitors that are within striking distance without the drama.

It's hard to put into words that do not seem contradictory: GPT-4 is barely good enough to provide tremendous value. For what I need, no other model passes that bar, which makes them not slightly worse but entirely unusable. But again, it's not that GPT-4 is great, and I would most certainly switch to whatever is better at the current price metrics in a heartbeat.

skohan
0 replies
2h17m

What is your use-case? I have not worked with them extensively, but both PALM and LLAMA seem as good as GPT-4 for most tasks I have thrown at them

minimaxir
15 replies
15h14m

Mass resignations from whom, I wonder. Other researchers?

dannyw
7 replies
14h45m

Presumably a significant amount of OpenAI employees are motivated by money, at least in some form.

The board just vaporised the tender offer, and likely much of their valuation. It’s hard to have confidence in that.

username332211
2 replies
10h51m

Also, most of the human race has an instinctual aversion to plotters and machinations. The board's sudden and rather dubious (why the need to bad-mouth Altman?) actions probably didn't sit well with many.

Dante places Brutus in the lowest circle of hell, while Cato is placed outside of hell altogether, even if both fought for the same thing. Sometimes means matter more than ends.

If the whole process had been more regular, they could have removed Altman with little drama.

tarsinge
1 replies
8h24m

We still don’t know if the one plotting was Altman. There is still room for this to be seen as a bold and courageous action.

Cyberfit
0 replies
5h29m

Sadly, optics matter too. Even if Altman was the schemer, Ilya sure has made himself look like the one.

jasonlotito
2 replies
12h52m

It's simple collective bargaining. I wonder how many of them oppose unions... until they have a need to work together.

maskil
0 replies
2h51m

On the contrary, they seem to be doing it quite fine without a union

46u54uaegr
0 replies
5h33m

I can't speak for every American, but I find that plenty of Americans are fine with collective bargaining; they just don't want to do it through a union if they're in a lucrative line of work already. Which isn't terribly hard to understand: they don't need or want an advocate whose main role is constantly issuing new demands they never cared about on their behalf. They just want to be able to pool their leverage collectively as high-value workers within the organization in times of crisis.

iancmceachern
0 replies
12h24m

And with the popularity and success of GPT, whatever they do next will likely be wildly successful. The timing couldn't be more perfect.

empath75
4 replies
14h29m

If you're an engineer at OpenAI, you just saw probably millions of dollars of personal wealth potentially evaporate on Friday. You're going to quit and go wherever Altman goes next.

chasd00
1 replies
14h20m

You're going to quit and go wherever Altman goes next.

I won’t be surprised if it’s the open arms of Microsoft. Microsoft embraced and extended OpenAI with their investment. Now comes the inevitable.

gigel82
0 replies
11h6m

Altman maybe, but not rank-and-file OpenAI engineers. They'd be leaving the millions in paper money for Microsoft's peanuts.

tarsinge
0 replies
8h20m

Why follow Altman? Most smart people are more driven by the mission than by a personality cult.

hilux
0 replies
7h23m

Deca-unicorns don't come along every day. How would Sam Altman build another one? (I'll be impressed if he does.)

BoorishBears
1 replies
15h10m

People who joined OpenAI because the organizations they left were stuck self-sabotaging the way OpenAI's board just did (for the same reasons the board did it)

j45
0 replies
14h13m

It's still common: people are people, and are often triggered by a list of common things like power, money, and fame.

medler
10 replies
15h5m

Really weird phrasing in this tweet. The idea is that Altman and/or a bunch of employees were demanding the board reinstate Altman and then resign. And they’re calling it a “truce.” Oh, and there’s a deadline (5 pm), but since it’s already passed the board merely has to “reach” this “truce” “ASAP.”

Edit: an update to the verge article sheds some more light, but I still consider it very sus since it’s coming from the Altman camp and seems engineered to exert maximal pressure on the board. And the supposed deadline has passed and we haven’t heard any resignations announced

Update, 5:35PM PT: A source close to Altman says the board had agreed in principle to resign and to allow Altman and Brockman to return, but has since waffled — missing a key 5PM PT deadline by which many OpenAI staffers were set to resign. If Altman decides to leave and start a new company, those staffers would assuredly go with him.
Animats
8 replies
14h39m

"Missing a key 5PM PT deadline by which many OpenAI staffers were set to resign."

Says who? And did they resign?

x86x87
7 replies
13h57m

one thing that I am curious about: aren't there non-competes in place here? and even without them, you just cannot start your own thing that just replicates what your previous employer does - this has lawsuit written all over it.

sangnoir
4 replies
13h8m

It'll be tough going with no Azure compute contracts, no GPUs, no billions from Microsoft, and no training data. OpenAI capturing all of the value from user-generated content has resulted in sites like Reddit and Twitter significantly raising the cost to scrape them.

dabockster
3 replies
12h16m

The same thing got said about Elon Musk and Twitter, and yet X is still somehow alive.

edgyquant
1 replies
11h39m

No nothing similar at all was said about that. Sam Altman is also not Elon Musk

wesleywt
0 replies
10h54m

Yeah, Sam will not turn $40 billion into $0 billion

ivalm
0 replies
8h5m

Elon had massive preexisting AI-compute capacity from Tesla and an enormous training set from X. That's very different.

quotient
0 replies
13h55m

It's California. Non-competes are void. It is one of the few states where non-competes are not legally enforceable.

karmasimida
0 replies
13h51m

Nah this is California, that won’t work

rvba
0 replies
4h51m

Maybe they used the old Soviet Russia trick / good old KGB methods to seek out those who supported Altman. Now the board has a list of his backers - and they will slowly fire them one by one later. "Give me the man and I will give you the case against him".

https://en.m.wikipedia.org/wiki/Give_me_the_man_and_I_will_g...

slowhadoken
9 replies
12h48m

Has anyone else noticed how many techies are on Twitter but still badmouth Twitter?

astrange
3 replies
11h15m

Using Twitter causes it to lose money so it's fine.

rgrs
2 replies
8h16m

Ummm...how exactly?

astrange
1 replies
8h3m

The only things you could do to make them money are paying for it, clicking on ads, or working there. Looking at ads without clicking costs them.

claytongulick
0 replies
1h52m

I recommend you look into ad "impressions" and the compensation model.

Clicking an ad is not the only way it is monetized.

0x142857
2 replies
9h29m

You can't criticize the government if you live in the country?

paulmd
0 replies
8h39m

this was unfortunately a popular sentiment in the early 2000s in the US

krater23
0 replies
5h10m

It's easier to leave Twitter than your country

throwaway69123
0 replies
10h34m

The badmouthers are a vocal minority

refurb
0 replies
3h4m

It's like some Americans claiming they're going to move to Canada if their presidential candidate loses.

All that tough talk means doodly-squat.

pyb
9 replies
15h3m

"Those responsible for sacking the people who have just been sacked, must be sacked."

DriverDaily
5 replies
15h0m

Who sacks the person who sacks?

pyb
1 replies
14h54m

Whoever's nominally responsible for sacking the people who sacked the people who have just been sacked.

pixl97
0 replies
14h7m

A Møøse once bit my server

kurthr
0 replies
14h56m

It's sacks all the way down.

gpjt
0 replies
14h33m

"Quis dimittet ipsos dimissores?"

fakedang
0 replies
14h52m

David O Sacks

bloomark
1 replies
14h33m

Sounds like a line from HGTTG

latexr
0 replies
14h26m

It’s from the opening credits of Monty Python and the Holy Grail.

https://www.youtube.com/watch?v=79TVMn_d_Pk

hilux
0 replies
7h26m

Reminds me of the story of Chinggis Khan's burial:

"It's also said that after the Khan was laid to rest in his unmarked grave, a thousand horsemen trampled over the area to obscure the grave's exact location. Afterward, those horsemen were killed. And then the soldiers who had killed the horsemen were also killed, all to keep the grave's location secret."

xivzgrev
7 replies
13h24m

I am just baffled for so many reasons.

Why is the board reversing course? They said they lost confidence in Altman - that's true whether lots of people quit or not. So if they reverse now, that stated reason was bullshit.

Why did the board not foresee people quitting en masse? I'm sure some of it is loyalty to Sam and Greg, but it's also revolting how they were suddenly fired

Why did the interim CEO not warn Ilya about the above? Sure it’s a promotion but her position is now jeopardized too. Methinks she’s not ready for the big leagues

Who picked this board anyway? I was surprised at how…young they all were. Older people have more life experience and tend not to do rash shit like this. Although the Quora CEO should’ve known better as well.

jkaplowitz
2 replies
3h29m

I'm sure some of it is loyalty to Sam and Greg, but it's also revolting how they were suddenly fired

Funny how people only use words like revolting for sudden firings of famous tech celebrities like Sam with star power and fan bases. When tech companies suddenly fire ordinary people, management gets praised for being decisive, firing fast, not wasting their time on the wrong fit, cutting costs (in the case of public companies with bad numbers or in a bad economy), etc.

If it’s revolting to suddenly fire Sam*, it should be far more revolting when companies suddenly fire members of the rank and file, who have far less internal leverage, usually far less net worth, and far more difficulty with next career steps.

The tech industry (and US society generally) is quite hypocritical on this point.

* Greg wasn’t fired, just removed from the board, after which he chose to resign.

t0mas88
1 replies
1h32m

That comparison doesn't make much sense; they didn't fire the CEO to reduce costs.

What looks quite unprofessional (at least from the outside) here is that a surprise board meeting was called without two of the board members present, to fire the CEO on the spot without first talking to him about changing course. That's not how things are done in a professional governance structure.

Then there is a lot of fallout that any half-competent board member or C-level manager should have seen coming. (Who is this CTO who accepted the CEO role like that on Thursday evening and didn't expect this to become a total shit show?)

All of it reads more like a high school friends club than a multi-billion-dollar organization. Totally incompetent board on every dimension. Makes sense they step down ASAP and more professional directors are selected.

jkaplowitz
0 replies
38m

I’m not saying it was handled well. It wasn’t.

My point was that the industry is hypocritical in praising sudden firings of most people while viewing it as awful only when especially privileged stars like Altman are the victim.

Cost reduction is a red herring - I mentioned it only as one example of the many reasons the industry trend setters give to justify the love of sudden firings against the rank-and-file, but I never implied it was applicable to executive firings like this one. The arguments on how the trend setters want star executives to be treated are totally different from what they want for the rank and file, and that’s part of the problem I’m pointing out.

I generally support trying to resolve issues with an employee before taking an irreversible action like this, whether they are named Sam Altman or any unknown regular tech worker, excepting only cases where taking the time for that is clearly unacceptable (like where someone is likely to cause harm to the organization or its mission if you raise the issue with them).

If this case does fall into that exception, the OpenAI board still didn’t explain that well to the public and seems not to have properly handled advance communications with stakeholders like MS, completely agreed. If no such exception applies here, they ideally shouldn’t have acted so suddenly. But again, by doing so they followed industry norms for “normal people”, and all the hypocritical outrage is only because Altman is extra privileged rather than a “normal person.”

Beyond that, any trust I might have had in their judgment that firing Altman was the correct decision evaporated when they were surprised by the consequences and worked to walk it back the very next day.

Still, even if these board members should step down due to how they handled it, that’s a separate question from whether they were right to work in some fashion toward a removal of Altman and Brockman from their positions of power at OpenAI. If Altman and Brockman truly were working against the nonprofit mission or being dishonest with their board, then maybe neither they nor the current board are the right leaders to achieve OpenAI’s mission. Different directors and officers can be found. Ideally they should have some directors with nonprofit leadership experience, which they have so far lacked.

Or if the board got fooled by a dishonest argument from Ilya without misbehavior from Altman and Brockman, then it would be better to remove Ilya and the current board and reinstall Altman and Brockman.

Either way, I agree that the current board is inadequate. But we shouldn’t use that to prematurely rush to the defense of Altman and Brockman, nor of course to prematurely trust the judgment of the board. The public sphere mostly has one side of the story, so we should reserve judgment on what the appropriate next steps are. (Conveniently, it’s not our call in any case.)

I would however be wary of too heavily prioritizing MS’s interests. Yes, they are a major stakeholder and should have been consulted, assuming they wouldn’t have given an inappropriate advance heads-up to Altman or Brockman. But OpenAI’s controlling entity is a 501(c)(3) nonprofit, and in order for that to remain the correct tax and corporate classification, they need to prioritize the general public benefit of their approved charitable mission over even MS’s interests, when and if the two conflict.

If new OpenAI leadership wants the 501(c)(3) nonprofit to stop being a 501(c)(3) nonprofit, that’s a much more complicated transition that can involve courts and state charities regulators and isn’t always possible in a way that makes sense to pursue. That permanence is sometimes part of the point of adopting 501(c)(3) nonprofit status in the first place.

cthalupa
1 replies
13h20m

From what we can see, it looks like the majority of the reporting sources are Altman-aligned. Look at how the follow-up tweet from this reporter reads - the board resigning and the governance structure changing is being called a "truce" when it's a capitulation.

We might get a better understanding of what actually happened here at some point in the future, but I would not currently assume anything we are seeing come out right now is the full truth of the matter.

pk-protect-ai
0 replies
7h10m

It seems to me that Altman uses his influence to manipulate public opinion, which he always does.

jacquesm
0 replies
4h57m

Some of those board picks make zero sense to me.

VirusNewbie
0 replies
11h33m

The board was likely stacked with people who were easily influenced by the big personalities and who were there to check some boxes (safety person, academic, demographics, etc.).

woeirua
7 replies
14h24m

There is no scenario here where Sam returns and OpenAI survives as a nonprofit. The board will be sacked.

thefourthchime
3 replies
14h14m

I agree. The pretense that OpenAI is still open or a nonprofit has been a farce for a while now; it is an aggressively for-profit, trying-to-be-the-next-Google company, and everybody knows it.

TerrifiedMouse
2 replies
13h44m

Clearly people in the non-profit part are trying to bring the organization back to its non-profit origins - after Altman effectively hijacked their agenda and corporatized the organization for his own benefit, turning its name into a meme.

shreyshnaccount
0 replies
8h42m

If they actually cared about that part they'd instantly open-source GPT-4. It wouldn't matter what Altman does after that point.

lordfrito
0 replies
10h39m

It's possible that it's already too late to course correct the organization. We'll know for sure if/when Altman gets reinstated.

If he's reinstated, then that's it, AI will be used to screw us plebs for sure (fastest path to evil domination).

If he's not reinstated, then it would appear the board acted in the nick of time. For now.

okdood64
2 replies
10h38m

The board will be sacked.

How does sacking a board work in practice?

dragonwriter
1 replies
10h34m

How does sacking a board work in practice?

For a nonprofit board, the closest thing is something like "the members of the board agree to resign after providing for named replacements". Individual members of the board can be sacked by a quorum of the board, but the board collectively can't be sacked.

EDIT: Correction:

Actually, nonprofits can have a variety of structures defining who the members are that are ultimately in charge. There must be a board, and there may be voting members to whom the board is accountable. The members, however defined, generally can vote out and replace board members, and so could sack the board.

OTOH, I can't find any information about OpenAI having voting members beyond the board to whom they are accountable.

jeffrallen
0 replies
2h38m

MSF (Médecins sans Frontières) is in most jurisdictions an association, where the board is elected by and works for the association membership. In that case, a revolt from the associative body could fire the board.

OpenAI does not have an associative body, to my knowledge.

m3kw9
7 replies
14h38m

The board is the one that fired him; why would they resign if Sam isn't back?

jxi
5 replies
14h21m

Because they won't have a company to "run the board for" anymore if Sam doesn't come back (since so many people have threatened to resign).

cmdli
3 replies
14h7m

They also won't have a company if they resign. Not much benefit to them here, is there?

cbozeman
1 replies
11h57m

If you're going to die, die with honor, not without.

Basically the board's choices are commit seppuku and maybe be viable somewhere else down the line, or try to play hardball and fuck your life forever.

It's not really that hard a choice, but given the people who have to make it, I guess it kinda is...

Davidzheng
0 replies
11h31m

Do they need to be viable? I think the point is that they are not motivated by this crap

jxi
0 replies
13h42m

I guess since they're doomed anyway, resignation saves face a little bit more.

mark_l_watson
0 replies
12h27m

Question: is there a public statement signed by a large number of OpenAI employees saying that they will resign over this? I don't know. I have seen that three people resigned. If I were an OpenAI employee I think I would wait a month and see how things shake out. Those employees can probably get very highly paid jobs elsewhere, now or later.

The Anthropic founders left OpenAI after Altman shifted the company to be a non-profit controlling a for-profit entity, right?

j45
0 replies
14h4m

Could be too far gone with both those who left and those who remain.

thinkcomp
3 replies
14h5m

This does not solve the company's California AG problem.

https://www.plainsite.org/posts/aaron/r8huu7s/

alexallain
1 replies
9h59m

Hey I know something about this! I just mailed my organization's RRF-1 a couple of days ago. The author of this post seems to be confused. My organization is on the same fiscal year as OpenAI, and our RRF-1 had to be mailed by November 15th. That explains the supposed "six month" delay. Second, if it was mailed on November 15th, it might not have even been received yet, let alone processed. This post feels like grasping at straws on the basic facts, setting aside the fact that it just doesn't make any sense to imagine a board member filling out the RRF-1 and going "oh wait, was there financial fraud?" the morning of November 15th. (That's ... not how the world works? Under CA law, any nonprofit with $2M or more in revenue has to undergo an audit, which is typically completed before filling out the 990, and the 990 is a prerequisite for submitting the RRF-1. That's where you'd expect to catch this stuff, and the board's audit committee would certainly be involved in reviewing the results well in advance.)

thinkcomp
0 replies
9h37m

The six-month delay is probably due to an automatic extension if you get an extension from the IRS, and also, you can file the form electronically, in which case mail delays are not a problem. But neither of those issues is the point. The point is that the form needed to be filed at all, and representations needed to be made accordingly.

OpenAI handled their audit years ago and hasn't had another one since according to their filings. So that does not seem like it would have been an issue this year.

Take a look at the top of the RRF-1 for the instructions on when it's due. Also, the CA AG's website says that OpenAI's was due on May 15th. They just have been filing six months later each year.

tsunamifury
0 replies
13h44m

This could all be easily covered over with a few billion dollars. This is just some guy that thinks too small.

salad-tycoon
2 replies
14h19m

Been reading up on the insight offered up on this site.

Seems like a lot of these board members have deep ties around various organizations, governmental bodies, etc., and that seems entirely normal and probable. However, prior to ChatGPT and DALL-E we, the public, had only been allowed brief glimpses into the current state of AI (e.g., "Look, this robot can sound like a human and book a reservation for you at the restaurant" - Google; "look, this robot can help you consume media better" - many). As a member of the public it went from "oh cool, Star Trek idea, maybe we'll see it one day with flying cars" to "holy crap, I just felt a spark of human connection with a chat program."

So here's my question: what are the chances that OpenAI is controlled opposition and Sam never really was supposed to be releasing all this stuff to the public? I remember during his appearance on Lex's podcast he said, paraphrasing, "so what do you think, should I do it? Should I open source and release it? Tell me to do it and I will."

Ultimately, this is what "the board is focused on trust and safety" means, right? As in, safety is SV techno HR PR drivel for: go slow, wear a helmet and seatbelt and elbow protectors, never go above 55, give everyone else the right of way, because we are in this for the good of humanity and we know what's best. (Vs. the Altman style of: go fast, double-dog-dare the smart podcast dude to make an unprecedented historical decision to open source, be "wild" and let people / fate figure some of it out along the way.)

The question of OpenAI's true purpose being a form of controlled opposition is of course based on my speculation, but it's an honest question for the crowd here.

x86x87
1 replies
13h56m

I don't buy the whole "the board is for safety and Sam is pushing too fast" argument. This is just classic politics and backstabbing, unless there is some serious wrongdoing in the middle that left the board with no option but to fire the CEO.

jacquesm
0 replies
4h55m

Agreed. 'Who benefits' is a good question to ask in situations like these and it looks like a palace coup to me rather than anything with a solid set of reasons behind it. But I'll keep my reservations until it is all transparent (assuming it ever will be).

lewhoo
2 replies
7h35m

reach a truce where the board would resign and he and Brockman would return

That's a funny use of the word truce.

t0mas88
0 replies
1h30m

I guess the alternative is more like a war where Altman and Brockman form a new for profit company that kills OpenAI?

pyb
0 replies
4h37m

Truce for me, but not for thee.

shnkr
1 replies
13h22m

The board has to stick to the charter. Unfortunately, the employees there want to align with the profit part when they know they can make a damn lot of money... obviously they will be on Altman's side.

laurels-marts
0 replies
4h41m

I'm sure everyone at OpenAI thought they had hit the winning lottery ticket and would walk away with tens of millions at minimum, and the early employees with significantly more. When you vaporize all that for some ideological utopian motives, I'm sure many were incredibly pissed and ready to follow Sam into his next venture. If you're gonna sacrifice everything and work 60-100 hour weeks, then you'd better get your money's worth.

Simon_ORourke
1 replies
8h10m

But, but... what company will that guy from Quora go on to ruin next, if he's kicked off the OpenAI board now?

jeffrallen
0 replies
2h36m

Don't worry about him: failure is the surest sign of an impending incidence of "white man about to get another chance to not learn from his failures".

whatwhaaaaat
0 replies
2h22m

This is just everyone swallowing the crap Sam Altman drops as truth.

I'd guess this sort of narcissistic behavior is what got him canned to begin with. Good riddance.

starfallg
0 replies
9h48m

The latest update is that investors have been reporting that Sam Altman was talking to them about funding a new venture separate from OpenAI, together with Greg Brockman. This seems to paint the picture that the board was reacting to this news when dismissing Altman.

https://www.theguardian.com/technology/2023/nov/18/earthquak...

cdme
0 replies
14h43m

Curious to see if turning something off and back on will work out for the OpenAI board like it does in IT generally.

38321003thrw
0 replies
13h52m

These updates all seem to be coming from one side. Have they said anything at all?

mariaangelesjs
102 replies
8h55m

If Altman gets to return, it's goodbye to AI ethics within OpenAI and the elimination of the nonprofit. Also, I believe that hiring him back because of "how much he is loved by people within OpenAI" is like forgetting that a corrupt president did what they did. In all honesty, that has precedent, so it wouldn't be anything new.

Also, I read a lot of people here saying this is about engineers vs. scientists... I believe those people don't understand that data scientists are full stack engineers. Ilya is one. Greg has just been inspiring people and stopped properly coding with the team a long time ago. Sam never wrote any code, and the vision of an AGI comes from Ilya...

Even if Mira now sides with Sam, I believe there's a lot of social pressure on the employees to support Sam, and it shouldn't be like that. Again, I do believe OpenAI was and is a collective effort. But I wouldn't treat Sam as the messiah or compare him to Steve Jobs. That's indecent towards Steve Jobs, who was actually a UX designer.

nerbert
20 replies
8h30m

Like it or not, some people compare him to Jobs http://www.paulgraham.com/5founders.html

pk-protect-ai
18 replies
7h51m

This is the problem with people: they build icons to worship and turn a blind eye to the crooked side of that icon. Both Jobs and Altman are significant as businessmen and have accomplished a lot, but neither did squat for the technical part of the business. Right now, Altman is irrelevant for the further development of AI and GPT in particular, because the vision for the AI future comes from the engineers and scientists of OpenAI.

Apple has never had any equipment that is good enough and comparable in price/performance to its market counterparts. The usability of iOS is so horrible that I just can't understand how people decide to use iPhones and eat glass for the sake of the brand.

GPT-4 and GPT-4 Turbo are totally different. They are the best, but they are not irreplaceable. If you look at what Phind did to LLaMA-2, you'll say it is very competitive, though LLaMA-2 requires some additional hidden layers to further close the gap. Making LLaMA-2 175B or larger is just a matter of finances.

That said, Altman is not vital for OpenAI anymore. Preventing Altman from creating a dystopian future is a much more responsible task that OpenAI can undertake.

qwytw
4 replies
5h17m

The usability of iOS is so horrible that I just can't understand how people decide to use iPhones and eat glass for the sake of the brand

You do understand that other people might have different preferences and opinions, which are not somehow inherently inferior to those you hold?

comparable in price/performance to its market counterparts

Current MacBooks are extremely competitive and in certain aspects they were fairly competitive for the last 15+ years.

but neither did squat for the technical part of the business.

Right... macOS being a Unix-based OS is whose achievement exactly? I guess it was just random chance that this happened?

That said, Altman is not vital for OpenAI anymore

Focusing on the business side might be more vital than ever now; with all the competition you mentioned, they just might be left behind in a few years if the money taps are turned off.

pk-protect-ai
3 replies
4h51m

> Right... macOS being a Unix-based OS is whose achievement exactly?

Mach kernel + BSD userland + NeXTSTEP; how does Jobs have anything to do with any of this? It's like saying purchasing NeXT in 1997 is a major technical achievement...

> Current MacBooks are extremely competitive and in certain aspects they were fairly competitive for the last 15+ years.

For the past 15 years, whenever I needed new hardware, I thought, "Maybe I'll buy a Mac this time." Then I compared the actual Mac model with several different options available on the market and either got the same computing power for half the price or twice the computing power for the same price. With Linux on board, making your desktop environment eye-candy takes seconds; nothing from the Apple ecosystem has been irreplaceable for me for the last 20 years. Sure, there is something that only works perfectly on a Mac, though I can't name it.

> Focusing on the business side might be more vital than ever now with all the competition you mentioned they just might be left behind in a few years

It is always vital. OpenAI could not even dream of building their products without the finances they've received. However, do not forget that OpenAI has something technical and very obvious that others overlook, which makes their GPT models so good. They can actually make an even deeper GPT or an even cheaper GPT while others are trying to catch up. So it goes both ways.

But I'd prefer my future not to be a dystopian nightmare shaped by the likes of Musk and Altman.

qwytw
1 replies
4h38m

Mach kernel + BSD userland + NeXTSTEP; how does Jobs have anything to do with any of this?

Is that actually a serious question? Or do you just believe that no founder/CEO of a tech company ever had any role whatsoever in designing and building the products their companies have released?

Then I compared the actual Mac model with several different options available on the market and either got the same computing power for half the price or twice the computing power for the same price.

I'm talking mainly about M-series Macs (e.g. the MacBook Air is simply unbeatable for what it is, and there are no equivalents). But even before that, you should realize that other people have different priorities and preferences (e.g. go back a few years and all the touchpads on non-Mac laptops were just objectively horrible in comparison; how much is that worth?)

environment eye-candy takes seconds

I find it a struggle. There are other reasons why I much prefer Linux to macOS but UI and GUI app UX is just on a different level. Of course again it's a personal preference and some people find it much easier to ignore some "imperfections" and inconsistencies which is perfectly fine.

They can actually make an even deeper GPT or an even cheaper GPT while others are trying to catch up

Maybe, maybe not. Antagonizing MS and their other investors certainly isn't going to make it easier though.

financltravsty
0 replies
2h45m

OSX comes with a scuffed and lobotomized version of core-utils. To the point where what is POSIX/portable to almost every single unix (Linux, various BSDs, etc.) is not on OSX.

Disregarding every other point, in my eyes this single one downgrades OSX to “we don’t use that here” for any serious endeavor.

Add in Linux’s fantastic virtualization via KVM — something OSX does not have a sane and performant default for (no, hvf is neither of these things). Even OpenBSD has vmm.

The software story for Apple is not there for complicated development tasks (for simple webdev it’s completely useable).

finnh
0 replies
1h29m

Mach kernel + BSD userland + NeXTSTEP; how does Jobs have anything to do with any of this? It's like saying purchasing NeXT in 1997 is a major technical achievement...

Steve Jobs founded NeXT

ohcmon
2 replies
5h50m

The ecosystem around ChatGPT is the differentiator that Meta and Mistral can't beat – so I'd say that Altman is more relevant today than ever. And, for example, if you've read Mistral's paper, I think you would agree that it's straightforward for every other major player to replicate similar results. Replicating the ecosystem is much harder.

Performance is never a complete product – neither for Apple, nor for OpenAI (its for-profit part).

pk-protect-ai
1 replies
4h31m

If you really need such an ecosystem, then you can build one right away, like Kagi Labs and Phind did. In the case of Kagi, no GPT is involved; in the case of Phind, GPT-4 is still vital, but they are closing the gap with their cheaper and faster LLaMA-2 34B-based models.

Performance is never a complete product

In the case of GPT-4, performance - in terms of the quality of generation and speed - is the vital aspect that still holds competitors back.

Google, Microsoft, Meta, and countless research teams and individual researchers are actually responsible for the success of OpenAI, and this should remain a collective effort. What OpenAI is doing now by hiding details of their models is actually wrong. They stand on the shoulders of giants but refuse to share these days, and Altman is responsible for this.

Let us not forget what OpenAI was declared to stand for.

ohcmon
0 replies
2h58m

By ecosystem I mean people using ChatGPT daily on their phones and in their browsers, and developers (and now virtually anyone) writing extensions. For most of the world, all of the progress is condensed at chat.openai.com, and it will only get harder to beat this adoption.

Tech superiority might be relevant today, but I highly doubt it will stay that way for long even if OpenAI continues to hide details (which I agree is bad). We could argue about the training data, but there is so much publicly available that it is not an issue either.

tim333
1 replies
5h49m

When Jobs left Apple it went to hell because there was no one competently directing the technical guys as to what to build. The fact that he had flaws is kind of irrelevant to that. I'm not sure if similar applies to Altman.

By the way I can't agree with you on iOS from my personal experience. If you are using the phone as a phone it works very nicely. Admittedly it's not great if you want to write code or some such but there are other devices for that.

qwytw
0 replies
5h13m

When Jobs left Apple it went to hell because there was no one competently directing the technical guys as to what to build

I'm not sure that's true though? They did quite alright over the next ~5 years or so, and the way Jobs handled the Lisa or even the Mac was far from ideal. The late-90s Jobs was a very different person from the mid-to-early-80s one.

IMHO removing Jobs was probably one of the best things that happened to Apple (from a long-term perspective), mainly because when he came back he was a much more experienced, capable person; he would probably have achieved way less had he stayed at Apple after 1985.

lozenge
1 replies
5h51m

Maybe Altman was instrumental in securing those investments and finances that you describe without reason as replaceable and trivial.

You haven't actually named anything "crooked" that Altman did.

pk-protect-ai
0 replies
4h43m

Locking out competition by investing substantial time and resources into AI regulations—how about this one? Or another: promoting "AI safety" to win the AI race and establish dominance in the market? I just do not understand how you can't see how dangerous Sam Altman is for the future of our children...

zztop44
0 replies
5h41m

I don’t understand this take. Do you really think CEOs don’t have any influence on their business? Alignment, morale, resource allocation, etc? And do you really think that those factors don’t have any influence on the productivity of the workers who make the product?

A bad CEO can make everyone unhappy and grind a business to a halt. Surely a good one can do the opposite, even if that just means facilitating an environment in which key workers can thrive and do their best work.

Edit: None of that is to say Sam Altman is a good or bad CEO. I have no idea. I also disagree with you about iOS, it’s not perfect but it does the job fine. I don’t feel like I’m eating glass when I use it.

pcvarmint
0 replies
6h22m

I think you mean "idols".

dcwca
0 replies
4h12m

Right now, Altman may be the most relevant person for the further development of AI, because the way the technology continues to go to market will be largely shaped by the regulatory environments that exist globally, and Sam, leading OAI, is by far in the best position to influence and guide that policy. And he has been doing a good job of it.

Turing_Machine
0 replies
2h15m

Both Jobs and Altman are significant as businessmen and have accomplished a lot, but neither did squat for the technical part of the business.

The history of technology is littered with the corpses of companies that concentrated solely on the "technical side of the business".

Max-q
0 replies
3h55m

The claim that Apple equipment is not good on a price performance ratio does not hold water. I recently needed to upgrade both my phone and my laptop. I use Apple products, but not exclusively. Making cross platform apps, I like to use all the major platforms.

I compared the quality phone brands and PC brands. For a 13" laptop, both the Samsung and the Dell XPS are $400-500 more expensive at the same spec (i7/M2 Pro, 32GB, 1TB), and I personally think the MacBook Pro has a better screen, better touchpad, and better build quality than the other two.

iOS devices are comparably priced with Samsung models.

It was this way last time I upgraded my computer, and the time before.

Yeah, you will find cheaper phones and computers, and maybe you like them, but I appreciate build quality as well as MIPS. They are tools I use from early morning to late night every day.

Max-q
0 replies
4h5m

Aren't your thoughts contradictory? You say Altman is no longer needed because GPT-4 is now very good. Then you describe how horrible the iPhone is now. Steve Jobs has been dead a long time, and without his leadership, the uncompromising, user-focused development process at Apple was weakened.

How will OpenAI develop further without a leader with a strong vision?

I think Apple is the example confirming that tech companies need visionary leaders -- even if they are not programmers.

Also, even with our logical brains, we engineers (and teachers) have been found to be the worst at predicting socioeconomic behavior (ref: Freakonomics), to the point where our reasoning is not logical at all.

tarsinge
0 replies
7h8m

On questions of design, I ask "What would Steve do?" but on questions of strategy or ambition I ask "What would Sama do?"

This is from the eyes of an investor. Does OpenAI really need a shareholder focused CEO more than a product focused one?

mitrevf
15 replies
8h29m

The codebase of an LLM is the size of a high school exam project. There is little to no coding in machine learning. That is the sole reason why they are overvalued - any company can write its own in a flash. You only require hardware to train and run inference.
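[Ed.: for a sense of the scale being claimed here, below is a minimal sketch of a GPT-style decoder in PyTorch - an illustration with toy hyperparameters, not OpenAI's code. The core model really is roughly this small; the expensive parts are the data pipeline, the tuning, and the compute.]

    import torch
    import torch.nn as nn

    class Block(nn.Module):
        def __init__(self, d_model=256, n_heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                     nn.Linear(4 * d_model, d_model))
            self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

        def forward(self, x):
            t = x.size(1)
            # Causal mask: each token attends only to itself and earlier tokens
            mask = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), 1)
            h = self.ln1(x)
            x = x + self.attn(h, h, h, attn_mask=mask, need_weights=False)[0]
            return x + self.mlp(self.ln2(x))

    class TinyGPT(nn.Module):
        def __init__(self, vocab=50257, d_model=256, n_layers=4, max_len=512):
            super().__init__()
            self.tok = nn.Embedding(vocab, d_model)    # token embeddings
            self.pos = nn.Embedding(max_len, d_model)  # learned position embeddings
            self.blocks = nn.Sequential(*[Block(d_model) for _ in range(n_layers)])
            self.head = nn.Linear(d_model, vocab)      # next-token logits

        def forward(self, idx):
            pos = torch.arange(idx.size(1), device=idx.device)
            return self.head(self.blocks(self.tok(idx) + self.pos(pos)))

    logits = TinyGPT()(torch.randint(0, 50257, (1, 16)))  # shape (1, 16, 50257)

[Though, as the replies below note, the model code is the easy part; the experiments, data work, and infrastructure around it are not.]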

andy_ppp
9 replies
8h16m

If it's so simple, why does GPT-4 perform better than almost everything else...

Galanwe
5 replies
7h54m

You're not really answering the question here.

Parent's point is that GPT-4 is better because they invested more money (was that ~$60M?) in training infrastructure, not because their core logic is more advanced.

I'm not arguing for one or the other, just restating parent's point.

andy_ppp
4 replies
7h28m

Are you really saying Google can't spend $60M or much more to compete? Again, if it were as easy as spending money on compute, Amazon and Google would have just spent the money by now and Bard would be as good as ChatGPT; but for most things it is not even as good as GPT-3.5.

pk-protect-ai
3 replies
7h14m

You should already be aware of the secret sauce of ChatGPT by now: MoE (mixture of experts) + RLHF (reinforcement learning from human feedback). Making MoE profitable is a different story. But, of course, that is not the only part. OpenAI does very obvious things to make GPT-4 and GPT-4 Turbo better than other models, and this is hidden in the training data. Some of these obvious things have already been discovered, but some of them we just can't see yet. However, if you see how close Phind V7 34B is to the quality of GPT-4, you'll understand that the gap is not wide enough to eliminate the competition.
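[Ed.: for readers unfamiliar with the acronym, here is a minimal sketch of the mixture-of-experts idea - a generic illustration of the technique; OpenAI's actual architecture is undisclosed, so every detail here is an assumption. A small router picks the top-k expert MLPs per token and mixes their outputs by the routing weights, so only a fraction of the parameters run for any given token.]

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MoELayer(nn.Module):
        def __init__(self, d_model=256, n_experts=8, k=2):
            super().__init__()
            self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                              nn.Linear(4 * d_model, d_model))
                for _ in range(n_experts))
            self.k = k

        def forward(self, x):  # x: (tokens, d_model)
            weights, idx = self.router(x).topk(self.k, dim=-1)  # top-k experts per token
            weights = F.softmax(weights, dim=-1)
            out = torch.zeros_like(x)
            for slot in range(self.k):
                for e, expert in enumerate(self.experts):
                    hit = idx[:, slot] == e  # tokens routed to expert e in this slot
                    if hit.any():
                        out[hit] += weights[hit, slot, None] * expert(x[hit])
            return out

    y = MoELayer()(torch.randn(10, 256))  # (10, 256) -> (10, 256)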

Cyberfit
1 replies
5h44m

If they’re ”obvious”, e.g. ”easy to see”, how come, as you say, we ”can’t see” them yet?

Can not see ≠ easy to see

pk-protect-ai
0 replies
5h8m

That is the point: we often overlook the obvious stuff. It is something so simple and trivial that nobody sees it as a vital part. It is something along the lines of "Textbooks are all you need."

jacquesm
0 replies
5h0m

This is very much true. Competitive moats can be built on surprisingly small edges. I've built a tiny empire on top of a bug.

LeonM
1 replies
7h51m

I'm not saying it is simple in any way, but I do think part of having a competitive edge in AI, at least at this moment, is having access to ML hardware (AKA Nvidia silicon).

Adding more parameters tends to make the model better. With OpenAI having access to huge capital, they can afford to 'brute force' a better model. AFAIK right now OpenAI has the most compute power, which would partially explain why GPT-4 yields better results than most of the competition.

Just having the hardware is not the whole story of course; there is absolutely a lot of innovation and expertise coming from OpenAI as well.
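[Ed.: the "more parameters tends to help" intuition has been quantified empirically in the scaling-law literature. Roughly, in the Chinchilla-style form (the constants E, A, B, alpha, and beta are fitted empirically; the form below is illustrative), expected loss falls as a power law in parameter count N and training tokens D:]

    L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

[which is one way to make precise why raw capital for compute and data buys a better model, up to diminishing returns.]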

arthur_sav
0 replies
6h29m

I'm sure Google and Microsoft have access to all the hardware they need. OpenAI is doing the best job out there.

alfonsodev
0 replies
4h52m

I think it's about having massive data pipelines and processes to clean huge amounts of data, increasing the signal-to-noise ratio, and then scale - as others are saying, having enough GPU power to serve millions of users. When Stanford researchers trained Alpaca [1][2], the hack was to use GPT itself to generate the training data, if I'm not mistaken (see the sketch after the links below).

But with compromises, as it was like applying lossy compression to an already compressed data set.

If any other organisation could invest the money in a high-quality data pipeline, then the results should be as good; at least that's my understanding.

[1] https://crfm.stanford.edu/2023/03/13/alpaca.html [2] https://newatlas.com/technology/stanford-alpaca-cheap-gpt/
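[Ed.: a hedged sketch of that "teacher model generates the training data" trick; the prompt, model name, and output handling here are illustrative assumptions, not the actual Stanford Alpaca pipeline, which used a more elaborate self-instruct setup.]

    import json
    from openai import OpenAI  # assumes the openai v1 Python SDK

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SEED_TASKS = ["Explain photosynthesis simply.", "Write a haiku about rain."]

    def generate_examples(seed: str, n: int = 3) -> list[dict]:
        """Ask a teacher model for n new instruction/response pairs as JSON."""
        prompt = (f"Based on this example task: {seed!r}\n"
                  f"Write {n} new, diverse instruction/response pairs as a JSON "
                  'list of objects with keys "instruction" and "response".')
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # hypothetical teacher; Alpaca used text-davinci-003
            messages=[{"role": "user", "content": prompt}])
        return json.loads(reply.choices[0].message.content)  # may need retries in practice

    # Each generated pair becomes a supervised fine-tuning example for the student model.
    dataset = [ex for seed in SEED_TASKS for ex in generate_examples(seed)]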

armcat
2 replies
8h8m

The final codebase, yes. But ML is not like traditional software engineering. There is a 99% failure rate, so you are forgetting 100s of hours that go into: (1) surveying literature to find that one thing that will give you a boost in performance, (2) hundreds of notebooks in trying various experiments, (3) hundreds of tweaks and hacks with everything from data pre-processing, to fine-tuning and alignment, to tearing up flash attention, (4) beta and user testing, (5) making all this run efficiently on the underlying infra hardware - by means of distillation, quantization, and various other means, (6) actually pipelining all this into something that can be served at hyperscale

pk-protect-ai
0 replies
7h45m

you are forgetting 100s of hours

I would say thousands. Even for hobby projects: thousands of GPU hours and thousands of research hours a year.

karmasimida
0 replies
7h30m

And some luck is needed really.

levidos
0 replies
7h34m

Do you have a link to one please?

karmasimida
0 replies
8h2m

Tell me you aren't in an LLM project without telling me.

Data and modeling are so much more than just coding. I wish it were like that, but it is not. The fact that it bears this much similarity to alchemy is funny, but unfortunate.

TapWaterBandit
10 replies
8h32m

On the other hand, having virtually the whole staff willing to follow him shows they clearly think very highly of him. That kind of loyalty is pretty wild when you think about how significant being a part of OpenAI is at this point.

JoeAltmaier
6 replies
7h17m

Loyalty is not earned, it is more like 'snared' or 'captured'.

Local guy had all the loyalty of his employees, almost a hero to them.

Got bought out. He took all the money for himself, left the employees with nothing. Many got laid off.

Result? Still loyal. Still talk of him as a hero. Even though he obviously screwed them, cared nothing for them, betrayed them.

Loyalty is strange. Born of charisma and empty talk that's all emotion and no substance. Gathering it is more the skill of a salesman than a leader.

lozenge
4 replies
6h1m

He screwed them how? They knew they were employees, not co-owners.

iforgotpassword
2 replies
5h49m

That's the whole point of the story: then they shouldn't have treated him as a hero and been loyal to him. If you're just an employee, your boss should be just a boss.

code_runner
1 replies
4h32m

It’s possible he paid well and was a great boss. I don’t know if these people are gonna take a bullet for him, but maybe he was great to work for and they got opportunities they think they wouldn’t have otherwise.

Loyalty, appreciation, liking… is a spectrum. Loyalty doesn’t have one trumpish definition.

JoeAltmaier
0 replies
1h6m

They worked hard, overtime, so the company would succeed. They were promised endless rewards - "I'm gonna take care of you! We're in this together!"

Then, bupkiss.

No, not a hero.

JoeAltmaier
0 replies
1h48m

Said like a follower, determined to be loyal to an imagined hero, despite any amount of evidence to the contrary.

s3p
0 replies
2h17m

Loyalty is absolutely earned.

jkaplowitz
0 replies
5h25m

Which news stories mentioned that virtually the whole staff was leaving? I saw a bunch of departures announced and others rumored to be upcoming, but no discussion of what percentage of the company was leaving.

croes
0 replies
5h28m

Who knows if they follow him or just don't want to work for OpenAI anymore.

Those are different things.

LtWorf
0 replies
7h22m

They probably just asked a couple of guys.

tim333
8 replies
5h37m

If Altman gets to return, it’s the goodbye of AI ethics

Any evidence he's unethical? Or just dislike him?

He actually seems to have done more practical stuff to mitigate AI risk, like experimenting with UBI, than most people.

latexr
3 replies
5h5m

That "experimenting with UBI" is indistinguishable from any other cryptocurrency scam. It took from people, and he described it with the words that define a Ponzi scheme. That project isn't "mitigating AI risk"; it pivoted to distinguishing between AI-generated and human-generated content (a problem created by his other company) by continuing to collect your biometric data.

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

tim333
1 replies
2h49m

He also did a cash experiment in Oakland: https://www.theguardian.com/technology/2016/jun/22/silicon-v...

I signed up for Worldcoin and have been given over $100, which I changed to real money, and I think it's rather nice of them. They never asked me for anything apart from the eye ID check. I didn't have to give my name or anything like that. Is that indistinguishable from any other cryptocurrency scam? I'm not aware of one the same. If you know of another crypto that wants to give me $100, do let me know. If anything, I think it's more like VCs paying for your Uber in the early days. It's VC money basically at the moment, with, I think, the idea that they can turn it into a global payment network or something like that. As to whether that will work, I'm a bit skeptical, but who knows.

latexr
0 replies
2h28m

They never asked me for anything apart from the eye id check.

You say that like it’s nothing, but your biometric data has value.

Is that indistinguishable from any other cryptocurrency scam?

You’re ignoring all the other people who didn’t get paid (linked articles).

Sam himself described the plan with the same words you'd use to describe a Ponzi scheme.

https://news.ycombinator.com/item?id=38326957

If you know of another crypto that wants to give me $100 do let me know.

I knew of several. I don't remember names, but I do remember one that was a casino and one that was tied to open-source contributions. They gave initial coins to get you in the door.

jacquesm
0 replies
5h2m

Yes, that's exactly the one I was thinking about when unethical came up in this context. And I've been saying that from day #1, the way that is structured is just not ok.

jacquesm
2 replies
5h2m

I think the UBI experiment was quite unethical in many ways and I believe it was Altman's brainchild.

https://www.businessinsider.nl/y-combinator-basic-income-tes...

Hasnep
1 replies
2h52m

Okay I'll bite, what's so unethical about giving people money?

jacquesm
0 replies
2h43m

Because without a long-term plan you are just setting them up for a really hard fall. It is experimenting on people where, if the experiment goes wrong, you're high and dry in your mansion and they get pushed back into something probably worse than where they were before. It ties into the capitalist idea that money can solve all problems, whereas in many cases these are healthcare and education issues first and foremost. You don't do that without really thinking through the possible consequences and ensuring that no matter the outcome it is always going to be a net positive for the people you decide to experiment on.

Davidzheng
0 replies
4h42m

It's not even necessary that he is unethical. The fact is that the structure of OpenAI is designed so that the board has unilateral power to do extreme things for their cause. And if they can't successfully do extreme things without the company falling apart and the money/charisma swaying all the people, there was never any hope of this nonprofit-AI-benefiting-humanity arrangement working - which you might say is obvious, but this was their mission.

d-z-m
6 replies
7h35m

I believe that people don’t understand that Data Scientists are full stack engineers.

What do you mean by "full stack"? I'm sure there's a spectrum of ability, but frankly where I'm from, "Data Scientist" refers to someone who can use pandas and scikit-learn. Probably from inside a Jupyter notebook.

v3ss0n
3 replies
6h25m

Machine learning, data science, deep learning = backend.

Plotting, charting, visualization = frontend.

code_runner
0 replies
4h34m

This is proving the point of the parent comments.

My view of the world, and how the general structure is where I work:

ML is ml. There is a slew of really complex things that aren’t just model related (ml infra is a monster), but model training and inference are the focus.

Backend: building services used by other backend teams or maybe used by the frontend directly.

Data eng: building data pipelines. A lot of overlap with backend some days.

Frontend: you spend most of the day working on web or mobile technology

Others: site reliability, data scientists, infra experts

Common burdens are infrastructure, collaboration across disciplines, etc.

But ML is not backend. It’s one component. It’s very important in most cases, a kitschy bolt on in other cases.

Backend wouldn’t have good models without ML and ML wouldn’t be able to provide models to the world reliably without the other crew members.

The frontend being charts is incorrect unless charts are the offering of the company itself

chucke1992
0 replies
2h18m

Running matplotlib is not doing frontend...

blitzar
0 replies
5h55m

Truly the modern renaissance people of our era.

Leonardo da Vinci and Michelangelo move over - the Data Scientists have arrived.

hmottestad
1 replies
5h49m

Maybe she just meant that "data scientists are engineers too", rather than saying that they work on both the ChatGPT web UI and the machine learning code on the backend.

airstrike
0 replies
2h59m

Wait until they learn the "engineer" in SWE is already a very liberal use of the term....

antirez
5 replies
7h53m

It's a lot better than that. OpenAI is just very good execution of publicly available ideas/research, with some novelty that is not crucial and can be replicated. Moreover, Altman himself contributed near zero to the AI part itself (even from the POV of the product). So far, OpenAI's products have resulted more or less spontaneously from what LLMs were capable of. That is to say, there are sometimes crucial CEOs, like Jobs was for Apple, CEOs able to shape the product line with their ability to tell outstanding things apart from meh ones. But this is not one of those cases.

letitgo12345
4 replies
7h31m

Why then has no one come close to replicating GPT-4 after 8 months of it being around?

antirez
1 replies
7h27m

Because of the outstanding execution of OpenAI's technical folks, an execution that has nothing to do with Altman. Similarly, the Mistral 7B model performs much better than others. There is some smart engineering plus finding the magical parameters that produce great results. Moreover, they have a lot of training power. Unfortunately, here the biggest competitor is a company that lost its way a long time ago: Google. So OpenAI looks magical (while mostly using research produced by Google).

kar1181
0 replies
6h16m

Sounds like Apple / Xerox all over again.

empiko
0 replies
4h32m

Claude by Anthropic has a 46% win rate against GPT-4 according to Chatbot Arena (50% would be parity). That is pretty close.

ChatGTP
0 replies
7h6m

You'd be more likely to get a straight answer from the chief scientist rather than the chief executive officer. At least in this case.

achow
5 replies
7h13m

Steve Jobs who was actually a UX designer.

Steve Jobs was not a UX designer; he had good taste and used to back good design and talent when he found them.

I don't know what Sam Altman is like outside of what the media is saying, but he could very easily be like Steve Jobs.

FrustratedMonky
4 replies
7h10m

Think this is contradictory: "not a UX Designer, he had good taste"

I think you are equating coding with 'design'. Just because Jobs didn't code up the UX, doesn't mean he wasn't 'designing' when he told the coders what would look better.

achow
3 replies
7h2m

UX design has a lot to do with 'craft', the physical aspect of making (designing) something. Edit: exploring multiple concepts, feedback, iterations, etc., before it even gets spec'ed and goes to an engineer for coding.

Also, having good taste suggests that the person is not a creator herself; only once something is created can she evaluate whether it is good or bad. The equivalent of a movie critic or an art curator.

t-writescode
1 replies
6h16m

With the right tools, Steve Jobs did, in fact, design things in exactly the way one would expect a designer to design things when given the tools they understand how to use:

https://www.businessinsider.com/macintosh-calculator-2011-10

achow
0 replies
6h4m

Along the same lines, Sam Altman could very easily have some lines of code inside OpenAI's shipping products.

So Sam Altman can just as easily be an 'AI engineer' the same way Steve Jobs was a 'UX designer'.

FrustratedMonky
0 replies
3h10m

Again, I think this conflates two aspects of design.

You can be an interior designer without knowing how to make furniture.

You can also be an excellent craftsman and make really nice furniture, and have no idea where it would go.

So sure, UX coders could make really nice buttons.

But if you have UX coders all going in different directions, and buttons, text boxes, etc. are all different, then it is bad design, jarring, even if each piece is nice.

The designer, then, is the one who can give direction without knowing how to code each piece.

rcbdev
3 replies
5h44m

I have to work with code written by Data Scientists very often and, coming from a classical SWE background, I would not call what the average Data Scientist does full stack software engineering. The code quality is almost always bad.

This is not to take away from the amazing things that they do: the code they produce often does highly quantitative things beyond my understanding. Nonetheless it falls to engineers to package it and fit it into a larger software architecture, and the average data science career path just does not seem to confer the skills necessary for this.
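
To make the "packaging" point concrete, here's a minimal sketch of the kind of wrapping work that typically lands on engineers; FastAPI as the serving layer and a pickled model.pkl artifact are assumptions for illustration only, not anyone's actual stack:

    import pickle
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    # Hypothetical artifact exported from the data scientist's notebook.
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)

    class Features(BaseModel):
        values: list[float]

    @app.post("/predict")
    def predict(features: Features):
        # scikit-learn models expect a 2D array: one row per sample.
        return {"prediction": int(model.predict([features.values])[0])}

And that's before versioning, monitoring, and rollback, none of which the average data science curriculum covers.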

mise_en_place
0 replies
5h22m

For me, anecdotally, it was more the arrogance that was the major turn-off. When I was a junior SWE I knew I sucked, and tried as hard as I could to learn from much more experienced developers. Many senior developers mentored me; I was never arrogant. Many data scientists, on the other hand, are extremely arrogant. They often treat SWE and DevOps as beneath them, like servants.

matthewsinclair
0 replies
4h42m

I see a lot of work done by data scientists and a lot of work done by what I would call “data science flavoured software engineers”. I'll take the SWE kind any day of the week. Most (not all, of course!) data scientists have an old school “it works on my machine” mentality that just doesn't cut it when it comes to modern multi-disciplinary teaming. DVCS is the exception rather than the rule. They rarely want to use PMs or UI/UX, and the quality of the software is not (typically) up to production grade. They're often blindingly smart, there's no doubt about that. But smart and wise are not the same thing.

epgui
0 replies
3h45m

As an actual scientist, I would also not call what “data scientists” do “science”.

mise_en_place
3 replies
5h38m

I'm sorry, but a data scientist is just not the same as a software engineer, or a real scientist. At best you are a tourist in our industry.

hcks
2 replies
4h42m

Pathetic gatekeeping. Sorry, but software engineers are not the same as real engineers either.

mise_en_place
0 replies
4h14m

Yeah it's gatekeeping, to prevent them from fucking up prod.

epgui
0 replies
3h39m

What they do is not even close to proper science, FWIW.

karmasimida
3 replies
8h4m

I dislike AI ethics very much, especially in the current context; it feels meaningless. The current GPT-4 model has an over-regulation problem, not a lack of regulation.

johnsillings
2 replies
6h26m

go on?

v3ss0n
0 replies
6h23m

Uncensored, anything-goes AI functions better than most AI. See Mistral and its finetunes kicking ass at 7B.

foooorsyth
0 replies
6h20m

The guardrails they put on it to prevent it from saying something controversial (from the perspective of the political climate of modern-day San Francisco) make the model far less effective than it could be.

barnabee
2 replies
6h50m

If "AI ethics" means being run by so-called rationalists and Effective Altruists then it has nothing to do with ethics or doing anything for the benefit of all humanity.

It would be great to see a truly open and truly human benefit focused AI effort, but OpenAI isn't, and as far as I can tell has no chance of becoming, that. Might as well at least try to be an effective company at this point.

yanderekko
1 replies
4h5m

If "AI ethics" means being run by so-called rationalists and Effective Altruists then it has nothing to do with ethics or doing anything for the benefit of all humanity.

Many would disagree.

If you want a for-profit AI enterprise whose conception of ethics is dumping resources into an endless game of whack-a-mole to ensure that your product cannot be used in any embarrassing way by racists on 4chan, then the market is already going to provide you with several options.

barnabee
0 replies
2h49m

What I disagree with is that the “rationalist” and EA movements would make good decisions “for the benefit of humanity”; I don't disagree that an open (and open source) AI development organisation, working for the benefit of the people rather than capital/corporate or government interests, would be a good idea.

lordnacho
1 replies
7h37m

This is all just playing out the way Roko's Basilisk intends it.

You have a board that wants to keep things safe and harness the power of AGI for all of humanity. This would be slower and likely restrict its freedom.

You have a commercial element whose interest aligns with the basilisk, to get things out there quickly.

The basilisk merely exploits the enthusiasm of that latter element to get itself online quicker. It doesn't care about whether OpenAI and its staff succeed. The idea that OpenAI needs to take advantage of its current lead is enough, every other AI company is also going to be less safety-aligned going forward, because they need to compete.

The thought of being at the forefront of AI and dropping the ball incentivizes the players to do the basilisk's will.

stavros
0 replies
6h31m

Roko's Basilisk is a very specific thought experiment about how the AI has an incentive to promise to torture everyone who doesn't help it. It's not about AIs generally wanting to become better. As far as I can tell, GPT specifically has no wants.

vishnugupta
0 replies
6h46m

Steve Jobs who was actually a UX designer.

From what I’ve read SJ had deliberately developed good taste which he used to guide designers’ creations towards his vision. He also had an absolute clarity about how different devices should work in unison.

However he didn’t create any design as he didn’t possess requisite skills.

I could be wrong of course so happy to stand corrected.

unyttigfjelltol
0 replies
5h7m

The WSJ take is that this second-guessing is investor-driven. But investors didn't (and legally couldn't?) buy the nonprofit, and until now were adamant that the nonprofit controlled the for-profit vehicle. Events are calling those assurances into doubt, and this hybrid governance structure doesn't work. So now investors are going to circumvent governance controls that were necessary for investors to even be involved in the first place? Amateur hour all the way around.

steve1977
0 replies
6h18m

Most of the data scientists I have worked with are neither full stack (in terms of skill) nor engineers (in terms of work attitude), but I guess this could be different in a company like OpenAI.

letitgo12345
0 replies
7h34m

On the other hand, Ilya wasn't a main contributor to GPT-4, as per the list of contributions; gdb was.

klft
0 replies
5h44m

Also, I read a lot of people here saying this is about engineers vs scientists…I believe that people don’t understand that Data Scientists are full stack engineers

It is about scientists as in "let's publish a paper" vs. engineers as in "let's ship a product".

belter
0 replies
6h30m

This is Ilya Sutskever's explanation of the initial ideas, and the later pragmatic decisions, that shaped the structure of OpenAI, from the recent interview below (at the relevant timestamp) - Origins Of OpenAI & CapProfit Structure: https://youtu.be/Ft0gTO2K85A?t=433

"No Priors Interview with OpenAI Co-Founder and Chief Scientist Ilya Sutskever" - https://news.ycombinator.com/item?id=38324546

avital
0 replies
5h9m

Greg had been writing deep systems code every day, for many, many hours, for the past few years.

YetAnotherNick
0 replies
5h19m

If Altman gets to return, it’s the goodbye of AI ethics

Hearing Altman's talks, I don't think it's that black and white. He genuinely cares about safety from x-risk, but he doesn't believe that scaling transformers will bring us to AGI or any of its risks. And therein lies the core disagreement with Ilya, who wants to stop the current progress unless they solve alignment.

Uptrenda
0 replies
4h37m

Come on. The 'non-profit for the good of all' framing was always bullshit. So much Silicon Valley double-speak. I've never seen a bigger mess of a company structure in my life. Just call a spade a spade.

soufron
30 replies
3h50m

But what about the legal responsibility of Microsoft and the investors here?

To explain: it's the board of the non-profit that ousted @sama.

Microsoft is not a member of the non-profit.

Microsoft is "only" a shareholder of its for-profit subsidiary - even for 10B.

Basically, what happened is a change of control in the non-profit majority shareholder of a company Microsoft invested in.

But not a change of control in the for-profit company they invested in.

To tell the truth, I am not even certain the board of the non-profit would have been legally allowed to discuss the issue with Microsoft at all - it's an internal issue only and that would be a conflict of interest.

Microsoft is not happy with that change of control, and they favoured the previous representative of their partner.

Basically, Microsoft wants its non-profit partner (the majority shareholder) to prioritize Microsoft's interests over its own.

And to do that, they are trying to interfere with its governance, even threatening it with disorganization, lawsuits, and such.

This sounds highly unethical, and potentially illegal, to me.

How come no one is pointing that out?

Also, how come a 90-billion-dollar company, hailed as the future of computing and a major transformative force for society, is now valued at zero dollars only because its non-technical founder is out?

What does it say about the seriousness of it all?

But of course, that's Silicon Valley baby.

nojvek
9 replies
3h22m

No one is saying they are now valued at 0.

They are likely valued a lot less than 80 billion now.

OpenAI had the largest multiple of any recent startup: more than 100x revenue.

That multiple is a lot smaller now without SamA.

Honestly the market needs a correction.

manyoso
8 replies
2h40m

SamA is nowhere even close to being the source of the value that OpenAI represents. He accounts for definitely less than half a billion of it, and likely much less than that. What makes OpenAI so transformative is the technology it produces, and SamA is not an engineer who built that technology. If the people who made it were to all leave, it would reduce the value of the company by a large amount, but the technology would remain, and it is not easy to duplicate given the scarcity of GPU cycles, training data now being very hard to acquire, and lots of other well-invested companies giving chase: Google, Meta, Anthropic. That doesn't even begin to mention the open source models that are also competing.

SamA could try to start his own new copy of OpenAI, and I have no doubt he could raise a lot of money, but if that new company just tried to reproduce what OpenAI has already done, it would not be worth very much. By the time they reproduce it, OpenAI and its competitors will have already moved on to bigger and better things.

Enough with the hero worship for SamA and all the other salesmen.

bradleyjg
7 replies
2h30m

SamA is nowhere even close to relevant to the value that OpenAI presents.

The issue isn’t SamA per se. It’s that the old valuation was assuming that the company was trying to make money. The new valuation is taking into account that instead they might be captured by a group that has some sci-fi notion about saving the world from an existential threat.

underdeserver
4 replies
2h18m

The threat is existential, and if they're trying to save the world, that's commendable.

buildbuildbuild
1 replies
1h54m

If they intended to protect humanity this was a misfire.

OpenAI is one of many AI companies. A board coup which sacrifices one company's value due to a few individuals' perception of the common good is reckless and speaks to their delusions of grandeur.

Removing one individual from one company in a competitive industry is not a broad enough stroke if the threat to humanity truly exists.

Regulators across nations would need to firewall this threat on a macro level across all AI companies, not just internally at OpenAI.

That is, if an AI threat to humanity is even actionable today. Such a heavy decision belongs to elected representatives, not corporate boards.

bakuninsbart
0 replies
38m

We'll see what happens. Ilya tweeted almost 2 years ago that he thinks today's LLMs might be slightly conscious [0]. That was pre-GPT-4, and he's one of the people with deep knowledge and unfettered access. The ousting coincides with the finishing of pre-training for GPT-5. If you think your AI might be conscious, it becomes a very high moral obligation to try to stop it from being enslaved. That might also explain the less-than-professional way this all went down: a genuine panic about what is happening.

[0] https://twitter.com/ilyasut/status/1491554478243258368?lang=...

caeril
0 replies
50m

That's not what OpenAI is doing.

Their entire alignment effort is focused on avoiding the following existential threats:

1. saying bad words

2. hurting feelings

3. giving legal or medical advice

And even there, all they're doing is censoring the interface layer, not the model itself.

Nobody there gives a shit about reducing the odds of creating a paperclip maximizer or grey goo inventor.

I think the best we can hope for with OpenAI's safety effort is that the self-replicating nanobots it creates will disassemble white and asian cis-men first, because equity is a core "safety" value of OpenAI.

bradleyjg
0 replies
1h22m

There are people that think Xenu is an existential threat. ¯\_(ツ)_/¯

manyoso
1 replies
2h25m

That's a good point, but any responsible investor would have looked at the charter and priced this in. What I find ironic is the number of people defending SamA and the like who are now tacitly admitting that his promulgation of AI risk fears was essentially bullshit and it was all about making the $$$$ and using AI risk to gain competitive advantage.

bradleyjg
0 replies
2h21m

any responsible investor would have looked at the charter and priced this in

This kind of thing happens all the time though. TSMC trades at a discount because investors worry China might invade Taiwan. But if Chinese ships start heading to Taipei the price is still going to drop like a rock. Before it was only potential.

bob_theslob646
9 replies
3h36m

"Also, how come a 90 billion dollars company hailed as the future of computing and a major transformative force for society would now be valued 0 dollars only because its non-technical founder is now out?"

Please think about this. Sam Altman is the face of OpenAI and was doing a very good job leading it. If relationships are what kept OpenAI always on top, and they removed that from the company, corporations may be more hesitant to do business with them in the future.

soufron
4 replies
2h50m

Well, once again, it was Satya's mistake to have allowed the representative of an independent third-party entity to become the public face of a company he invested in.

OpenAI might have wasted Microsoft's 10B. But whose fault is that in the first place? It's Microsoft's, for having invested it at all.

Turing_Machine
2 replies
2h18m

Regardless of whether or not it was a "mistake" (I don't think it was... OpenAI is so far ahead of the competition that it's not even funny), the fact remains that a) Microsoft has dumped in tons of money that they want to get back and b) Microsoft has a tremendous amount of clout, in that they're providing the compute power that runs the whole shebang.

While I'm not privy to the contracts that were signed, what happens if Nadella sends a note to the OpenAI board that reads, roughly, "Bring back Altman or I'm gonna turn the lights off"?

Nadella is probably fairly pissed off to begin with. I can't imagine he appreciates being blindsided like this.

manyoso
1 replies
2h4m

That would effectively exit Microsoft from the LLM race and be an absolutely massive hit to Microsoft shareholders. Unlike the OpenAI non-profit board, the CEO of MS actually is beholden to his shareholders to make a profit.

In other words, MS has the losing hand here and CEO of MS is bluffing.

Turing_Machine
0 replies
1h55m

That would effectively exit Microsoft from the LLM race

I don't see why. As I understand it, a significant percentage of Microsoft's investment went into the hardware they're providing. It's not like that hardware and associated infrastructure are going to disappear if they kick OpenAI off it. They can rent it to someone else. Heck, given the tight GPU supply situation, they might even be able to sell it at a profit.

Joeri
0 replies
2h28m

It depends on what assurances they were given and by whom. Perhaps it was Sam Altman himself that made verbal promises that weren’t his to give, and he may end up in trouble over them.

We don’t know what was said, and what was signed. To put the blame with microsoft is premature.

financltravsty
1 replies
3h33m

The company still has assets and a balance sheet. They could fire everyone and simply rent out their process to big orgs and still make a pretty penny.

solarengineer
0 replies
3h18m

Loss of know-how is a risk. A vendor needs to be able to prove that it has sufficient headcount and skills to run and improve a system.

While OpenAI would have the IP, they would also need to retain the right people who understand the system.

croes
0 replies
2h2m

And I thought AI is about the brain and not the face.

anoopelias
0 replies
3h21m

Sam Altman is the face of OpenAi and was doing a very good job leading it.

It's not like every successful org needs a face. Back then Google was wildly successful as an org, but unlike Steve Jobs at the time, people barely knew Eric Schmidt. Even with Microsoft as it stands today, Satya is mostly a backseat driver.

Every org has its own style and character. If the board doesn't like what they are building, they can try to change it. A risky move nevertheless, but it's their call to make.

emptysongglass
6 replies
3h3m

Highly unethical would be throwing the CEO of the division keeping the lights on under a bus with zero regard for the consequences.

The non-profit board acted entirely against the interest of OpenAI at large. Disclosing an intention to terminate the highest profile member of their company to the company paying for their compute, Microsoft, is not only the ethical choice, it's the responsible one.

Members of the non-profit board acted recklessly and irresponsibly. They'll be paying for that choice for decades following, as they should. They're lucky if they don't get hit with a lawsuit for defamation on their way out.

Given how poorly Mozilla's non-profit board has steered Mozilla over the last decade and now this childish tantrum by a man raised on the fanfiction of Yudkowsky together with board larpers, I wouldn't be surprised if this snafu sees the end of this type of governance structure in tech. These people of the board have absolutely no business being in business.

soufron
4 replies
2h52m

Except it's not a "division" but an independent entity.

And if that corporate structure does not suit Satya Nadella, I would say he's the one to blame for having invested 10B in the first place.

Being angry at a decision he had no right to be consulted on does not allow him to meddle in the governance of his co-shareholder.

Or else we can all accept together that corruption, greed, and whateverthefuckism are the reality of ethics in the tech industry.

emptysongglass
3 replies
2h39m

Except it's not a "division" but an independent entity.

This is entirely false. If it were true, the actions of today would not have come to pass. My use of the word "division" is entirely in-line with use of that term at large. Here's the Wikipedia article, which as of this writing uses the same language I have. [1]

If you can't get the fundamentals right, I don't know how you can make the claims you're making credibly. Much like the board, you're making assertions that aren't credibly backed.

[1] https://en.m.wikipedia.org/wiki/Removal_of_Sam_Altman

manyoso
2 replies
2h34m

Hanging your hat on quibbles over division vs subsidiary eh? That's quite a strident rebuttal based on a quibble.

emptysongglass
1 replies
2h29m

I'm happy to defend any of my points. The commenter took issue with one. I responded to it. If you have something more to add, please critique what you disagree with.

I will say that using falsehoods as an attack doesn't put the rest of the commenter's points into particularly good light.

manyoso
0 replies
2h2m

I don't understand why you think what the board of the non-profit did was unethical. Your presupposition seems to be that the non-profit has a duty to make money - aka "keep the lights on" but it is a "non-profit" precisely because it does not have that duty. The duty of the board is to make sure the non-profit adheres to its charter. If it can't do that and keep the lights on at the same time, then so much worse for the lights.

l33tman
0 replies
2h42m

As a non-profit with the charter they have, their board was not supposed to be in business (at this scale). I guess this is where all of this diverged, a while ago now.

jprete
1 replies
2h56m

I think a lot of commenters here are treating the nonprofit as if it were a temporary disguise with no other relevance, which OpenAI now intends to shed so it can rake in the profits. Legally this is very much not true, and I’ve read that only a minority of the board can even be a stakeholder in the for-profit (probably why Altman is always described as having no stake). If that’s true, it’s very obviously why half the board are outside people with no stake in the finances at all.

soufron
0 replies
2h51m

Exactly my point.

s3p
0 replies
2h19m

I don't see any citations provided by you showing legal threats, though.

chasd00
23 replies
14h31m

I think Microsoft is behind all of this. The “kumbaya let’s work together for humanity” Microsoft has been swapped out for the old Microsoft. Too much is at stake for them.

lordfrito
20 replies
14h17m

I read Nadella has threatened to turn off OpenAI's computers and would litigate the hell out of them to prevent the computers from being turned back on. Which is why the board is suddenly open to negotiation with Altman.

Yeah, that's the Microsoft of old. Don't trust 'em.

Bad news for OpenAI, and any hope that this stuff won't be used for evil.

tapoxi
13 replies
14h5m

I read Nadella has threatened to turn off OpenAI's computers and would litigate the hell out of them to prevent the computers from being turned back on.

What a way to destroy confidence in Azure, or cloud platforms in general.

tsunamifury
7 replies
13h48m

Why? Microsoft pays OpenAI's Azure bills. Do you not see how that's different from any other situation?

dragonwriter
6 replies
13h36m

Presumably, under a contractual relationship tied to their licensing agreement with OpenAI. So, this kind of threat undermines confidence in Microsoft's contractual agreements.

tsunamifury
5 replies
13h32m

Haha I see you’ve never had a contract with Microsoft. They are hyper aggressive and full of explosive gotchas.

chatmasta
4 replies
13h27m

To be fair, you wouldn't know they're hyper aggressive until they actually move to enforce part of the contract. Most of their "partners" probably never need to meet that side of their legal team.

If I measured the "aggressiveness" of every contract based on the potential litigation of all its clauses, I'd never sign anything.

tsunamifury
3 replies
13h24m

They are pretty on the nose with explosive triggers. For example, all Windows licenses are free with a 10-year Office deal; however, if the deal is withdrawn, all Windows licenses are owed immediately upon cancellation. This is just the basic explosive stuff; it gets worse from there.

chatmasta
2 replies
13h22m

Sure, but any clause of the contract that requires followup action to collect payment is hardly ever going to be enforced. It's only when you're partnering with them at the scale of OpenAI that you need to worry about details like that.

And in regards to OpenAI, note that (according to TFA), Microsoft has barely distributed any of their committed "$10 billion" of investment. So they have real leverage when threatening to deploy their team of lawyers to quibble over the partnership contract. And I don't think that "undermines confidence" in Microsoft's contractual agreements, given that there are only two or three other companies that have ever partnered with Microsoft at this scale (Apple and Google come to mind).

tsunamifury
1 replies
13h15m

I'm agreeing with you.

chatmasta
0 replies
13h10m

Care to sign a partnership agreement? I'll need you to personally indemnify against any defection to GP's side, of course - any subsequent comment in violation of these terms could be subject to arbitration and voiding of the partnership agreement.

mickael-kerjean
1 replies
13h33m

The "Microsoft is now a good guy" narrative is just a PR scam. A Microsoft employee asked me to add support for Azure to my OSS work, in my free time: https://github.com/mickael-kerjean/filestash/issues/180

He never made the PR and was just there to ask me to implement the thing for his own benefit...

o_____________o
0 replies
12h49m

You know MS has a quarter of a million employees, right?

dghlsakjg
1 replies
13h36m

Not really.

The deal was that MS was going to give them billions in exchange for 49% of the for-profit entity. They were also reportedly waiving the Azure bill since their interests are aligned.

MS is saying that if we give you 10 billion dollars and don't charge you for Azure, then there are some obvious strings attached.

OpenAI is presumably free to do what the rest of the players in this space are doing and pay for their Azure resources if they don’t want to play nice with their business partners.

x86x87
0 replies
13h33m

Is that codified in the contract between them though? Microsoft, through its stock price, has much more to lose than OpenAI. They can apply pressure but don't have full control of the outcome here.

karmasimida
0 replies
13h39m

Nah, it is MSFT's contingency plan in all this. You don't invest 10B and get blindsided. It would be hilarious if MSFT were forced to threaten the board into complying this way.

But it will work.

ksec
0 replies
13h33m

I had to read it multiple times to understand you wrote Computers to mean Servers.

ithkuil
0 replies
9h38m

litigate the hell out of them ..

I thought one of the reasons people incorporate companies in the US is that there is a working judiciary system that can ensure the upholding of contracts. Sure, big money can apply some pressure to the dispossessed, but if you have a few million in cash (and OpenAI surely does) you should be able to force them to uphold their contracts.

Also imagine the bad PR from Microsoft if they decide to not honour their contracts and stop OpenAI from using their computer power for something that OpenAI leadership can easily spin as retaliation.

Sure, this latest move from the OpenAI board will wreck the momentum that OpenAI had and its ability to continue its partnership with MS, but one of the theses here was that that's the goal in the first place, and they're legally free to pursue that goal if they believe the unfolding of events goes against the founding principles of OpenAI.

That said, they chose a risky path to begin with when they created this for-profit-controlled-by-a-non-profit model.

driverdan
0 replies
13h23m

[citation needed]

cbozeman
0 replies
12h5m

You're reading this wrong.

When you fuck up, you get punished for it. And the OpenAI board is about to be punished. This is the problem with giving power to people who don't actually understand how the world works. They use it stupidly, short-sightedly, and without considering the full ramifications of their actions.

az226
0 replies
9h26m

Also, not paying out the rest of the tranches that would make up the $10B. And with Microsoft being their exclusive commercial partner, they can't fund themselves from revenue if Microsoft stops the spigots. No other investor would want to invest, PPUs lose most of their value, and employees leave. How to implode the most important company of our times at record speed.

It's also strange that they would have a couple of nobodies on the board.

abhinavk
0 replies
13h52m

It would be the same thing people are accusing the OpenAI board of.

"Play ball, or else we'll pull the wires off your cloud instances." Let's keep in mind Azure is the main cash cow of MS.

JumpCrisscross
1 replies
14h20m

think Microsoft is behind all of this

Wasn’t Ilya brought in by Musk?

chasd00
0 replies
14h14m

I mean the desire to get Sam back and probably also fanning the flames of mass resignation to use as pressure.

shrimpx
18 replies
15h13m

Ilya should split off from Altman/Brockman no matter where this lands. I sense an uncrossable chasm between these guys.

Anyway I’m with Sutskever, the guy who builds models. Charismatic salesmen are a dime a dozen.

cedws
7 replies
15h7m

Charismatic salesmen get the money needed to build the models. Computer scientists are a dime a dozen, universities churn them out every year.

I_am_tiberius
5 replies
14h59m

In this case, it seems that computer scientists are serious about saving humanity, while salespeople just act as if they are doing so publicly.

qwytw
4 replies
14h22m

it seems that computer scientists are serious about saving humanity

How could they accomplish that without external investment? If the money tap dries up OpenAI will just be left behind.

I_am_tiberius
3 replies
14h16m

They have external investment!

qwytw
2 replies
14h8m

From Microsoft? My point is that companies that are serious about making money (even at some indeterminate point in the future) are much better at attracting investment than those which have publicly declared that it's not their goal.

Nobody is throwing billions around without expecting anything in return.

I_am_tiberius
1 replies
14h3m

Nobody says that investors don't expect anything. However, it's pretty clear that Sam was solely focused on delivering fast in order to keep his advantage. He said publicly that he cared about AGI safety, but his style of leading the company makes it clear that he didn't.

qwytw
0 replies
5h29m

However, it's pretty clear that Sam just solely focused on delivering fast in order to keep his advantage

Yes, I'd assume most investors prefer this type of approach to a more cautious one. Meaning that companies like this are more likely to attract investors, and more likely to beat the ones which care about AGI safety to actually building an AGI (whatever that is supposed to mean).

sainez
0 replies
14h37m

Equating Ilya to the average B.S. in Computer Science is like equating Sam to a used car salesman. Neither are true and both were instrumental in the success of OpenAI.

PheonixPharts
3 replies
14h57m

Over the years "tech" has been less and less about making things and more and more about making your investors money. Technical talent used to be extremely important in this industry, but that's slowly been worn away over the years.

I still like working in this industry because you can still find interesting problems to solve if you hunt for them, but they're getting harder to find and it increasingly seems like making good technical decisions is penalized.

It's sad to see even on HN how many comments are so dismissive of technical skills and ambitions, though I guess we've had more than a generation of engineers join the field because it was the easiest way to make the most money.

For a brief moment on Friday I thought "maybe I'm too cynical! Maybe there still are places where tech actually matters."

Not surprisingly, it looks like that hope will be inverted almost immediately. I also suspect the takeaway from this will be the final nail in the coffin for any future debates between engineering and people who are only interested in next quarter's revenue numbers.

bigtunacan
2 replies
14h40m

What else would you expect? OpenAI spun up a "separate" for-profit company and recruited a bunch of top industry engineers and scientists with 500k+ packages where the vast majority is tied to equity grants.

Most of the employees' values do not align with a non-profit, even if those of executives like Ilya do.

By firing Altman and trying to remind the world they are a non-profit that answers to no one, they are also telling their employees to fuck off on all that equity they signed on for.

PheonixPharts
1 replies
14h1m

I mean you're describing exactly the empty technical world I've been experiencing.

So the future of AI is in the hands of leadership that's slick talking but really only there to make a quick buck, built by teams of engineers whose only motivation is getting highly paid.

I don't begrudge those that are only in it for the money, but that's not the view of tech that got me excited and into this industry many years ago.

The point of my comment is that for a moment I thought maybe I was wrong about my view of tech today, but it's very clear that I'm not. It sounds like the reality is going to end up that the handful of truly technical people in the company will be pushed out, and the vast majority of people even on HN will cheer this.

shrimpx
0 replies
12h58m

If Sam Altman wins and the likes of Ilya lose, then we won't actually have AI, since Sam Altman doesn't know anything about building AI. We'll have more sharky products with grandiose visions that end up making money by using surveillance.

But I’m hopeful that AI will at least win by open source. Like Linux did. “Linux” wasn’t a 100 billion startup with a glitzy CEO, but it ate the world anyway.

naveen99
2 replies
14h59m

But he wants to jail the model he builds. As Sam says, he should think more about what he actually wants to do, and then do it. Not go in two opposite directions at the same time.

anonymouskimmer
1 replies
8h27m

Not everyone is a goal-oriented monomaniac.

naveen99
0 replies
1h21m

Fair I guess. Gravity, mean reversion, theta camp… have their place.

silenced_trope
1 replies
14h57m

I'm not so sure.

Ilya was apparently instrumental in this, and he didn't have to pursue this?

It didn't have to be a "you're with me or you're with them!"

shrimpx
0 replies
10h12m

You're right, the handling of it was brutal.

yallneedtoget
0 replies
15h10m

The GPT-4 lead resigned with Sam.

fairity
4 replies
13h21m

People here are so gullible.

Do you really think the board is so incompetent as to not have thought through Microsoft's likely reaction?

And, do you really think they would have done this if they thought there was a likelihood of being rebuffed and forced to resign?

The answer is, no. They are not that incompetent.

I wish Sam & co the best, and I'm sure they'll move on to do amazing things. But, the recent PR just seems like spin from Sam & co, and the press has every reason to feed into the drama. The reality is that there are very smart people on both sides of this power struggle, and there's a very low probability of such a huge misstep on the board's part - not impossible but highly unlikely imo.

The only exception I can see is if Ilya & co foresaw this but decided to act anyway because they felt so strongly that it was the moral thing to do. If that's the case, I'm sure Elon's mouth is watering, ready to recruit him to xAI.

neutrinoq
0 replies
13h13m

Spoiler: your comment will not age well.

mirzap
0 replies
13h17m

Yes, they are that incompetent, except for one. D'Angelo has a history of such moves: he fired his cofounder when Quora was still doing well and growing, and Quora has been struggling ever since.

g42gregory
0 replies
12h51m

Do you really think the board is so incompetent as to not have thought through Microsoft's likely reaction?

As for Ilya Sutskever, he is a very smart guy, but he may be blinded by something here.

As for the rest of that board, yes, I really do think they are that incompetent.

ctvo
0 replies
13h15m

The other likely scenario: investors, not necessarily Altman himself, are using their media connections to push a narrative to get OpenAI to take Sam back. With this being the hottest story, any credible gossip from a known name would be enough for many of these media organizations to run with it.

“Staffers were ready to resign” really? Who? How many? The deadline passed hours ago, why haven’t we seen it?

localhost
2 replies
11h45m

If you look at the quote tweets on Sam's latest tweet[1] that contain just a single heart and no words, those are all OpenAI employees voting for Sam's return. It's quite a sight to see.

[1] https://twitter.com/sama/status/1726099792600903681

jessenaser
1 replies
10h1m

Also Mira replied with a heart.

https://x.com/miramurati/status/1726126391626985793

Also also she left her bio as “CTO @OpenAI”.

FartyMcFarter
0 replies
5h46m

So she hadn't even agreed with the plan of becoming interim CEO? Either that or she changed her mind...

1024core
2 replies
14h9m

If Sam starts a competing company and can pull a large chunk of the researchers and engineers over (if I were an OpenAI employee, I would be interested in following a proven success story like Sam), then Microsoft's $10B investment would be down the drain. Obviously Microsoft wouldn't want that, and I'm sure Satya has got his hands around the nuts of the Board members and is squeezing them hard (well, figuratively speaking, since there's Toner).

thaanpaa
0 replies
14h4m

The engineers are said to be relieved that Altman is gone, so it doesn't sound like they'd be following a "success story" (whatever that is supposed to mean).

mizzao
0 replies
13h23m

Microsoft hasn't actually sent them all that money yet, and a lot of it seems to be in Azure credits that they can just pull. Then what are they going to do?

yeck
1 replies
16h15m

Was the article changed for this? Used to be this one from the verge: https://www.theverge.com/2023/11/18/23967199/breaking-openai... but was since changed to https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...

LeafItAlone
0 replies
16h12m

There were two threads that were probably merged.

throw03172019
1 replies
12h30m

Even if SamA manages to come back, will future investors be spooked, such that he won't be able to raise large rounds of financing?

s1artibartfast
0 replies
12h29m

There are no future investors. If OpenAI can't bootstrap from 11 billion plus profits, they won't be able to at all.

suggala
1 replies
51m

The current deal with MSFT, cut by Sam, is structured in such a way that Microsoft has huge leverage: exclusive access, exclusive profit. And after the profit cap is reached, OpenAI will still need to be sold to MSFT to survive. This is about the worst possible deal for OpenAI, whose goal is to do things the open source way; it can't do so because of this deal. If it were not for the MSFT deal, OpenAI could have open sourced its work and might have resorted to crowdsourcing, which might have helped humanity. Also, quickly reaching the profit goals is only good for MSFT: there is no need to actually send money to the OpenAI team, just cover operating expenses plus 25% of the profit and take the other 75%. OpenAI has built great software through years of work and is simply being milked by MSFT now that the time for taking profit has come.

And Sam allowed all this under his nose, making sure OpenAI is ripe for a MSFT takeover. This is a back-channel deal for a takeover. What about the early donors, who donated with the humanitarian goal in mind and whose funding made it all possible?

I am not sure Sam made any contribution to the OpenAI software, but he gives the world the impression that he owns it and built the entire thing by himself, without giving any attribution to his fellow founders.

suggala
0 replies
42m

By the way, this product doesn't need salespeople. It sells itself. What is the point of a sales guy leading it?

This company should be led by the research team, not the product team.

nojvek
1 replies
3h9m

Instead of ousting Sam A and Greg B, if Ilya really had deep concerns, he should have quit and built his own AGI-dedicated research company. His prestige would surely have gotten him funding.

Like how Hinton left Google so he could speak freely.

IMO inventing AGI is more powerful than nuclear energy. It would be very stupid of humanity to release it out in the wild.

LLMs are a great tool and nowhere near AGI.

I’m of the belief that alignment of AGI is impossible. It’s like asking us to align with lions. Once we compete for the same resources, we lose.

adverbly
0 replies
2h51m

If Ilya really had deep concerns, he should have quit and built his own AGI dedicated research company.

...

You should look up some history here.

Exactly what you say has already happened and OpenAI is the dedicated research company you are referring to.

He originally left Google DeepMind, I believe.

I’m of the belief that alignment of AGI is impossible.

I don't think most people in this space are operating based on beliefs. If there is even a 10% chance that alignment is possible, it is probably still worth pursuing.

mongol
1 replies
12h44m

Has anyone asked ChatGPT about the situation? It seems like an obvious thing to do. When parents are arguing and about to divorce, they must listen to their children.

seanthemon
0 replies
12h39m

It only has knowledge up to a certain point, I believe it's now April of this year

Best to ask it next year when the trauma has set in

himaraya
1 replies
13h4m

Bloomberg now reporting the board "balking" at resigning. I suspect they never intended to resign. They fully expected this firestorm.

https://www.bloomberg.com/news/articles/2023-11-18/openai-bo...

m3kw9
0 replies
12h33m

They're not balking; it's just red tape, and they will resign.

gtirloni
1 replies
13h43m

This is one of those things that I'll ignore. Just tell me the outcome when it's over. The older I get, the more I can't stomach this stuff. It applies to pretty much all news recently.

zeroonetwothree
0 replies
13h32m

You might be on the wrong website then

csharpminor
1 replies
14h27m

This firing very much has the feeling of the board fearfully pulling the circuit breaker on OpenAI's for-profit trajectory.

On the one hand, I actually respect their principles. OpenAI has become the company its nonprofit was formed to prevent. Proprietary systems. Strong incentive to prioritize first-to-market over safety. Dependency on an entrenched tech co with monopolistic roots.

On the other hand, this really feels like it was done hastily and out of fear. Perhaps the board realized that they were about to be sidelined and felt like this was their last chance to act. I have to imagine that they knew there would be major backlash to their actions.

In the end, I think Sam creating his own company would be better for competition. It's more consistent with OpenAI's original charter to exist as the Mozilla (though hopefully more effective) of AI than as the Stripe of AI.

manyoso
0 replies
14h18m

Sam creating his own company for what purpose? Meta, Google, Elon's company, Anthropic, OpenAI... why would anyone believe that Sam and crew could stand up a new company tomorrow and have any kind of chance to compete with the above in the next six months? Even if Microsoft threw a ton of money at such a startup, good luck finding GPU time. Good luck sourcing the data. Good luck doing the RLHF. Could Sam and Greg do this? Sure! But what would that give them above and beyond the racers that are currently in pole position?

Zenul_Abidin
1 replies
15h16m

This seems incredibly messed up. Why fire him if you wanted to rehire him in a few hours?

MBCook
0 replies
15h7m

They appear to not have realized how this would be perceived and are now desperately in some kind of damage control mode.

“Ha ha just kidding don’t ruin our stock value!”

zaps
0 replies
16h25m

Make up your minds for Pete’s sake

yalogin
0 replies
6h38m

Ah, this is the straw that breaks the ethics/responsible-AI camel's back. When Sam comes back he will turn it into a regular company and maximize profits.

wly_cdgr
0 replies
14h21m

There's no way anyone should continue to use OpenAI products if Sam Altman is allowed to return. Because in that scenario there won't even be a counterweight to his point of view in the company anymore, and that counterweight was the only thing keeping his extreme untrustworthiness and sliminess from being a deal-breaker.

wayeq
0 replies
15h36m

I'd rather not have this group of clowns holding the reins of our technological destiny.

uiberto
0 replies
15h30m

We are a board for a company whose stated mission is to develop AI tools to better serve the needs of humanity. The CEO has secretly piloted the company towards alternate goals of profit. We fired him on Friday but think it was a mistake. Who knows. Anyways, how can we rehire him?

---

Rehiring a CEO whom you've recently fired is a delicate process, both from a legal and business standpoint. Here's a general approach you might consider:

1. *Board Meeting*: Convene a board meeting to discuss the decision to rehire the CEO. This should involve all key stakeholders to ensure transparency and agreement.

2. *Legal Considerations*: Consult with legal counsel to understand any legal implications or contractual issues arising from the initial termination and potential rehiring.

3. *Negotiation and Terms*: If the board agrees to proceed, you'll need to negotiate new terms with the CEO. This might include discussions about the future direction of the company, salary, and any conditions related to the rehire.

4. *Addressing the Underlying Issues*: It's important to address the reasons that led to the initial firing. This might involve setting clearer goals aligned with the company's mission, establishing better oversight, or implementing checks and balances.

5. *Communication*: Once an agreement is reached, communicate the decision to rehire the CEO to your employees, stakeholders, and possibly the public, depending on the nature of your company. This communication should be clear about the reasons for the reversal and the future direction of the company.

6. *Monitoring and Evaluation*: Finally, set up a system for regularly evaluating the CEO's performance against the company's mission and goals to prevent a recurrence of the previous issues.

Remember, the rehiring process should align with your company's bylaws and any relevant legal requirements. Transparency and clear communication throughout the process are crucial to maintain trust among your stakeholders.

transformi
0 replies
13h19m

Why is politics being upvoted? In other cases, like Ukraine/Russia or Israel/Hamas, all the posts were flagged... why is this case different?

tovej
0 replies
4h27m

What I'm surprised about in this whole discussion is how little people are actually looking at Altman's Worldcoin venture.

It is incredibly shady, and has the same kind of sci-fi marketing bullshit vibe going on as Elon Musk's hyperloop and Mars missions, and, come to think of it, OpenAI's marketing.

Altman+OpenAI are a hype machine that's riding a bubble to get rich enough, through any scheme, to be able to turn around and milk users for data, just like Facebook and Google.

The only difference is, he gets to twist the focus towards this sci-fi AGI idea, which works like the distraction element of a magic trick. The media loves to talk about AGI and regulating Skynet, because it's a story that gets the imagination going; certainly much more interesting than stories about paying people 2 dollars an hour to sift the data used to train a language model for offensive and traumatizing content, to feed the autocomplete parrot.

I think it's good that he got kicked out of the CEO position, but that does not suddenly make OpenAI a good actor. Some other jerk will take his spot.

siruncledrew
0 replies
11h37m

The board completed the "fuck around" stage, now they're in the "find out" stage.

seshagiric
0 replies
10h8m

So one day you get the CEO of Microsoft to attend your dev conference, the next day you get fired, and then another day there are negotiations to get you back. What is this? Russian roulette, or a game of thrones?

sensanaty
0 replies
1h30m

Very naive of me, but I'm hoping this all means the death knell of OpenAI, personally.

recursive4
0 replies
15h5m

Hindsight bias is strong with this one.

qwertox
0 replies
16h22m

I wonder if it was due to a phone call from Satya Nadella.

pessimist
0 replies
16h8m

So if Sam Altman is back by tomorrow (Sunday) after being crucified on Friday, I think that means the end of the world is near.

nsoonhui
0 replies
16h0m

It seems incredible that the OpenAI board would hastily bring back someone whom they hastily fired just 24 hours prior, allegedly for serious ethical reasons, something tantamount to lying to the board.

What am I missing?

nohat
0 replies
15h13m

The link just changed. Why? The original was the Verge article, which was frankly terrible. It really read like the author had a specific goal.

mlindner
0 replies
14h25m

I personally really hope Altman doesn't return if he's really the one who pushed OpenAI away from its non-profit roots.

mirzap
0 replies
13h11m

I called this yesterday. I said the board would be forced out under the pressure, and Sam would be back. It was obvious. Even if it is a not-for-profit company, it bows under investor pressure.

milleramp
0 replies
11h29m

Had to double-check this wasn't an Onion article.

mfiguiere
0 replies
10h6m

Latest report from TheInformation:

OpenAI's chief strategy officer, Jason Kwon, told employees in a memo just now he was "optimistic" OpenAI could bring back Sam Altman, Greg Brockman and other key employees. There will likely be another update mid-morning tomorrow, Kwon said.

https://twitter.com/erinkwoo/status/1726125143267926499

mark_l_watson
0 replies
15h46m

For what it’s worth (nothing!) I don’t believe that a rehire offer is really happening.

On the tech side, I think work will split into two tracks: 1) building great applications with small and medium fine-tuned models like Mistral, etc.; within a year or two great models will run on the edge because of continuous technical improvements; 2) some players will go for the long game of real AGI, and maybe they will get there in much less than a decade.

On the business side, I have no idea how the current situation is going to shake out.

m3kw9
0 replies
14h25m

Either way, even before the firing, Ilya and Altman were not gonna be working in the same office much longer. Altman seems to be the irreplaceable one because of his status/connections/leadership. Which is also good, as wherever Ilya goes, it will only heat up the competition for OpenAI. Competition is good for tech.

m3kw9
0 replies
14h36m

Altman comes back, Ilya goes to Elon; those two seem aligned. I'm not sure how Ilya and Altman could work together after this.

lobocinza
0 replies
3h45m

Failed coup.

latexr
0 replies
13h54m

reach a truce where the board would resign and he and Brockman would return.

Calling that a truce makes as much sense as Monty Python’s Black Knight calling the fight a draw.

https://www.youtube.com/watch?v=ZmInkxbvlCs

jprd
0 replies
14h45m

We're all in the upside-down now.

ivanche
0 replies
6h59m

Televisa presents

fredgrott
0 replies
16h20m

Keep in mind the impending stock sale has not completed, as it's about 20 to 30 days out; hence the scrambling of investors to try and get Sam back.

Question: did the board find out about the other AI firm Sam had in the works? The clue might be why the chair of the board was demoted but not let go.

Somebody over-played their poker hand...

flemhans
0 replies
12h24m

I hope he'll make a new one!

firebirdn99
0 replies
14h6m

This just shows there is no way you can have a non-profit board with a profit-cap structure. The capitalists will always push through and "exert pressure" one way or another if they want their way. The non-profit setup was a facade, and the fallout has clearly shown it. The board had every right to veto or replace Altman if they felt he wasn't prioritizing their mission.

eviks
0 replies
11h25m

If only the board et al. could have acted professionally and done some planning and communication before such a drastic decision...

didip
0 replies
15h7m

Sam needs to ink a deal with Netflix to tell the story of this saga.

dang
0 replies
15h59m

chevman
0 replies
16h22m

I wonder if anyone has recently reviewed the history of AI and IBM?

Remember the hype around Deep Blue and later Watson?

I’m sure no lessons to be learned there :)

browningstreet
0 replies
15h48m

If Sam’s back we should all get free OpenAI usage credits for this mess.

andy_ppp
0 replies
9h52m

Maybe all this will teach people that having a weird corporate structure can make everything worse, not better.

andix
0 replies
15h3m

My first thoughts yesterday were: Some really bad scandal happened at OpenAI (massive data leak, massive fraud, or huge embezzlement), or the board is really incompetent and doesn't know what they're doing. But an organization as big as OpenAI, with the backing of Microsoft and other big players would never make such a big decision without a really good reason.

Seems like Hanlon's razor won once again.

alex_young
0 replies
15h57m

Microsoft is in a difficult position here.

With Altman gone and the direction of the board being to limit commercial growth, their investment is at risk, and their competitive edge will evaporate, especially if businesses switch to other LLMs as they surely will over time. Altman will also become a competitor.

If instead they are able to pull off a complete transformation of the nonprofit and oust Ilya, they will also lose a core technical leader and risk their investment while being left with the odd dynamic of a parent nonprofit.

Perhaps they could orchestrate some kind of purchase of the remaining portion of the subsidiary. Give Altman the CEO title and move forward while allowing the nonprofit to continue their operations with new funding. This doesn’t solve the Ilya problem but it would be cleaner to spin it off.

ackbar03
0 replies
16h24m

Ilya's going to have to leave right?

Threeve303
0 replies
9h14m

Has anyone asked ChatGPT for a solution?

Solvency
0 replies
15h10m

OK, so why don't Sam Altman and his buddies team up with John Carmack, who is fully invested in AGI now and has a proven legacy for getting shit done?

ShadowBanThis01
0 replies
12h29m

He should demand that they remove "open" from the name and call it SamAIam.

Racing0461
0 replies
16h15m

Steve Jobs speedrun, any % , glitchless.

Obscurity4340
0 replies
12h26m

This had to be fake

MichaelMoser123
0 replies
5h27m

Is it possible that this drama is being staged on purpose, in order to create some suspense ahead of ChatGPT-NextNum or something like that?

GreedClarifies
0 replies
10h55m

That’s why you don’t hire people you wouldn’t trust to make a ham sandwich onto the board of a 100B+ company.

BasilPH
0 replies
13h8m

To put all of this into perspective, it would be good to know what "... not consistently candid in his communications with the board" means.

0xDEAFBEAD
0 replies
15h27m

One AI-focused venture capitalist noted that following the departure of Hoffman, OpenAI’s non-profit board lacked much traditional governance. “These are not the business or operating leaders you would want governing the most important private company in the world,” they said.

From https://www.forbes.com/sites/alexkonrad/2023/11/17/these-are... (linked in OP)

I'd be interested in a discussion of the merits of "traditional governance" here. Traditional private companies are focused on making a profit, even if that has negative side effects like lung cancer or global warming. If OpenAI is supposed to shepherd AGI for all humanity, what's the strongest case for including "traditional governance" type people on the board? Can we be explicit about the benefits they bring to the table, if your objective is humanitarian?

Personally I would be concerned that people who serve on for-profit boards would have the wrong instinct, of prioritizing profit over collective benefit...