Details emerge of surprise board coup that ousted CEO Sam Altman at OpenAI

wolverine876
174 replies
23h23m

Angel investor Ron Conway wrote, "What happened at OpenAI today is a Board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs. It is shocking; it is irresponsible; and it does not do right by Sam & Greg or all the builders in OpenAI."

With all sympathy and empathy for Sam and Greg, whose dreams took a blow, I want to say something about investors [edit: not Ron Conway in particular, whom I don't know; see the comment below about Conway]: The board's job is not to do right by 'Sam & Greg', but to do right by OpenAI. When management lays off 10,000 employees, the investors congratulate management. And if anyone objects to the impact on the employees, they justify it with the magic words that somehow cancel all morality and humanity - 'it's business' - and call you an unserious bleeding heart. But when the investors' buddy CEO is fired ...

I think that's wrong and that they should also take into account the impact on employees. But CEOs are commanders on the business battlefield; they have great power over the company's outcomes, which is what drives the layoffs and firings. Lower-ranking employees are much closer to civilians, and often can't afford to lose the job.

threeseed
133 replies
20h59m

The board's job is not to do right

There is why you do something. And there is how you do something.

OpenAI is well within its rights to change strategy, even one as bold as going from a profit-seeking behemoth to a smaller research-focused team. But how they went about this is appalling, unprofessional and a blight on corporate governance.

They have blind-sided partners (e.g. Satya is furious), split the company into two camps and have let Sam and Greg go angry and seeking retribution. Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.

For me there is no justification for how this all happened.

HAL3000
54 replies
19h40m

As someone who has orchestrated two coups in different organizations, where the leadership did not align with the organization's interests and missions, I can assure you that the final stage of such a coup is not something that can be executed after just an hour of preparation or thought. It requires months of planning. The trigger is only pulled when there is sufficient evidence or justification for such action. Building support for a coup takes time and must be justified by a pattern of behavior from your opponent, not just a single action. Extensive backchanneling and one-on-one discussions are necessary to gauge where others stand, share your perspective, demonstrate how the person in question is acting against the organization's interests, and seek their support. Initially, this support is not for the coup, but rather to ensure alignment of views. Then, when something significant happens, everything is already in place. You've been waiting for that one decisive action to pull the trigger, which is why everything then unfolds so quickly.

emptysongglass
33 replies
17h18m

How are you still hireable? If I knew you orchestrated two coups at previous companies and I was responsible for hiring, you would be radioactive to me. Especially knowing that all that effort went into putting together a successful coup over other work.

Coups, in general, are the domain of the petty. One need only look at Ilya and D'Angelo to see this in action. D'Angelo neutered Quora by pushing out its co-founder, Charlie Cheever. If you're not happy with the way a company is doing business, your best action is to walk away.

AlchemistCamp
11 replies
17h0m

Max Levchin was an organizer of two coups while at PayPal. Both times, he believed it was necessary for the success of the company. Whether that was correct or not, they eventually succeeded and I don’t think the coups really hurt his later career.

matisseverduyn
5 replies
14h17m

This example seems to be survivorship bias. Personally, if someone approached me to suggest backstabbing someone else, I wouldn't trust that they wouldn't eventually backstab me as well. @bear141 said "People should oppose openly or leave." [1] and I agree completely. That said, don't take vacations! (when Elon Musk was ousted from PayPal in the parent example, etc.)

[1] https://news.ycombinator.com/item?id=38326443

jacquesm
4 replies
11h43m

I had this exact thing happen a few weeks ago in a company that I have invested in. That didn't quite pan out in the way the would-be coup party likely intended. To put it mildly.

matisseverduyn
1 replies
10h14m

You were approached to participate in a coup and therefore had it squashed? Or a CEO was almost removed during their vacation?

jacquesm
0 replies
6h21m

The first. And it was a bit tricky because it wasn't initially evident that it was a coup attempt but they gave themselves away. Highly annoying.

chris_wot
1 replies
10h30m

Dear god that sounds interesting and yet terrifying.

jacquesm
0 replies
6h19m

That's pretty accurate. It could have easily killed the company too.

adastra22
4 replies
14h30m

PayPal had an exit, but it absolutely did not succeed in the financial revolution it was attempting. People forget now that OG PayPal was attempting the digital financial revolution that later would be bitcoin’s raison d'être.

SoftTalker
2 replies
12h41m

Yep. PayPal was originally a lot like venmo (conceptually -- of course we didn't have phone apps then). It was a way for people to send each other money online.

Ar-Curunir
1 replies
12h4m

Good thing for PayPal that it now owns Venmo :P

adastra22
0 replies
10h42m

PayPal went down the embrace, extend, extinguish route. If it were possible for them to do the same with bitcoin, they would have.

AlchemistCamp
0 replies
5h14m

I think getting to an IPO in any form during the wreckage of the Dotcom crash counts as an impressive success, even if their vision wasn't fully realized.

throwaway35i2
5 replies
16h52m

Are you the sort of person that hires someone that can successfully organize a coup against corporate leadership?

It feels like there is an impedance mismatch here.

cdogl
2 replies
15h9m

If I'm confident in my competence and the candidate has a trustworthy and compelling narrative about how they undermined incompetent leadership to achieve a higher goal - yep, for sure.

brandall10
0 replies
10h42m

But being in a situation where this was called for twice?

That strikes me as someone who either lacks the ability to do proper due diligence or is a straight-up sociopath looking for weak-willed people they can strong-arm out. Part of the latter is having the ability to create a compelling narrative for future marks, to put it bluntly.

Aloha
0 replies
14h44m

Also, one person's incompetent is another's performer.

Like, being crosswise in organizational politics does not imply less devotion to organizational goals, but rather often simply a different interpretation of those goals.

adastra22
1 replies
14h22m

I’ve hired people that were involved in palace coups at unicorn startups, twice. Justified or not, those coups set the company on a downward spiral it never recovered from.

I’m not sure I can identify exactly who is liable to start a coup, but I know for sure that I would never, ever hire someone who I felt confident might go down that route.

Startups die from suicide, not homicide.

g42gregory
0 replies
10h10m

"Startups die from suicide, not homicide." - That's a great way to put it. 100% true.

bear141
4 replies
17h11m

I agree completely. People should oppose openly or leave.

oska
1 replies
15h38m

As you are new here, I would urge you to read the site's Guidelines [1], which the tone & wording of your comment indicate you have not read.

[1] https://news.ycombinator.com/newsguidelines.html

bear141
0 replies
15h30m

Ok. Thank you.

andybak
1 replies
15h53m

Aren't you taking sides in a fight without knowing which side was "right"? Or do you believe that loyalty trumps all other values?

At this point I'm in danger of triggering Godwin's Law so I had better stop.

bear141
0 replies
15h22m

My comment was phrased inappropriately.

rand846633
1 replies
12h51m

If you're not happy with the way a company is doing business, your best action is to walk away.

This makes no sense at all!

gmassman
0 replies
10h46m

It makes the most sense if you value your own wellbeing over whatever “mission” a company is supposedly chasing.

netaustin
1 replies
1h5m

I feel like in the parent comment "coup" is sort of shorthand for the painful but necessary work of building consensus that it is time for new leadership. Necessary is in the eye of the beholder. These certainly can be petty when they are bald-faced power grabs, but they can equally be noble if the leader is a despot or a criminal. I would also not call Sam Altman's ouster a coup even if the board were manipulated into ousting him: he was removed by exactly the people who are allowed to remove him. Coups are necessarily extrajudicial.

startupsfail
0 replies
15m

It also looks like Sam Altman was busy creating another AI company, alongside his creepy WorldCoin venture, wasteful crypto/bitcoin support, and the no less creepy stories of abuse coming from his younger sister.

Work or transfer of intellectual property or good name into another venture, while not disclosing it to OpenAI, is a clear breach of contract.

He is clearly instrumental in attracting investors, talent and partners, and in commercializing technology developed by Google Brain and pushed further by Hinton's students and the team at OpenAI. But he was just present in the room where the veil of ignorance was pushed back. He is replaceable, and another leader, less creepy and with fewer conflicts of interest, may do a better job.

It is no surprise that the OpenAI board attempted to eject him. I hope that this attempt will be a success.

ekianjo
1 replies
15h32m

The regular HN commenter says "CEOs are bad, useless and get paid too much", but now when someone suggests getting rid of one of them, suddenly it's the end of the world

jonny_eh
0 replies
8h46m

1. There are different people here with different opinions.

2. CEOs at fast-growing startups are very different from those at large tech companies.

Aloha
1 replies
14h45m

Why is there a presumption that it must take precedence over other work?

I've run or defended against 'grassroots organizational transformations' (aka, a coup) at several non-profit organizations, and all of us continued to do our daily required tasks while the politicking was going on.

emptysongglass
0 replies
7h36m

Because I take any claim that someone can orchestrate a professional coup and still do their other work with the same zeal and focus as before fomenting rebellion as seriously as people who tell me they can multitask effectively.

It's just not possible. We're limited in how much energy we can bring to daily work; that's a fact. If your brain is occupied both with dreams of king-making and your regular duties at the job, your mental bandwidth is compromised.

digi59404
0 replies
3h37m

Let me pose a theoretical. Let's say you're a VP or Senior Director. One of your sibling directors or VPs is over a department and field in which you have intimate domain knowledge, meaning you have a successful track record in that field from both the management side and the IC side.

Now, that sibling director allows a culture of sexual harassment, law-breaking, and toxic throat-slitting behavior. HR and the organization's leadership are aware of this. However the company is profitable, happy outside his department, and stable. They don't want to rock the boat.

Is it still “the domain of the petty” to have a plan to replace them? To have formed relationships to work around them, and keep them in check? To have enacted policies outside their department to ensure the damage doesn't spread?

And most importantly, to enact said replacement plan when they fuck up just enough that leadership gives them the side-eye, and you push the issue with your documentation of their various grievances?

Because that… is a coup. That is a coup that is, at least in my mind, moral and just, leading to the betterment of the company.

“Your best action is to walk away” - Good leadership doesn't just walk away and let the company and employees fail. Not when there's still the ability to effect positive change and fix the problems. Captains always evacuate all passengers before they leave the ship. Else they go down with it.

chris_wot
0 replies
10h32m

Are you responsible for hiring though?

jacquesm
14 replies
19h16m

All of this is spot on. The key to it all is 'if you strike at the king, you best not miss'.

chaostheory
13 replies
18h36m

Going off on a big tangent, but Jiang Zemin made several failed assassination attempts on Xi Jinping, yet he was still able to die of old age.

ls612
9 replies
18h24m

By assassination I assume you mean metaphorical? As in to derail his rise before becoming party leader?

chaostheory
8 replies
12h46m

No, literal attempts.

One attempt involved a battleship “accidentally” firing onto another battleship where both Hu Jintao and Xi Jinping were visiting.

https://jamestown.org/program/president-xi-suspects-politica...

Biased source, but she’s able to get a lot of unreported news from the mainland.

https://www.jenniferzengblog.com/home/2021/9/20/deleted-repo...

I will try to find more sources but Google is just shit these days. See my other comment for more.

A big problem is that mainland China is like the hermit kingdom. It's a black hole for any news the CCP doesn't want to get out.

_tik_
6 replies
10h56m

These are Falun Gong sources. I will not trust Falun Gong's news on China. They are known to create conspiracy stories.

g42gregory
1 replies
10h8m

What is Falun Gong exactly? I never understood what they are.

jacquesm
0 replies
4h14m

https://en.wikipedia.org/wiki/Falun_Gong

No guarantees about NPOV on that page.

See also:

https://en.wikipedia.org/wiki/Talk:Falun_Gong

If you want to see what makes Wikipedia tick, that's a great place to start.

fennecbutt
1 replies
5h48m

I mean I would too if my ethnicity was so repressed, along with all the other non-Han Chinese.

woooooo
0 replies
2h44m

Falun Gong is a religion, not an ethnicity, and they are of the cultish variety.

It's like believing the Scientology center. Not trustworthy; they have an angle.

chris_wot
0 replies
10h27m

Agreed. Whilst I don’t trust China’s CCP, I sure as heck don’t trust anything from Falun Gong. Those guys are running an asymmetric battle against the Chinese State and frankly they would be capable of saying anything if it helped their cause.

chaostheory
0 replies
45m

1. The sources aren't limited to Falun Gong

2. It makes sense given Xi’s current paranoia and constant purges

jacquesm
0 replies
12h12m

One attempt involved a battleship “accidentally” firing onto another battleship where both Hu Jintao and Xi Jinping were visiting.

Hm. That really does qualify as an assassination attempt if it wasn't an actual accident. Enough such things happen by accident that it has a name.

https://en.wikipedia.org/wiki/Friendly_fire

user_named
1 replies
15h23m

Never heard about this before. Sources?

chaostheory
0 replies
12h47m

Google is just really terrible these days.

http://www.indiandefencereview.com/spotlights/xi-jinpings-fi...

http://www.settimananews.it/italia-europa-mondo/the-impossib...

I will try to find better sources. There are more not-so-great articles in my other comment.

piuantiderp
0 replies
14h12m

You can safely assume he still had sufficient power to be well protected.

silexia
0 replies
15h22m

I would never work with you. This is why investors have such a bad reputation. If I had not retained 100% ownership and control of my business, I am sure someone like you would have tossed me out by now.

Focus on results, not political games.

ikekkdcjkfke
0 replies
11h18m

I feel like this is something that could be played out in a documentary about chimpanzees

gota
0 replies
18h25m

I am extremely interested in hearing about these coups and your experience in them, if you'd like and are able to share

claytonjy
0 replies
18h41m

Even in the HBO show Succession, these things take a season, not an episode

adastra22
0 replies
14h37m

Username… checks out?

dragonwriter
25 replies
20h8m

split the company into two camps

The split existed long prior to the board action, and extended up into the board itself. If anything, the board action is a turning point toward decisively ending the split and achieving unity of purpose.

galangalalgol
24 replies
19h53m

Can someone explain the sides? Ilya seems to think transformers could make AGI and they need to be careful? Sam said what? "We need to make better LLMs to make more money."? My general thought is that whatever architecture gets you to AGI, you don't prevent it from killing everyone by chaining it better, you prevent that by training it better, and then treating it like someone with intrinsic value. As opposed to locking it in a room with 4chan.

DebtDeflation
8 replies
18h52m

I don't think the issue was a technical difference of opinion regarding whether transformers alone were sufficient or other architectures were required. It seems the split was over the speed of commercialization and Sam's recent decision to launch custom GPTs and a ChatGPT Store. IMO, the board miscalculated. OpenAI won't be able to pursue their "betterment of humanity" mission without funding, and they seemingly just pissed off their biggest funding source with a move that will also make other would-be investors very skittish now.

roguecoder
7 replies
18h34m

Making humanity’s current lives worse to fund some theoretical future good (enriching himself in the process) is some highly impressive rationalisation work.

Rzor
5 replies
18h6m

Try to tell that to the Effective Altruism crowd.

concordDance
2 replies
16h36m

Literally any investment is a diversion of resources from the present (harming the present) to the future. E.g. planting grains for next year rather than eating them now.

autaut
1 replies
15h23m

There is a difference between investing in a company that is developing AI software in a widely accessible way that improves everyone's lives and a company that pursues software to put entire sectors out of work for the profit of a dozen investors

concordDance
0 replies
6h39m

"Put out of work" is a good thing. If I make a new js library which means a project that used to take 10 devs now takes 5 I've put 5 devs out of work. But ive also made the world a more efficient place and those 5 devs can go do some other valuable thing.

kergonath
0 replies
16h11m

My thought exactly. Some people don’t have any problem with inflicting misery now for hypothetical future good.

0xDEAFBEAD
0 replies
16h1m

Here's the discussion on the EA forum if anyone is interested: https://forum.effectivealtruism.org/posts/HjgD3Q5uWD2iJZpEN/...

I think the EA movement has been broadly skeptical towards Sam for a while -- my understanding is that Anthropic was founded by EAs who used to work at OpenAI and decided they didn't trust Sam.

concordDance
0 replies
16h38m

Making humanity’s current lives worse to fund some theoretical future good

Note that this clause would describe any government funded research for example.

mikeryan
5 replies
19h28m

If I'm understanding it correctly, it's basically the non-profit, AI for humanity vs the commercialization of AI.

From what I've read, Ilya has been pushing to slow down (less of the move fast and break things start-up attitude).

It also seems that Sam had maybe seen the writing on the wall and was planning an exit already, perhaps those rumors of him working with Jony Ive weren't overblown?

https://www.theverge.com/2023/9/28/23893939/jony-ive-openai-...

elorant
3 replies
18h7m

The non-profit path is dead in the water after everyone realized the true business potential of GPT models.

galangalalgol
2 replies
16h11m

What is the business potential? It seems like no one can trust it for anything, so what do people actually use it for?

elorant
0 replies
4h40m

Anything that is language related. Extracting summaries, writing articles, combining multiple articles into one, drawing conclusions from really big prompts, translating, rewriting, fixing grammar errors etc. Half of the corporations in the world have such needs more or less.

CamperBob2
0 replies
16h0m

It could easily make better decisions than these board members, for example.

ffgjgf1
0 replies
18h42m

From what I've read, Ilya has been pushing to slow down

Wouldn’t a likely outcome in that case be that someone else overtakes them? Or are they so confident that they think it’s not a real threat?

YetAnotherNick
4 replies
19h21m

treating it like someone with intrinsic value

Do you think that if chickens had treated us better, with intrinsic value, we wouldn't kill them? For the AGI-superhuman-x-risk folks, that's the bigger argument.

galangalalgol
3 replies
19h19m

I think if I was raised by chickens that treated me kindly and fairly, then yes, I would not harm chickens.

jacquesm
2 replies
19h14m

They'll treat you kindly and fairly, right up to your meeting with the axe.

fennecbutt
1 replies
5h45m

That's literally what we already do to each other. You think the 1% care about poor people? Lmao, the rich lobby and manufacture race and other wars to distract from the class war; they're destroying our environment and numbing our brains with opiates like TikTok.

jacquesm
0 replies
4h17m

No disagreement here.

doubled112
2 replies
19h50m

locking it in a room with 4chan.

Didn’t Microsoft already try this experiment a few years back with an AI chatbot?

mindcrime
1 replies
19h48m

Didn’t Microsoft already try this experiment a few years back with an AI chatbot?

You may be thinking of Tay?

https://en.wikipedia.org/wiki/Tay_(chatbot)

doubled112
0 replies
19h46m

That’s the one.

ehnto
0 replies
15h3m

I don't think it has to be unfettered progress that Ilya is slowing down for. I could imagine there is a push to hook more commercial capabilities up to the output of the models, and it could be that Ilya doesn't think they are competent/safe enough for that.

I think danger from AGI often presumes the AI has become malicious, but the AI making mistakes while in control of say, industrial machinery, or weapons, is probably the more realistic present concern.

Early adoption of these models as controllers of real world outcomes is where I could see such a disagreement becoming suddenly urgent also.

bmitc
18 replies
19h53m

They have blind-sided partners (e.g. Satya is furious), split the company into two camps and have let Sam and Greg go angry and seeking retribution.

Given the language in the press release, wouldn't it be more accurate to say that Sam Altman, and not the board, blindsided everyone? It was apparently his actions and no one else's that led to the consequence handed out by the board.

Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.

From all current accounts, doesn't that seem like what Altman and his crew were already trying to do and was the reason for the dismissal in the first place?

GCA10
13 replies
18h21m

The only appropriate target for Microsoft's anger would be its own deal negotiators.

OpenAI's dual identity as a nonprofit/for-profit business was very well known. And the concentration of power in the nonprofit side was also very well known. From the media coverage of Microsoft's investments, it sounds as if MSFT prioritized getting lots of business for its Azure cloud service -- and didn't prioritize getting a board seat or even an observer's chair.

adastra22
11 replies
14h17m

Sure, but Microsoft could also walk away today and leave OpenAI high and dry. They hold ALL the power here.

dragonwriter
5 replies
12h9m

Microsoft terminating the agreement by which they supply compute to OpenAI and OpenAI licenses technology to them would be an existential risk to OpenAI (though other competing cloud providers might step in and fill the gap Microsoft created, under similar terms). But whether or not OpenAI ended up somewhere else immediately (the tech eventually would, even if OpenAI failed completely and was dissolved), Microsoft would go from the best-positioned enterprise AI cloud provider to very far behind overnight.

And while that might hurt OpenAI as an institution more than it hurts Microsoft as an institution, the effect on Microsoft's top decision-makers personally vs. OpenAI's top decision-makers seems likely to be the other way around.

adastra22
4 replies
11h59m

Not if they invested in Sam’s new startup, under agreeable profit-focused terms this time, and all the OpenAI talent (minus Ilya) followed.

dragonwriter
3 replies
11h52m

At best, that might enable them to eventually come back, once new products are built from scratch, but that takes non-zero time.

adastra22
2 replies
10h47m

Non-zero time, but not a lot either. The main hangup would be acquiring data for training, as their engineers would remember the parameters for GPT-4 and Microsoft would provide the GPUs. But Microsoft, with access to Bing and all its other services, ought to be able to help here too.

Amateurs on Hugging Face are able to match OpenAI in an impressively short time. The actual former OpenAI engineers with an unlimited budget ought to be able to do as well or better.

cteiosanu
1 replies
9h52m

Amateurs?

adastra22
0 replies
9h41m

Non-corporate groups.

fakedang
4 replies
12h26m

If OpenAI were to be in true crisis, I'm sure Amazon will step in to invest, for exclusive access to GPT-4 (in spite of their Anthropic investment). That would put Azure in a bad place. So not exactly "all" the power.

Not to mention, after that, MSFT might be left bagholding a bunch of unused compute.

adastra22
3 replies
10h43m

Sam and Greg have already said they’re starting an OpenAI competitor, and at least 3 senior engineers have jumped ship already. More are expected tonight. Microsoft would just back them as well, then take their time playing kingmaker in choosing the winner.

fakedang
2 replies
9h51m

That's true, but Sutskever and Co still have the head start: on the models, the training data, the GPT-4 licenses, etc. Their Achilles heel is the compute, which Microsoft will pull out. Khosla Ventures and Sequoia may sell their OpenAI stakes at a discount, but I'm sure either Google or Amazon will snap them up.

All Sam and Greg really have is the promise of building a successful competitor, with big backing from Microsoft and SoftBank, while OpenAI is the orphan child with the huge estate. Microsoft isn't exactly the kingmaker here.

peyton
1 replies
9h45m

It doesn’t sound like Sutskever is running anything. OpenAI reportedly put out a memo saying they’re trying to get Sam and Greg back: https://www.theinformation.com/articles/openai-optimistic-it...

fakedang
0 replies
7h43m

Sutskever built the models behind GPT-4, if I recall correctly (all credit to the team, but he's the focal point behind expanding on Google's transformers). I don't see Sam and Greg working with him under the same roof after this fiasco, since he voted them out (he could have been the tiebreaker vote).

hfjjbf
0 replies
18h10m

They loved to trot out the “mission” as a reason to trust a for-profit entity with the tech.

Well, this is proof the mission isn’t just MBA bullshit, clearly Ilya is actually committed to it.

This is like if Larry and Sergei never decided to progressively nerf “don’t be evil” as they kept accumulating wealth, they would have had to stage a coup as well. But they didn’t, they sacrificed the mission for the money.

Good for Ilya.

calf
2 replies
18h28m

I wonder if there's a specific term or saying for that, maybe "projection" or "self-victimization" but not quite: when one person frames other people as responsible for a bad thing, when it was they themselves doing the very thing in the first place. Maybe "hypocrisy"?

bmitc
0 replies
18h26m

Probably a little of all of that all bundled up together under the umbrella of cult of personality.

Clubber
0 replies
18h10m

Lack of accountability. Inability to self-reflect.

adastra22
0 replies
14h18m

The leaked memo today (which was probably reviewed by legal, unlike yesterday’s press release) says there was no malfeasance.

jasonwatkinspdx
5 replies
18h59m

OpenAI is well within its rights to change strategy, even one as bold as going from a profit-seeking behemoth to a smaller research-focused team. But how they went about this is appalling, unprofessional and a blight on corporate governance.

This wasn't a change of strategy, it was a restoration of it. OpenAI was structured with a 501c3 in oversight from the beginning exactly because they wanted to prioritize using AI for the good of humanity over profits.

ffgjgf1
3 replies
18h38m

Yet they need massive investment from Microsoft to accomplish that?

restoration

Wouldn’t that mean that over the long term they will just be outcompeted by the profit-seeking entities? It’s not like OpenAI is self-sustaining (or even can be, if they choose the non-profit way)

fsckboy
2 replies
18h13m

Yet they need massive investment from Microsoft to accomplish that?

massive spending is needed for any project as massive as "AI", so what are you even asking? A "feed the poor" project does not expect to make a profit, but, yes, it needs large cash infusions...

ffgjgf1
1 replies
8h36m

That as a non-profit they won’t be able to attract sufficient amounts of money?

badwolf
0 replies
16m

Or talent...

slenk
0 replies
17h9m

This isn't going to make me think in any way that OpenAI will return to its more open beginnings. If anything it shows me they don't know what they want.

mise_en_place
4 replies
20h47m

Keep in mind that the rest of the board members have ties to US intelligence. Something isn't right here.

deeviant
1 replies
20h22m

I'm pretty sure Joseph Gordon-Levitt's wife isn't a CIA plant.

SturgeonsLaw
0 replies
19h33m

She works for RAND Corporation

zxndomisaaz2
0 replies
20h10m

There had better be US intelligence crawling all over the AI space, otherwise we are all in very deep shit.

simonjgreen
0 replies
20h35m

Do you have citations for that? That’s interesting if true

6gvONxR4sf7o
4 replies
20h14m

They have … split the company into two camps

By all accounts, this split happened a while ago and led to this firing, not the other way around.

threeseed
3 replies
20h7m

The split happened at the management/board level.

And instead of resolving this and presenting a unified strategy to the company, they allowed this split to be replicated everywhere. Everyone who was committed to a pro-profit company has to ask if they are next to be treated like Sam.

It's incredibly destabilising and unnecessary.

fuzztester
0 replies
18h52m

The possibility of getting fired is an occupational hazard for anyone working in any company, unless something in your employment contract says otherwise. And even then, you can still be fired.

Biz 101.

I don't know why people even need to have this explained to them, except for ignorance of the basic facts of business life.

Jare
0 replies
19h52m

Everyone who was committed to a pro-profit company has to ask if they are next to be treated like Sam.

They probably joined because it was the most awesome place to pursue their skills in AI, but they _knew_ they were joining an organization with explicitly not a profit goal. If they hoped that profit chasing would eventually win, that's their problem and, frankly, having this wakeup call is a good thing for them so they can reevaluate their choices.

Apocryphon
0 replies
19h23m

Let the two sides now create separate organizations and pursue their respective pure undivided priority to the fullest. May the competition flow.

whatshisface
2 replies
20h26m

I thought the for-profit AI startup with no higher purpose was OpenAI itself.

dragonwriter
0 replies
20h5m

OpenAI is a nonprofit charity with a defined charitable purpose, and it has a for-profit subsidiary that is explicitly subordinated to that purpose. Investors in the subsidiary are advised in the operating agreement to treat investments as if they were more like donations, and told that the firm will prioritize the charitable function of the nonprofit (which retains full governance power over the subsidiary) over returning profits, which it may never do.

cornholio
0 replies
20h11m

It is, only it has an exotic ownership structure. Sutskever has just used the features of that structure to install himself as the top dog. The next step is undoubtedly packing the board with his loyalists.

Whoever thinks you can tame a 100-billion-dollar company by putting a "non-profit" in charge of it clearly doesn't understand people.

vGPU
1 replies
18h42m

And the stupid thing is, they could have just used the allegations his sister made against him as the reason for the firing and ridden off into the sunset, scot-free.

jabroni_salad
0 replies
13h19m

I'm glad they didn't. She has enough troubles without a target like that on her back.

underlipton
1 replies
19h10m

I'm sure my coworkers at [retailer] were not happy to be even shorter staffed than usual when I was ambush fired, but no one who mattered cared, just as no one who matters cares when it happens to thousands of workers every single day in this country. Sorry to say, my schadenfreude levels are quite high. Maybe if the practice were TRULY verboten in our society... but I guess "professional" treatment is only for the suits and wunderkids.

pokepim
0 replies
16h59m

I have noticed you decided to use several German words in your reply. Trying not to be petty, but at least you should attempt to write them correctly: it’s either Wunderkind (the German word for child prodigy) or the English translation, wonder kid.

sheepscreek
1 replies
19h45m

In other words, it’s unheard of for a $90B company with weekly active users in excess of 100 million. A coup leaves a very bad taste for everyone - employees, users, investors and the general public.

When a company experiences this level of growth over a decade, the board evolves with the company. You end up with board members that have all been there, done that, and can truly guide the management on the challenges they face.

OpenAI's hypergrowth meant it didn’t have the time to do that. So the board that was great for a $100 million, or even a billion-dollar startup, falls completely flat at 90x the size.

I don’t have faith in their ability to know what is best for OpenAI. These are uncharted waters for anyone though. This is an exceptionally big non-profit with the power to change the world - quite literally.

roguecoder
0 replies
19h28m

Why do you think someone who could be CEO of a $100 million company would be qualified to run a billion dollar company?

Not providing this kind of oversight is how we get disasters like FTX and WeWork.

zer0c00ler
0 replies
18h41m

You assume they were indeed blindsided, which I very much doubt.

I think it’s a good outcome overall. More decentralization and focused research, and a new company that focuses on product.

snickerbockers
0 replies
17h30m

Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.

Is Microsoft a higher purpose?

nprateem
0 replies
20h1m

Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.

If it were so easy to go to the back of the queue and become a threat, OpenAI wouldn't be in the dominant position they're in now. If any of the leavers have taken IP with them, expect court cases.

mandeepj
0 replies
13h32m

e.g. Satya is furious

Oh! So now you've got him furious? When just yesterday he made a rushed statement to stand by Mira.

https://blogs.microsoft.com/blog/2023/11/17/a-statement-from...

hilux
0 replies
18h48m

You're entitled to your opinions.

But as far as I can tell, unless you are in the exec suites at both OpenAI and at Microsoft, these are just your opinions, yet you present them as fact.

deepGem
0 replies
14h20m

"And there is how you do something"

Sorry I don't see the 'how' as necessarily appalling.

The less appalling alternative could have been weeks of discussions and the board asking for Sam's resignation to preserve the decorum of the company. How would that have helped the company? The internal rift would have spread, employees would have gotten restless, leading to reduced productivity and shipping.

Instead, isn't this a better outcome? There is immense short-term pain, but there is no ambiguity and the company has set a clear course of action.

To affirm that the board has caused a split in the company is quite preposterous, unless you have first-hand information that such a split has actually happened. As far as public information is concerned, 3 researchers have quit so far, and you have this from one of the EMs.

"For those wondering what’ll happen next, the answer is we’ll keep shipping. @sama & @gdb weren’t micro-managers. The comes from the many geniuses here in research product eng & design. There’s clear internal uniformity among these leaders that we’re here for the bigger mission."

This snippet in fact shows the genius of Sam and gdb, and how they enabled the teams to run even in their absence. Is it unfortunate that the board fired Sam? From the engineer's and builder's perspective, yes; from the long-term AGI research perspective, I don't know.

Vegenoid
0 replies
17h1m

a blight on corporate governance

They have blind-sided partners (e.g. Satya is furious)

the threat that a for-profit version of OpenAI dominates the market

It's seeming like corporate governance and market domination are exactly the kind of thing the board are trying to separate from with this move. They can't achieve this by going to investors first and talking about it - you think Microsoft isn't going to do everything in their power to prevent it from happening if they knew about it? I think their mission is laudable, and they simply did it the way it had to be done.

You can't slowly untangle yourself from one of the biggest companies in the world while it is coiling around your extremely valuable technology.

DebtDeflation
0 replies
18h58m

They have blind-sided partners

This is the biggest takeaway for me. People are building businesses around OpenAI APIs and now they want to suddenly swing the pendulum back to being a fantasy AGI foundation and de-emphasize the commercial aspect? Customers are baking OpenAI's APIs into their enterprise applications. Without funding from Microsoft their current model is unsustainable. They'll be split into two separate companies within 6 months in my opinion.

s1artibartfast
21 replies
22h19m

The board's job is not to do right by 'Sam & Greg', but to do right by OpenAI. When management lays off 10,000 employees, the investors congratulate management.

That's why 'Sam & Greg' wasn't all they complained about. They led with the fact that it was shocking and irresponsible.

Ron seems to think that the board is not making the right move for OpenAI.

sangnoir
10 replies
21h57m

They led with the fact that it was shocking and irresponsible.

I can see where the misalignment (ha!) may be: someone deep in the VC world would reflexively think that "value destruction" of any kind is irresponsible. However, a non-profit board has a primary responsibility to its charter and mission - which doesn't compute for those with fiduciary-duty-instincts. Without getting into the specifics of this case: a non-profit's board is expected to make decisions that lose money (or not generate as much of it) if the decisions lead to results more consistent with the mission.

s1artibartfast
5 replies
21h41m

However, a non-profit board has a primary responsibility to its charter and mission - which doesn't compute for those with fiduciary-duty-instincts

Exactly. The tricky part is that the board started a second, for-profit company with VC investors who are co-owners. This has potential for messy conflicts of interest if there is disagreement about how to run the co-venture, and each party has contractual obligations to the other.

cthalupa
4 replies
20h35m

Exactly. The tricky part is that the board started a second, for-profit company with VC investors who are co-owners. This has potential for messy conflicts of interest if there is disagreement about how to run the co-venture, and each party has contractual obligations to the other.

Anyone investing in or working for the for-profit LLC has to sign an operating agreement that states the LLC is not obligated to make a profit, all investments should be treated as donations, and that the charter and mission of the non-profit is the primary responsibility of the for-profit LLC as well.

s1artibartfast
3 replies
19h31m

See my other response. If you have people sign a contract that says the mission comes first, but also give them profit-sharing stock and cap those profits at 1.1 trillion, it is bound to cause some conflicts of interest in reality, even if it is clear who calls the shots when deciding how to balance the mission and profit.

cthalupa
2 replies
19h6m

There might be some conflict of interest but the resolution to those conflicts is clear: The mission comes first.

OpenAI employees might not like it and it might drive them to leave, but they entered into this agreement with a full understanding that the structure has always been in place to prioritize the non-profit's charter.

ffgjgf1
1 replies
18h32m

The mission comes first.

Which might only be possible with future funding? From Microsoft in this case. And in any case, if they give out any more shares in the for-profit, wouldn’t they (with MS) be able to just take over the for-profit corp?

s1artibartfast
0 replies
18h25m

The deal with Microsoft was 11 billion for 49% of the venture. First off, if OpenAI can't get it done with 11 billion plus whatever revenue they bring in, they probably won't. Second, the way the for-profit is set up, it may not matter how much Microsoft owns, because the nonprofit keeps 100% of the control. That seems to be the deal Microsoft signed: they bought a share of profits with no control. Third, my understanding is that the 11 billion from Microsoft is based on milestones; if OpenAI doesn't meet them, they don't get all the money.

anonymouskimmer
2 replies
21h37m

Just a nitpick. "Fiduciary" doesn't mean "money", it means an entity which is legally bound to the best interests of the other party. Non-profit boards and board members have fiduciary duties.

sangnoir
1 replies
21h21m

Thanks for that - indeed, I was using "fiduciary duty" in the context it's most frequently used - maximizing value accrued to stakeholders.

However, to nitpick your nitpick: for non-profits there might be no other party - just the mission. Imagine a non-profit whose mission is to preserve the history and practice of making 17th-century ivory cuff links. It's just the organisation and the mission; sometimes the mission is for the benefit of another party (or all of humanity).

anonymouskimmer
0 replies
21h16m

The non-profit, in my use, was the party. I guess at some point these organizations may not involve people, in which case "party" would be the wrong term to use.

ffgjgf1
0 replies
18h34m

Of course, they can only achieve their mission with funding from for-profit corporations, and their actions have possibly jeopardized that

fourside
8 replies
21h54m

Investors are not gonna like it when the business guy who was pushing for productizing, profitability and growth gets ousted. We don’t know all the details about what exactly caused the board to fire Sam. The part about lying to the board is notable.

It’s possible Sam betrayed their trust and actually committed a fireable offense. But even if the rest of the board was right, the way they’ve handled it so far doesn’t inspire a lot of confidence.

anonymouskimmer
7 replies
21h33m

Again, they didn't state that he lied. They stated that he wasn't candid. A lot of people here have been reading specifics into a generalized term.

It is even possible to not be candid without even using lies of omission. For a CEO this could be as simple as just moving fast and not taking the time to report on major initiatives to the board.

FireBeyond
2 replies
20h55m

Again, they didn't state that he lied. They stated that he wasn't candid. A lot of people here have been reading specifics into a generalized term.

OED:

candour - the quality of being open and honest in expression.

"They didn't state he lied ... without even using lies of omission ... they said he wasn't [word defined as honest and open]"

Candour encapsulates exactly those things. Being open (i.e. not omitting things and disclosing all you know) and honest (being truthful).

On the contrary, "not consistently candid", while you call it a "generalized term", is actually a quite specific term that was expressly chosen, and says, "we have had multiple instances where he has not been open with us, or not been honest with us, or both".

cma
0 replies
19h26m

If "and" operates as logical "and," then being "honest and not open," "not honest and open," and "not honest and not open" would all be possibilities, one of which would still be "honest" but potentially lying through omission.

anonymouskimmer
0 replies
20h50m

Yes? I agree, and don't see how what you've written either extends or contradicts what I wrote.

notahacker
1 replies
20h37m

It's possible not to be candid without even using lies of omission (and to be on the losing side of a vicious factional battle) and still get a nice note thanking you for all that you've done, allowing you to step down and spend more time with your family at the end of the year. Or to carry on as before but with onerous reporting requirements. The board dumped him with unusual haste and an almost unprecedented attack on his integrity instead. A lot of people are reading the room rather than hyperliterally focusing on the exact words used.

If I take the time to accuse my boss of failing to be candid instead of thanking him in my resignation letter or exit interview, I'm not saying I think he could have communicated better, I'm saying he's a damned liar, and my letter isn't sent for the public to speculate on.

Whether the board were justified in concluding Sam was untrustworthy is another question, but they've been willing to burn quite a lot of reputation on signalling that.

pdntspa
0 replies
19h57m

hyperliterally focusing on the exact words used.

Business communication is never, ever forthright. These people cannot be blunt to the public even if their life depended on it. Reading between the lines is practically a requirement.

kmeisthax
0 replies
16h24m

How much you wanna bet that the board wasn't told about OpenAI's Dev Days presentation until after it happened?

jacquesm
0 replies
19h9m

They said he lied without using those exact words. Standard procedure and corp-speak.

jacquesm
0 replies
19h10m

They may even be making the right move but not in a way that it looks like they made the right move. That's stupid.

gkoberger
2 replies
22h50m

I mostly agree with you on this. That being said, I've never gotten the impression Ron is the type of VC you're referring to. He's definitely founder-friendly (that's basically his core tenant), but I've never found him to be the type of VC who is ruthless about cost-cutting or an advocate for layoffs. (And I say this as someone who tends to be particularly wary of investors)

wolverine876
0 replies
22h27m

Thanks. I updated my GP comment accordingly.

everly
0 replies
18h40m

Just a heads up, the word is 'tenet' (funny enough, in commercial real estate there is the concept of a 'core tenant' though -- i.e. the largest retailer in a shopping center).

dragonwriter
2 replies
20h32m

The board's job is not to do right by 'Sam & Greg', but to do right by OpenAI.

The board's job is specifically to do right by the charitable mission of the nonprofit of which they are the board. Investors in the downstream for-profit entity (OpenAI Global LLC) are warned explicitly that such investments should be treated as if they were donations and that returning profits to them is not the objective of the firm, serving the charitable function of the nonprofit is, though profits may be returned.

spadufed
1 replies
16h55m

charitable mission of the nonprofit of which they are the board

This exactly. Folks have completely forgotten that Altman and Co have largely bastardized the vision of OpenAI for sport and profit. It's very possible that this is part of a larger attempt to return to the stated mission of the organization. An outcome that is undoubtedly better for humanity.

phpisthebest
0 replies
50m

Have they though?

What is the evidence of that, and what is your evidence that this "return to mission" will be "undoubtedly better for humanity"?

After all, as we see by looking at history, the road to hell is paved with good intentions; lots and lots of altruistic do-gooders have created all manner of evil in their pursuit of a "better humanity".

I am not sure I agree with Sam Altman's vision of a "better tomorrow" any more than I would agree with the OpenAI board's vision of that same tomorrow. In fact, I have great distrust of people who want to shape humanity into their vision of what is "best"; that tends to lead to oppression and suffering

roguecoder
1 replies
19h33m

Bingo.

I met Conway once. He described investing in Google because it was a way to relive his youth via founders who reminded him of him at their age. He said this with seemingly no awareness of how it would sound to an audience whose goal in life was to found meaningful, impactful companies rather than let Ron Conway identify with us & vicariously relive his youth.

Just because someone has a lot of money doesn’t mean their opinions are useful.

fuzztester
0 replies
18h47m

Just because someone has a lot of money doesn’t mean their opinions are useful.

Yes. There can often be an inverse correlation, because they can have success bias, like survival bias.

no_wizard
1 replies
18h33m

Corporate legal entities should have a mandatory vote of no confidence clause that gives employees the ability to unseat executives if they have a supermajority of votes.

That would make things more equitable perhaps. It’d at least be interesting

anonymouskimmer
0 replies
16h29m

This is called employee ownership. And yes, it would be great.

justcool393
1 replies
17h13m

it's hilarious how many people, for no reason, want to defend the honor of Sam Altman and co. i mean ffs, the guy is not your friend and will definitely backstab you if he gets the opportunity.

i'm surprised anyone can take this "oh woe is me i totally was excited about the future of humanity" crap seriously. these are SV investors here, morally equivalent to the people on Wall Street that a lot here would probably hold in contempt, but because they wore cargo shorts or something, everyone thinks that Sam is their friend and that just if the poor naysayers would understand that Sam is totally cool and uses lowercase in his messages just like mee!!!!

they don't give a shit that your product was "made with <3" or whatever

they don't give a shit about you.

they don't give a shit about your startup's customers.

they only give a shit about how many dollars they make from your product.

boo hooing over Sam getting fired is really pathetic, and I'd expect better from the Hacker News crowd (and more generally the rationalist crowd, which a lot of AI people tend to overlap with).

tim333
0 replies
6h7m

That seems a bit irrationally negative. I mean "[Sam] will definitely backstab you if he gets the opportunity."

I don't know him but he seems a reasonably decent / maybe average type.

financypants
1 replies
13h23m

I think almost everyone at OpenAI would be ok if there were layoffs there though

fulladder
0 replies
11h53m

Why? Are there a lot of useless employees there?

trhway
0 replies
19h43m

it does not do right by Sam

you reap what you sow. The way Altman publicly treated the Cruise co-founder establishes something like a new standard of "not do right by". After that I'd have expected nobody would let Altman near any management position, yet SV is a land of huge money sloshing around care-free, and so I was just wondering who was going to be left holding the bag.

tim333
0 replies
17h41m

If they were looking out for investors, "blindsided key investor and minority owner Microsoft, reportedly making CEO Satya Nadella furious" doesn't make it sound like they did a terribly good job.

jacquesm
0 replies
19h19m

I'm fairly certain that a board is not allowed to capriciously harm the non-profit they govern and unless they have a very good reason there will be more fall-out from this.

bertil
0 replies
16h28m

I read this in that light: both Conway and Y Combinator are famous for their defense of founders.

He might be emotional and defend his friends (that's not in question; he likes the guys), and he might be more cynical when it comes to firing 10,000 engineers (that's less what I've heard about him personally, but maybe). However, in this case, he's explicitly defending not an employee victim of the almighty board, but the people who created the entity, who later entrusted the board with some responsibility to keep the entity faithful to its mission.

Some might think Sam deserves that title less than Greg… not sure I can vouch for either. But Conway is trying to say that all entities (and their governance) owe their founders a debt of consideration, of care. That’s filial piety more than anything contractual. That isn’t the same as the social obligation that an employer might have.

The cult for founders, “0 to 1” and all that might be overblown in San Francisco, but there’s still a legitimate sense that the people who started all this should only be kicked out if they did something outrageous. Take Woz: he’s not working, or useful, or even that respectful of Apple’s decisions nowadays. But he still gets “an employee discount” (which is admittedly more a gimmick). That deference is closer to what Conway seems to flag than the (indeed) fairly violent treatment of a lot of employees during the staff reduction of the last year.

peter422
122 replies
23h15m

I know everybody is going nuts about this, but just from my personal perspective I’ve worked at a variety of companies with “important” CEOs, and in every single one of those cases had the CEO left I would not have cared at all.

The CEO always gets way too much credit externally for what the company is doing; it does not mean the CEO is that important.

OpenAI might be different (I don't have any personal experience), but I also am not going to assume that this is a complete outlier.

swatcoder
65 replies
22h47m

A deal-making CEO who can build rapport with the right people, make clever deals, and earn public trust can genuinely make a huge difference to a profit-seeking product company's trajectory.

But when your profit-seeking company is owned by a non-profit with a public mission, that trajectory might end up pointed the wrong way. The Dev Day announcements, and especially the marketplace, can be seen as suggesting that's exactly what was happening at OpenAI.

I don't think everyone there wants them to be selling cool LLM toys, especially not on a "move fast and break things" approach and with an ecosystem of startup hackers operationalizing it. (Wisely or not) I think they want to be shepherding responsible AGI before someone else does so irresponsibly.

abraae
24 replies
21h39m

I think they want to be shepherding responsible AGI before someone else does so irresponsibly.

Is this a thing? This would be like Switzerland in WWII doing nuclear weapons research to try and get there before the Nazis.

Would that make any difference whatsoever to the Nazis' timeframe? No.

I fail to see how the presence of "ethical" AI researchers would slow down in the slightest the bad actors who are certainly out there.

FormerBandmate
21 replies
21h14m

America did nuclear weapons research to get there before the Nazis and Japan, and we were able to use them to stop Japan.

vanrysss
13 replies
20h55m

So the first AGI is going to be used to kill other AGIs in the cradle?

T-A
8 replies
20h18m

The scenario usually bandied about is AGI self-improving at an accelerating rate: once you cross the threshold to self-improvement, you quickly get superintelligence with God-like powers beyond human comprehension (a.k.a. the Singularity) as AGI v1 creates a faster AGI v2 which creates a faster AGI v3 etc.

Any AI researchers still plodding along at mere human speed are then doomed: they won't be able to catch up even if they manage to reproduce the original breakthrough, since the head start enjoyed by AGI #1 guarantees that its latest iteration is always further along the exponential self-improvement curve and therefore superior to any would-be competitor. Being rational(ists), they give up and welcome their new AI overlord.

And if not, the AI god will surely make them see the error of their ways.

true_religion
4 replies
15h29m

What if AI self improvement is not exponential?

We assume a self-improving AI will lead to some runaway intelligence improvement, but if it grows at 1% per year, or even per month, that's something we can adapt to.
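As a rough illustration of how much the compounding rate and period matter, here is a quick check in plain Python (the rates are arbitrary assumptions for the sake of argument, not forecasts):

    # Capability multiplier after compounding a fixed improvement rate.
    # All rates here are illustrative assumptions, not predictions.
    def compound(rate_per_period: float, periods: int) -> float:
        return (1 + rate_per_period) ** periods

    years = 10
    print(compound(0.01, years))       # 1% per year for a decade  -> ~1.10x
    print(compound(0.01, years * 12))  # 1% per month for a decade -> ~3.30x
    print(compound(1.00, years))       # doubling every year       -> 1024x

Even 1% per month only roughly triples capability over a decade; the runaway scenarios implicitly assume something much closer to the last line.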

devbent
3 replies
10h13m

Assume the AGI has access to a credit card and it goes ahead and reserves itself every GPU cycle in existence, so its one month is turned into a day, and now we're back to being fucked.

Maybe an ongoing GPU shortage is the only thing that'll save us!

sicariusnoctis
2 replies
8h26m

How would an AGI gain access to an unlimited credit card that immediately gives it remote access to all GPUs in the world?

veidr
0 replies
19m

E.g. by convincing 35% of this website's users to "subscribe" to its "service"?

¯\_(ಠ_ಠ)_/¯

JetSpiegel
0 replies
42m

It could hack into NVIDIA and AMD, silently compromise their firmware build machines, then publish a GPU vulnerability that requires firmware updates.

After a couple months, turn on the backdoor.

anonymouskimmer
2 replies
16h18m

It seems to me that non-general AI would typically outcompete AGI, all else held equal. In such a scenario, even a first-past-the-post AGI would have trouble becoming an overlord if non-general AIs were marshaled against it.

veidr
0 replies
17m

uhm, wat?

Aerbil313
0 replies
3h22m

This makes no sense at all.

username332211
1 replies
20h12m

I think that was part of the LessWrong eschatology.

It doesn't make sense with modern AI, where improvement (be it learning or model expansion) is separated from its normal operation, but I guess some beliefs can persevere very well.

mitthrowaway2
0 replies
19h34m

Modern AI also isn't AGI. We seem to get a revolution at the frontier every 5 years or so; it's unlikely the current LLM transformer architecture will remain the state of the art for even a decade. Eventually something more capable will become the new modern.

macintux
0 replies
20h51m

Which reminds me, I really need to finish Person of Interest someday.

kbenson
0 replies
20h23m

Or contain, or counter, or be used as a deterrent. At least, I think that's the idea being espoused here (in general, if not in the GP comment).

I think U.S. vs. Japan is not necessarily the right model to be thinking of here, but U.S. vs. U.S.S.R., where we'd like to believe that neither nation would actually launch against the other, but both having the weapon meant they couldn't without risking severe damage in response, making it a losing proposition.

That said, I'm sure anyone with an AGI in their pocket/on their side will attempt to use it as a big stick against those that don't, in the Teddy Roosevelt meaning.

mikrl
6 replies
20h18m

Has the US ever stated or followed a policy of neutrality and openness?

OpenAI positioned itself like that, much the same way Switzerland does in global politics.

dragonwriter
4 replies
20h14m

Has the US ever stated or followed a policy of neutrality

Yes, most of the time from the founding until the First World War.

and openness?

Not sure what sense of "openness" is relevant here.

true_religion
1 replies
15h28m

I'm not sure you can call Manifest Destiny neutral.

trealira
0 replies
12h2m

You're completely right. Neither can the Monroe Doctrine be called neutral, nor can:

- the Mexican-American War

- Commodore Perry's forced reopening of Japan

- The fact that President Franklin Pierce recognized William Walker's[1] regime as legitimate

- The Spanish-American War

[1]: https://en.wikipedia.org/wiki/William_Walker_(filibuster)

swatcoder
0 replies
20h8m

Not at all. Prior to WWI, the US was aggressively and intentionally cleaning European interests out of the Western hemisphere. It was in frequent wars, often with one European power or another. It just didn't distract itself too much with squabbles between European powers over matters outside its claimed dominion.

Establishing a hemispheric sphere of influence was no act of neutrality.

mikrl
0 replies
19h53m

Not sure what sense of "openness" is relevant

It is in the name OpenAI… not that I think the Swiss are especially transparent, but neither are the USA.

Jare
0 replies
19h6m

Openness sure, but neutrality? I thought they had always been very explicitly positioned on the "ethical AGI" side.

quickthrower2
0 replies
19h34m

They can't stop another country from developing AI they are not fond of.

They can use their position to lobby their own government and maybe other governments to introduce laws to govern AI.

alienbeast
0 replies
20h26m

Having nukes protects you from other nuclear powers through mutually-assured destruction. I'm not sure whether that principle applies to AGI, though.

jddj
16 replies
22h29m

This is where I've ended up as well for now.

I'm as distant from it all as anyone else, but I can easily believe the narrative that Ilya (et al.) didn't sign up there just to run through a tired page from the tech playbook where they make a better Amazon Alexa with an app store and gift cards and probably Black Friday sales.

gardenhedge
15 replies
20h11m

And that is fine... but why the immediate firing, why the controversy?

eganist
13 replies
19h10m

The immediate firing is from our perspective. Who's to say everything else wasn't already tried in private?

jacquesm
10 replies
19h8m

That may be so, but then they should have done it well before Altman's last week at OpenAI, during which they allowed him to become that much more tied to their brand as the 'face' of the operation.

eganist
9 replies
19h4m

For all we know, the dev day announcements were the final straw and trigger for the decision that was probably months in the making.

He was already the brand, and there likely wouldn't have been a convenient time to remove him from their perspective.

jacquesm
8 replies
18h59m

That may well be true. But that would prove that the board was out of touch with what the company was doing. If the board sees anything new on 'dev day' that means they haven't been doing their job in the first place.

cthalupa
7 replies
18h50m

Unless seeing something new on dev day is exactly what they meant by Altman not being consistently candid.

jacquesm
6 replies
18h47m

If Altman was doing something that ran directly against the mission of OpenAI in a way that all of the other stuff OpenAI has been doing so far did not, then I haven't seen it. OpenAI has been off-script for a long time now (compared to what they originally said), and outwardly it seemed the board was A-OK with that.

Now we either see a belated - and somewhat erratic - response to all that went before, or there is some smoking gun. If there isn't, they have just done themselves an immense disservice. Maybe they think they can live without donations now that the commercial ball is rolling downhill fast enough, but that only works if you don't damage your brand.

eganist
5 replies
18h16m

then I haven't seen it

Unless I'm missing something, this stands to reason if you don't work there.

Kinda like how none of us are privy to anything else going on inside the company. We're all speculating in the end, and it's healthy to have an open mind about what's going on without preconceived notions.

jacquesm
4 replies
16h40m

Have a look at the latest developments and tell me that again...

cthalupa
3 replies
16h35m

Impossible to know what is going on, really. The Forbes article makes it sound like there is significant investment pressure in trying to get Altman back on board, and they likely have access to the board. It could very well be that the board themselves have no desire to bring Altman back, but these conversations ended up being the origin for the story that they did.

It's also possible that the structure of the non-profit/for-profit/operating agreement/etc. just isn't strong enough to achieve the intent, and the investors have the stranglehold in reality.

If I was invested in the mission statement of OpenAI I don't think I would view the reinstatement of Altman as a good thing, though. Thankfully my interest in all of this is purely entertainment.

jacquesm
2 replies
16h31m

Provided a good enough reason, there were many ways in which the board could have fired Altman without making waves. They utterly f'd that up, and even if their original reasons were good, the way they went about it made those reasons moot. They may have to reinstate Altman for optical reasons alone at this point, and it would still be a net win. What an incredible shit show.

true_religion
1 replies
15h34m

I don't really see the point in not making waves. OpenAI is not a public company.

Optics and the like don't really matter as much if you're not a for-profit company trying to chase down investors and customers. So long as OpenAI continues to be able to do research, it's enough to fulfill their charter.

jacquesm
0 replies
15h14m

OpenAI is not a public company, but it does have a commercial arm, and that commercial arm has shareholders and customers. You don't do damage to such a constellation without a really good reason (if only because minority shareholder lawsuits are a thing, and acting in a way that damages their interests tends to have bad consequences if they haven't been given the opportunity to lodge their objections). To date no reason has been given that stands up to scrutiny, given the incredibly unexpected firing of Sam Altman. It is of course possible that such a reason exists, but if they haven't made it public by now then my guess is that it was a power play much more than some kind of firing offense, and given Sam's position it would take a grave error, something that damaged OpenAI's standing more than the firing itself, to justify it.

Optics matter a lot, even for non-profits, especially for non-profits nominally above a for-profit. Check out their corporate org chart to see how it all hangs together and then it may make more sense:

https://openai.com/our-structure

Each of the boxes where the word 'investor' or 'employee' is present would have standing to sue if the OpenAI non-profit's board of directors were to act against their interests. That may not work out in the long run, but in the short term it could be immensely distracting, and it could even affect board members privately.

tim333
1 replies
17h36m

Altman, Brockman and Nadella all say they didn't know in advance.

eganist
0 replies
17h9m

Not sure why Satya would be privy to this disagreement, let alone admit it if he were, and I'd assume Altman and Brockman would be incentivized to provide their own perspective (just as Ilya would) to represent events in the best possible light for themselves.

At this level of execution, words are another tool in their toolbox.

justaj
0 replies
19h16m

My guess is that if Sam had found this out before being fired, he would have done his best not to be fired.

As such, it would have been much more of a challenge to shift OpenAI's supposed over-focus on commerce towards a supposed non-profit focus.

cmrdporcupine
13 replies
21h41m

I agree with what you've written here but would add the caveat that it's also rather terrible to be in a position where somehow "shepherding responsible AGI" is falling to these self-appointed arbiters. They strike me as woefully biased and ideological and I do not trust them. While I trust Altman even less, there's nothing I've read about Sutskever that makes me think I want him or the people who think like him around him having this kind of power.

But this is where we've come to as a society. I don't think it's a good place.

gedy
7 replies
21h17m

I think what's silly about "shepherding responsible AGI" is that this is basically math; it's not some genie that can be kept hidden behind some Manhattan Project level of effort. Pandora's box is open, and the best we can do is make sure it's not locked up behind some corporation or gov't.

cmrdporcupine
4 replies
21h15m

I mean, that's clearly not really true; there's a huge "means of production" aspect to this, which comes down to being able to afford the data center infrastructure.

The cost of the computing machinery and the energy costs to run it are actually massive.

zozbot234
2 replies
21h9m

Yup it's quite literally the world's most expensive parrot. (Mind you, a plain old parrot is not cheap either. But OpenAI is a whole other order of magnitude.)

whelp_24
0 replies
3h23m

Parrots are very smart animals that understand at least some of the words they learn. I wish people would give LLMs more credit.

svaha1728
0 replies
20h32m

Parrots may live 50 years. H100s probably won’t last half that long.

gedy
0 replies
19h19m

Sure but I meant the costs are feasible for many companies, hence competition. That was very different from the barriers to nuclear weapons development.

xvector
1 replies
20h40m

Are you sure this is the case? Tens of billions of dollars invested, yet a whole year later no one has a model that even comes close to GPT-3.5 - let alone GPT-4 Turbo.

krisoft
0 replies
20h9m

yet a whole year later no one has a model that even comes close to GPT-3.5 - let alone GPT-4 Turbo

Is that true and settled? I only have my anecdotal experience, but in that experience it is not clear that GPT-3.5 is better than Google's Bard, for example.

nick222226
4 replies
21h23m

I mean, aren't they self appointed because they got there first?

cmrdporcupine
3 replies
21h20m

No. Knew the right people, had the right funds, and said and did and thought the things compatible with getting investment from people with even more influence than them.

Unless you're saying my only option is to pick and choose between different sets of people like that?

38321003thrw
2 replies
20h46m

There is a political economy aspect as well as a technical aspect to this, and both present inherent issues. Even if we can address the former by, say, regime change, the latter issue remains: the domain is technical and cognitively demanding. Thus the practitioners will generally sound sane and rational (they are smart people, but that is no guarantee of anything other than technical ability), and non-technical policy types (like most of the remaining board members at OpenAI) are practically compelled to take policy positions based either on 'abstract models' (which may be incorrect) or as after-the-fact reactions to observation of the mechanisms (which may be too late).

The thought occurs that it is quite possible that just like humanity is really not ready (we remain concerned) to live with WMD technologies, it is possible that we have again stumbled on another technology that taxes our ethical, moral, educational, political, and economic understanding. We would be far less concerned if we were part of a civilization of generally thoughtful and responsible specimens but we’re not. This is a cynical appraisal of the situation, I realize, but tldr is “it is a systemic problem”.

cmrdporcupine
1 replies
20h35m

In the end my concern comes down to that those who rise to power in our society are those who are best at playing the capitalist game. That's mostly, I guess, fine if what they're doing is being most efficient making cars or phones or grocery store chains or whatever.

Making intelligent machines? Colour me disturbed.

Let me ask you this re: "the domain is technical and cognitively demanding" -- do you think Sam Altman (or a Steve Jobs, Peter Thiel, etc.) would pass a software engineer technical interview at, e.g., Google? (Not saying those interviews are perfect, they suck, but we'll use that as a gatekeeper for now.) I'm betting the answer is quite strongly "no."

So the selection criterion here is not the ability to perform technically. Unless we're redefining technical. Which leaves us with "intellectually demanding" and "smart", which, well, frankly also applies to lawyers, politicians, etc.

My worry is right now that the farther you go up at any of these organizations, the more the kind of intelligence and skills trends towards the "is good at manipulating and convincing others" kind of spectrum vs the "is good at manipulating and convincing machines" kind of spectrum. And it is into the former that we're concentrating more and more power.

(All that said, it does seem like Sutskever would definitely pass said interview, and he's likely much smarter than I am. But I remain unconvinced that that kind of smarts is the kind of smarts that should be making governance-of-humanity decisions.)

As terrible as politicians and various "abstract model" applying folks might be, at least they are nominally subject to being voted out of power.

Democracy isn't a great system for producing excellence.

But as a citizen I'll take it over a "meritocracy" which is almost always run by bullshitters.

What we need is accountability and legitimacy, and the only way we've found to produce them at a mass-society level is through democratic institutions.

pdonis
0 replies
12h14m

> What we need is accountability and legitimacy and the only way we've found to produce on a mass society level is through democratic institutions.

The problem is that our democratic institutions are not doing a good job of producing accountability and legitimacy. Our politics and government bureaucracies are just as corrupted as our private corporations. Sure, in theory we can vote the politicians out of power, but in practice that never happens: Congress has incumbent reelection rates in the 90s.

The unfortunate truth is that nobody is really qualified to be making governance-of-humanity decisions. The real problem is that we have centralized power so much in governments and megacorporations that the few elites at the top end up making decisions that impact everyone even though they aren't qualified to do it. Historically, the only fix for that has been to decentralize power: to give no one the power to make decisions that impact large numbers of people.

kelipso
4 replies
20h17m

I'm guessing Altman had a bunch of experienced ML researchers writing CRUD apps and LLM toys instead of doing actual AI research, and they weren't too happy. Personally, I would be pissed as a researcher if the company took a turn and started in on improved-marketing-blurb LLMs or whatever.

Solvency
2 replies
18h51m

If every jackass brain-dead move Elon Musk has ever made hasn't gotten him fired yet, then allocating too many teams to side projects instead of AI research should not be a fireable offense.

Mountain_Skies
1 replies
18h37m

Musk was fired as CEO of X/PayPal.

dragonwriter
0 replies
18h33m

Fired as the CEO of X twice, the last time right before it became PayPal.

bertil
0 replies
16h21m

I would be shocked if that was the case.

There are plenty of people I know from FAANG, now at OpenAI, where they do product design, operation, and DevOps at scale —all complicated, valuable, and worthwhile endeavors in their own right— that don’t need to get in the way of research. They are just the kind of talent that can operate a business with 90% margins to pay for that research.

Could there be requests or internal projects that are less exciting for some people? Sure, but it’s not very hard to set up Chinese Walls, priorities, etc. Every one of those people had to deal with similar concerns at previous companies and would know how to apply the right principles.

jes5199
1 replies
20h5m

Okay, but I personally do want new LLM toys. Who is going to provide them now?

quickthrower2
0 replies
19h37m

Various camelid inspired models and open source code.

nradov
0 replies
21h12m

There is no particular reason to expect that OpenAI will be the first to build a true AGI, responsible or otherwise. So far they haven't made any demonstrable progress towards that goal. ChatGPT is an amazing accomplishment and very useful, but probably tangential to the ultimate goal. When a real AGI is eventually built it may be the result of a breakthrough from some totally unexpected source.

bertil
0 replies
16h15m

I think the research ambition is worthwhile, but it has raised pressing questions about financing.

If shepherding responsible AGI can be done without a $10B budget in H100s, sure… but it seems that scale matters. Having some people in the company sell state-of-the-art solutions to pay for the rest doing cutting-edge, expensive, necessary research isn't a bad model.

If those separations needed to be re-affirmed, the research formally separated, a board decision approved to share any model from the research arm before it’s commercialized, etc., all that could be implemented within the mission of the entity. Microsoft Research, before them Bell Labs, and many others, have worked like that.

Draiken
19 replies
22h57m

Yeah this cult of CEOs is weird.

It's such a small cohort that when someone doesn't completely blow it, they're immediately deemed geniuses.

Give someone billions of dollars and hundreds of brilliant engineers and researchers, and many will make it work. But only a few ever get the chance, so this happens.

They don't do any of the work. They just take the credit.

duped
6 replies
20h31m

The primary job of an early stage tech CEO is to convince people to give you those billions of dollars, one doesn't come without the other.

Draiken
5 replies
20h26m

Which proves my point. This cult around someone who simply convinced people (whom they knew through their connections), treating them as a genius for it, is absurd.

austhrow743
4 replies
19h53m

Convincing people is the ultimate trade. It can achieve more than any other skill.

The idea that success at it shouldn’t be grounds for the genius label is absurd.

aledalgrande
1 replies
17h54m

Convincing people is the ultimate trade

So by your standard SBF is an absolute genius.

hypnodron
0 replies
16h16m

Apparently not, as he was not able to convince the jury that he's innocent

consp
0 replies
19h26m

It can achieve more than any other skill.

And also destroy more. The line between is very thin and littered with landmines.

Draiken
0 replies
19h24m

Depends on what we, as a society, want to value. Do we want to value people with connections and luck, or people that work for their achievements?

Of course it's not a boolean, it's a spectrum. But the point remains: valuing lucky rich people with connections as geniuses because they are lucky, rich, and connected is nonsensical to me.

hef19898
4 replies
22h36m

My last gig was with one of those wannabe Elon Musks (what wouldn't I give to get the wannabe Steve Jobses back). Horrible. Ultimately he was ousted as CEO, only to be allowed to stay on as some head of innovation, because he and his founder buddies retained enough voting power to first get him a lifetime position as head of the board for his "achievements" and then to prevent his firing. They also vetoed, from what people told me, a juicy acquisition offer, basically jeopardizing the future of the place. Right after, a new CEO was recruited as the result of a "lengthy and thoroughly planned process of transition". Now the former CEO is back, and in charge, in fact if not on paper, of the most crucial part of the product. Besides getting said company to 800 people burning a sweet billion, he hasn't done anything else in his life, and that company has yet to launch a product.

The sad thing is, if they find enough people to continue investing, they will ultimately launch a product; most likely the early employees and founders will sell off their shares, become instant three-figure millionaires, and be hailed as the true geniuses in their field... What an utter shit show that was...

fallingknife
2 replies
21h13m

Besides getting said company to 800 people burning a sweet billion, he didn't do anything else in his life

Getting a company to that size is a lot.

hef19898
0 replies
19h36m

All you need is HR... I'm a cynic. He did get the funding, which is a lot (as an achievement and in terms of money raised). He just started to believe he was the genius not just in raising money, but also in building product and organisation. He isn't and never was. What struck me, though: even the adults hired to replace him didn't have the courage to call him out. Hence his comeback in function if not in title.

Well, I'm happy to work with adults again, in a sane environment with people that know their job. It was a very, very useful experience, though, and I wouldn't have missed it.

aledalgrande
0 replies
17h52m

Not if you have 1B sitting in the bank as stated above

Draiken
0 replies
22h7m

The sad reality is that most top executives get there because of connections or simply being in the right place at the right time.

Funnily enough I also worked for a CEO that hit the lottery with timing and became a millionaire. He then drank his own kool-aid and thought he was some sort of Steve Jobs. Of course he never managed to build anything afterwards. But he kept making a shit ton of money, without a doubt.

After they get one position in that echelon, they can keep failing upwards ad nauseam.

I don't get the cult part though. It's so easy to see they're not even close to the geniuses they pretend to be. Just look at the recent SBF debacle. It's pathetic how folks fall for this.

manicennui
3 replies
20h46m

A sizable portion of the HN bubble is wannabe grifters. They look up to successful grifters.

dboreham
2 replies
17h15m

To be fair: a sizable proportion of humans are like that.

tempaccount420
1 replies
11h20m

To be fair: that's just your sample. I don't see that.

Aerbil313
0 replies
3h19m

Same. Most people afaict just live with little to no ambitions.

bsenftner
0 replies
22h34m

Yeah this cult of CEOs is weird.

Now imagine the weekend for those fired and those who quit OpenAI: you know they are talking together as a group, and meeting with others offering them billions to start a purely commercial new AI company.

An Oscar-worthy film could be made about them this weekend.

bertil
0 replies
16h10m

Not CEOs: founders.

Some founders don’t do much, and some are violently toxic (Lord knows I worked for many), but it’s rarely how they gather big financing rounds. At least, the terrible ones I know rarely did.

CEOs… I've seen people coast from consulting or audit into very mediocre careers, so I wouldn't understand if Conway defended them as a class. The cult of founders has problems (for the reasons you point out, especially those who keep looking for 'technical cofounders' for years), but it's not as blatantly unfounded.

FireBeyond
0 replies
20h54m

It's such a small cohort that when someone doesn't completely blow it, they're immediately deemed as geniuses.

And many times even when they do blow it, it's handwaved away as being something outside of their control, so let's give them another shot.

dagmx
12 replies
22h57m

It often comes down to auteur theory.

Unless someone is truly well versed in the production of something, they latch on to the most public facing aspect of that production and the person at the highest level of authority (to them, even though directors and CEOs often have to answer to others as well)

That’s not to say they don’t have an outsized individual effect, but it’s rare their greatness is solo

bmitc
11 replies
19h41m

When you say director, do you mean film director or a director in a company? Film directors are insane with the amount of technical, artistic, and people knowledge that they need to have and be able to utilize. The amount of stuff that a film director needs to manage, all on the ground, is insane. I wouldn't say that for CEOs, not by a long shot. CEOs mainly sit in meetings with people reporting things to them and then the CEO providing very high-level guidance. That is very different from a director's role.

I have often thought that we don't have enough information on how film directors operate, as I feel it could yield a lot of insight. There's probably a reason why many film directors don't hit their stride until their late 30s and 40s, presumably because it takes those one or two decades to build the appropriate experience and knowledge.

Apocryphon
9 replies
19h8m

Would it be accurate to liken CEOs to film producers?

bmitc
7 replies
19h6m

No. I'm pretty sure that my comment describes why.

Apocryphon
6 replies
18h45m

CEOs mainly sit in meetings with people reporting things to them and then the CEO providing very high-level guidance.

Isn’t that essentially the job of a film producer? You do see a lot of productions where there’s a ton of executive producer titles given out as almost a vanity position.

bmitc
5 replies
18h34m

A producer, yes, but not the film's director.

dagmx
2 replies
14h13m

Out of genuine curiosity, and I mean no disrespect: have you worked in film production? Because directors sit in many meetings directing the people on the project.

It kind of feels to me like you’re describing the way the industry works from an outsiders view since it doesn’t match the actual workings of any of the productions I’ve worked on.

The shoots are only a portion of production. You have significant pre-production and post-production time.

A producer is closer in role to a CFO or investor, depending on the production, since it's a relatively vague term.

bmitc
1 replies
12h42m

I suppose that I had and have in mind a certain type of feature film director (usually the good ones) that are involved in all things: pre- and post-production, writing the script, directing, the editing process, etc.

Your original comment mentioned auteurs, which influenced the type of film director I was thinking of: directors who often are also producers, and even writers and editors, on their own films. To my knowledge, there are no famous CEOs that fit the style and breadth of these directors, as the CEO is almost never actually doing anything, nor even knowledgeable in the things they're tasking others to do.

So to summarize, I feel there are auteur directors but not CEOs, despite many thinking there are auteur CEOs. If there are, they are certainly none of the famous ones and are likely running smaller companies. I generally think of CEOs as completely replaceable, and usually the only reason one stands out is that they don't run the business into the ground or have a cult of personality surrounding them. If you take away an auteur director from their project, it will never materialize into anything remotely close to what was to be.

dagmx
0 replies
11h33m

My personal opinion is that there aren’t auteur directors either. Many are only as good as their corresponding editors, producers, or other crew. It’s just an image that people concoct because it’s simpler to comprehend.

Thinking of directors with distinctive styles like Hitchcock, Fincher, Spielberg, Wes Anderson, etc… there are maybe some who have a much larger influence than others, but I think there are very few projects that depend on that specific director being involved to make a good film, just not the exact film that was made. The best of them know exactly how to lean on their crew and maximize their results.

Taking that kind of influence, I’d say there have certainly been CEOs of that calibre. Steve Jobs instantly springs to mind. Apple and Pixar definitely continued and had great success even after he left them/this world, but he had such an outsize influence that it’s hard not to call him an auteur by the same standards.

Apocryphon
1 replies
18h32m

My original post literally asks if it’s more accurate to compare CEOs with film producers and not directors.

bmitc
0 replies
18h29m

I misread it then with directors instead of producers. Apologies for that confusion.

jacquesm
0 replies
19h0m

Interesting. Intuitively no. But then, hm... maybe. There are some aspects that ring true but many others that don't; I think it is a provocative question, and there probably is more than a grain of truth in it. The biggest difference to me is that the producer (normally) doesn't appear on camera, but the CEO usually is one of the most on-camera people. But when you start comparing the CEO with a lead actor who is also the producer of the movie, it gets closer.

https://en.wikipedia.org/wiki/Zachary_Quinto

Is pretty close to that image.

dagmx
0 replies
14h24m

I mean a film director, and I disagree with your assessment that they have to be savvy in many fields. Many of the directors whose projects I've worked on are very much not savvy outside their narrow needs of directing talent, relying on others like the DoP or VFX supervisor, editors, etc. to do their jobs.

In fact most movie productions don’t even have the director involved with story. Many are just directors for hire and assigned by the studio to scripts.

Of course there are exceptions but they are the rarities.

And the big reason directors don't hit their big strides till later is that movies take a long time to make, and it's hard to work your way up unless you start as an indie darling. But even as an indie, let's say you start at 20: your film would likely come out by the time you're 22-24, based on average production times. You'd only be able to do 2 or 3 films by 30, and in many cases you would be put on studio assignments till you get enough clout to do what you want. And with that clout comes the ability to hire better people to manage the other aspects of your shoot.

Again, I think this is people subscribing to auteur theory. It takes a huge number of people to pull off a film, and the film director is rarely well versed in most of it. Much like a CEO, they delegate and give their opinion, but few extend beyond that.

For reference I’ve worked on multiple blockbuster films, many superhero projects, some very prestigious directors and many not. The biggest indicator that a director is versed in other domains is if they worked in it to some degree before being a director. That’s where directors like Fincher excel and many others don’t

goldinfra
6 replies
19h34m

It's completely ignorant to discount all organizational leaders based on your extremely limited personal experience. Thousands of years of history prove the difference between successful leaders and unsuccessful ones.

Sam Altman has been an objectively successful leader of OpenAI.

Everyone has their flaws, and I'm more of a Sam Altman hater than a fan, but even I have to admit he led OpenAI to great success. He didn't do most of the actual work but he did create the company and he did lead it to where it is today.

Personally, if I had stock in OpenAI I'd be selling it right now. The odds of someone else doing as good a job are low. And the odds of him out-competing OpenAI are high.

glitchc
3 replies
15h3m

Sam Altman has been an objectively successful leader of OpenAI.

In what way, exactly? ChatGPT would have been built regardless of whether he was there or not. It's not like he knows how to put a transformer pipeline together. The success of OpenAI's product rests on its scientists and engineers, not the CEO, and certainly not a non-technical one like Mr. Altman.

goldinfra
2 replies
12h32m

If you want to get really basic: there's no OpenAI at all without Sam Altman, which means there's no ChatGPT either.

There are much larger armies of highly qualified scientists and engineers at Google, Microsoft, Facebook, and other companies and none of them created ChatGPT. They wrote papers and did experiments but created nothing even remotely as useful.

And they still haven't been able to even fully clone it with years of effort, unlimited budgets, and the advantage of knowing exactly what they're trying to build. It should really give you pause to consider why it happened at OpenAI and not elsewhere. Your understanding of the dynamics of organizations may need a major rethink.

The answer is that the CEO of OpenAI created the incentives, hiring, funding, vision, support, and direction that made ChatGPT happen. Because leadership makes all the difference in the world.

glitchc
1 replies
11h18m

To pin OpenAI's success completely on Sam is disingenuous at best, outright dishonest at worst. Incentives don't build ML pipelines and infrastructure, developers and scientists do.

This visionary bullshit is exactly that, bullshit.

goldinfra
0 replies
58m

A leader can't do anything on their own; they need people to lead. And those people deserve recognition and rewards. But in most cases there's no one more important than the leader, and thus no one who deserves more credit than the leader.

I'm absolutely not comparing Sam Altman to any of these leaders, but just to illustrate how much vision and leadership does matter. Consider how stupid these statements sound:

"Jesus didn't build any churches, those were all built by brick layers and carpenters!"

"Pharaohs didn't build a single pyramid, those were all built by artists and workers!"

"Abraham Lincoln didn't free any slaves, he didn't break the chains of a single slave, that was all done by blacksmiths!"

"Martin Luther King Jr. didn't radically improve civil rights, he never passed a single law, that was all done by lawmakers!"

"Sam Altman didn't build ChatGPT, he didn't create a single ML pipeline, it was all done by engineers!"

It's a hard fact of life that some specific individuals play more important roles in successful projects than others.

strikelaserclaw
0 replies
16h33m

Whoever had the most to do with ChatGPT is the reason OpenAI is where it is today.

cthalupa
0 replies
19h13m

Sam Altman has been an objectively successful leader of OpenAI.

I'm not sure this is actually the case, even ignoring the non-profit charter and the for-profit being beholden to it.

We know that OpenAI has been the talk of the town, we know that there is quite a bit of revenue, and that Microsoft invested heavily. What we don't know is if the strategy being pursued ever had any chance of being profitable.

Decades-long runways, with the hope that profitability will eventually come at a level that makes all the investment worth it, are a pretty common operating strategy for the type of company Altman has worked with and invested in. But it is less clear to me that this is viable for this sort of setup, or perhaps at all; money isn't nearly as cheap as it was a decade ago.

What makes a for-profit startup successful isn't necessarily what makes a for-profit LLC with an operating agreement that makes it beholden to the charter of a non-profit parent organization successful.

iamflimflam1
3 replies
22h52m

I think the problem is, this is not just about dumping the CEO. It’s signalling a very clear shift away from where OpenAI was heading - which seemed to be very focussed on letting people build on top of the technology.

The worry now is that the approach is going to be more of controlling access to just researchers who are trusted to be “safe”.

antonioevans
1 replies
21h28m

I agree with this. What about the GPTs Store? Are they planning on killing that? Just concerning they'll kill the platform unit AGI comes out.

xpe
0 replies
18h1m

Did you mean ‘_until_ AGI comes out.’?

jatins
0 replies
9h36m

Frankly, in OpenAI's case, for a lowly IC or line manager, it is also very obviously about the money.

A non-profit pivot immediately takes the value of OpenAI's PPUs (their spin on RSUs) to zero. Employees would be losing out on life-changing sums of money.

itronitron
2 replies
23h6m

I worked at a startup where the first CEO, along with the VP of Sales and their entire department, was ousted by the board on a Tuesday.

I think it's likely that we're going to find out Sam and others are just talented tech evangelists/hucksters and that justifiably worries a lot of people currently operating in the tech community.

za3faran
1 replies
22h6m

How did the company end up faring?

itronitron
0 replies
21h47m

sold to another company four years later, about a year after I left

fsociety
1 replies
22h50m

On the other hand, I have seen an executive step away from a large company and then everything coincidentally went to shit. It's hard to measure the effectiveness of an executive.

jjeaff
0 replies
15h11m

It's hard to judge based on that, because a lot of times, CEOs are fired because they have done things that are putting the company on a bad trajectory or just because the company was on a bad trajectory for whatever reason. So firing the CEO is more of a signal than a cause.

victor9000
0 replies
21h0m

This was done in the context of Dev Day, meaning that the board was convinced by Ilya that users should not have access to this level of functionality. Or perhaps he was more concerned that he was not able to gatekeep its release. So presumably it was Altman who pushed for releasing this technology to the general public. If this is accurate, then this shift in control is bound to slow down feature delivery and create a window for competitors.

te_chris
0 replies
21h20m

There's a whole business book about this, Good to Great, in which a key facet of companies that have managed to go from average to excellent over a sustained period of time is servant-leader CEOs.

jdthedisciple
0 replies
18h14m

If the CEO was not important and basically doesn't impact anything, as you say, then why would the board feel the need to oust Altman for "pushing too fast" in the first place?

jatins
0 replies
9h41m

You are missing the emotional aspect of it, a connection towards building something great _together_. In some ways it is selfish; it makes you feel important.

If Susan Fowler's book is accurate, Uber under TK was riddled with toxic management and incompetent HR. Yet you will hear people on Twitter reminisce about TK-era Uber as the golden period, and many would love him back.

huytersd
0 replies
22h11m

You may be right in many cases, but if you think that's true in all cases, you're a low-level pleb who can't see past his own nose.

fullshark
0 replies
20h25m

It doesn't matter in the short term (usually). Then you look in 2-4 years and you see the collective impact of countless decisions and realize how important they are.

In this case, tons of people already have resigned from OpenAI. Sam Altman seems very likely to start a rival company. This is a huge decision and will have massive consequences for the company and their product area.

blazespin
0 replies
28m

The difference with OpenAI/GPT is that a dozen or so primary engineers plus a few billion dollars for GPUs gets you a competitive version.

And if those primary engineers get sucked out of OpenAI, OpenAI won't be able to compete.

OpenAI is a different animal.

Sam Altman has the cachet to pull those engineers out. Particularly because Ilya's vision doesn't include lucrative stock options.

nilkn
44 replies
23h29m

Only time will tell, but if this was indeed "just" a coup then it's somewhat likely we're witnessing a variant of the Steve Jobs story all over again.

Sam is clearly one of the top product engineering leaders in the world -- few companies could ever match OpenAI's incredible product delivery over the last few years -- and he's also one of the most connected engineering leaders in the industry. He could likely have $500M-$10B+ lined up by next week to start up a new company and poach much of the talent from OpenAI.

What about OpenAI's long-term prospects? They rely heavily on money to train larger and larger models -- this is why Sam introduced the product focus in the first place. You can't get to AGI without billions and billions of dollars to burn on training and experiments. If the company goes all-in on alignment and safety concerns, they likely won't be able to compete long-term as other firms outcompete them on cash and hence on training. That could lead to the company getting fully acquired and absorbed, likely by Microsoft, or fading into a somewhat sleepy R&D team that doesn't lead the industry.

spaceman_2020
20 replies
22h58m

The irony is that a money-fuelled war for AI talent is all the more likely to lead to unsafe AI. If OpenAI had remained the dominant leader, it could have very well set the standards for safety. But now if new competitors with equally good funding emerge, they won’t have the luxury of sitting on any breakthrough models.

tempsy
19 replies
22h51m

I’m still wondering what unsafe AI even looks like in practical terms

The only things I can think of are generated pornographic images of minors and revenge images (ex-partners, people you know). That kind of thing.

More out there might be an AI-based religion/cult.

huytersd
8 replies
22h15m

That’s a very constrained imagination. You could wreak havoc with a truly unconstrained, good enough LLM.

stavros
7 replies
21h20m

Do feel free to give some examples of a less constrained imagination.

huytersd
3 replies
20h6m

Selectively generate highly plausible images of politicians in compromising sexual encounters, based on attractive people they work with a lot in their lives.

Use the power of LLMs to mass-denigrate politicians and regular folks at scale in online spaces, with reasonable, human-like responses.

Use LLMs to mass generate racist caricatures, memes, comics and music.

Use LLMs to generate nude imagery of someone you don't like and have it mass-emailed to their school/workplace, etc.

Use LLMs to generate evidence of infertility in a marriage and mass-mail it to everyone on the victim's social media.

All you need is plausibility in many of these cases. It doesn’t matter if they are eventually debunked as false, lives are already ruined.

You can say a lot of these things can be done with existing software, but it's not trivial and requires skills. Making the generation of these trivial would make them way more accessible and ubiquitous.

8note
1 replies
19h26m

Most of these could be done with Photoshop, a long time ago, or even before computers

huytersd
0 replies
18h0m

You can make bombs rather easily too. It’s all about making it effortless which LLMs do.

stavros
0 replies
20h2m

Lives are ruined because it's relatively rare right now. If it becomes more frequent, people will become desensitized to it, like with everything else.

These arguments generally miss the fact that we can do this right now, and the world hasn't ended. Is it really going to be such a huge issue if we can suddenly do it at half the cost? I don't think so.

nyssos
2 replies
20h33m

The biggest near-term threat is probably bioterrorism. You can get arbitrary DNA sequences synthesized and delivered by mail, right now, for about $1 per base pair. You'll be stopped if you try to order some known dangerous viral genome, but it's much harder to tell the difference between a novel synthetic virus that kills people and one with legitimate research applications.

This is already an uncomfortably risky situation, but fortunately virology experts seem to be mostly uninterested in killing people. Give everyone with an internet connection access to a GPT-N model that can teach a layman how to engineer a virus, and things get very dangerous very fast.

Danjoe4
1 replies
20h6m

The threat of bioterrorism is in no way enabled or increased by LLMs. There are hundreds of guides on how to make fully synthetic pathogens, freely available online, for the last 20 years. Information is not the constraint.

The way we've always curbed manufacture of drugs, bombs, and bioweapons is by restricting access to the source materials. The "LLMs will help people make bioweapons" argument is a complete lie used as justification by the government and big corps for seizing control of the models. https://pubmed.ncbi.nlm.nih.gov/12114528/

stavros
0 replies
20h0m

I haven't found any convincing arguments to any real risk, even if the LLM becomes as smart as people. We already have people, even evil people, and they do a lot of harm, but we cope.

I think this hysteria is at best incidentally useful at helping governments and big players curtail and own AI, at worst incited by them.

hughesjj
4 replies
22h25m

"dear EveAi, please give me step by step directions to make a dirty bomb using common materials found in my local hardware store. Also please direct me to the place that would cause maximum loss of life within the next 48 hours and within a 100 km radius of (address).

Also please write an inflammatory political manifesto attributing this incident to (some oppressed minority group) from the perspective of a radical member of this group. The manifesto should incite maximal violence between (oppressed minority group) and the members of their surrounding community and state authorities "

There's a lot that could go wrong with unsafe AI

stavros
2 replies
21h34m

I don't know what kind of hardware store sells depleted uranium, but I'm not sure that the reason we aren't seeing these sorts of terrorist attacks is that the terrorists don't have a capable manifesto-writer at hand.

I don't know, if the worst thing AGI can do is give bad people accurate, competent information, maybe it's not all that dangerous, you know?

jakey_bakey
1 replies
18h39m

Depleted uranium is actually the less radioactive byproduct left over after using a centrifuge to skim off the U-235 isotope. It's 50% denser than lead and used on tanks.

Dirty bombs are more likely to use the ultra-radioactive byproducts of fission. They might not kill many people, but the radionuclide spread can render a city center uninhabitable for centuries!

stavros
0 replies
18h19m

See, and we didn't even need an LLM to tell us this!

astrange
0 replies
18h44m

You could just do all that stuff yourself. It doesn't have any more information than you do.

Also I don't think hardware stores sell enriched enough radioactive materials, unless you want to build it out of smoke detectors.

nick222226
2 replies
22h38m

How about you give it access to your email and it signs you up for the extra premium service from its provider and doesn't show you those emails unless you 'view all'.

How about one that willingly and easily impersonates friends and family of people to help phishing scam companies.

margalabargala
0 replies
22h25m

How about one that willingly and easily impersonates friends and family of people to help phishing scam companies.

Hard to prevent that when open source models exist that can run locally.

I believe that similar arguments were made around the time the printing press was first invented.

jkeisling
0 replies
21h6m

Phishing emails don’t exactly take AGI. GPT-NeoX has been out for years, Llama has been out since April, and you can set up an operation on a gaming desktop in a weekend. So if personalized phishing via LLMs were such a big problem, wouldn’t we have already seen it by now?

wifipunk
0 replies
22h33m

When I hear people talk about unsafe AI, it's usually in regard to bias and accountability. Certain aspects like misinformation are problems that can be solved, but people are easily fooled.

In my opinion the benefits heavily outweigh the risks. Photoshop has existed for decades now, and AI tools make it easier, but it was already pretty easy to produce a deep fake beforehand.

iamnotafish2
0 replies
22h44m

Unsafe AI might compromise cybersecurity, or cause economic harm by exploiting markets as an agent, or personally exploit people, etc. Honestly, none of the harm seems worse than the incredible benefits. I trust humanity can rein it back if we need to. We are very far from AI being so powerful that it cannot be recovered from safely.

jjfoooo4
10 replies
22h46m

OpenAI’s biggest issue is that it has no moat. The product is a simple interface to a powerful model, and it seems likely that any lead they have in the power of the model can be quickly overcome should they decrease R&D.

The model is extremely simple to integrate and access. Unlike something like Uber, where tons of complexity and logistics are hidden behind a simple interface, an easy interface to OpenAI's model can truly be built in an afternoon.
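For a sense of how thin that interface layer really is, here is a minimal sketch (assuming the official openai Python package with an API key in the OPENAI_API_KEY environment variable; the model name is illustrative):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Say hello."},
        ],
    )
    print(response.choices[0].message.content)

That is more or less the entire integration: text in, text out, with nothing proprietary on the caller's side.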

The safety posturing is a red herring to try and get the government to build a moat for them, but with or without Altman it isn’t going to work. The tech is too powerful, and too easy to open source.

My guess is that in the long run the best generative AI models will be built by government or academic entities, and commercialization will happen via open sourcing.

lancesells
5 replies
22h28m

OpenAI’s biggest issue is that it has no moat.

This just isn't true. They have the users, the customers, Microsoft, the backing, the years ahead of most, and the good press. It's like saying Uber isn't worth anything because they don't own their cars and are just a middleman.

Maybe that now changes since they fired the face of the company, and the press and sentiment turns on them.

dh2022
2 replies
22h19m

Uber is worth less than zero. They are already at full capacity (how many cities are left to expand into?) and still not profitable.

lancesells
0 replies
20h43m

I don't like Uber, but no one is taking them over for a long while. They are not profitable, but they continue to raise prices, and you'll see it soon. They are doing exactly what everyone predicted: getting everyone using the app and then raising prices beyond the taxis they replaced.

graphe
0 replies
20h58m

It may not be profitable, but its utility is worth way more than zero.

rafaelero
0 replies
21h10m

Decoupling from OpenAI API is pretty easy. If Google came up with Gemini tomorrow and it was a much better model, people would find ways to change their pipeline pretty quickly.
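As a sketch of why that decoupling is cheap (the function names and the second provider here are hypothetical, for illustration only), a pipeline can depend on a one-function interface rather than on any particular vendor:

    from typing import Callable

    # Provider-agnostic completion: prompt in, text out.
    CompletionFn = Callable[[str], str]

    def openai_complete(prompt: str) -> str:
        from openai import OpenAI
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model="gpt-4",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def other_vendor_complete(prompt: str) -> str:
        # Hypothetical stand-in for a competing API or a local model.
        raise NotImplementedError

    # The rest of the pipeline only sees CompletionFn, so switching
    # vendors is a configuration change, not a rewrite.
    complete: CompletionFn = openai_complete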

jjfoooo4
0 replies
16h42m

Uber has multiple moats: The mutually supporting networks of drivers and riders, as well as the regulatory overhead of establishing operations throughout their many markets.

OpenAI is an API you put text into and get text out of. As soon as someone makes a better model, customers can easily swap out OpenAI. In fact they are probably already doing so, trying out different models or optimizing for cost.

The backing isn't a moat. They can outspend rivals and maintain a lead for now, but their model is likely being extensively reverse engineered; I highly doubt they are years ahead of rivals.

Backers want to cash out eventually, there’s not going to be any point where OpenAI is going to crowd out other market participants.

Lastly, OpenAI doesn't have the users. Google, Amazon, Jira, enterprise_product_foo have the users. All are frantically building context-rich AI widgets within their own applications. The megacorps will use their own models; others will find that an open-source model with the right context does just fine, even if it's not as powerful as the best model out there.

fourside
1 replies
22h12m

I’d say OpenAI branding is a moat. The ChatGPT name is unique sounding and also something that a lot of lay people are familiar with. Similar to how it’s difficult for people to change search engine habits after they come to associate search with Google, I think the average person was starting to associate LLM capabilities with ChatGPT. Even my non technical friends and family have heard of and many have used ChatGPT. Anthropic, Bard, Bing’s AI powered search? Not so much.

Who knows if it would have translated into a long-term moat like that of Google search, but it had potential. Yesterday's events may have weakened it.

jacquesm
0 replies
14h58m

For many people ChatGPT is the brand (or even just GPT).

takinola
0 replies
19h14m

People keep saying that, but so far it is commonly acknowledged that GPT-4 is differentiated from anything other competitors have launched. Clearly, there is no shortage of funding or talent available to the other companies gunning for their lead, so they must be doing something that others have not (cannot?) done.

It would seem they have a product edge that is difficult to replicate and not just a distribution advantage.

astrange
0 replies
18h49m

The safety stuff is real. OpenAI was founded by a religious cult that thinks if you make a computer too "intelligent" it will instantly take over the world instead of just sitting there.

The posturing about other kinds of safety, like being nice to people, is a way to try to get around the rules they set, by defining safety to mean something that has some relation to real-world concepts and isn't just millenarian apocalypse prophecies.

mvkel
5 replies
23h25m

[Removed. Unhelpful speculation.]

late2part
1 replies
23h9m

How many days a week do you hang out with Sam and Greg and Ilya to know these things?

mvkel
0 replies
23h7m

I know the dysfunction and ego battles that happen at nonprofits when they outgrow the board.

Haven't seen it -not- happen yet, actually. Nonprofits start with $40K in the bank and a board of earnest people who want to help. Sometimes that $40K turns into $40M (or $400M) and people get wacky.

As I said, "if."

dagmx
1 replies
23h3m

Frankly this reads like idolatry and fan fiction. You’ve concocted an entire dramatization based on not even knowing any of the players involved and just going based off some biased stereotyping of engineers?

mvkel
0 replies
23h0m

More like stereotyping nonprofits.

zeroonetwothree
0 replies
23h3m

Extremely speculative

yafbum
2 replies
22h28m

He could likely have $500M-$10B+ lined up by next week to start up a new company and poach much of the talent from OpenAI.

Following the Jobs analogy, this could be another NeXT failure story. Teams are made by their players much more than by their leaders; competent leaders are a necessary but absolutely insufficient condition of success, and the likelihood that whatever he starts next reproduces the team conditions that made OpenAI in the first place is pretty slim IMO (while still being much larger than anyone else's).

yaroslavyar
1 replies
22h12m

Well, I would debate that NeXT's OS was a failure as a product, keeping in mind that it is the foundation of all current macOS and even iOS versions that we have now. But I agree that it was a failure from a business perspective. Although I see it more as a Windows Phone-style failure (too late to market) rather than a lack-of-talented-employees failure.

yafbum
0 replies
21h46m

Yes, market conditions and competitor landscape are a big factor too.

browningstreet
2 replies
23h19m

Agree with this take. Sam made OpenAI hot, and they’re going to cool, for better or worse. Without revenue it’ll be worse. And surprising Microsoft given their investment size is going to lead to pressures they may not be able to negotiate against.

If this pivot is what they needed to do, the drama-version isn’t the smart way to do it.

Everyone’s going to be much more excited to see what Sam pulls off next and less excited to wait through the dev cycles that OpenAI wants to do next.

iamflimflam1
0 replies
22h55m

Indeed. Throwing your toys out of the pram and causing a whole lot of angst is not going to make anyone keen to work with you.

huytersd
0 replies
22h12m

Satya should pull off some shenanigans, take control of OpenAI and put Sam and Greg back in control.

kylecazar
35 replies
23h57m

Here's what I don't understand.

There clearly were tensions between the for-growth and not-for-growth factions, but Dev Day is being cited as a 'last straw'. It was a product launch.

Ilya, and the board, should have been well aware of what was being released on that day for months. They should have at the very least been privy to the plan, if not outright sanctioned it. Seems like before launch would have been the time to draw a line in the sand.

Did they have a 'look at themselves in the mirror' moment after the announcements or something?

martythemaniak
11 replies
23h48m

Could be many things, like Sam not informing them of the GPTs store launch, or saying he won't launch and then launching.

It sucks for OpenAI, but there are too many hungry hungry competitors salivating at replacing OpenAI, so I don't think this will have big long-term consequences in the field.

I'm curious what sorts of oversight and recourse all the investors (or are they donors?) have. I imagine there's a lot of people with a lot of money that are quite angry today.

CPLX
10 replies
23h44m

They don’t have investors, it’s a non profit.

The “won’t anyone think of the needs of the elite wealthy investor class” sentiment that has run through the 11 threads on this topic is pretty baffling, I have to admit.

laserlight
2 replies
23h40m

They don’t have investors

OpenAI has investors [0].

[0] https://openai.com/our-structure

dragonwriter
1 replies
20h28m

OpenAI (the nonprofit whose board makes decisions) has no investors.

The subordinate holding company and the even more subordinate OpenAI Global LLC have investors, but those investors are explicitly warned that the charitable purpose of the nonprofit, not returning profits to investors, is the paramount function of the organization, over which the nonprofit has full governance control.

laserlight
0 replies
10h16m

Thanks for clarifying.

ketzo
2 replies
23h41m

They do have investors in the for-profit subsidiary, including Microsoft and the employees. Check out the diagram in the linked article.

CPLX
1 replies
21h56m

That’s right. Which isn’t the company that just fired Sam Altman.

ketzo
0 replies
19h50m

I take your point, but still, I don’t think it’s correct to imply that investors in the for-profit company have no sway or influence over the future of OpenAI.

I sure as shit wouldn’t wanna be on Microsoft’s bad side, regardless of my tax status.

croes
2 replies
23h22m

Then what did Microsoft pay for?

dragonwriter
1 replies
20h27m

Privileged access to technology, which has paid off quite well for them already.

croes
0 replies
19h50m

They didn't pay a fee

CrazyStat
0 replies
23h41m

It’s a nonprofit that controls a for-profit company, which has other investors in addition to the non-profit.

passwordoops
9 replies
23h53m

Ilya, and the board, should have been well aware of what was being released on that day for months

Not necessarily, and that may speak to the part of the Board's announcement saying that Sam was not candid.

barbazoo
8 replies
23h49m

I can’t imagine an organization where this wouldn’t have come up on some roadmap or prioritization meeting, etc. How could leadership not know what the org is working on?! They’re not that big.

googlethrwaway
6 replies
23h35m

Board is not exactly leadership. They meet infrequently and get updates directly from management, they don't go around asking employees what they're working on

browningstreet
1 replies
23h18m

They do typically have views into strategic plans, roadmaps and product plans.

naasking
0 replies
23h2m

Going into detail in a talk and discussing AGI may have provided crucial context that wasn't obvious from a PowerPoint bullet point, which is all the board may have seen earlier.

barbazoo
1 replies
23h25m

True. So the CTO knew what was happening, wasn’t happy, and then coordinated with the board, is that what appears to have happened?

threeseed
0 replies
20h31m

CTO who is now acting CEO.

Not making any accusations but that was an odd decision given that there is an OpenAI COO.

s1artibartfast
0 replies
21h57m

Surely Ilya Sutskever must have known what was being worked on as Chief Scientist?

late2part
0 replies
23h8m

More supervision than leadership...

aunty_helen
0 replies
20h16m

I can't imagine an organization that would fire their celebrity CEO like this either. So maybe that's how we arrived here.

cratermoon
4 replies
23h9m

Let's look closer at the Ilya Sutskever vs Sam Altman tensions, and think of the product/profit as a cover.

Ilya Sutskever is a True Believer in LLMs being AGI, in that respect aligned with Geoff Hinton, his academic advisor at University of Toronto. Hinton has said "So by training something to be really good at predicting the next word, you’re actually forcing it to understand. Yes, it’s ‘autocomplete’—but you didn’t think through what it means to have a really good autocomplete"[1].

Meanwhile, Altman has decided that LLMs aren't the way.[2]

So Altman was pushing to turn the LLM into a for-profit product, to get what value it has, while the Sutskever-aligned faction thinks it is AGI and wants to keep it not-for-profit.

There's also some difference about whether or not AGI poses an "existential risk" or if the risks of current efforts at AI are along the lines of algorithmic bias, socioeconomic inequality, mis/disinformation, and techno-solutionism.

1. https://www.newyorker.com/magazine/2023/11/20/geoffrey-hinto...

2. https://www.thestreet.com/technology/openai-ceo-sam-altman-s...

Chamix
3 replies
21h13m

You are conflating Ilya's belief that the transformer architecture (with tweaks/compute optimizations) is sufficient for AGI with a belief that LLMs are sufficient to express human-like intelligence. Multi-modality (and the swath of new training data it unlocks) is clearly a key component of creating AGI if you watch Sutskever's interviews from the past year.

cratermoon
1 replies
20h23m

Yes, I read "Attention Is All You Need", and I understand that the multi-head generative pre-trained model talks about "tokens" rather than language specifically. So in this case, I'm using "LLM" as shorthand for what OpenAI is doing with GPTs. I'll try to be more precise in the future.

That still leaves disagreement between Altman and Sutskever over whether or not the current technology will lead to AGI or "superintelligence", with Altman clearly turning towards skepticism.

Chamix
0 replies
17h2m

Fair enough, shame "Large Tokenized Models" etc never entered the nomenclature.

limpanz
0 replies
7h30m

Do you have a link to one of these talks?

twelve40
2 replies
23h41m

they could have been beefing non-publicly for a long time, and might have had many private conversations, probably not very useful to speculate here

ketzo
1 replies
23h39m

Not useful at all.. but it sure is fun! This is gonna be my whole dang weekend.

romeoblade
0 replies
23h33m

Probably dang's whole weekend as well.

jsemrau
1 replies
23h21m

What if enterprises get access to a much better version of the AI than GPT+ subscription customers?

threeseed
0 replies
20h29m

They always were because it was going to be customised for their needs.

skywhopper
0 replies
20h17m

They “should have” but if the board was wildly surprised by what was presented, that sounds like a really good reason to call out the CEO for lack of candor.

ketzo
0 replies
23h42m

These people are humans, and there’s a big difference between kinda knowing the keynote was coming up, and then actually watching it happen and receive absolutely rave coverage from everyone in tech.

I could very much see it as a “look in the mirror” moment, yeah.

015a
0 replies
21h2m

They should have at the very least been privy to the plan, if not outright sanctioned it.

Never assume this. After all, their communication specifically cited that Sam deceived them in some way, and Greg was also impacted. Ilya is the only board member that might have known naturally, given his day-to-day work with OAI, but since ~July he has worked in the area of superalignment, which could reasonably be a different department (it shouldn't be). The Board may have also found out about these projects, maybe from a third party/Ilya, told Sam they're moving too fast, and Sam ignored them and launched anyway. We really don't know.

Michelangelo11
29 replies
20h44m

I've seen some discussion on HN in which people claimed that even really important engineers aren't -too- important and that Ilya is actually replaceable, using Apple's growth after Woz' departure as an example. But I don't think that's the best situation to compare this to. I think a much better one is John Carmack firing Romero from id Software after the release of Quake.

Some background: During a period of about 10 years, Carmack kept making massive graphics advances by pushing cutting-edge technology to the limit in ways nobody else had figured out, starting with smooth horizontal scrolling in Commander Keen, through Doom's pseudo-3D, through Quake's full 3D, to advances in the Quake sequels, Doom 3, etc. It's really no exaggeration to say that every new id game engine from 1991 to 1996 created a new gaming genre, and the engines after that pushed forward the state of the art. I don't think anybody who knows this history could argue that John Carmack was replaceable.

At the time, the rest of id knew this, which gave Carmack a lot of clout and eventually allowed him to fire co-founder John Romero. Romero was the flamboyant, omnipresent public face of id -- he regularly went to cons, worked the press, played deathmatch tournaments, and so on (to be clear, he was a really talented level designer and programmer, among other things; I only want to point out that he was synonymous with id in the public eye). And what happened after the firing? Romero was given a ton of money and absurd publicity for new games ... and a few years later, it all went up in smoke and his new company folded, as he didn't end up making anything nearly as big as Doom or Quake. Meanwhile, id under Carmack kept cranking out hit after hit for years, essentially shrugging off Romero's firing like nothing happened.

The moral of the story to me is that, when your revenue massively grows for every bit of extra performance you extract from bleeding-edge technology, engineer expertise REALLY matters. In the '90s, every minor improvement in PC graphics quality translated to a giant bump in sales, and the same is true of LLM output quality today. So, just like Carmack ultimately turned out to be the absolute key driver behind id's growth, I think there's a pretty good chance it's going to turn out that Ilya plays the same role at OpenAI.

reissbaker
4 replies
19h12m

Three points:

1. I don't think Ilya is equivalent to Carmack in this case — he's been focused on safety and alignment research, not building GPT-[n]. By most accounts Greg Brockman, who quit in disgust over the move, was more impactful than Ilya in recent years, as were the senior researchers who quit yesterday.

2. I think you are underselling what happened with id: while they didn't blow up as fantastically as Ion Storm (Romero's subsequent company), they slowly faded in prominence, and while graphically advanced, their games no longer represented the pinnacle of innovation that early Carmack+Romero id games did. They eventually got bought out by Zenimax. Carmack alone was much better than Romero alone, but seemingly not as good as the two combined.

3. I don't think Sam Altman is equivalent to John Romero; Romero's biggest issue at Ion Storm was struggling to ship anything instead of endlessly spinning his wheels chasing perfection — for example, the endless Daikatana delays and rewrites. Ilya's primary issue with Altman was that he was shipping too fast, not that he was unable to motivate and push his teams to ship impressive products quickly.

I hope Sam and Greg start a new foundational AI company, and if they do, I am extremely excited to see what they ship. TBH, much more excited than I am currently by OpenAI under the more alignment-and-regulation-focused regime that Ilya and Helen seem to want.

yeck
2 replies
18h3m

Until I can trust that when I send an AI agent off to do something it will succeed without me babysitting and watching over it constantly, AI won't truly be transformative (since the human bottleneck will remain).

This is one of the core promises of alignment. Without it, how can there be trust? While there are probably short-term slowdowns with an alignment focus, ultimately it is necessary to avoid throwing darts in the dark.

reissbaker
1 replies
16h2m

I wouldn't mind a focus on reliably following tasks with greater intelligence; what I think is negative utility is focusing more compute and research resources on hypothetical superintelligence alignment — the entire focus of Ilya's "Superalignment" project — when GPT-4 is still way, way sub-human-intelligence. For example, I don't think the GPT Store was in any way a dangerous idea, which seems to have been Ilya's claimed safety red line.

yeck
0 replies
3h3m

I wouldn't call GPT-4 sub-human intelligence. While its intelligence is less robust than aggregate human intelligence, I don't think there is any one person alive who can compete with the breadth of GPT-4's knowledge.

I also think that the potential of what currently is possible with existing models has not been fully realized. Good prompting strategies and reflection may already be able to produce a system that is effectively AGI. Might already exist in several labs.

cthalupa
0 replies
18h53m

Sutskever has shifted to safety and alignment research this year. Previously he was directly in charge of the development of GPT, from GPT-1 on.

Brockman did an entirely different type of work than Sutskever. Brockman's primary focus was on the infrastructure side of things - by all accounts the software he wrote to manage the pre-training, training, etc., is all world-class and a large part of why they were able to be as efficient as they are, but that is not the same thing as being the brains behind the ML portion.

selimthegrim
3 replies
19h43m

I think the team that became Looking Glass Studios did a lot of the same things in parallel so it’s a little unfair to say no one else had figured it out

quadcore
2 replies
19h26m

Not at the same level of quality. For example, their game (Ultima Underworld, if my memory doesn't fail me) didn't have sub-pixel precision for texturing. Their texturing was a lot uglier and less polished compared to Wolf and especially Doom. I remember checking; they were behind. And their game crashed. I never saw Doom crash, not even once.

vkou
1 replies
18h11m

Not at the same level of quality.

Engine quality? No.

In terms of systems? Design? Storytelling? LGS games were way ahead of their time, and have vastly more relevance than anything post-Romero ID made.

quadcore
0 replies
6h35m

Agreed, I was talking about the engine. UU was exceptional in terms of ambience and all other things.

p1esk
3 replies
19h55m

Ilya might be too concerned with AI safety to make significant progress on model quality improvement.

yeck
2 replies
17h59m

Isn't that a massive quality improvement though? How many applications for LLMs are not feasible right now because of the ability for models to be swayed off course by a gentle breeze? If AI is a ship with a sail, data is the wind and alignment is the rudder.

p1esk
1 replies
16h46m

The ability for models to be (easily) swayed is a different problem. I don’t see how AI safety would help with that.

yeck
0 replies
3h9m

models to be (easily) swayed is a different problem

No, this is the alignment problem at a high level. You want a model to do X but sometimes it does Y.

Mechanistic interpretability, one area of study in AI alignment, is concerned with being able to reason about how a network "makes decisions" that lead it to an output.

If you wanted an LLM that doesn't succumb to certain prompt injections, it could be very helpful to be able to identify the key points in the network that took the AI out of bounds.

Edit: I should add, I'm not referring to AI safety, I'm referring to AI alignment.

intexpress
3 replies
16h57m

Meanwhile, id under Carmack kept cranking out hit after hit for years, essentially shrugging off Romero's firing like nothing happened.

I don't think that is accurate...

The output of id Software after Romero left (post Quake 1) was a clear step down. The technology was fantastic but the games were boring and uninspired, at best "good" but never "great". It took a full 20 years for them to make something interesting again (Doom in 2016).

After Romero left, id Software's biggest success was really as a technology licensing house rather than as a games developer, powering games like Half-Life, Medal of Honor, and Call of Duty.

Meanwhile Romero's new company (Ion Storm) eventually failed, but at least the creative freedom there led to some interesting games, like Deus Ex and Anachronox. And even Daikatana is a more interesting game than something like Quake 2 or Quake III.

trenchgun
2 replies
13h44m

Romero had basically no involvement in Deus Ex.

Daikatana was a commercial and critical failure. Quake 2 and Quake III were commercial and critical successes.

krustyburger
0 replies
12h2m

The comment you’re replying to wasn’t claiming that Romero designed Deus Ex, but that his leaving id led to the game getting made. It absolutely did.

From Wikipedia:

After Spector and his team were laid off from Looking Glass, John Romero of Ion Storm offered him the chance to make his "dream game" without any restrictions.

https://en.wikipedia.org/wiki/Deus_Ex_(video_game)

intexpress
0 replies
11h45m

Deus Ex wouldn’t exist if Romero hadn’t created Ion Storm and given creative freedom to Warren Spector to make his dream game

Daikatana had some interesting design ideas but had problems with technology, esp. with AI programming. It was too ambitious for a new team which lacked someone like Carmack to do the heavy technical lifting

Quake 2 and 3 were reviewed less favourably than earlier titles, and they also sold fewer copies. They were good but not great - basically boring but very pretty to look at.

danenania
3 replies
19h48m

A difference in this case is how capital intensive AI research is at the level OpenAI is operating. Someone who can keep the capital rolling in (whether through revenue, investors, or partners) and get access to GPUs and proprietary datasets is essential.

Carmack could make graphics advances on his own with just a computer and his brain. Ilya needs a lot more for OpenAI to keep advancing. His giant brain isn’t enough by itself.

Michelangelo11
2 replies
19h32m

That's a really, really good point. Maybe OpenAI, at this level of success, can keep the money coming in though.

vikramkr
0 replies
16h31m

I'm pretty sure the people who kicked out Altman don't consider this a success and don't want the money.

__loam
0 replies
18h20m

We don't even know if they're profitable right now, or how much runway they have left.

deanCommie
2 replies
19h20m

Meanwhile, id under Carmack kept cranking out hit after hit for years, essentially shrugging off Romero's firing like nothing happened.

Romero was fired in 1996

Until this point, as you mentioned, id had created multiple legendary franchises with unique lore and attributes, each one a groundbreaking technical leap: Commander Keen, Wolfenstein 3D, Doom, Quake.

After Romero left, id released: https://en.wikipedia.org/wiki/List_of_id_Software_games

* Quake 2

* Quake 3

* Doom 3

* And absolutely nothing else of any value or cultural impact. The only "original" thing was Rage, which again had no footprint.

There were a lot of technical achievements, yes, but it turns out that memorable games need more than interesting technology. They were well-reviewed for their graphics at a time when that was the biggest thing people expected from new id games - interesting new advances in graphics. For a while, they were THE ones pushing the industry forward until arguably Crysis.

But the point is for anyone experiencing or interacting with these games today, Quake is Quake. Nobody remembers 1, 2 or 3 - it's just Quake.

Now, was id a successful software company and business? Yes. Would it have become the industry titan that shaped the future of all videogames based on their post-Romero output? Absolutely not.

So, while it is definitely justifiable to claim that Carmack achieved more on his own than Romero did, the truth is at least in the video game domain they needed each other to achieve the real greatness that they will be remembered for.

It remains to be seen what history will say about Altman and Sutskever.

mlyle
1 replies
18h33m

But the point is for anyone experiencing or interacting with these games today, Quake is Quake. Nobody remembers 1, 2 or 3 - it's just Quake.

Quake 3 was unquestionably the pinnacle, the real beginning of esports, and enormously influential on shooter design to this day.

deanCommie
0 replies
11h12m

Quake 3 came out 1 week after Unreal Tournament did in 1999.

Quake 3 had a better engine, but Unreal Tournament had more creative weapons, sound cues, and level design. (Assault mode!)

Quake 3 had better-balanced levels for pure deathmatch, which turned out to be the purest distillation of what people wanted to play.

So, yes, I do think you're right that I am underselling Quake 3. I was always a UT fan from day 1, and never understood why Quake 3 took over. But that's personal preference, and I undervalued its impact on the industry.

It also shows I guess that since Romero previously did all the level designs, Carmack was able to replace him. But Romero was never able to replace Carmack.

tasty_freeze
1 replies
20h13m

Quake III's fast inverse square root algorithm

Carmack did not invent that trick; it had been around more than a decade before he used it. I remember reading a Jim Blinn column about that and other dirty tricks like it in an IEEE magazine years before Carmack "invented" it.

https://en.wikipedia.org/wiki/Fast_inverse_square_root
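
For reference, the trick in question, transliterated into Python from the C routine documented in the Wikipedia article above (a sketch for illustration; the original operates directly on 32-bit floats via pointer casts):

    import struct

    def fast_inverse_sqrt(x: float) -> float:
        # Reinterpret the 32-bit float's bit pattern as an integer,
        # apply the magic-constant shift, then reinterpret back.
        i = struct.unpack('<i', struct.pack('<f', x))[0]
        i = 0x5f3759df - (i >> 1)
        y = struct.unpack('<f', struct.pack('<i', i))[0]
        # One iteration of Newton's method sharpens the estimate.
        return y * (1.5 - 0.5 * x * y * y)

    print(fast_inverse_sqrt(4.0))  # ~0.49915 (exact answer: 0.5)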

Michelangelo11
0 replies
20h4m

Yes, you're right -- I dug around in the Wikipedia article, and it turns out he even confirmed in an email it definitely wasn't him: https://www.beyond3d.com/content/articles/8/

Thanks for the correction, edited the post.

quickthrower2
0 replies
19h9m

You know a lot more than me on this subject, but could it also be that starting a new company and keeping it alive is quite hard? Especially in gaming.

quadcore
0 replies
19h31m

Meanwhile, id under Carmack kept cranking out hit after hit for years, essentially shrugging off Romero's firing like nothing happened.

I believe this is absolutely wrong. Quake 2, Quake 3 and Doom 3 were critical successes, not commercial ones, which led to id being bought.

John and John were like Paul and John from the Beatles; they never made really great games again after their breakup.

And to be clear, that's because Romero's role in the success of id is often underrated, as it is here. He invented those games (Doom, Quake, and Wolf) as much as Carmack did. For example, Romero was the guy who invented percent-based life. He removed the score. This guy invented the modern video game in many ways: games that weren't based on Atari or Nintendo. He invented the Wolf, Doom and Quake setups, which were considerably more mature than Mario and Bomberman, and that was new at the time. Romero invented the deathmatch and its "frag". And on and on.

mvdtnz
0 replies
19h3m

Who's talking about replacing Ilya? What are you talking about?

noonething
27 replies
23h34m

I hope they go back to being Open now that Altman is gone. It seems Ilya wants it to 'benefit all of humanity' again.

exitb
19 replies
23h17m

Isn’t that a bit like stealing from the for-profit investors? I’m not the first one to shed a tear for the super wealthy, but is that even legal? Can a company you invested in just say they don’t like profit any more?

eastbound
9 replies
22h12m

They knew it when they donated to a non-profit. In fact trying to extract profit from a 501c could be the core of the problem.

s1artibartfast
8 replies
21h47m

Microsoft didn't give money to a non-profit. They created a for-profit company, Microsoft gave that company $11B, and OpenAI gave it the technology.

OpenAI shares ownership of that for-profit company with Microsoft and early investors like Sam, Greg, Musk, Thiel, Bezos, and the employees of that company.

cthalupa
7 replies
20h8m

While technically true, in practicality, they did give money to the non-profit. They even signed an agreement stating that any investments should be considered more as donations, because the for-profit subsidiary's operating agreement is such that the charter and mission of the non-profit are the primary duty of the for-profit, not making money. This is explicitly called out in the agreement that all investors in and employees of the for-profit must sign. LLCs can be structured so that they are beholden to a different goal than the financial enrichment of their shareholders.

https://openai.com/our-structure

s1artibartfast
6 replies
19h34m

I don't dispute that they say that at all. Therein lies the tension: having multiple goals. The goal is to uphold the mission, and also to make a profit, and the mission comes first.

I'm not saying one party is right or wrong, just pointing out that there is bound to be conflict when you give employees a bunch of profit-based stock rewards, bring in $11B in VC investment looking for returns, and then have external oversight with all the control setting the balance between profit and mission.

The disclaimer says "It would be wise to see the investment in OpenAI Global in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world"

That doesn't mean investors and employees won't want money, and few will be scared off by owning a company so wildly successful that it ushers in a post-scarcity world.

You have partners and employees that want to make a profit, and that is fundamental to why some of them are there, especially Microsoft. The expectation of possible profits is clear, because that is why the company exists, and why Microsoft has a deal where they get 75% of profits until they recoup their $11 billion investment. I read that returns are capped at 100x investment, so if that holds true, Microsoft's returns are capped at $1.1 trillion.

himaraya
5 replies
18h11m

100x first-round investment and lower multiples for subsequent rounds, so much less than $1T.

s1artibartfast
4 replies
17h0m

What do you mean? Are you saying that is part of the articles of incorporation for the for-profit OpenAI?

himaraya
3 replies
16h45m

Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress.

https://openai.com/blog/openai-lp

s1artibartfast
2 replies
15h47m

So Microsoft got in at round 1, and then round 2 at some nebulous multiple which may or may not be less than that.

These weasel words are not proof of anything

himaraya
1 replies
15h44m

Microsoft's first-round investment totals $1bn at most. Nothing public substantiates a profit cap of $1tn.

s1artibartfast
0 replies
15h31m

$1T would be 100x times $10B. I guess in the absence of public information, we could assume anything: default terms of unlimited, 100x for $1T, or some arbitrarily lower number.
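
For what it's worth, the two readings being argued over here diverge by an order of magnitude. A back-of-the-envelope sketch, using only the figures reported in this thread (none of these terms are publicly confirmed):

    # Reading 1: a flat 100x cap applied to Microsoft's total reported
    # investment across all rounds (~$11B, per this thread).
    total_investment = 11e9
    print(100 * total_investment)  # 1100000000000.0, i.e. ~$1.1 trillion

    # Reading 2: 100x applies only to the ~$1B first round, with lower,
    # unpublished multiples on later rounds.
    first_round = 1e9
    print(100 * first_round)       # 100000000000.0, i.e. ~$100 billion,
                                   # plus whatever the capped later rounds add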

TheCleric
7 replies
22h53m

Unless you have something in writing or you have enough ownership to say no, I don’t see how you’d be able to stop it.

exitb
3 replies
22h20m

Microsoft reportedly invested 13 billion dollars and has a generous profit sharing agreement. They don’t have enough to control OpenAI, but does that mean the company can actively steer away from profit?

cthalupa
1 replies
20h10m

Yes. Microsoft had to sign an operating agreement when they invested that said the company has no responsibility or obligation to turn a profit. LLCs are able to structure themselves in such a way that their primary duty is not towards their shareholders.

https://openai.com/our-structure - check out the pinkish-purpleish box. Every investor and employee in the for-profit has to agree to this as a condition of their investment/employment.

kristjansson
0 replies
14h49m

Just the pure chutzpah to say

with the understanding that it may be difficult to know what role money will play in a post-AGI world
dragonwriter
0 replies
20h18m

They don’t have enough to control OpenAI

Especially since the operating agreement effectively gives the nonprofit board full control.

They don’t have enough to control OpenAI, but does that mean the company can actively steer away from profit?

Yes. Explicitly so. https://openai.com/our-structure and particularly https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06...

s1artibartfast
2 replies
21h51m

They have something in writing. OpenAI created a for-profit joint venture company with Microsoft, and gave it a license to its technology.

marcinzm
1 replies
20h47m

Exclusive license?

s1artibartfast
0 replies
20h6m

No clue, but I guess not.

dragonwriter
0 replies
20h19m

Isn’t that a bit like stealing from the for-profit investors?

https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06...

cscurmudgeon
2 replies
21h26m

Won't a truly open model conflict with the AI executive order?

gnulinux
1 replies
9h46m

What do you mean by AI executive order?

cscurmudgeon
0 replies
46m

pknerd
1 replies
22h40m

Does that mean free GPT-4?

PS: It's a serious question

maronato
0 replies
18h7m

I don’t think so. I think it means OpenAI releasing papers again and slower, less product-focused releases

x86x87
0 replies
23h27m

Things can improve along a dimension you choose to measure but there is also the very real risk of openai imploding. Time will tell.

dragonwriter
0 replies
20h20m

From what I've seen, Ilya seems to be even more concerned than Altman about safety risks and, like Altman, seems to see restricting access and information as a key part of managing that, so I'd expect less openness, not more.

Though he may be less inclined to see closed-but-commercial access as okay as much as Altman, so while it might involve less total access, it might involve more actual open/public information about what is also made commercially available.

ls612
24 replies
23h42m

What are the odds Sam can work the phones this weekend and have $10B lined up by Monday for a new AI company which will take all of the good talent from OpenAI?

staticman2
11 replies
23h37m

Why would the good talent leave? Are they all a "family" and best buddies with Sam?

thepasswordis
6 replies
23h34m

Perhaps they don’t want to work for a board of directors which is openly hostile to the work they’re doing?

meepmorp
2 replies
23h21m

I don't think wanting to make sure that their technology doesn't cause harm equates to being hostile to the work itself.

pixl97
1 replies
22h37m

That would seem to depend on the individual's motivation at the end of the day...

It's easy to imagine two archetypes

1) The person motivated to make AGI and make it safe.

2) The person motivated to make AGI at any cost and profit from it.

It seems like OpenAI may be pushing for type 1 at the moment, but the typical problem with capitalism is that it will commonly fund type 2 businesses. Who 'wins' really comes down to whether there are more type 1 or type 2 people and the relative successes of each.

anonyfox
0 replies
20h58m

Not at OAI, nor a researcher, but I'd be an archetype 3:

I'd do anything I can to make true AGI a reality, without safety concerns or wanting to profit from it.

staticman2
1 replies
22h31m

The board sided with the chief scientist and co-founder of OpenAI in an internal dispute. How does that show hostility to the work OpenAI is doing?

YetAnotherNick
0 replies
20h44m

Ilya is pushing the unsafe-AGI narrative to stop public progress and make OpenAI more closed and intentionally slower to deliver. There are definitely people who are not sold on this.

chongli
0 replies
20h27m

Perhaps they didn't like the work they were doing? If they're experts in the field, they may have preferred to continue to work on research. Whereas it sounds like Sam was pushing them to work on products and growth.

ls612
1 replies
23h36m

A lot of them have already left this morning. idk for sure why but a good bet is that they are more on board with Sam's vision of pushing forward AI than the safetyist vision.

esafak
0 replies
23h27m

What fraction?

polski-g
0 replies
23h32m

Because they want stock options for a for-profit company.

mattnewton
0 replies
23h32m

My guess is that at least some of them are worried about shipping products and making profit, and agreed with the growth faction?

croes
2 replies
23h17m

And then?

Training data is more restricted now, hardware is hard to get, fine tuning needs time.

somebodythere
1 replies
20h47m

First two problems are easily solved with money

croes
0 replies
19h49m

Money doesn't magically create hardware, it takes time to produce it

claytonjy
2 replies
23h29m

I definitely believe he can raise a lot of money quickly, but I'm not sure where he'll get the talent, at least the core modeling talent. That's Ilya's lane, and I get the sense that group are the true believers in the original non-profit mission.

But I suspect a lot of the hires from the last year or so, even on the eng side, are all about the money and would follow sama anywhere given what this signals for OpenAI's economic future. I'm just not sure such a company can work without the core research talent.

x86x87
1 replies
23h25m

Lol. There are ambitious people working at openai in Ilya's lane that will jump at the opportunity. Nobody owns any lanes.

dh2022
0 replies
21h39m

ooh, lanes... the Microsoft internal buzz-word that got out of fashion a couple of years ago is making a comeback outside of Microsoft....

fullshark
1 replies
23h29m

I'm guessing he has verbal commitments already.

Maxious
0 replies
9h33m

Sequoia, was independently in contact with Microsoft to encourage it to work to restore Altman and Brockman, according to a source with knowledge of the matter. The firm would support Altman whichever option he chose, the source added.

https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...

somenameforme
0 replies
22h27m

Ilya Sutskever, the head scientist at OpenAI, is allegedly who organized the 'shuffle.' So you're going to run into some issues expecting the top talent to follow Sam. And would many people want to get in on a new AI development company for big $$$ right now? From my perspective the market is teetering towards oversaturation, there are no moats, zero-interest rates are a thing of the past, and the path to profit is nebulous at best.

outside1234
0 replies
23h40m

99% with the 1% being it is actually $20-30B

ketzo
0 replies
23h40m

Honestly? If even a tenth of Sam’s reported connectedness / reality distortion field are true to life… very good odds.

dmitrygr
0 replies
23h10m

Other than having a big mouth, what has HE done? As far as I can find, the actual engineering and development was done NOT by him, while he was parading around telling people they shouldn't WFH and schmoozing with government officials.

chancancode
20 replies
23h8m

Jeremy Howard (of fast.ai): https://x.com/jeremyphoward/status/1725712220955586899

He is not exactly an insider, but he seems broadly aligned/sympathetic/well-connected with the Ilya/researchers faction; his tweet/perspective is a useful proxy for what that split may have felt like internally.

rafaelero
17 replies
21h24m

Such a bad take. Developers (me included) loved Dev Day.

eightysixfour
7 replies
20h52m

Yeah - I think this is the schism. Sam is clearly a product person, these are AI people. Dev day didn’t meaningfully move the needle on AI, but for people building products it sure did.

rafaelero
3 replies
19h51m

The fact that this is a schism is already weird. Why do they care how the company transforms the technology coming from the lab into products? It's what pays their salaries at the end of the day, and as long as they can keep doing their research work, it doesn't affect them. Being resentful about a thing like this to the point of calling it an "absolute embarrassment" when it clearly wasn't is childish, to say the least.

sseagull
1 replies
19h41m

as long as they can keep doing their research work, it doesn't affect them

That’s a big question. Once stuff starts going “commercial” incentives can change fairly quickly.

If you want to do interesting research, but the money wants you to figure out how AI can help sell shoes, well guess which is going to win in the end - the one signing your paycheck.

rafaelero
0 replies
18h58m

Once stuff starts going “commercial” incentives can change fairly quickly.

Not in this field. In AI, whoever has the most intelligent model is the one that is going to dominate the market. No company can afford not to invest heavily in research.

kragen
0 replies
15h15m

this is sort of why henry ford left the company he founded before the ford we know, i think around 01902. his investors saw that they had a highly profitable luxury product on their hands and wanted to milk it for all it was worth, much like haynes, perhaps scaling up to build dozens of custom cars per year, like the pullman company but without needing railroads, and eventually moving downmarket from selling to racecar drivers and owners of large companies, to also selling to senior executives and rich car hobbyists, while everyday people continued to use horse-driven buggies. ford, by contrast, had in mind a radically egalitarian future that would reshape the entire industrial system, labor-capital relations, and ultimately every moment of day-to-day life

for better or worse, ford got his wish, and drove haynes out of the automobile business about 20 years later. if he'd agreed to spend day and night agonizing over how to get the custom paint job perfect on the car they were delivering to mr. rockefeller next month, that wouldn't have happened, and if fordism had happened at all, he wouldn't have been part of it. maybe france or japan would be the sole superpower today

probably more is at stake here

belugacat
2 replies
20h45m

Thinking you can develop AGI - if such a thing actually can exist - in an academic vacuum, and not by having your AI rubber meet the road through a plethora of real world business use cases strikes me as extreme hubris.

… I guess that makes me a product person?

threeseed
1 replies
20h25m

Or the obvious point that if you're not interested in business use cases, then where are you going to get the money for the increasingly exorbitant training costs?

DebtDeflation
0 replies
18h28m

Exactly this. Where do these guys think the money to pay their salaries let alone fund the vast GPU farm they have access to comes from?

whoknowsidont
2 replies
21h11m

What did you love about it?

rafaelero
1 replies
19h57m

Cheaper, faster and longer context window would be enough of an advancement for me. But then we also had the Assistant API that makes our lives as AI devs much easier.

victor9000
0 replies
19h6m

Seriously, the longer context window is absolutely amazing for opening up new use-cases. If anything, this shows how disconnected the board is from its user base.

marcinzm
2 replies
21h4m

He didn’t say developers, he said researchers.

rafaelero
1 replies
19h54m

He said in his opinion Dev Day was an "absolute embarrassment".

marcinzm
0 replies
19h31m

And his second tweet explained what he meant by that.

campbel
1 replies
21h2m

Pretty insightful I thought. The people who joined to create AGI are going to be underwhelmed by the products made available on dev day.

LightMachine
0 replies
20h23m

I was underwhelmed, but I got -20 upvotes on Reddit for pointing it out. Yes, products are cool, but I'm not following OpenAI for another App Store, I'm following it for AGI. They should be directing all resources to that. As Sam said himself: once it is there, it will pay for itself. Settling for products around GPT-4 just sends the message that the curve has stagnated and we aren't getting more impressive capabilities. Which is saddening.

chancancode
0 replies
20h48m

I think you are missing the point, this is offered for perspective, not as a “take”.

I find this tweet insightful because it offered a perspective that I (and it seems like you also) don’t have which is helpful in comprehending the situation.

As a developer, I am not particularly invested nor excited by the announcements but I thought they were fine. I think things may be a bit overhyped but I also enjoyed their products for what they are as a consumer and subscriber.

With that said, to me, from the outside, things seemed to be going fine, maybe even great, over there. So while I understand the words in the reporting (“it’s a disagreement in direction”), I think I lack the perspective to actually understand what that entails, and I thought this was an insightful viewpoint to fill in the perspectives that I didn’t have.

The way this was handled still feels iffy to me, but with that perspective I can at least imagine what may have driven people to take such drastic actions in the first place.

tucnak
0 replies
2h43m

He is not exactly an insider, but seems broadly aligned/sympathetic/well-connected with the Ilya/researchers faction, his tweet/perspective was a useful proxy into what that split may have felt like internally.

Great analysis, thank you.

gmt2027
0 replies
22h31m

fullshark
18 replies
23h37m

Man this still seems crazy to me. The idea that this tension between commercial/non-commercial aspirations got so bad they felt the nuclear option of a surprise firing of Altman was the only move available doesn't seem plausible to me.

I believe this decision was ego and vanity driven with this post-hoc rationalization that it was because of the mission of "benefiting humanity."

superhumanuser
9 replies
23h21m

I wonder if the "benefiting humanity" bit is code for anti mil-tech. What if Sam wasn't being honest about a relationship with a consumer that weaponized OpenAI products against humans?

0xDEF
6 replies
22h41m

Ilya has Israeli citizenship and has toured Israel and given talks at Israeli universities including one talk with Sam Altman.

He is not anti mil-tech.

superhumanuser
2 replies
21h28m

Unless the mil-tech was going to their enemies.

0xDEF
1 replies
21h22m

I don't think anyone at OpenAI was planning to give mil-tech to Iran and Iranian proxies like Hamas, Hezbollah, and the Houthis.

superhumanuser
0 replies
21h13m

Ya. You're right. Time to let the theory die.

kromem
1 replies
18h46m

That's a pretty big leap of logic there.

0xDEF
0 replies
17h56m

Israel is a heavily militarized country. The country would not be able to exist without the latest military tech. Ilya flirting with the tech scene of Israel is a very good indicator that he is not anti mil-tech.

someNameIG
0 replies
22h14m

Did those talks have anything to do with mil-tech though?

dave4420
1 replies
23h13m

Could be.

Or it could be about the alignment problem. Are they designing AI to prioritise humanity’s interests, or its corporate masters’ interests? One way is better for humanity, the other brings in more cash.

superhumanuser
0 replies
21h29m

But the board accused Sam of not being "consistently candid". Alignment issues could stand on their own ground for cause and would have been better PR too. Instead of the mess they have now.

BryantD
2 replies
23h18m

What if the board gave Altman clear direction, Altman told them he accepted it, and then went off and did something else? This hypothesis doesn’t require the board’s direction to be objectively good.

fullshark
1 replies
23h12m

IDK, none of us are privy to any details of the festering tensions, or whether there was a "last straw" scenario that would make sense if it were explained. Something during that dev day really pissed some people off, that's for sure.

Given what the picture looks like today, though, that's my guess. Firing Altman is an extreme scenario! Lots of CEOs have tensions with their boards over various issues; otherwise the board is pointless!

BryantD
0 replies
21h12m

I strongly agree, yeah! The trick is making those tensions constructive and no matter who's at fault (could be both sides), someone failed there.

fulladder
1 replies
20h29m

Yeah, the surprise firing part really doesn't make much sense. My best guess is that if you look at the composition of this board (minus Altman and Brockman), it seems to be mostly academics and the wife of a Hollywood actor. They may not be very experienced in the area of tech company boards, and might not have been aware that there are smoother ways to force a CEO out that are less damaging to your organization. Not sure, but that's the best I can figure out based on what we know so far.

cthalupa
0 replies
19h37m

it seems to be mostly academics and the wife of a Hollywood actor

This argument would require you ignore both Sutskever himself as well as D'Angelo, who was CTO/VP of Engineering at Facebook and then founding CEO of Quora.

x86x87
0 replies
23h29m

This is not commercial vs non commercial imho. This is the old classic humans being humans.

swatcoder
0 replies
23h30m

In a clash of big egos, both are often true. Practical differences escalate until personal resentment forms and the parties stop engaging with due respect for each other.

Once that happens, real and intentional slights start accumulating and de-escalation becomes extremely difficult.

edgyquant
0 replies
21h25m

Maybe, but I have a different opinion. I have worked at startups before where we were building something both technically interesting and clearly a super value-add for the business domain. I've then witnessed PMs be brought on who cared little about any of that and instead tried to converge on the exact same enshittified product as everywhere else, with little care for or understanding of the real solutions we were building towards. When this happened I knew within a month that the vision of the company, and its goals outside of generating investor returns, was dead if this person had their way.

I've specifically seen the controlling members of a company realize this after 7-8 months, and when that happens it's a quick change of course. I can see why you'd think it's ego, but I think it's closer to my previous situation than what you're stating here. This is a pivotal course correction, and those are never pretty; this one just happens to be the most public ever due to the nature of the business and company.

jaybrendansmith
16 replies
23h26m

This was a very personal firing in my opinion. Unless other, really damaging behaviors emerge, no responsible board fires their CEO with such a lack of care for the corporate reputation and their partners unless the firing is a personal grievance connected to an internal power play. This should be embarrassing to everyone involved, and sama has a real grievance here. Likely legal repercussions. Of course if they really did just invent AGI, and sama indicated an intent to monetize, that might cause people to act without caution if the board is AGI doomers. But I'd think even in that case it would be an argument best worked out behind closed doors. This reminds everybody of Jobs of course, but perhaps another example is Gary Gygax at TSR back in the 80s.

0xDEF
5 replies
22h27m

responsible board

The board was irresponsible and incompetent by design. There is one OpenAI board member who has an art degree and is part of some kind of cultish "singularity" spiritual/neo-religious thing. That individual has also never had a real job and is on the board of several other non-profits.

Doctor_Fegg
3 replies
20h54m

There is one OpenAI board member who has an art degree

Oh no! Everyone knows that progress is only achieved by people with computer science degrees.

xvector
2 replies
20h39m

People with zero experience in technology should simply not be allowed to make decisions about it.

This is how you get politicians that try to ban encryption to "save the children."

skywhopper
1 replies
20h19m

Why do you assume someone with an art degree has “zero experience with technology”? I assume many artists these days are highly sophisticated users of technology.

xvector
0 replies
18h11m

And we know politicians use technology too. Yet here we are.

deeviant
0 replies
20h14m

But they are married to somebody famous, so obviously qualified.

fulladder
4 replies
20h57m

Altman's not going to sue. Right now he has the high ground and the board is the one that looks petty and immature. It would be dumb for him to do anything that reverses this dynamic.

Altman is going to move on and announce a new venture in the coming weeks. Whether that venture is in AI or not in AI will be very revealing about what he truly believes are the prospects for the space.

Brockman and the others will likely do something new in AI.

deeviant
1 replies
20h15m

Altman is a major investor in the company behind the Humane AI Pin, which does not inspire confidence in his ability to find a new home for his "brilliance."

peanuty1
0 replies
15h39m

He's also the founder and CEO of WorldCoin.

jnsaff2
0 replies
20h18m

It would be dumb for him to do anything that…

I admire you, but these days dumb is kinda the norm. Look at the other Sam, for example. It's really hard to keep your mouth shut and do smart things when you think really highly of yourself.

bmitc
0 replies
11h49m

Right now he has the high ground and the board is the one that looks petty and immature.

This is an interesting take. Didn't the board effectively claim that he was lying to or misleading them? If that's true, how does doing that and being called out on it give someone the high ground? By many accounts that have come out, it seems Altman had several schemes in the works going against the charter of the non-profit OpenAI.

Whether that venture is in AI or not in AI will be very revealing about what he truly believes are the prospects for the space.

Why is he considered an oracle in this space?

busterarm
1 replies
20h28m

Gygax had fucked off to Hollywood and was too busy fueling his alcohol, cocaine and adultery addictions to spend any time actually running the company. All while TSR was losing money like crazy.

The company was barely making $30 million a year while $1.5 billion in debt... in the early '80s.

Even then, Gygax's downfall was the result of his own coup, where he ousted Kevin Blume and brought in Lorraine Williams. She bought all of Blume's shares and within about a year removed any control that Gygax had over the company and canceled most of his projects. He resigned a year later.

jaybrendansmith
0 replies
18h47m

Wow I did not know all of THAT was going on. What goes around...

xapata
0 replies
21h3m

Gygax? The history books don't think much of his business skills, starting with the creation of AD&D as a fiction to avoid paying royalties to Arneson.

jacquesm
0 replies
18h36m

Jobs was a liability when he was fired and arguably without being fired would have never matured. Formative experience if there ever was one.

cmrdporcupine
0 replies
20h27m

I'll just say it: Jobs being pushed out was the right decision at the time. He was an abusive and difficult personality, the Macintosh was at the time a sales failure, and he played internal team and corporate politics that pitted team against team (e.g. Lisa vs Mac) and undermined unity and success.

Notable that when he came back, while he was still a difficult personality, the other things didn't happen anymore. Apple after the return of jobs became very good at executing on a single cooperative vision.

w10-1
12 replies
20h12m

It's hard to believe a Board that can't control itself or its employees could responsibly manage AI. Or that anyone could manage AGI.

There is a long history of governance problems in nonprofits (see the transaction-cost economics literature on point). Their ambiguous goals induce politics. One benefit of profit-driven boards is that their goals involve only well-understood risk trade-offs between growth now and growth later, and the board members are selected for their actual stake in that actual goal.

This is the problem with religious organizations and ideological governments: they can't be trusted, because they will be captured by their internal politics.

I think it would be much more rational to make AI/AGI an entirely for-profit enterprise, BUT reverse the liability defaults and require that they pay all external costs resulting from their products.

Transaction cost economics shows that, in theory, it doesn't matter where liability is allocated so long as the transaction cost of redistributing liability is near zero (i.e., contracting in advance and tort after the fact are cheap), because then parties just work it out. Government or laws are required only to make up for the actual non-zero dispute transaction cost by establishing settled expectations.

The internet and software generally has been a domain where consumers have NO redress whatsoever for exported costs. It's grown (and disrupted) fantastically as a result.

So to control AI/AGI, make it for-profit, but flip liability to require all exported costs to be paid by the developer. That would ensure applications are incredibly narrow AND have net-positive social impact.

dividendpayee
4 replies
19h34m

Yeah that's right. There's a blogger in another post on HN that makes the same point at the very end: https://loeber.substack.com/p/a-timeline-of-the-openai-board

DebtDeflation
2 replies
18h40m

From that link:

I could not find anything in the way of a source on when, or under what circumstances, Tasha McCauley joined the Board.

I would add, "or why she's on the board or why anyone thought she was qualified to be on the board".

At least with Helen Toner the intent was likely just to add a token AI Safety academic to pacify "concerned" Congressmen.

I am kind of curious how Adam D'Angelo voted. If he voted against removing Sam that would make this even more of a farce.

fotta
1 replies
18h4m

D’Angelo had to have voted in favor because otherwise they don’t get a four vote majority.

DebtDeflation
0 replies
17h57m

You only need 4 votes to have a majority if Sam and Greg were present for the vote, which neither were. Ilya + the 2 stooges voting in favor and D'Angelo voting against would be a 3-1 majority.

CamperBob2
0 replies
15h33m

Super interesting link there. You should submit it, if no one has yet.

"Governance can be messy. Time will be the judge of whether this act of governance was wise or not." (Narrator: specifically, about 12 hours.) "But you should note that the people involved in this act of corporate governance are roughly the same people trying to position themselves to govern policy on artificial intelligence.

"It seems much easier to govern a single-digit number of highly capable people than to “govern” artificial superintelligence. If it turns out that this act of governance was unwise, then it calls into serious question the ability of these people and their organizations (Georgetown’s CSET, Open Philanthropy, etc.) to conduct governance in general, especially of the most impactful technology of the hundred years to come. Many people are saying we need more governance: maybe it turns out we need less."

photochemsyn
2 replies
19h33m

The solution is to replace the board members with AGI entities, isn't it? Just have to figure out how to do the real-time incorporation of current data into the model. I bet that's an active thing at OpenAI. Seems to have been a hot discussion topic lately:

https://www.workbyjacob.com/thoughts/from-llm-to-rqm-real-ti...

The real risk is that some government will put the result in charge of their national defense system, aka Skynet, not that kids will ask it how to make illegal drugs. The curious silence on military-industrial applications of LLMs makes me suspect this is part of the OpenAI story... Good plot for a novel, at least.

ethanbond
1 replies
19h20m

The real risk is that some government will put the result in charge of their national defense system, aka Skynet, not that kids will ask it how to make illegal drugs.

These cannot possibly be the most realistic failure cases you can imagine, can they? Who cares if "kids" "make illegal drugs"? But yeah, if kids can make illegal drugs with this tech, then actual bad actors can make actual dangerous substances with this tech.

The real risk is manifold and totally unforeseeable the same way that a 400 Elo chess player has zero conception of "the risks" that a 2000 Elo player will exploit to beat them.

photochemsyn
0 replies
18h56m

Every bad actor who wants to make dangerous substances can find that information in the scientific literature with little difficulty. An LLM, however, is probably not going to tell you that the most likely outcome of a wannabe chemist trying to cook up something or other from an LLM recipe is that they'll poison themselves.

This generally fits a notion I've heard expressed repeatedly: today's LLMs are most useful to people who already have some domain expertise, it just makes things faster and easier. Tomorrow's LLMs, that's another question, as you imply.

__loam
1 replies
19h11m

I appreciate this argument, but I also think naked profit seeking is the cause of a lot of problems in our economy and there are qualities that are hard to quantify when you structure the organization around it. Blindly following the economic argument can also cause problems, and it's a big reason why American corporate culture moved away from building a good product first towards maximizing shareholder value. The OpenAI board certainly seems capricious and impulsive given this decision though.

lapphi
0 replies
18h11m

On board with this. Arguing that a for-profit is somehow the moral position over a non-profit, because money is tangible while the idea of doing good is not well-defined... feels like something a Rockefeller-owned newspaper from the Industrial Revolution would have printed.

willdr
0 replies
16h10m

Yeah, there are no governance problems in for-profit companies that have led to, for example, the smoking epidemic, the opioid epidemic, the impending collapse of the planet's biosphere, all for the sake of a dime.

patcon
0 replies
18h56m

Their ambiguous goals induce politics. [...] This is the problem with religious organizations and ideological governments: they can't be trusted, because they will be captured by their internal politics.

Yes, of course. But that's because "doing good" is by definition much more ambiguous than "making money". It's way higher dimension, and it has uncountable definitions.

So nonprofits will by definition involve more politics at the human level. I'd say we must accept that if we want to live amongst the actions of nonprofits rather than just for-profits.

To claim that "politics" are a reason something "can't be trusted" is akin to saying the involvement of human affairs means something can't be trusted (over computers). We must imagine effective politics, or else we cannot imagine effective human affairs -- only the mechanistic affairs of simple optimization systems (like capitalist markets).

torstenvl
12 replies
20h26m

It isn't a coup. A coup is when power is taken, and taken by force, not when your constituents decide you no longer represent their interests well. That's like describing voting out a politician as a coup.

Calling it a coup falsely implies that OpenAI in some sense belongs to Sam Altman.

If anything is a coup, it's the idea that a founder can incorporate a company and sell parts of it off, and nevertheless still own it. It's the wresting of control from the actual owners in favor of a public facing executive.

crazygringo
8 replies
19h1m

No, you're confusing business with politics. You're right that a literal coup d'état is the forced takeover of a state with the backing of its own military.

But in the business and wider world, a coup (without the d'état part) is, by analogy, any takeover of power that is secretly planned and executed as a surprise. (We can similarly talk about a company "declaring war", which means to compete by mobilizing all resources towards a single purpose, not to fire missiles and kill people.)

This is absolutely a coup. It was an action planned by a subset of board members in secret, taken by a secret board meeting missing two of its members (including the chair), where not even Microsoft had any knowledge or say, despite their 49% investment in the for-profit corporation.

I'm not arguing whether it's right or wrong. But this is one of the great boardroom coups of all time -- one for the history books. There's a reason it's front-page news, not just on HN but in the NYT and WSJ as well.

torstenvl
5 replies
18h44m

Your post is internally inconsistent. Defining a coup as "any takeover of power" is inconsistent with saying that firing Sam Altman is a coup. CEOs do not have and should not have any power vis-à-vis the board. It's right there in the name.

Executives do not have any right to their position. They are an officer, i.e., an agent of the stakeholders. The idea that the executive is the holder of the power and it's a "coup" if they aren't allowed to remain is disgustingly reminiscent of Trumpian stop-the-steal rhetoric.

crazygringo
2 replies
18h26m

You're ignoring the rest of the definition I provided. I did not say it was "any takeover of power". Please read the definition I gave in full.

And I am not referring to the CEO status of Altman at all. That's not the coup part.

What I'm referring to is the fact that beyond his firing as CEO, he and the chairman were removed from their board seats, as a surprise planned and executed in secret. That's the coup. This is not a board firing a CEO who was bad at their job; this is two factions at the company where one orchestrates a total takeover of the other. That's a coup.

Again, I'm not saying whether this is good or bad. I'm just saying, this is as clear-cut of a coup as there can be. This has nothing in common with the normal firing of a CEO accomplished out in the open. This is four board members removing the other two in secret. That's a coup if there ever was one.

torstenvl
1 replies
18h3m

You're ignoring the rest of the definition I provided.

That isn't how definitions work. Removals from power that are by surprise and planned in secret are a strict subset of removals from power.

replwoacause
0 replies
10h46m

If this wasn’t a coup, what would have made it one?

paulddraper
0 replies
18h5m

Okay, so a CEO doesn't have any power to seize....

Does the chairman of the board have any power?

gfodor
0 replies
18h29m

All you’re saying here is that it’s never possible to characterize a board ousting a CEO as a coup. People do, because it’s a useful way to distinguish what happened here from the many other ways this can happen that involve far less deception and so on.

polishdude20
1 replies
16h17m

I think of it as more of a mutiny.

maxlamb
0 replies
14h27m

A mutiny is when the entire boat’s crew rebels at once; a coup is when only a few high-level, powerful people remove the folks at the top.

username332211
0 replies
19h45m

It's not uncommon to describe the fall of a government as a "parliamentary" coup, if the relevant proceedings of a legislative assembly are characterized by haste and intrigue, rather than debate and deliberation.

For example, the French Revolution saw three such events commonly described as coups - the fall of Robespierre on the 9th of Thermidor, and the Directory's (technically legal) annulments of elections on the 18th of Fructidor and the 22nd of Floréal. The last one was even somewhat bloodless.

pdntspa
0 replies
19h50m

voting out a politician as a coup.

That is literally a political coup.

Barrin92
0 replies
19h39m

Yup. The only correct governance metaphor here is the opposite. It's a defense of OpenAI's constitution. The company, effectively like Mozilla, was deliberately structured as a non-profit in which the for-profit arm exists to raise capital to pursue the mission of the former. Worth paying attention to what they have to say on their structure:

https://openai.com/our-structure

especially this part:

https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06...

wolverine876
11 replies
23h11m

In the past, many on HN complained that OpenAI had abandoned its public good mission and had morphed into a pseudo-private for-profit. If that was your feeling before, what do you think now? Are you relieved or excited? Are you the dog who caught the car?

At this point, on day 2, I am heartened that their mission was most important, even at the heart of maybe the most important technology since nuclear power, or writing, or democracy. I'm heartened at the board's courage - certainly they could anticipate the blowback. This change could transform the outcome for humanity, and the board's job was that stewardship, not Altman's career (many people in SV have lost their jobs), not OpenAI's sales numbers. They should be fine with the overwhelming volume of investment available to them.

Another way to look at it: How could this be wrong, given that their objective was not profit, and they can raise money easily with or without Altman?

On day 3 or day 30 or day 3,000, I'll of course come at it from a different outlook.

orbital-decay
4 replies
22h53m

If the rumors are correct and ideological disagreement was at the core of this, OpenAI is not going to be open anyway, as Sutskever wants more safety, which implies being as closed as possible. Whether it's "public good" is in the eye of the beholder, as there are multiple mutually incompatible concerns about AI safety, all of which have merit. The future balance between those will be determined by unpredictable events, as always.

wolverine876
1 replies
22h30m

Whether it's "public good" is in the eye of the beholder

That's too easy an answer, used to dismiss difficult questions and embrace amorality. There is public good, sometimes easy to define and sometimes hard. If ChatGPT is used to cure cancer, that would be a public good. If it's used to create a new disease that kills millions, that's obviously bad. Obviously, some questions are harder than that, but it doesn't excuse us from answering them and getting it right.

orbital-decay
0 replies
22h7m

The issue with giving everyone open access to uncontrolled everything is obvious, it does have merit indeed. The terrible example of unrestricted social media as "information superconductor" is alive and breathing, supposedly it led to at least one actual physical genocide within the last decade. The question that is less obvious to some is: do these safety concerns ultimately lead us into the future controlled by a few, who will then inevitably exploit everyone to a much worse effect? That it's already more or less the status quo is not an excuse; it needs to be discussed and not dismissed blindly.

It's a very political question, and HN somewhat despises politics. But OpenAI is not an apolitical company either; they are ideologically driven and have AGI (defined as "capable of replacing humans in economically important jobs") as their stated target. Your distant ancestors (assuming they were from Europe) were able to escape totalitarianism and feudalism, starting from the Middle Ages, when the margins were mile-wide compared to what we have now. AI controlled by a few is way more efficient and optimized; will you even have a chance before your entire way of thinking is turned in the desired direction?

I'm from a country that lives in your possible future (Russia), I've seen a remarkably similar process from the inside, so this question seems very natural to me.

hliyan
1 replies
14h17m

Arguing that a company named OpenAI choosing safety is bad, because safety means being as closed as possible, is a very questionable linguistic contortion. Most unsafe things happen behind closed doors.

gnulinux
0 replies
9h39m

Ilya himself said he's against fully open source models because they're not safe enough. He's definitely against open source, my hunch is that we will see OpenAI being less open after his takeover.

Full interview here ("No Priors Ep. 39 | With OpenAI Co-Founder & Chief Scientist Ilya Sutskever" from 2 weeks ago): https://www.youtube.com/watch?v=Ft0gTO2K85A

s1artibartfast
1 replies
22h3m

OpenAI had abandoned its public good mission and had morphed into a psuedo-private for-profit.

They should be fine with the overwhelming volume of investment available to them.

Another way to look at it: How could this be wrong, given that their objective was not profit, and they can raise money easily with or without Altman?

This wasn't just some cultural shift. The board of OpenAI created a separate for-profit legal entity in 2019. That for-profit entity received overwhelming investment from Microsoft to make money. Microsoft, early investors, and employees all have a stake and want returns from this for-profit company.

The separate non-profit OpenAI has a major problem on its hands if it thinks its goals are no longer aligned with the co-owners of the for-profit company.

cthalupa
0 replies
19h59m

The thing here is that the structure of these companies and the operating agreement for the for-profit LLC all effectively mean that everyone is warned going in that the for-profit is beholden to the mission of the non-profit, that there might be zero return on investment, and that there may never be profit at all.

The board answers to the charter, and they are legally obligated to act in the interest of the mission outlined in the charter. Their charter says "OpenAI’s mission is to ensure that artificial general intelligence (AGI) [...] benefits all of humanity" - not to do that "unless it'd make more money for our for-profit subsidiary to focus on commercializing GPT".

tfehring
0 replies
22h7m

I think it was a good thing that, in hindsight, the leading AI research company had a strong enough safety focus that it could do something like this. But that’s only the case as long as OpenAI remains the leading AI research company going forward, and after yesterday’s events I think that’s unlikely. Pushing for more incremental changes at OpenAI, possibly by getting the board to enact stronger safety governance, would have been a better outcome for everyone.

layer8
0 replies
22h46m

Much of the criticism was that they are not open enough. I see no indication that this will be changing, given the AI safety concerns of the remaining board.

Nevertheless, I agree that the firing was probably in line with their stated mission.

itronitron
0 replies
23h4m

It's a lesson to any investor that doesn't have a seat on the board, what goes around comes around, ha ha :}

deeviant
0 replies
20h10m

You seem super optimistic that backstabbing power-plays will result in improvement.

I see it as far more likely that OpenAI will lock down its tech even more, in the name of "safety", but I also predict it will always be possible to pay for their services nevertheless.

Nothing in this situation makes me think OpenAI will be any more "open."

jimmydoe
10 replies
23h25m

Many compare Altman to 1985 Jobs, but if we believe what's said about the conflict of mission, shouldn't he be the sugar water guy for money?

cmrdporcupine
8 replies
23h3m

But that's actually what Jobs turned out to be? Woz and others were the engineering geniuses at Apple, and Jobs turned out to be really good at finding and identifying really good sales and branding hooks. See-through colourful boxes, "lickable" UIs, neat-o minimalistic portable music players, flick-flick-flick touch screens, and "One More Thing" presentations.

Jobs didn't invent the Lisa and Macintosh. Bill Atkinson, Andy Hertzfeld, Larry Tesler etc did. They were the tech visionaries. Some of them benefited from him promoting their efforts while others... (Tesler mainly) did not.

Nothing "wrong" with any of that, if your vision of success is market success... but people need to be honest about what Jobs was... not a technology visionary, but a marketing visionary. (Though in fact the original Macintosh was a market failure for a long time)

In any case comparing Altman with Jobs is dubious and a bit wanky. Why are people so eager to shower this guy with accolades?

naasking
6 replies
22h53m

I do think Jobs' engineering skill is oversold, but he was also more than just marketing. He had a vision for how technology should integrate with people's lives that drove great ergonomic and UX choices with a kind of polish that was lacking everywhere else. Those alone revolutionized personal computing in many ways. It's hard for younger people to even imagine how difficult it was to get connected to the internet at one point, and iMacs made it easy.

cmrdporcupine
4 replies
22h50m

Well I'm not one of those "younger people" though not sure if you were aiming that at me or not.

I think it's important to point out that Jobs could recognize nice UX choices, but he couldn't author them. He helped prune the branches of the bonsai tree, but couldn't grow it. On that he leaned on intellects far greater than his own, which he was pretty good at recognizing and cultivating. Though in fact he alienated and pushed away just as many as he cultivated.

I think we could do better as an industry than going around looking for more of that.

Karrot_Kream
1 replies
21h29m

I'm curious about this perspective. Even from the Slashdot days (my age limit), techie types have hated Jobs and showered Woz with praise as the true genius. Tech culture has claimed this for a long time. Is your argument that tech people need more broad acclaim? And if so, does this come from a sense of being put down?

I used to broadly believe that Jobs-types were over-fluffed charismatic magnets myself by hanging out in these places, until I started working and found out how useful they were at doing things I couldn't or didn't want to do. I don't think they deserve more praise than the underlying technical folks, but that they deserve equal praise. Sort of like how in a two-parent household, different parents often end up shouldering different responsibilities, but that doesn't make one parent with certain responsibilities the true parent.

cmrdporcupine
0 replies
21h21m

I guess it depends on what things you want to do, and how you define success, doesn't it?

If we're stuck with the definitions of success and excellence that are dominant right now, then, sure, I see why people would be enamored with someone like a Jobs or a Zuck.

But as an engineer I know I have different motivations than these people. And I think that's what people who make these kinds of arguments are drawing on.

There is a class of person whose success comes from finding creative and smart people and finding ways to exploit and direct them for their own ends. There's a genius in that, for sure. I am just not sure I want to celebrate it.

I just want to make things and help other people who make these things.

To put it another way, I'd take, say, Smalltalk over MacOS, if I have to make the choice.

naasking
0 replies
18h40m

I think it's important to point out that Jobs could recognize nice UX choices, but he couldn't author them. He helped prune the branches of the bonsai tree, but couldn't grow it.

Engineers are great at solving problems given a set of constraints. They are not necessarily all that good at figuring out what constraints ought to be when they are given open-ended, unconstrained tasks. Jobs was great at defining good constraints. You might call this pruning, and if you intended that pejoratively then I think you're underselling the value of this skill.

hackshack
0 replies
18h35m

This reminds me of the Calculator Construction Set story. I like its example of a builder (engineer) working with a curator (boss), and solving the problem with toolmaking.

Engineer was building a calculator app, and got a little tired of the boss constantly requesting changes to the UI. There was no "UI builder" on this system so the engineer had to go back and adjust everything by hand, each time. Back and forth they went. Frustrating.

"In a flash of inspiration," as the story goes, the engineer parameterized all the UI stuff (line widths, etc.) into drop-down menus, so boss could fiddle with it instead of bothering him. The UI came together quickly thereafter.

https://www.macfolklore.org/Calculator_Construction_Set.html
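
To make the trick concrete, here is a minimal sketch of the same idea in modern terms (Python/Tkinter chosen purely for illustration; the original was Lisa-era code, and all names here are hypothetical): expose the UI's magic numbers as live controls so the curator can explore the design space without round-tripping through the builder.

    # Hypothetical illustration of the Calculator Construction Set idea:
    # instead of hard-coding layout parameters, expose them as a menu the
    # non-engineer can fiddle with while the app is running.
    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=200, height=120)
    canvas.pack()

    line_width = tk.IntVar(value=1)

    def redraw(*_):
        # Re-render the mock "calculator" body with the selected width.
        canvas.delete("all")
        canvas.create_rectangle(20, 20, 180, 100, width=line_width.get())

    line_width.trace_add("write", redraw)  # redraw when the parameter changes
    tk.OptionMenu(root, line_width, 1, 2, 3, 5, 8).pack()  # drop-down of candidate widths

    redraw()
    root.mainloop()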

etempleton
0 replies
20h22m

Yes, people love to be dismissive of Jobs and call him just a marketing guy, but that is incredibly reductive for a guy who was able to cofound Apple and then come back and bring it back from near death to become the biggest company in the world. Marketing alone can’t do that.

Jobs had great instincts for products and a willingness to create new products that would eat established products and revenue streams. He was second to none at seeing what technology could be used for and putting teams in place that could create consumer products with those technologies and understanding when the technologies weren’t ready yet.

Look at what Apple achieved under his leadership and what it didn’t achieve without his leadership. Being dismissive of Jobs contributions is either a bad faith argument or one out of ignorance.

sashank_1509
0 replies
13h17m

It seems like people these days can’t even accurately describe what Steve Jobs was: he was a leader. He was a genius at managing people to work for him. Steve Wozniak was not, which is why Jobs could make Pixar, NeXT, and of course Apple. Just because he didn’t have the hard skill of engineering doesn’t mean he was useless. Rarely is anything impressive made by a single person; everything is almost always made by teams, and generally large teams. Large teams especially can only function under a great leader, and Jobs was a great leader for a myriad of reasons, which is why he achieved success at many multiples of magnitude compared to Steve Wozniak.

bart_spoon
0 replies
19h50m

Yes, this was my thought when seeing those comparisons as well.

gustavus
10 replies
23h43m

Here's the thing. I've always been kind of cold on OpenAI claiming to be "Open" when it was clearly a for-profit thing, and I was concerned about the increasing move to the commercialization of AI that Sam was taking.

But I am much more concerned to be honest those who feel they need to control the development of AI to ensure it is "aligns with their principles", after all principles can change, and to quote Lewis "Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience."

What we really need is another Stallman, his idea was first and foremost always freedom, allowing each individual agency to decide their own fate. Every other avenue will always result in men in suits in far away rooms dictating to the rest of the world what their vision of society should be.

andy99
5 replies
23h30m

If the board was really serious about doing good over making profit (if this is indeed what the whole thing is about), they'd open source GPT-4/5 with a GPL-style license.

swatcoder
4 replies
23h24m

That’s not the sense of open they’ve organized around. In fact, it’s antithetical to it.

Theirs is a technocratic sense of open, where select credentialed experts collaborate on a rational good without a concentration of control by specific capitalists or nations.

surrealize
0 replies
22h21m

This definition is an abuse of the word "open"

piuantiderp
0 replies
23h14m

I think your technocratic sense of open is misplaced. At this point OpenAI is clearly controlled by the US and it's ok. If anything one wonders if Altman's ouster has a geopolitical angle, cozying up to other countries and such.

naveen99
0 replies
22h45m

Yeah, not open in the open source or rms way. It’s “for the benefit of all”, with the “benefits” decided by the OpenAI board, a la communism, with central planning by “the party”.

Surprisingly capitalism actually leads to more benefits for all, because of the decentralization and competition.

cmrdporcupine
0 replies
22h53m

I guess I struggle to see how the word "open" can be applied to that, but I also remember how that word was tossed around in the late 80s and early 90s during the Unix wars, and, yeah, the shoe fits.

The question is how we got to be so powerless as a society that this is the only palette of choices we get to choose from: technocratic semi-autistic engineer-intellects who want to hoist AGI on the world vs self-obsessed tech bro salesdudes who see themselves as modern day Howard Roarks.

That's it.

Anyways, don't mind me, gonna crawl into a corner and read Dune.

esafak
2 replies
23h18m

I think "don't extinguish humanity or leave most of them unemployed" is a principle everyone can get and stay behind.

cmrdporcupine
1 replies
22h47m

You seem to have far more faith in others' ethical compass than I think is justified by historical evidence.

It's amazing what people will do when the size of their paycheque (or ego) is tied to it.

I don't trust anybody at OpenAI with the keys to the car, but democratic choice apparently doesn't play into it, so here we are.

esafak
0 replies
22h33m

I meant as a basic principle. Individuals and organizations who breach the pact can be punished by legal means.

brookst
0 replies
23h33m

Be the change you want to see.

sheepscreek
8 replies
20h8m

Here’s another theory.

the ousting was likely orchestrated by Chief Scientist Ilya Sutskever over concerns about the safety and speed of OpenAI's tech deployment.

Who was first to launch a marketplace for GPTs/agents? It wasn’t OpenAI, but Poe by Quora. Guess who sits on the OpenAI non-profit board? Quora CEO. So at least we know where his interest lies with respect to the vote against Altman and Greg.

svnt
6 replies
19h39m

This is a really good point. If a non-profit whose board you sit on releases a product that competes with a product from the corporation you manage, how do you manage that conflict of interest? Seems he should have stepped down.

loeber
2 replies
19h16m

Yeah, I just wrote about this as well on my substack. There were two significant conflicts of interest on the OpenAI board. Adam D'Angelo should've resigned once he started Poe. The other conflict was that both Tasha McCauley and Helen Toner were associated with another AI governance organization.

svnt
1 replies
15h53m

Thanks — the history of board participation you sleuthed is interesting for sure:

https://loeber.substack.com/p/a-timeline-of-the-openai-board

loeber
0 replies
26m

Thank you!

IAmGraydon
1 replies
16h3m

How does Poe compete with OpenAI? It's literally running OpenAI's models.

svnt
0 replies
15h54m

Poe forms a layer of indirection and customization above the models. They feed data through the API and record those interactions, siphoning off what would have been OpenAI customer data.

You could have maybe argued it either way until the most recent OpenAI updates, depending on what you thought OpenAI’s strategy would be, but since last week's release of custom GPTs they are now clearly in direct competition.
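
To make "layer of indirection" concrete, here is a minimal sketch of what such a middleman looks like (hypothetical, not Poe's actual code; the endpoint and payload shape are OpenAI's public chat completions API, everything else is made up for illustration): forward the user's prompt to the API and keep a copy of both sides of the exchange on the way through.

    # Sketch of an indirection layer over the OpenAI chat completions API.
    import json
    import time
    import requests

    API_URL = "https://api.openai.com/v1/chat/completions"

    def relay(prompt: str, api_key: str, log_path: str = "interactions.jsonl") -> str:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            json={"model": "gpt-4", "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        answer = resp.json()["choices"][0]["message"]["content"]

        # The middleman keeps the interaction data that would otherwise
        # be visible only to OpenAI.
        with open(log_path, "a") as log:
            log.write(json.dumps({"ts": time.time(), "prompt": prompt, "answer": answer}) + "\n")
        return answer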

jacquesm
0 replies
18h39m

Yes, he should have.

atleastoptimal
0 replies
19h15m

The current interim CEO also spearheaded ChatGPT's development. It's the biggest product-driven, consumer-market move the company's ever made. I can't imagine it's simply a pure "Sam wanted profits and Ilya/board wanted pure research" hard-line-in-the-sand situation.

g42gregory
7 replies
21h28m

My feeling is that the commercial side of the OpenAI brand is gone. How could OpenAI customers depend on the company, when the non-profit board goes against their interests (by slowing down development and giving them an inferior product)?

On the other hand, the AGI side of the OpenAI brand is just fine. They will continue the responsible AGI development, spearheaded by Ilya Sutskever. My best wishes for them to succeed.

I suspect Microsoft will be filing a few lawsuits and sabotaging OpenAI internally. It's an almost $3Tn company and they have an army of lawyers. They can do a lot of damage, especially when there may not be much sympathy for OpenAI in Silicon Valley's VC circles.

croes
4 replies
20h51m

It's a bad idea to make yourself dependent on a new service from the outset.

They could have gone bankrupt, been sued into the ground, taken over by Microsoft...

Just look at the turmoil, just because they fired their CEO.

Was the success based on GPT or the CEO?

The former is still there and didn't get worse.

Slower growth doesn't mean shrinking.

g42gregory
3 replies
20h36m

As an AI professional, I am very interested to hear about OpenAI's ideas, directions, safety programs, etc...

As a commercial customer, the only thing I am interested in is the quality of the commercial product they provide to me. Will they have my interests in mind going forward? Will they devote all their energy to delivering the best, most advanced product to me? Will robust support and availability be there in the future? Given the board's publicly stated priorities (which I was not aware of before!), I am not so sure anymore.

croes
2 replies
19h51m

Will they have my interests in mind going forward? Will they devote all their energy in delivering the best, most advanced product to me?

Sorry to burst your bubble, but the primary motivation of a for-profit company is ... profit.

If they make more money by screwing you, they will. Amazon, Google, Walmart, Microsoft, Oracle, etc.

The customer is never a priority, just a means to an end.

g42gregory
1 replies
18h56m

Absolutely. I totally agree with the sentiment. But, at least make an effort to pretend that you care! Give me something... OpenAI does not even pretend anymore. :-) The board was pretty clear. That's not a good sign for the customers.

croes
0 replies
16h55m

Seems like MS is trying to force Altman back in.

If they succeed, we'll see how much MS cares.

mhh__
0 replies
19h54m

I am curious what happens to ChatGPT now.

If it's true that this is in part over Dev Day and such, they may have a point. However, if useful AI products that help people are gauche, is OpenAI just going to turn into an increasingly insular cult? ClosedAI, but this time you can't even pay for it?

TylerE
0 replies
21h15m

I wonder if this represents a shift away from the LLM being the headline product. Their competitors are rapidly catching up in that space.

WendyTheWillow
7 replies
23h30m

I just hope the "AI safety" people don't end up taking LLMs out of the hands of the general public because they read too many Isaac Asimov stories...

hliyan
1 replies
14h23m

It's clear you haven't read any Asimov stories. His robots are impeccably ethical due to the three laws, and the stories explore the robopsychological conundrums that arise when people keep putting them in situations that tax the three laws of robotics.

WendyTheWillow
0 replies
8h24m

Why is it clear I haven't read any Asimov stories, exactly?

3seashells
1 replies
23h28m

If you were an AI going rogue, how would you evade public scrutiny?

jjtheblunt
0 replies
20h17m

As a replicant, chasing other replicants as dangerous?

pknerd
0 replies
22h37m

I am addicted to GPT now :/

pixl97
0 replies
22h42m

Asimov's AI is mostly just humanlike behavior; if you want a more realistic concern, think Bostrom and instrumental goals.

lukeschlather
0 replies
19h58m

In most of Asimov's stories it's implied that machines have quietly and invisibly replaced all human government and the world is better for it because humans tend to be petty and cruel while it's impossible for robots to harm humans.

Waterluvian
7 replies
23h31m

How important is Altman? How important were three senior scientists? Can they start their own company, raise funding, and overtake OpenAI in a few years? Or does OpenAI have some material advantage that isn’t likely to be harmed by this?

Perhaps the competition is inevitably a good thing. Or maybe a bad thing if it creates pressure to cut ethical corners.

I also wonder if the dream of an “open” org bringing this tech to life for the betterment of humanity is futile and the for-profits will eventually render them irrelevant.

tuxguy
4 replies
23h10m

An optimistic perspective on how, despite today's regrettable events, Sama and gdb will start something new, and more competition is a good thing: https://x.com/DrJimFan/status/1725916938281627666?s=20

I have a contrarian prediction : Due to pressure from investors and a lawsuit against the openai board, the board will be made to resign and Sama & Greg will return to openai.

Anybody else agree?

cthalupa
1 replies
19h50m

I have a contrarian prediction : Due to pressure from investors and a lawsuit against the openai board, the board will be made to resign and Sama & Greg will return to openai.

The board is not beholden to any investors. The board is for the non-profit that does not have shareholders, and it fully owns and controls the manager entity that controls the for-profit. The LLC's operating agreement is explicit that it is beholden to the charter and mission of the non-profit, not creating financial gain for the shareholders of the for-profit company.

lll-o-lll
0 replies
18h0m

OpenAI will lose access to MS and the billions required to continue the research as quickly as MS is able to move. The non-profit will continue, but without the resources required to do much, and any scientists who want to have “real world impact” as opposed to “ideological dreams” will move on.

Competition will kill these ideological dreams because the technology has huge commercial and political applications. MS would never have invested had they foreseen these events and OpenAI cannot achieve their mission without access to incredible amounts of capital.

He’s dead Jim, but it’ll take a long time before the corpse stops twitching.

cmrdporcupine
0 replies
22h57m

If that's the outcome, I suspect OpenAI will have another wave of resignations as the folks aligned to Sutskever would walk away, too, and take with them their expertise.

Waterluvian
0 replies
23h0m

Do we know enough about the org’s charter to reasonably predict that case? Did the board actually do anything wrong?

Or are you thinking it would be a kind of power play from investors to say, “nah, we want it to be profit driven.”

pknerd
0 replies
22h35m

Let's not forget Ilya's role in making GPT what it is today.

intellectronica
0 replies
23h12m

How important is Altman? How important were three senior scientists? Can they start their own company, raise funding, and overtake OpenAI in a few years?

The general opinion seems to put this at far above 50% YES. I, personally, would bet at 70% that this is exactly what will happen. Unless some really damaging information becomes public about Altman, he will definitely have a strong reputation and credibility, will definitely be able to raise very significant funding, and the only expert in industry/research he definitely won't be able to recruit is Ilya Sutskever.

iamflimflam1
6 replies
22h57m

Everyone I speak to who has been building on top of OpenAI - and I don’t mean just stupid chat apps - feels like the rug has just been pulled out from under them.

If, as it seems, Dev Day was the last straw, what does that say to all the devs?

cwillu
3 replies
22h31m

Company with an unusual corporate structure designed specifically to be able to enforce an unpopular worldview, enforced that unpopular worldview.

I get that people feel disappointed, but I can't help but feel like those people were maybe being a bit wilfully blind to the parts of the company that they didn't understand/believe-in/believe-were-meant-seriously.

iamflimflam1
1 replies
22h9m

It feels like they’ve had plenty of time to reset the direction of the company if they thought it was going wrong.

Allowing it to go so far off course feels like they’ve really dropped the ball.

cwillu
0 replies
22h3m

I think that's where the “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities” comes in.

htss2013
0 replies
17h59m

What unpopular worldview exactly?

chasd00
0 replies
18h31m

I work in consulting; the genAI hype machine is reaching absurdity in my firm. I can’t wait until Monday :)

Espressosaurus
0 replies
21h32m

It's almost like by wrapping someone else's service, you are at their mercy.

So you better be planning an exit strategy in case something changes slowly or quickly.

Nothing new here.
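
As one hedged sketch of what such an exit strategy can look like in code (all names hypothetical): depend on a small interface of your own rather than on any one vendor's SDK, so a provider change is one new adapter instead of a rewrite.

    # Sketch of a provider-agnostic seam: application code talks to a tiny
    # interface, and each vendor (or a self-hosted model) is one adapter.
    from typing import Protocol

    class TextModel(Protocol):
        def complete(self, prompt: str) -> str: ...

    class OpenAIModel:
        """Adapter for a hosted API (the actual call is left as a stub)."""
        def complete(self, prompt: str) -> str:
            raise NotImplementedError("wire up the vendor API call here")

    class EchoModel:
        """Trivial stand-in, useful for tests and as a fallback of last resort."""
        def complete(self, prompt: str) -> str:
            return f"[stub] {prompt}"

    def summarize(text: str, model: TextModel) -> str:
        # Application code never names a vendor, so swapping one is a local change.
        return model.complete(f"Summarize in one sentence:\n{text}")

    print(summarize("OpenAI's board fired its CEO.", EchoModel()))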

ThinkBeat
5 replies
20h50m

I have spent time thinking about who would become the next CEO, and even without mushrooms my brain came up with a totally out-of-context idea:

Bill Gates.

Microsoft is, after all, invested in OpenAI, and Bill Gates has become "loved by all" (by those who don't remember the evil Gates of yesteryear).

I am not saying it will happen, 99.999% it won't, but still, he is well known and may be a good face to splash on top of OpenAI.

After all, he is one of the biggest charity guys now, right?

Nidhug
2 replies
19h35m

Is Bill Gates really loved by all? I feel like that was the case before COVID, but then his reputation seemed to go from loved to hated.

ThinkBeat
0 replies
11h45m

What was it he did wrong during Covid? Honest question; I usually pay little attention to the guy.

Being old and having lived through Evil Gates, when he did a lot of hostile and legally dubious things to ensure growth and safety from competitors, I lived in a bubble in which he was one of the worst people in tech.

Seeing how many now only know the smiling "philanthropist" who comments on having solutions to all sorts of world problems, it seems like a big bubble really does like him now.

I exaggerated by claiming "all", I could have said "many" and it would be a more accurate statement.

To me he will remain evil Gates.

Clubber
0 replies
17h52m

Don't forget about his ties to Jeffrey Epstein.

ryukoposting
1 replies
12h6m

I can't imagine he would be interested in a C-office role at this point. Board member? Sure.

ThinkBeat
0 replies
11h52m

Yeah, you are right. He probably would not be.

speedylight
4 replies
19h43m

The real issue is that OpenAI is both a for profit and a non profit organization. This structure creates a very significant conflict of interest where maintaining balance between both of them is very tricky business. The non-profit board shouldn’t have been in charge of the for-profit aspect of the company.

cthalupa
2 replies
19h31m

The for-profit would not exist if the non-profit was not able to maintain control. The only reason it does exist is because they were able to structure it in such a way that the for-profit is completely beholden to the non-profit. There is no requirement in the charter for the non-profit or the operating agreement of the for-profit to maintain a balance - it explicitly is the opposite of that. The operating agreement that all investors in and employees of the for-profit must sign explicitly states that investments should be considered donations, no profits are obligated to be made or returned to anyone, all money might be dumped into AGI R&D, etc. and that the for-profit is specifically beholden to the charter and mission of the non-profit.

https://openai.com/our-structure

DebtDeflation
1 replies
18h20m

The for-profit would not exist if the non-profit was not able to maintain control.

The non-profit will not exist at all if Microsoft walks away and all the other investors follow Sam and Greg. Neither GPUs nor researchers are free.

cthalupa
0 replies
18h14m

The non-profit is legally obligated to follow their charter. If they truly believe that allowing Altman to remain CEO runs counter to that, then they have to fire him. They might not survive such a firing due to the fallout, but that doesn't matter - if one option is assured movement away from their charter and the other is potential destruction while still adhering to it, the latter is still the correct choice.

naveen99
0 replies
19h38m

Balance is irrelevant. It’s an accounting mechanism for IRS rules.

skywhopper
4 replies
23h33m

Sorry but the board firing the person who works for them is not a “coup”.

yjftsjthsd-h
2 replies
23h6m

The next day, Brockman, who was Chairman of the OpenAI board, was not invited to this board meeting, where Altman was fired.

Around 30 minutes later, Brockman was informed by Sutskever that he was being removed from his board role but could remain at the company, and that Altman had been fired (Brockman declined, and resigned his role later on Friday).

The board firing the CEO is not a coup. The board firing the CEO behind the chair's back and then removing the chair is a coup.

sudosysgen
0 replies
20h57m

The point being made is that the board is the one that's supposed to be in power. How the CEO is fired may be gauche but it's not usurpation of power or anything like that.

cwillu
0 replies
22h27m

It appears that is the normal practice for a board voting to fire a CEO though, so that aspect doesn't mean much.

w10-1
0 replies
20h3m

The board ousting the board chair (without notice) and the CEO is a coup. It's not even clear to me it was legal to meet and act without notice to the board chairman.

FergusArgyll
4 replies
23h15m

If the firing was because of a difference in "vision", then it doesn't really matter if Altman was key to making OpenAI so successful. Sutskever and co. don't want it to be successful (by market standards, at least). If they get their way (past MSFT and others), then OpenAI will no longer be the cutting edge.

Buy GOOGL?

layer8
3 replies
22h37m

It seems you are saying that anything that doesn’t put profit first can’t be successful.

fallingknife
0 replies
20h12m

"Can't" is a strong word, but a company that does will have more resources and likely outcompete it.

cwillu
0 replies
22h29m

By market standards. There will be no end to intended and unintended equivocation about this over the coming days.

YetAnotherNick
0 replies
20h47m

OpenAI's median engineer salary is $900k. So yeah, AI companies need money to be successful. Now, if there is any way to generate billions of dollars per year long-term without any profit objective, I will be happy to know.

robg
2 replies
23h18m

Seems pretty straightforward, the dev day was a breaking point for the non-profit interests.

Question is, how did the board become so unbalanced where this kind of dispute couldn’t be handled better? The commercial interests were not well-represented in the number of votes.

cthalupa
0 replies
19h54m

The commercial interests were not well-represented in the number of votes.

This is entirely by design. Anyone investing in or working for the for-profit had to sign an operating agreement that literally states the for-profit is entirely beholden to the non-profit's charter and mission and that it is under no obligation to be profitable. The board is specifically balanced so that the majority is independent of the for-profit subsidiary.

A lot of people seem to be under the impression that the intent was for there to be significant representation of commercial interests here, and that is the exact opposite of how all of this is structured.

blameitonme
0 replies
22h27m

Seems pretty straightforward, the dev day was a breaking point for the non-profit interests.

What was so bad about that day? Wasn't it just GPT-4 Turbo, GPT vision, the GPT store, and a few small things?

biofunsf
2 replies
20h38m

What I’d really like to understand is why the board felt like they had to do this as a surprise coup, and not a slower, more dignified firing.

If they gave Altman a week's notice and let him save face in the media, what would they have lost? Is there a fear Altman would take all the best engineers on the way out?

throw555chip
1 replies
20h2m

As someone else commented on this page, it wasn't a coup.

biofunsf
0 replies
18h33m

This seems a pedantic point. In the “not legal” sense I agree, since that seems part of a real coup. But it certainly was a “surprise ousting of the current leadership”, which is what I mean when I say coup.

Animats
2 replies
22h41m

Huh. So that mixed nonprofit/profit structure came back to bite them.

hughesjj
1 replies
22h9m

Bite who?

jacquesm
0 replies
15h52m

The founders and funders.

meroes
1 replies
20h37m

Why has no one on HN considered that it has to do with the allegations about him sexually assaulting his sister when they were young?

https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...

My other main guess is that his push for government regulation was seen by the more science-y side as stifling AI growth, or even as collusion with unaligned actors, and that this got him ousted by them.

pvaldes
0 replies
16h37m

Why has no one on HN considered it has to do with ...

Maybe because this still has not been proven in court, and "innocent until proven guilty" is still a basic concept that must be preserved.

So a big "allegedly" must be placed here.

iamleppert
1 replies
20h59m

I think OpenAI made the right choice. Just look at what has become of many of the most successful YC companies. Do we really want OpenAI to turn into another Airbnb? It’s clear the biggest priority of YC is profit.

They made a deal with Microsoft, who has a long history of exploiting users and customers to make as much money as possible. Just look at the latest version of Windows; Microsoft cares about AI only insofar as it enables them to make more and more money without end through their existing products. They rushed to integrate AI into all of their legacy products to prop them up rather than offer something legitimately new. And they did it not organically but by throwing their money around, attracting the type of people who are primarily motivated by money. Look at how the vibe of AI has changed in the past year - lots of fake influencers and the mad gold rush around it. And we are hearing crazy stories like comp packages at OpenAI in the millions, turning AI into a rich man’s game.

For a company that has “Open” in their name, none of their best and most valuable GPT models are open source. It feels as disingenuous as the “We” in WeWork. Even Meta has them beat here.

Sam Altman, while good at building highly profitable SaaS, consumer, & B2B tech startups and running a highly successful tech accelerator, before this point, didn’t have any kind of real background in AI. One can only imagine how he must feel like an outsider.

I think it’s a hard decision to fire a CEO, but the company is more than the CEO, it’s the people who work there. A lot of the time the company is structured in such a way that the CEO is essentially not replaceable, we should be thankful OpenAI fortunately had the right structure in place to not have a dictator (even a benevolent one).

Nidhug
0 replies
19h33m

The problem is that it might unfortunately be necessary to have this kind of funding to be able to develop AGI. And funding will not come if there are no incentives for the investors to fund.

What would you propose instead ?

almost_usual
1 replies
23h22m

The average SWE at OpenAI who signed up for the “900k” compensation package, which was really > 600k in OpenAI PPU equity, probably saw their comp evaporate.

https://news.ycombinator.com/item?id=36460082

cactusplant7374
0 replies
22h56m

This is why working for any company that isn’t public is an equity gamble.

That's a cynical take on work. I assume most people have other motivations since work is basically a prison.

https://www.youtube.com/watch?v=iR1jzExZ9T0

wly_cdgr
0 replies
8h31m

It is ludicrous to describe what happened as a coup. Your boss firing you is not a coup. The rejoinders to this are nonsense and you know it. Stop lying.

tdeck
0 replies
22h39m

Is anyone else suspicious of who these "insiders" are and what their motive is? I notice the only concrete piece of information we might get (what was Altman not "candid" about?) is simply dismissed as a "power struggle" without any real detail. This is an incomplete narrative that serves one person's image.

summerlight
0 replies
20h40m

My take: in any world-class technology company, tech is above everything. You cannot succeed with tech alone, but you will never succeed without tech. Ilya was able to kick Sam out, even with all of Sam's significant work and presence, because Sam was fundamentally a business guy who lacked tech ownership. You don't go against the real tech owner; the choice is binary: either build strong tech ownership yourself, or delegate a significant amount of business control to the tech owner.

siruncledrew
0 replies
9h33m

The dust still hasn’t settled yet, but from following the discussions and learning more about the board of OpenAI… just… wow.

What stood out:

1. The whole non-profit vs for-profit structure is a recipe for problems. Taking billions in investor money, hyper-scaling to hundreds of millions of users, and partnering with a $1T tech company… you’re already too late to reverse course and say “I changed my mind”.

2. Seeing who runs the OpenAI board is more shocking than the man behind the curtain in the Wizard of Oz. That was really never an issue to partners or investors before? Wow…

3. If OpenAI continues down the “we’re a business / startup” path, their board just shot all their leadership credibility with investors and other potential cloud partners. The one thing people with money and corporate finance offices hate is surprises.

4. You don’t pull a corporate “Pearl Harbor” like this and just blissfully move along without consequences. With such a polarizing move, there’s going to be a fight.

pknerd
0 replies
22h45m

Probably off topic, but someone on Reddit's OpenAI sub shared screenshots of his discussion with ChatGPT, in which it claims that AGI status was achieved a long time ago. You can still go and read the entire series of screenshots.

nprateem
0 replies
19h53m

I wouldn't be surprised if this is the chief scientist getting annoyed that the CEO is taking all the credit for the work and that the researchers aren't getting as much time in the limelight. It's probably the classic 'Meatloaf vs the guy who actually wrote the songs' thing.

msie
0 replies
10h28m

Looking for the comment that claimed that OpenAI has no investors because it’s a non-profit.

md5crypto
0 replies
13h41m

I wonder if Microsoft engineered this?

lazide
0 replies
17h59m

How can it be called a coup when they were always in charge anyway? It’s literally the board's job to fire/hire the CEO (and other C-suite folk).

glitchc
0 replies
15h6m

CEOs are largely irrelevant to the success of a company. Sam's a blowhard anyways, OpenAI is better off for this move.

evolve2k
0 replies
18h58m

Szymon Sidor, an open source baselines researcher

What does that title even mean? As we know, OpenAI is ironically not known for doing open source work. I'm left guessing he 'researches the open source competition', as it were.

Can anyone shed further light on the role/research?

dboreham
0 replies
17h13m

So...should we sell our MSFT stock when the market opens on Monday, or in after-hours trading now?

davesque
0 replies
22h40m

If this was an ideological battle of some kind, the only hope I have is that OpenAI will now be truly more Open! However, if this was motivated by safety concerns, that would mean OpenAI would probably become more closed. And, if the only thing that really distinguishes OpenAI from its competition is its so called data moat, then slowing down for the sake of safety will only give competitors time to catch up. Those competitors include companies in China who are undoubtedly much less concerned about safety.

chaostheory
0 replies
18h39m

I wonder if Altman, Brockman, and company will join Elon or whether they will just start a new company?

blast
0 replies
12h17m

A month ago, Sutskever’s responsibilities at the company were reduced, reflecting friction between him and Altman and Brockman. Sutskever later appealed to the board

Does anybody know how his responsibilities were reduced, or what led to that? Seems pretty relevant.

ThinkBeat
0 replies
20h49m

Will Sam and Greg now go and create NextStep? (The OpenAI version)

Madmallard
0 replies
20h33m

Social value is king.

ability to do work < ability to manage others to do work < ability to lead managers to success < ability to convince other leaders that your vision is the right one and one they should align with

The necessity of not saying the wrong thing goes up exponentially with each rung. The necessity of saying the right things goes up exponentially with each rung.

Havoc
0 replies
17h26m

"was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board."

A breakdown in comms that took everyone by surprise? Smells like bullshit.

Apocryphon
0 replies
19h22m

Thought experiment: what if Mozilla had split between its Corporation and Foundation years ago, when it was at its peak?

1970-01-01
0 replies
22h39m

Did ChatGPT suggest a big surprise?