"return to table of content"

We have reached an agreement in principle for Sam to return to OpenAI as CEO

jafitc
81 replies
7h57m

OpenAI's Future and Viability

- OpenAI has damaged their brand and lost trust, but may still become a hugely successful company if they build great products

- OpenAI looks stronger now with a more professional board, but has fundamentally transformed into a for-profit focused on commercializing LLMs

- OpenAI still retains impressive talent and technology assets and could pivot into a leading AI provider if managed well

---

Sam Altman's Leadership

- Sam emerged as an irreplaceable CEO with overwhelming employee loyalty, but may have to accept more oversight

- Sam has exceptional leadership abilities but can be manipulative; he will likely retain control but have to keep stakeholders aligned

---

Board Issues

- The board acted incompetently and destructively without clear reasons or communication

- The new board seems more reasonable but may struggle to govern given Sam's power

- There are still opposing factions on ideology and commercialization that will continue battling

---

Employee Motivations

- Employees followed the money trail and Sam to preserve their equity and careers

- Peer pressure and groupthink likely also swayed employees more than principles

- Mission-driven employees may still leave for opportunities at places like Anthropic

---

Safety vs Commercialization

- The safety faction lost this battle but still has influential leaders wanting to constrain the technology

- Rapid commercialization beat out calls for restraint but may hit snags with model issues

---

Microsoft Partnership

- Microsoft strengthened its power despite not appearing involved in the drama

- OpenAI is now clearly beholden to Microsoft's interests rather than an independent entity

qualifiedai
27 replies
7h48m

No structure or organization is made stronger when its leader emerges as "irreplaceable".

rmbyrro
14 replies
7h45m

In this case, I don't see it as a flaw, but really as Sam's ability to lead a highly cohesive group and keep it highly motivated and aligned.

I don't personally like him, but I must admit he displayed a lot more leadership skill than I'd recognized before.

It's inherently hard to replace someone like that in any organization.

Take Apple after losing Jobs. It's not that Apple was a "weak" organization; it's that Jobs was extraordinary and indeed irreplaceable.

No, I'm not comparing Jobs and Sam. Just illustrating my point.

prh8
8 replies
7h38m

What's the difference between leadership skills and a cult following?

spurgu
3 replies
7h16m

I think an awesome leader would naturally create some kind of cult following, while the opposite isn't true.

Popeyes
2 replies
7h9m

Just like former President Trump?

marcosdumay
1 replies
7h1m

There are two possible ways to read "the opposite" from the GP.

"A cult follower does not make an exceptional leader" is the one you are looking for.

0perator
0 replies
6h0m

While cult followers do not make exceptional leaders, cult leaders are almost by definition exceptional leaders, given that they're able to lead the un-indoctrinated into believing an ideology that may not hold up under critical scrutiny.

There is no guarantee or natural law that an exceptional leader's ideology will be exceptional. Exceptionality is not transitive.

TheOtherHobbes
1 replies
6h50m

Leadership Gets Shit Done. A cult following wastes everyone's time on ineffectual grandstanding and ego fluffing while everything around them dissolves into incompetence and hostility.

They're very orthogonal things.

rvnx
0 replies
6h43m

I also imagine the morale of the people who are currently implementing things and getting tired of all this politicking about who is going to claim credit for their work.

thedaly
0 replies
7h10m

Results

rmbyrro
0 replies
6h1m

Have you ever seen a useful product produced by a cult?

pk-protect-ai
3 replies
7h10m

Can't you imagine a group of people motivated to conduct AI research? I don't understand... All nerds are highly motivated in their areas of passion, and here we have AI research. Why do they need leadership instead of simply having an abundance of resources for the passionate work they do?

jjk166
0 replies
6h34m

It's not hard to motivate them to do the fun parts of the job. The challenge is in convincing some of those highly motivated and passionate nerds to not work on the fun thing they are passionate about and instead do the boring and unsexy work that is nevertheless critical to overall success; to get people with strong personal opinions about how a solution should look to accept a different plan just so that everyone is on the same page; to ensure that people actually have access to the resources they need to succeed without going so overboard that the endeavor lacks the reserves to make it to the finish line; and to champion the work of these nerds to the non-nerds who are nevertheless important stakeholders.

gcanyon
0 replies
6h47m

Someone has to set direction. The more people that are involved in that decision process, the slower it will go.

Having no leadership at all guarantees failure.

DSingularity
0 replies
6h55m

As far as it goes for me, the only endorsements that matter are those of the core engineering and research teams of OpenAI.

All these opinions of outsiders don't matter. It's obvious that most people don't know Sam personally or professionally and are going off of a combination of: 1. PR pieces being pushed by unknown entities, and 2. positive endorsements from well-known people who likely know him.

Both those sources are suspect. We don't know the motivation behind their endorsements, and for the PR pieces we know the author but we don't know the commissioner.

Would we feel as positive about Altman if it turned out that half the people and PR pieces endorsing him were doing so because government officials were pushing for him? Or if the celebrities in tech were endorsing him because they are financially incentivized?

The only endorsements that matter are those of OpenAI employees (ideally those who are not just in his camp because he made them rich).

scythe
0 replies
6h14m

Jobs was really unusual in that he was not only a good leader, but also an ideologue with the right obsession at the right time. (Some people like the word "visionary".) That obsession being "user experience". Today it's a buzzword, but in 2001 it was hardly even a term.

The leadership moment that first comes to mind when I think of Steve Jobs isn't some clever hire or business deal, it's "make it smaller".

There have been a very few people like that. Walt Disney comes to mind. Felix Klein. Yen Hongchang [1]. (Elon Musk is maybe the ideologue without the leadership.)

1: https://www.npr.org/sections/money/2012/01/20/145360447/the-...

dimitrios1
5 replies
7h35m

This is false, and I see the corollary as a project having a BDFL, especially if the leader is effective. Sam is unmistakably effective.

acchow
4 replies
7h28m

Have you or anyone close to you ever had to take multiple years of leave from work from a car accident or health condition?

slingnow
3 replies
7h18m

Nope, I've never even __heard__ of someone having to take multiple years of leave from work for any reason. Seems like a fantastically rare event.

yeck
1 replies
6h40m

In my immediate family I have 3 people who have taken multi-year periods away from work for health reasons. Two are mental-health related and the other is severe arthritis. 2 of those 3 will probably never work again for the rest of their lives.

I've worked with a contractor who went into a coma during covid. Nearly half a year in a coma, then rehab for many more months. Guy is working now, but not in good shape.

I don't know the stats, but I'd be surprised if long medical leaves are as rare as you think.

filleduchaos
0 replies
5h2m

Yeah, there are thousands of hospitals across the US and they don't run 24/7 shifts just to treat the flu or sprained ankles. Disabling events happen a lot.

(A seriously underrated statistic IMO is how many women leave the workforce due to pregnancy-related disability. I know quite a few who haven't returned to full-time work for years after giving birth because they're still dealing with cardiovascular and/or neurological issues. If you aren't privy to their medical history it would be very easy to assume that they just decided to be stay-at-home mums.)

thingification
0 replies
6h57m

Not sure if that's intended as irony, but of course, if somebody is taking multiple years off work, you would be less likely to hear about it, because by definition they're not going to join the company you work for.

I don't think long-term unemployment among people with a disability or other long-term condition is "fantastically rare", sadly. This is not the frequency by length of unemployment, but:

https://www.statista.com/statistics/1219257/us-employment-ra...

osigurdson
3 replies
7h42m

Seriously, even in a small group of a few hundred people?

catapart
2 replies
7h37m

I dunno, seems like a pretty self-evident theory? If your leader is irreplaceable, regardless of group size, that's a single point of failure. I can't figure how a single point of failure could ever make something "stronger". I can see arguments for necessity, or efficiency, given contrivances and extreme contexts. But "stronger" doesn't seem like the right assessment for whatever would necessitate a single point of failure.

vipshek
0 replies
7h11m

"Stronger" is ambiguous. If you interpret it as "resilience" then I agree having a single point of failure is usually more brittle. But if you interpret it as "focused", then having a single charismatic leader can be superior.

Concretely, it sounds like this incident brought a lot of internal conflicts to the surface, and they got more-or-less resolved in some way. I can imagine this allows OpenAI to execute with greater focus and velocity going forward, as the internal conflict that was previously causing drag has been resolved.

Whether or not that's "better" or "stronger" is up to individual interpretation.

hughw
0 replies
6h38m

I guess, though, a lot of organizations never develop a cohesive leader at all, and the orgs fall apart. They never had an irreplaceable leader!

rvnx
0 replies
7h5m

And correlation does not imply causality.

Example: Put a loser as CEO of a rocket ship, and there is a huge chance that the company will still be successful.

Put a loser as CEO of a sinking ship, and there is a huge chance that the company will fail.

The exceptional CEOs are those who turn failures into successes.

The fact that this drama emerged is a symptom of failure.

In a company with a great CEO this shouldn’t be happening.

Aunche
0 replies
6h57m

I don't think Sam is necessarily irreplaceable. It's just that Helen Toner and co were so detached from the rest of the organization that they might as well have been on Mars, as demonstrated by their interim CEO pick instantly turning against them.

amalcon
12 replies
7h21m

>- Microsoft strengthened its power despite not appearing involved in the drama

Depending on what you mean by "the drama", Microsoft was very clearly involved. They don't appear to have been in the loop prior to Altman's firing, but they literally offered jobs to everyone who left in solidarity with Sam. Do we really think things like that were not intended to change people's minds?

gcanyon
6 replies
7h0m

I’d go further than just saying “they were involved”: by offering jobs to everyone who wanted to come with Altman, they were effectively offering to acquire OpenAI, which is worth ~$100B, for (checks notes) zero dollars.

gsuuon
2 replies
6h47m

How has the valuation of OpenAI increased by $20B since this weekend? I feel like every time I see that number it goes up by $10B.

tacoooooooo
0 replies
6h40m

You're off by a bit; the announcement of Sam returning as CEO actually increased OpenAI's valuation to $110B last night.

sebzim4500
0 replies
6h40m

$110B? Where are you getting this valuation of $120B?

breadwinner
1 replies
6h51m

You mean zero additional dollars. They already gave (checks notes) $13 billion and own half of the company.

rvnx
0 replies
6h50m

+ according to the rumors on Bloomberg.com / CNBC:

The investment is refundable and has high priority: Microsoft has priority to receive 75% of the profit generated until the $10B USD has been paid back.

+ (checks notes) in addition (!), OpenAI has to spend the money back on Microsoft cloud services (where Microsoft takes a cut as well).
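If those rumored terms are roughly right, the payback arithmetic is easy to sketch. A toy Python example; the 75% share and $10B figure come from the rumor above, and the annual profit number is invented:

  # Years until Microsoft recoups the rumored $10B, taking a 75% share
  # of profits each year. The profit figure is purely hypothetical.
  def years_to_payback(annual_profit_usd, share=0.75, owed=10e9):
      paid, years = 0.0, 0
      while paid < owed:
          paid += share * annual_profit_usd
          years += 1
      return years

  print(years_to_payback(1e9))  # at a hypothetical $1B/yr profit: 14 years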

theptip
0 replies
6h43m

If the existing packages are worth more than what MSFT pays AI researchers (they are, by a lot), then it's not acquiring OAI for $0. Plausibly it could cost billions to buy out every single equity holder at an $80B+ valuation.

Still a good deal, but your accounting is off.

FirmwareBurner
2 replies
7h17m

>but they literally offered jobs to everyone who left in solidarity with Sam

Offering people jobs is neither illegal nor immoral, no? And wasn't HN also firmly on the side of abolishing non-competes and non-solicitation clauses from employment contracts, to facilitate freedom of employment movement and increase industry wages in the process?

Well then, there's your freedom of employment in action. Why be unhappy about it? I don't get it.

spankalee
0 replies
7h14m

Offering people jobs is neither illegal nor immoral

The comment you responded to made neither of those claims, just that they were "involved".

notahacker
0 replies
6h31m

I'm pretty sure there's a middle ground between "recruiters for Microsoft should be banned from approaching other companies' staff to fill roles" and "Microsoft should be able to dictate decisions made by other companies' boards by publicly announcing that unless they change track it will attempt to hire every single one of their employees to newly created roles".

Funnily enough, a bit like there's a middle ground between "Microsoft should not be allowed to create browsers or have license agreements" and "Microsoft should be allowed to dictate bundling decisions made by hardware vendors to control access to the Internet".

It's not freedom of employment when funnily enough those jobs aren't actually available to any AI researchers not working for an organisation Microsoft is trying to control.

malfist
1 replies
7h6m

The GP looks to me like an AI summary. Which would fit with the hallucination that Microsoft wasn't involved.

chankstein38
0 replies
6h35m

That's a good callout. I was reading over it, confused about who this person was and why they were summarizing, but yeah, they might've just told ChatGPT to summarize the events of what happened.

nurumaik
8 replies
7h48m

GPT-generated summary?

Mistletoe
7 replies
7h16m

That was my first thought as well. And now it is the top comment on this post. Isn’t this brave new world OpenAI made wonderful?

nickpp
6 replies
6h58m

If it’s a good comment, does it really matter if a human or an AI wrote it?

makeworld
5 replies
6h44m

Yes.

nickpp
4 replies
6h39m

Please expand on that.

iamflimflam1
1 replies
6h22m

This is the most cogent argument against AI I've seen so far.

https://youtu.be/iGJcF4bLKd4?si=Q_JGEZnV-tpFa1Tb

nickpp
0 replies
4h19m

I am sorry, I greatly respect and admire Nick Cave, but that letter sounded to me like the lament of a scribe decrying the invention of the printing press.

He's not wrong, something is lost and it has to do with what we call our "humanity", but the benefits greatly outweigh that loss.

Mistletoe
1 replies
4h6m

I think this summarizes it pretty well. Even if you don't mind the garbage, the future AI will feed on this garbage, creating AI and human brain gray goo.

https://ploum.net/2022-12-05-drowning-in-ai-generated-garbag...

https://en.wikipedia.org/wiki/Gray_goo

nickpp
0 replies
3h19m

Is this a real problem model trainers actually face, or is it an imagined one? The Internet is already full of garbage - 90% of the unpleasantness of browsing these days is filtering through mounds and mounds of crap. Some is generated, some is written, but it's still crap full of wrongness and lies.

I would've imagined training sets were heavily curated and annotated. We already know how to solve this problem for training humans (or our kids would never learn anything useful) so I imagine we could solve it similarly for AIs.

In the end, if it's quality content, learning it is beneficial - no matter who produced it. Garbage needs to be eliminated and the distinction is made either by human trainers or already trained AIs. I have no idea how to train the latter but I am no expert in this field - just like (I suspect) the author of that blog.

paulddraper
7 replies
7h27m

Peer pressure and groupthink likely also swayed employees more than principles

What makes this "likely"?

Or is this just pure conjecture?

mrfox321
6 replies
7h22m

What would you do if 999 employees openly signed a letter and you were the remaining holdout?

paulddraper
5 replies
7h18m

Is your argument that the 1 employee operated on peer pressure, or the other 999?

Could it possibly be that the majority of OpenAI's workforce sincerely believed a midnight firing of the CEO was counterproductive to their organization's goals?

dymk
2 replies
7h6m

It's almost certain that all employees did not behave the same way for the exact same reasons. And I don't see anyone making an argument about what the exact numbers are, nor does it really matter. Just that some portion of employees were swayed by pressure once the letter reached some critical signing mass.

paulddraper
1 replies
5h59m

some portion

The logic being that if any opinion has above X% support, people are choosing it based on peer pressure.

mrfox321
0 replies
5h57m

The key is that the support is not anonymous.

mrfox321
1 replies
6h0m

Doing the math, it is extremely unlikely for a lot of coin flips to skew from the weight of the coin.

To that end, observing unanimous behavior may imply some bias.

Here, it could be people fearing being a part of the minority. The minority are trivially identifiable, since the majority signed their names on a document.

I agree with your stance that a majority of the workforce disagreed with the way things were handled, but that proportion is likely a subset of the proportion who signed their names on the document, for the reasons stated above.
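Since the comment leans on binomial math, here is a minimal back-of-the-envelope sketch of it in Python. The headcount and signature figures (~770 employees, ~740 signatures) are rough numbers from news coverage, used purely for illustration:

  from math import comb

  # P(at least k of n people sign) if each person decides independently
  # with probability p of signing.
  def p_at_least(n, k, p):
      return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

  n, k = 770, 740  # illustrative headcount and signature count
  for p in (0.75, 0.90, 0.95):
      print(f"p={p:.2f}: P(>= {k}/{n} sign) ~ {p_at_least(n, k, p):.1g}")

  # Even at 90% independent support, near-unanimity is astronomically
  # unlikely, consistent with signatures not being independent draws.

Of course, the independence assumption is exactly what is in dispute here: public, non-anonymous signing makes each signature depend on the ones already collected.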

paulddraper
0 replies
5h54m

it is extremely unlikely for a lot of coin flips to skew from the weight of the coin

So clearly this wasn't a 50/50 coin flip.

The question at hand is whether the skew against the board was sincere or insincere.

Personally, I assume that people are acting in good faith, unless I have evidence to the contrary.

jxi
6 replies
7h21m

Was this really motivated by AI safety or was it just Helen Toner’s personal vendetta against Sam?

It doesn’t feel like anything was accomplished besides wasting 700+ people’s time, and the only thing that has changed now is Helen Toner and Tasha McCauley are off the board.

cbeach
2 replies
7h0m

Curious how a relatively unknown academic with links to China [1] attained a board seat on America's hottest and most valuable AI company.

Particularly as she openly expressed that "destroying" that company might be the best outcome. [2]

During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities. Ms. Toner disagreed. The board’s mission was to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission.

[1] https://www.chinafile.com/contributors/helen-toner [2] https://www.nytimes.com/2023/11/21/technology/openai-altman-...

hn_throwaway_99
0 replies
6h24m

Oh lord, spare me with the "links to China" idiocy. I once ate a fortune cookie, does that mean I have "links to China" too?

Toner got her board seat because she was basically Holden Karnofsky's designated replacement:

Holden Karnofsky resigns from the Board, citing a potential conflict because his wife, Daniela Amodei, is helping start Anthropic, a major OpenAI competitor, with her brother Dario Amodei. (They all live(d) together.) The exact date of Holden’s resignation is unknown; there was no contemporaneous press release.

Between October and November 2021, Holden was quietly removed from the list of Board Directors on the OpenAI website, and Helen was added (Discussion Source). Given their connection via Open Philanthropy and the fact that Holden’s Board seat appeared to be permanent, it seems that Helen was picked by Holden to take his seat.

https://loeber.substack.com/p/a-timeline-of-the-openai-board

Zpalmtree
0 replies
6h55m

Wow, very surprised this is the first I'm hearing of this, seems very suspect

jkaplan
1 replies
6h34m

was it just Helen Toner’s personal vendetta against Sam

I'm not defending the board's actions, but if anything, it sounds like it may have been the reverse? [1]

In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company... “I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight." Senior OpenAI leaders, including Mr. Sutskever... later discussed whether Ms. Toner should be removed

[1] https://www.nytimes.com/2023/11/21/technology/openai-altman-...

jxi
0 replies
4h47m

Right, so getting Sam fired was retaliation for that.

hn_throwaway_99
0 replies
7h6m

As someone who was very critical of how the board acted, I strongly disagree. I felt like this Washington Post article gave a very good, balanced overview. I think it sounds like there were substantive issues that were brewing for a long time, though no doubt personal clashes had a huge impact on how it all went down:

https://www.washingtonpost.com/technology/2023/11/22/sam-alt...

neonbjb
3 replies
6h51m

As an employee of OpenAI: fuck you and your condescending conclusions about my peers and my motivations.

jprete
0 replies
6h42m

I’m curious about your perceptions of the (median) motivations of OpenAI employees - although of course I understand if you don’t feel free to say anything.

iamflimflam1
0 replies
6h23m

"condescending conclusions" - ask anyone outside of tech how they feel when we talk to them...

alextheparrot
0 replies
6h40m

Users here often get the narrative and motivations deeply wrong; I wouldn't take it too personally. (Speaking as a peer.)

miohtama
2 replies
7h47m

Employees followed the money trail and Sam to preserve their equity and careers

Would you not, when the AI safety wokes decide to torch the rewards of your years of hard grinding? I feel there is less groupthink than it seems; everyone saw the board as it is and its inability to lead, or even act rationally. OpenAI did not just become a sinking ship; it was unnecessarily sunk by people with no skin in the game, while your personal wealth and success were tied to the ship.

brookst
0 replies
7h31m

Yeah, this is like using “groupthink” to describe people fleeing a burning building. There’s maybe some measure of literal truth, but it’s an odd way to frame it.

acjohnson55
0 replies
5h43m

How do you know the "wokes" aren't the ones who were grinding for years?

I suspect OpenAI has an old guard that is disproportionately ideological about AI, and a much larger group of people who joined a rocket ship led by the guy who used to run YC.

seydor
1 replies
7h30m

Who would want to work for an irreplaceable CEO long term?

rvnx
0 replies
6h55m

Desperate people who have no choice but to wait for someone to remove their golden handcuffs.

sam0x17
1 replies
7h23m

Peer pressure and groupthink likely also swayed employees more than principles

Chilling to hear the corporate oligarchs completely disregard the feelings of employees and deny most of the legitimacy behind these feelings in such a short and sweeping statement

DSingularity
0 replies
6h50m

Honestly he has a point — but the bigger point to be made is financial incentives. In this case it matters because of the expressed mission statement of OpenAI.

Let’s say there was some non-profit claiming to advance the interests of the world. Let’s say it paid very well to hire the most productive people, but they were a bunch of psychopaths who by definition couldn’t care less about anybody but themselves. Should you care about their opinions? If it was a for-profit company you could argue that their voices matter. For a non-profit, however, a person’s opinion should only matter as far as it is aligned with the non-profit’s mission.

RationalDino
1 replies
6h53m

The one piece of this that I question is the employee motivations.

First, they had offers to walk to both Microsoft and Salesforce and be made good. They didn't have to stay and fight to have money and careers.

But more importantly, put yourself in the shoes of an employee and read https://web.archive.org/web/20231120233119/https://www.busin... for what they apparently heard.

I don't know about anyone else. But if I was being asked to choose sides in a he-said, she-said dispute, the board was publicly hinting at really bad stuff, and THAT was the explanation, I know what side I'd take.

Don't forget, when the news broke, people's assumption from the wording of the board statement was that Sam was doing shady stuff, and there was potential jail time involved. And they justify smearing Sam like that because two board members thought they heard different things from Sam, and he gave what looked like the same project to two people???

There were far better stories that they could have told. Heck, the Internet made up many far better narratives than the board did. But that was the board's ACTUAL story.

Put me on the side of, "I'd have signed that letter, and money would have had nothing to do with it."

TheGRS
0 replies
6h32m

I was thinking the same. The letter symbolized a deep distrust with leadership over the mission and direction of the company. I’m sure financial motivations were involved, but the type of person working at this company can probably get a good paycheck at a lot of places. I think many work at OpenAI for some combination of opportunity, prestige, and altruism, and the weekend probably put all 3 into question.

windowshopping
0 replies
6h41m

This comment bugs me because it reads like a summary of an article, but it's just your opinions without any explanations to justify them.

orsenthil
0 replies
7h26m

- Mission-driven employees may still leave for opportunities at places like Anthropic

Which might have oversight from AMZN instead of MSFT?

ensocode
0 replies
7h22m

Good points. Anyway, I guess nobody will remember the drama in a few months, so I think the damage done is very manageable for OAI.

garrison
81 replies
10h29m

If OpenAI remains a 501(c)(3) charity, then any employee of Microsoft on the board will have a fiduciary duty to advance the mission of the charity, rather than the business needs of Microsoft. There are obvious conflicts of interest here. I don't expect the IRS to be a fan of this arrangement.

bradleybuda
19 replies
5h33m

Major corporate boards are rife with "on paper" conflicts of interest - that's what happens when you want people with real management experience to sit on your board and act like responsible adults. This happens in every single industry and has nothing to do with tech or with OpenAI specifically.

In practice, board bylaws and common sense mean that individuals recuse themselves as needed and don't do stupid shit.

iandanforth
13 replies
5h31m

"In practice, board bylaws and common sense mean that individuals ... don't do stupid shit."

Were you watching a different show than the rest of us?

hinkley
8 replies
5h10m

I get a lostredditor vibe way too often here. Oddly more than Reddit.

I think people forget sometimes that comments come with a context. If we are having a conversation about Deepwater Horizon, someone will chime in about how safe deep-sea oil exploration is and how many failsafes blah blah blah.

“Do you know where you are right now?”

LordDragonfang
2 replies
4h52m

I think people forget sometimes that comments come with a context.

I mean, this is definitely one of my pet peeves, but the wider context of this conversation is specifically a board doing stupid shit, so that's a very relevant counterexample to the thing being stated. Board members in general often do stupid/short-sighted shit (especially in tech), and I don't know of any examples of corporate board members recusing themselves.

mhluongo
1 replies
4h32m

A common example of recusal is CEO comp when the CEO is on the board.

alsetmusic
0 replies
4h9m

That's what I would term a black-and-white case. I don't think there's anyone with sense who would argue in good faith that a CEO should get a vote on their own salary. There are many degrees of grey between outright corruption and this example, and I think that's where the concern lies.

alsetmusic
1 replies
4h11m

I get what you're saying, but I also live in the world and see the mechanics of capitalism. I may be a person who's interested in tech, science, education, archeology, etc. That doesn't mean that I don't also have political views that sometimes overlap with a lot of other very-online people.

I think the comment to which you replied has a very reddit vibe, no doubt. But also, it's a completely valid point. Could it have been said differently? Sure. But I also immediately agreed with the sentiment.

hinkley
0 replies
3h47m

Oh I wasn’t complaining about the parent, I was complaining it needed to be said.

We are talking about a failure of the system, in the context of a concrete example. Talking about how the system actually works is only appropriate if you are drawing specific arguments up about how this situation is an anomaly, and few of them do that.

Instead it often sounds like “it’s very unusual for the front to fall off”.

mhh__
0 replies
4h29m

So?

iandanforth
0 replies
3h53m

I apologize, the comment's irony overwhelmed my snark containment system.

Juicyy
0 replies
4h53m

It's a more technical space than Reddit. You're gonna have more know-it-alls spewing.

badloginagain
1 replies
5h21m

And we're seeing the result in real time. Stupid-shit-doers have been replaced with hopefully-less-stupid-shit-doers.

It's a real shame too, because this is a clear loss for the AI Alignment crowd.

I'm on the fence about the whole alignment thing, but at least there is a strong moral compass in the field, especially compared to something like crypto.

alsetmusic
0 replies
4h7m

at least there is a strong moral compass in the field

Is this still true when the board gets overhauled after trying to uphold the moral compass?

jjoonathan
0 replies
5h13m

No, this is the part of the show where the patronizing rhetoric gets trotted out to rationalize discarding the principles that have suddenly become inconvenient for the people with power.

freedomben
0 replies
3h23m

You need to be able to separate macro-level and micro-level. GP is responding to a comment about the IRS caring about the conflict-of-interest on paper. The IRS has to make and follow rules at a macro level. Micro-level events obviously can affect the macro view, but you don't completely ignore the macro because something bad happened at the micro level. That's how you get knee-jerk reactionary governance, which is highly emotional.

fouc
1 replies
4h52m

OpenAI isn't a typical corporation but a 501(c)(3), so bylaws & protections that otherwise might exist appear to be lacking in this situation.

dragonwriter
0 replies
3h50m

501c3's also have governing internal rules, and the threat of penalties and loss of status imposed by the IRS gives them additional incentive to safeguard against even the appearance of conflict being manifested into how they operate (whether that's avoiding conflicted board members or assuring that they recuse where a conflict is relevant.)

If OpenAI didn't have adequate safeguards, either through negligence or because it was in fact being run deliberately as a fraudulent charity, that's a particular failure of OpenAI, not a “well, 501c3’s inherently don't have safeguards” thing.

throwaway-blaze
0 replies
4h11m

No conflict, no interest.

ip26
0 replies
5h27m

Reminds me of the “revolving door” problem. Obvious risk of corruption and conflict of interest, but at the same time experts from industry are the ones with the knowledge to be effective regulators. Not unlike how many good patent attorneys were previously engineers.

dragonwriter
0 replies
5h20m

A corporation acting (due to influence from a conflicted board member that doesn't recuse) contrary to the interests of its stockholders and in the interest of the conflicted board member or whom they represent potentially creates liability of the firm to its stockholders.

A charity acting (due to the influence of a conflicted board member that doesn't recuse) contrary to its charitable mission, in the interests of the conflicted board member or whom they represent, does something similar with regard to liability of the firm to various stakeholders with a legally-enforceable interest in the charity and its mission, but it is also a public civil violation that can lead to IRS sanctions against the firm, up to and including monetary penalties and loss of tax-exempt status, on top of whatever private tort liability exists.

baking
13 replies
7h57m

My guess is that the non-profit has never gotten this kind of scrutiny until now, and the new directors are going to want to get lawyers involved to cover their asses. Just imagine their position when Sam Altman really does do something worth firing him for.

I think it was a real mistake to create OpenAI as a public charity, and I would be hesitant to step into that mess. Imagine the fun when it tips into private foundation status.

danaris
5 replies
7h41m

Well, I think that's really the question, isn't it?

Was it a mistake to create OpenAI as a public charity?

Or was it a mistake to operate OpenAI as if it were a startup?

The problem isn't really either one—it's the inherent conflict between the two. IMO, the only reason to see creating it as a 501(c)(3) being a mistake is if you think cutting-edge machine learning is inherently going to be targeted by people looking to make a quick buck off of it.

blackoil
1 replies
6h4m

OpenAI the charity would have survived only as an ego project for Elon doing something fun with minor impact.

Only the current setup is feasible if they want to get the kind of investment required. This can work if the board is pragmatic and has no conflict of interest, so preferably someone with no stake in anything AI, either business or academic.

baking
0 replies
5h30m

I think the only way this can end up is to convert to a private foundation and make sizable (8 figures annually) grants to truly independent AI safety (broadly defined) organizations.

ToucanLoucan
1 replies
5h25m

IMO, the only reason to see creating it as a 501(c)(3) being a mistake is if you think cutting-edge machine learning is inherently going to be targeted by people looking to make a quick buck off of it.

I mean, that's certainly been my experience of it thus far: companies rushing to market with half-baked products that (allegedly) incorporate AI to do some task or another.

danaris
0 replies
5h18m

I was specifically thinking of people seeing a non-profit doing stuff with ML, and trying to finagle their way in there to turn it into a profit for themselves.

(But yes; what you describe is absolutely happening left and right...)

baking
0 replies
5h55m

To create a public charity without public fundraising is a no go. Should have been a private foundation because that is where it will end up.

qwery
3 replies
7h31m

I think it was a real mistake to create OpenAI as a public charity

Sure, with hindsight. But it didn't require much in the way of foresight to predict that some sort of problem would arise from the not-for-profit operating a hot startup that is by definition poorly aligned with the stated goals of the parent company. The writing was on the wall.

zerohalo
0 replies
3h14m

Exactly this. OpenAI was started for ostensibly the right reasons. But once they discovered something that would both 1) take a tremendous amount of compute power to scale and develop, and 2) could be heavily monetized, they chose the $ route, and at that point the mission was doomed, with the board members originally brought in to protect the mission holding their fingers in the dike.

broast
0 replies
5h49m

Wishfully, I hope there was some intent from the beginning to expose the impossibility of this contradictory model to the world, so that a global audience can evaluate how to improve our system to support a better future.

baking
0 replies
5h57m

I think it could have easily been predicted just from the initial announcements. You can't create a public charity simply from the donations of a few wealthy individuals. A public charity has to meet the public support test. A private foundation would be a better model, but someone decided they didn't want to go that route. Maybe they should have asked a non-profit lawyer?

purple_ferret
1 replies
5h33m

Perhaps creating OpenAI as a charity is what has allowed it to become what it is, whereas other for-profit competitors are worth much less. How else do you get a guy like Elon Musk to 'donate' $100 million to your company?

Lots of ventures cut corners early on that they eventually had to pay for, but cutting the corners was crucial to their initial success and growth

baking
0 replies
5h20m

Elon only gave $40 million, but since he was the primary donor I suspect he was the one who was pushing for the "public charity" designation. He and Sam were co-founders. Maybe it was Sam who asked Elon for the money, but there wasn't anyone else involved.

Turing_Machine
0 replies
5h1m

I think it was a real mistake to create OpenAI as a public charity and I would be hesitant to step into that mess.

I think it could have worked either as a non-profit or as a for-profit. It's this weird jackass hybrid thing that's produced most of the conflict, or so it seems to me. Neither fish nor fowl, as the saying goes.

stikit
11 replies
7h20m

OpenAI is not a charity. Microsoft's investment is in OpenAI Global, LLC, a for-profit company.

From https://openai.com/our-structure

- First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.

- Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.

- Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.

- Fourth, profit allocated to investors and employees, including Microsoft, is capped. All residual value created above and beyond the cap will be returned to the Nonprofit for the benefit of humanity.

- Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.
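To make the control chain in those five points easier to follow, here is a rough sketch of it as plain Python data. Entity names come from the structure page quoted above; ownership percentages are not public, so only the control relationships are modeled:

  # Who controls whom, per openai.com/our-structure (simplified sketch).
  control_chain = {
      "OpenAI Inc. (501(c)(3) nonprofit)": {
          "board": "the board at the center of this saga",
          "wholly owns": "OpenAI GP LLC (manager entity)",
      },
      "OpenAI GP LLC (manager entity)": {
          "controls and governs": "OpenAI Global, LLC (for-profit subsidiary)",
      },
      "OpenAI Global, LLC (for-profit subsidiary)": {
          "investors": ["Microsoft (capped returns)", "employees (capped returns)"],
          "residual value above cap": "returned to the Nonprofit",
      },
  }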

dragonwriter
5 replies
6h14m

OpenAI is not a charity.

OpenAI is a charity nonprofit, in fact.

Microsoft's investment is in OpenAI Global, LLC, a for-profit company.

OpenAI Global LLC is a subsidiary two levels down from OpenAI, which is expressly (by the operating agreement that is the LLC's foundational document) subordinated to OpenAI’s charitable purpose, and which is completely controlled (despite the charity's indirect and less-than-complete ownership) by OpenAI GP LLC, a wholly owned subsidiary of the charity, on behalf of the OpenAI charity.

And, particularly, the OpenAI board is, as the excerpts you quote in your post expressly state, the board of the nonprofit that is the top of the structure. It controls everything underneath because each of the subordinate organizations' foundational documents gives it (well, for the two entities with outside investment, OpenAI GP LLC, the charity's wholly-owned and -controlled subsidiary) complete control.

hackernewds
4 replies
5h21m

Well, not anymore, as they cannot function as a nonprofit.

Also, infamously, they fundraised as a nonprofit but later admitted they needed a for-profit structure to thrive, which Elon is miffed about and Sam has defended explicitly.

dragonwriter
3 replies
5h9m

well not anymore, as they cannot function as a nonprofit.

There's been a lot of news lately, but unless I've missed something, even with the tentative agreement of a new board for the charity nonprofit, they are and plan to remain a charity nonprofit with the same nominal mission.

also infamously they fundraised as a nonprofit, but retracted to admit they needed a for profit structure to thrive

No, they admitted they needed to sell products rather than merely take donations to survive, and needed to be able to return the profits from doing that to investors in order to scale up enough. So they formed a for-profit subsidiary with its own for-profit subsidiary, both controlled by another subsidiary, all subordinated to the charity nonprofit.

DebtDeflation
2 replies
4h54m

they are and plan to remain a charity nonprofit

Once the temporary board has selected a permanent board, give it a couple of months and then get back to us. They will almost certainly choose to spin the for-profit subsidiary off as an independent company. Probably with some contractual arrangement where they commit x funding to the non-profit in exchange for IP licensing. Which is the way they should have structured this back in 2019.

tempestn
1 replies
1h39m

"Almost certainly"? Here's a fun exercise. Over the course of, say, a year, keep track of all your predictions along these lines, and how certain you are of each. Almost certainly, expressed as a percentage, would be maybe 95%? Then see how often the predicted events occur, compared to how sure you are.

Personally I'm nowhere near 95% confident that will happen. I'd say I'm about 75% confident it won't. So I wouldn't be utterly shocked, but I would be quite surprised.
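For what it's worth, the tracking exercise being suggested is easy to mechanize. A minimal Python sketch; the sample predictions and confidence levels are invented for illustration:

  from collections import defaultdict

  # (claim, stated confidence, did it come true?)
  predictions = [
      ("for-profit arm spun off within a year", 0.95, False),
      ("permanent board seated by spring", 0.80, True),
  ]

  buckets = defaultdict(lambda: [0, 0])  # confidence -> [hits, total]
  for _claim, conf, came_true in predictions:
      buckets[conf][0] += int(came_true)
      buckets[conf][1] += 1

  # A well-calibrated forecaster is right ~95% of the time on claims
  # they call "almost certain".
  for conf, (hits, total) in sorted(buckets.items()):
      print(f"claimed {conf:.0%}: right {hits}/{total}")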

kyle_grove
0 replies
21m

I’m pretty confident (close to the 95% level) they will abandon the public charity structure, but throughout this saga, I have been baffled by the discourse’s willingness to handwave away OpenAI’s peculiar legal structure as irrelevant to these events.

ezfe
3 replies
6h22m

The board is the charity though, which is why the person you're replying to made the remark about MSFT employees being appointed to the board

UrineSqueegee
2 replies
6h17m

A charity is a type of not-for-profit organisation. However, the main difference between a nonprofit and a charity is that a nonprofit doesn't need to attain "charitable status", whereas a charity, to qualify as a charity, needs to meet very specific and strict guidelines.

ezfe
1 replies
4h22m

Yes, I misspoke - I meant nonprofit

zja
0 replies
1h55m

You were right, though: OpenAI Inc, which the board controls, is a 501(c)(3) charity.

strangesmells06
0 replies
6h10m

First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.

I'm not criticizing. Big fan of avoiding being taxed to fund wars... but it's just funny to me; it seems like they're sort of having their cake and eating it too with this kind of structure.

Good for them.

flagrant_taco
8 replies
9h24m

I don't expect the government to regulate any of this aggressively. AI is much too important to the government and military to allow pesky conflicts of interest to slow down any competitive advantage we may have.

dgrin91
7 replies
8h8m

If you think that OpenAI is the Gov's only source of high quality AI research then I have a bridge to sell you.

jakderrida
6 replies
7h33m

If you think the person you're replying to was talking about regulating OpenAI specifically and not the industry as a whole, I have ADHD medicine to sell you.

swores
5 replies
5h8m

The context of the comment thread you're replying to was a response to a comment suggesting the IRS will get involved in the question of whether MS have too much influence over OpenAI, it was not the subject of general industry regulation.

But hey, at least you fitted in a snarky line about ADHD in the comment you wrote while not having paid attention to the 3 comments above it.

jakderrida
3 replies
2h17m

Maybe you're right? At least my joke was original, though, unlike your little boyfriend's tired line.

swores
1 replies
2h15m

I think the comment you had replied to was equally unwarranted, no need to tie me to them.

jakderrida
0 replies
2h11m

I'm sorry. I was just taking the snark discussion to the next level. I thought going overboard was the only way to convey that there's no way I'm serious.

rsrsrs86
0 replies
1h47m

When did this become a boy-girlfriend issue?

freedomben
0 replies
3h15m

If the up-the-line parent wasn't talking about regulation of AI in general, then what do you think they meant by "competitive advantage"? Also, governments have to set policy and enforce that policy. They can't (or shouldn't, at least) pick and choose favorites.

Also, GP's snark was a reply to snark. Once somebody opens the snark, they should expect snark back. It's ideal for nobody to snark, and big of people not to snark back at a snarker, but snarkers gonna snark.

pc86
4 replies
6h15m

Others have pointed out several reasons this isn't actually a problem (and that the premise itself is incorrect, since "OpenAI" is not a charity), but one thing not mentioned: even if the MS-appointed board member is an MS employee, yes, they will have a fiduciary duty to the organizations under the purview of the board, but unless they are also a board member of Microsoft (extraordinarily unlikely) they have no such fiduciary duty to Microsoft itself. So in the also unlikely scenario that there is a vote that conflicts with their Microsoft duties, and in the even more unlikely scenario that they don't abstain due to that conflict, they have a legal responsibility to err on the side of OpenAI and no legal responsibility to Microsoft. Seems like a pretty easy decision to make - and abstaining is the easiest, unless it's a contentious 4-4 vote and there's pressure for them to choose a side.

But all that seems a lot more like an episode of Succession and less like real life to be honest.

throwoutway
1 replies
6h10m

It's still a conflict of interest, one that they should avoid. Microsoft COULD appoint someone whom they like and who shares their values but is not a MSFT employee. That would be the preferred approach, but one that I doubt a megacorp would take.

ghaff
0 replies
5h44m

Both profit and non-profit boards have members that have potential conflicts of interest all the time. So long as it’s not too egregious no one cares, especially not the IRS.

oatmeal1
0 replies
6h3m

Microsoft is going to appoint someone who benefits Microsoft. Whether a particular vote would violate fiduciary duty is subjective. There's plenty of opportunity for them to prioritize the welfare of Microsoft over OAI.

dragonwriter
0 replies
6h6m

and that the premise itself is incorrect since "OpenAI" is not a charity

OpenAI is a 501c3 charity nonprofit, and the OpenAI board under discussion is the board of that charity nonprofit.

OpenAI Global LLC is a for-profit subsidiary of a for-profit subsidiary of OpenAI, both of which are controlled, by their foundational agreements that give them legal existence, by a different (AFAICT not for-profit but not legally a nonprofit) LLC subsidiary of OpenAI (OpenAI GP LLC).

paulddraper
4 replies
7h24m

What if I told you...Bill Gates was/is on the board of the non-profit Bill and Melinda Gates Foundation?

Lol HN lawyering is hilarious.

fatbird
3 replies
7h18m

Indeed, it is hilarious.

The Foundation has nothing to do with MS and can't possibly be considered a competitor, acquisition target, supplier, or any other entity where a decision for the Foundation might materially harm MS (or the reverse). There's no potential conflict of interest between the missions of the two.

Did you think OP meant there was some inherent conflict of interest with charities?

paulddraper
2 replies
7h17m

Have you seen OpenAI's current board?

Explain how an MS employee would have greater conflict of interest.

uxp8u61q
1 replies
5h43m

Conflict of interest with what? The other board members? That's utterly irrelevant. Look up some big companies' boards some day. You'll see.

paulddraper
0 replies
3h12m

See earlier

If OpenAI remains a 501(c)(3) charity, then any employee of Microsoft on the board will have a fiduciary duty to advance the mission of the charity, rather than the business needs of Microsoft. There are obvious conflicts of interest here.

https://news.ycombinator.com/item?id=38378069

TigeriusKirk
4 replies
7h28m

Larry Summers is in place to effectively give the govt seal of approval on the new board, for better and worse.

ilrwbwrkhv
2 replies
6h37m

Isn't he a big Jeffrey Epstein fanboy? Ethical AGI is in safe hands.

https://www.thecrimson.com/article/2023/5/5/epstein-summers-...

kossTKR
0 replies
2h58m

It's obvious this class of people love their status as neo-feudal lords above the law, living as 18th-century libertines behind closed doors.

But I guess people here are either waiting for wealth to trickle down on them, or believe the torrent of psychological operations so much that their minds close down when they intuit the circular, brutal nature of hierarchical class-based society, and the utter illusion that democracy or meritocracy is.

The uppermost classes have been tricksters through all of history. What happened to this knowledge and the countercultural scene in hacking? Hint: it was psyopped in the early 90's by "libertarianism" and worship of bureaucracy, to create a new class of cybernetic soldiers working for the oligarchy.

futuretaint
0 replies
5h22m

Nothing screams 'protect the public interest' more than Wall Street's biggest cheerleader during the 2008 financial crisis. Who's next, Richard S. Fuld Jr.? Should the Enron guys be included?

mcast
0 replies
1h3m

If you wanted to wear a foil hat, you might think this internal fighting was started by someone connected to TPTB subverting the rest of the board to gain a board seat, and thus more power and influence, over AGI.

The hush-hush nature of the board providing zero explanation for why sama was fired (and what started it) certainly doesn't pass the smell test.

zerohalo
2 replies
3h16m

OpenAI's charter is dead. I expect future boards to amend it.

ric2b
0 replies
7m

People keep saying this but is there any evidence that any of this was related to the charter?

dragonwriter
0 replies
3h13m

It's useful PR pretext for their regulatory advocacy, and subjective enough that if they are careful not to be too obvious about specifically pushing one company's commercial interest, they can probably get away with it forever. So why would it be any deader than when Sam was CEO before and not substantively guided by it?

boh
1 replies
5h12m

Whenever there's an obvious conflict, assume it's not enforced, or difficult to litigate, or has relatively irrelevant penalties. Experts/lawyers who have a material stake in getting this right have signed off on it. Many (if not most) people with enough status to be on the board of a Fortune 500 company tend to also be on non-profit boards. We can go out on a limb and suppose the mission of the nonprofit is not their top priority, and yet they continue on unscathed.

hinkley
0 replies
5h5m

Do you remember before Bill Gates got into disease prevention he thought that “charity work” could be done by giving away free Microsoft products? I don’t know who sat him down and explained to him how full of shit he was but they deserve a Nobel Peace Prize nomination.

Just because someone says they agree with a mission doesn't mean they have their heads screwed on straight. And my thesis is that the more power they have in the real world, the worse the outcomes - because powerful people become progressively immune to feedback. "This has been working swimmingly for me for decades; I don't need humility in a new situation."

voxic11
0 replies
6h42m

Even if the IRS isn't a fan, what are they going to do about it? It seems like the main recourse they could pursue is they could force the OpenAI directors/Microsoft to pay an excise tax on any "excess benefit transactions".

https://www.irs.gov/charities-non-profits/charitable-organiz...

mwattsun
0 replies
6h30m

Microsoft doesn't have to send an employee to represent them on the board. They could ask Bill Gates.

jklein11
0 replies
4h0m

I'm a little bit confused, are you saying that the IRS would have some sort of beef with employees of Microsoft serving on the board of a 501(c)(3)?

hackernewds
0 replies
5h23m

Not to mention, the mission of the Board cannot be "build safe AGI" anymore. Perhaps something more consistent with expanding shareholder value and capitalism, as the events of this weekend have shown.

Delivering profits and shareholder value is the sole and dominant force in capitalism. It remains to be seen whether that is consistent with humanity's survival.

brookst
0 replies
7h28m

There’s no indication a Microsoft-appointed board member would be a Microsoft employee (though they could be, of course), and large nonprofits often have board members who come from for-profit companies.

I don’t think the IRS cares much about this kind of thing. What would be the claim? That OpenAI is pushing benefits to Microsoft, a for-profit entity that pays taxes? Even if you assume the absolute worst, most nefarious meddling, it seems like an issue for the SEC more than the IRS.

_b
0 replies
5h36m

There are obvious conflicts of interest here.

There are almost always obvious conflicts of interest. In a normal startup, VCs have a legal responsibility to act in the interest of the common shares, but in practice, they overtly act in the interest of the preferred shares that their fund holds.

taway1874
45 replies
4h53m

Some perspective ...

One developer (Ilya) vs. One businessman (Sam) -> Sam wins

Hundreds of developers threaten to quit vs. Board of Directors (biz) refuse to budge -> Developers win

From the outside it looks like developers held the power all along ... which is how it should be.

jejeyyy77
14 replies
4h27m

$$$ vs. Safety -> $$$ wins.

Employees who have $$$ incentive threaten to quit if that is taken away. News at 8.

baby
13 replies
4h25m

Why are you assuming employees are incentivized by $$$ here, and why do you think the board's reason is related to safety or that employees don't care about safety? It just looks like you're spreading FUD at this point.

mi_lk
9 replies
4h10m

It's you who are naive if you really think the majority of those 7xx employees care more about safe AGI than their own equity upside

nh23423fefe
7 replies
3h51m

Why would anyone care about safe AGI? It's vaporware.

mecsred
5 replies
3h44m

Everything is vaporware until it gets made. If you wait until a new technology definitively exists to start caring about safety, you have guaranteed it will be unsafe.

Lucky for us, this fiasco has nothing to do with AGI safety, only AI technology. Which only affects automated decision-making in technology that's entrenched in every facet of our lives. So we're all safe here!

superturkey650
4 replies
3h6m

If you wait until a new technology definitively exists to start caring about safety, you have guaranteed it will be unsafe.

I don’t get this perspective. The first planes, cars, computers, etc. weren’t initially made with safety in mind. They were all regulated after the fact and successfully made safer.

How can you even design safety into something if it doesn’t exist yet? You’d have ended up with a plane where everyone sat on the wings with a parachute strapped on if you designed them with safety first instead of letting them evolve naturally and regulating the resulting designs.

bcrosby95
1 replies
2h43m

The US government got involved in regulating airplanes long before there were any widely available commercial offerings:

https://en.wikipedia.org/wiki/United_States_government_role_...

If you're trying to draw a parallel here, then safety and the federal government need to catch up. There are already commercial offerings that any random internet user can use.

superturkey650
0 replies
2h37m

I agree, and I am not saying that AI should be unregulated. At the point the government started regulating flight, the concept of an airplane had existed for decades. My point is that until something actually exists, you don’t know what regulations should be in place.

There should be regulations on existing products (and similar products released later), since they exist and you know what you're applying regulations to.

mecsred
0 replies
1h55m

I understand where you're coming from, and I think that's reasonable in general. My perspective would be: you can definitely iterate on the technology to come up with safer versions, but with this strategy you have to make an unsafe version first. If you got in one of the first airplanes ever made, the likelihood of crashing was pretty high.

At some point, our try-it-until-it-works approach will bite us. Consider the calculations done to determine whether fission bombs would ignite the atmosphere. You don't want to test that one and find out. As our technology improves exponentially, we're going to run into that situation more and more frequently. Regardless of whether you think it's AGI or something else, we will eventually run into some technology where one mistake is a cataclysm. How many nuclear close calls have we already experienced?

FartyMcFarter
0 replies
2h57m

The difference between unsafe AGI and an unsafe plane or car is that the plane/car are not existential risks.

stillwithit
0 replies
3h29m

Exactly what an OpenAI developer would understand. All the more reason to ride the grift that brought them this far

concordDance
0 replies
3h10m

Uh, I reckon many do. Money is easy to come by for that type of person and avoiding killing everyone matters to them.

jejeyyy77
0 replies
4h22m

Of course the employees are motivated by $$$ - is that even a question?

hackerlight
0 replies
4h17m

The large majority of people are motivated by $$$ (or fame) and if they all tell me otherwise I know many of them are lying.

DirkH
0 replies
3h13m

Assuming employees are not incentivized by $$$ here seems extraordinary and needs a pretty robust argument to show it isn't playing a major factor when there is this much money involved.

adverbly
7 replies
4h39m

There are three dragons:

Employees, customers, government.

If motivated and aligned, any of these three could end you if they want to.

Do not wake the dragons.

pdntspa
5 replies
4h9m

The Board is another one, if you're CEO.

elliotec
3 replies
4h0m

I think the parent comment’s point is that the board is not one, since the board was defeated (by the employee dragon).

pdntspa
2 replies
3h46m

I think the analogy is kind of shaky. The board tried to end the CEO, but employees fought them and won.

I've been in companies where the board won, and they installed a stoolie that proceeded to drive the company into the ground. Anybody who stood up to that got fired too.

davesque
1 replies
2h21m

I have an intuition that OpenAI's mid-range size gave the employees more power in this case. It's not as hard to coordinate a few hundred people, especially when those people are on top of the world and want to stay there. At a megacorp with thousands of employees, the board probably has an easier time bossing people around. Although I don't know if you had a larger company in mind when you gave your second example.

pdntspa
0 replies
1h31m

No, I'm thinking a smaller company, like 50 people, $20m ARR. Engineering-focused, but not tech

adverbly
0 replies
2h31m

My comment was more of a reflection of the fact that you might have multiple different governance structures in your organization. Sometimes investors are at the top. Sometimes it's a private owner. Sometimes there are separate kinds of shares for voting on different things. Sometimes it's a board. So you're right: depending on the governance structure you can have additional dragons. But you can never prevent any of these three from being dragons. They will always be dragons, and you must never wake them up.

bossyTeacher
0 replies
1h30m

Or tame the dragons. AFAIK Sam hired the employees. Hence they are loyal to him

philipwhiuk
6 replies
4h43m

Are you sure Ilya was the root of this?

He backed it and then signed the pledge to quit if it wasn't undone.

What's the evidence he was behind it and not D'Angelo?

jiveturkey
3 replies
3h35m

Wake up, people! (said rhetorically, not accusatorially or in any other way)

This is Altman's playbook. He did a similar ousting at Reddit. This was planned all along to overturn the board. Ilya was in on it.

I'm not normally a conspiracy theorist. But fool me ... you can't be fooled again. As they say in Tennessee

bugglebeetle
1 replies
3h31m

What’s the backstory on Reddit?

occamsrazorwit
0 replies
2h46m

Yishan (former Reddit CEO) describes how Altman orchestrated the removal of Reddit's owner: https://www.reddit.com/r/AskReddit/comments/3cs78i/whats_the...

Note that the response is Altman's, and he seems to support it.

As additional context, Paul Graham has said a number of times that Altman is one of the most power-hungry and successful people he knows (as praise). Paul Graham, who's met hundreds if not thousands of experienced leaders in tech, says this.

bossyTeacher
0 replies
1h29m

What happened at Reddit?

dr_dshiv
0 replies
4h22m

If we only look at the outcomes (the dismantling of the board), Microsoft and Sam seem to have the most motive.

__loam
0 replies
4h12m

I'm not sure I buy the idea that Ilya was just some hapless researcher who got unwillingly pulled into this. Any one of the board could have voted not to remove Sam and stop the board coup, including Ilya. I'd bet he only got cold feet after the story became international news and after most of the company threatened to resign because their bag was in jeopardy.

jessenaser
3 replies
4h45m

Yes, 95% agreement in any company is unprecedented but:

1. They can get equivalent position and pay at the new Microsoft startup during that time, so their jobs are not at risk.

2. Sam approved each hire in the first place.

3. OpenAI is selecting for the type of people who want to work at a non-profit with a goal in mind instead of another company that could offer higher compensation. Mission driven vs profit driven.

However they arrived at the conclusion to band together and quit, it was a good idea, and it worked. And it is a check on the power of a bad board of directors, when otherwise a board of directors cannot be challenged. "OpenAI is nothing without its people".

brrrrrm
1 replies
3h10m

1. They can get equivalent position and pay at the new Microsoft startup during that time, so their jobs are not at risk.

citation?

davio
0 replies
2h56m
andersa
0 replies
3h23m

OpenAI is selecting for the type of people who want to work at a non-profit with a goal in mind instead of another company that could offer higher compensation. Mission driven vs profit driven.

Maybe that was the case at some point, but clearly not anymore, ever since the release of ChatGPT. Or did you not see them offer completely absurd compensation packages, e.g. to engineers leaving Google?

I'd bet more than half the people are just there for the money.

zerohalo
1 replies
3h17m

more like $$ wins.

It's clear most employees didn't care much about OpenAI's mission -- and I don't blame them since they were hired by the __for-profit__ OpenAI company and therefore aligned with __its__ goals and rewarded with equity.

In my view the board did the right thing to stand by OpenAI's original mission -- which now clearly means nothing. Too bad they lost out.

One might say the mission was pointless since Google, Meta, and MSFT would develop it anyway. That's really a convenient argument that has been used in arms races (if we don't build lots of nuclear weapons, others will build lots of nuclear weapons) and leads to ... well, where we are today :(

joewferrara
0 replies
2h42m

Where we are today is a world where people do not generally worry about nuclear bombs being dropped. So that seems like a pretty good outcome in that example.

stillwithit
0 replies
3h23m

Of course workers hold all the power, as the value of assets like businesses is entirely made up and propagated intentionally.

That non-contributing ownership produces anything is a memorized hallucination, in line with the last few decades of socioeconomic and political theory conveniently peddled by the non-contributing ownership class.

We binned the religious story but kept some ridiculous idea that we have to serve the story of an arbitrary meat suit due to hand-me-down tales of their dad's investment skills. Never mind that it was legal to exclude women and racial minorities from enriching work until 30 years ago.

All the old-money people earned it fair and square! Believe them! It's not just biology driving them to thrive! Their progeny are owed control over our agency for being such good little Karens and Special Boys! Freedom for them, obligation for us!

sokoloff
0 replies
4h39m

Is your first “-> Sam wins” different than what you intended?

rexarex
0 replies
4h51m

Money won.

nikcub
0 replies
1h34m

The employees rapidly and effectively formed a quasi-union to grant themselves a very powerful seat at the table.

m00x
0 replies
2h38m

Ilya signed the letter saying he would resign if Sam wasn't brought back. Looks like he regretted his decision and ultimately got played by the 2 departing board members.

Ilya is also not a developer; he's a founder of OpenAI and was the CSO.

hsavit1
0 replies
4h32m

Seems like the union of developers is stronger than the company itself. Hence why unions are so frowned upon by big-tech corporate leadership.

dylan604
0 replies
3h59m

It's not like this is the first:

One developer (Woz) vs One businessman (Jobs) -> Jobs wins

awb
0 replies
2h17m

It’s a cost / benefit analysis.

If people are easily replaceable then they don't hold nearly as much power, even en masse.

Quentincestino
0 replies
2h58m

OpenAI developers are redefining the state of the art in AI every 6 months; if the company loses them, it might as well go bankrupt.

zerohalo
8 replies
3h22m

Google, Meta, and now OpenAI. So long, responsible AI and safety guardrails. Hello, big money.

Disappointed by the outcome, but perhaps mission-driven AI development -- the reason OpenAI was founded -- was never possible.

Edit: I applaud the board members for (apparently, it seems) trying to stand up for the mission (aka doing the job that they were put on the board to do), even if their efforts were doomed.

risho
4 replies
3h13m

You just don't understand how markets work. If OpenAI slows down, then they will just be driven out by the competition. That's fine if that's what you think they should do, but it won't make AI any safer; it will just kill OpenAI and have them replaced by someone else.

WanderPanda
2 replies
3h7m

You can still be a force for decentralization by creating actually open AI. For now it seems like Meta AI research is the real open AI.

insanitybit
1 replies
2h55m

What does "actually open" mean? And how is that more responsible? If the ethical concern of AI is that it's too powerful or whatever, isn't building it in the open worse?

WanderPanda
0 replies
2h50m

Depends on how you interpret the mission statement of building AI for all of humanity. It's questionable whether humanity is better off if AI accrues to only one or a few centralised entities.

zerohalo
0 replies
3h8m

You're right about market forces. However:

1) OpenAI was explicitly founded NOT to develop AI based on "market forces"; it's just that they "pivoted" (aka abandoned their mission) once they struck gold, in order to become driven by the market

2) this is exactly the reasoning behind nuclear arms races

paulddraper
2 replies
3h8m

I applaud the board members for (apparently, it seems) trying to stand up for the mission

What about this is apparent to you?

What statement has the board made on how they fired Altman "for the mission"?

Have I missed something?

alsetmusic
1 replies
2h57m

To me, commentary online and on podcasts universally leans on the idea that he appears to be very focused on money (from the outside) in seeming contradiction to the company charter:

Our primary fiduciary duty is to humanity.

Also, the language of the charter has been watered down from a stronger commitment that was in the first version. Others have quoted it and I'm sure you can find it on the Internet Archive.

paulddraper
0 replies
2h29m

commentary online and on podcasts

:/

daveguy
6 replies
3h31m

Hi dang,

Seeing a bug in your comment here:

https://news.ycombinator.com/item?id=38382563

You reference the pages like this:

https://news.ycombinator.com/item?id=38375239?p=2

The second ? should be an & like this:

https://news.ycombinator.com/item?id=38375239&p=2
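Why the second ? breaks it: URL parsers treat everything after the first ? as the query string and split parameters on &, so a stray second ? just gets absorbed into the value of the preceding parameter. A minimal sketch of the effect using Python's standard urllib.parse (purely illustrative; this is not how HN itself parses URLs):

    from urllib.parse import urlparse, parse_qs

    # The stray "?" is swallowed into the value of "id",
    # so "p" never shows up as a parameter at all:
    bad = urlparse("https://news.ycombinator.com/item?id=38375239?p=2")
    print(parse_qs(bad.query))   # {'id': ['38375239?p=2']}

    # With "&", both parameters parse as intended:
    good = urlparse("https://news.ycombinator.com/item?id=38375239&p=2")
    print(parse_qs(good.query))  # {'id': ['38375239'], 'p': ['2']}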

Please feel free to delete this message after you've received it.

saliagato
2 replies
3h4m

Why do you (dang) always write a comment specifying that people can read more, even providing some links, when it's clear that when you reach the bottom of the page you have to click "read more" to indeed read more? Isn't it a bit useless?

pvg
0 replies
2h50m

when it's clear

It isn't that clear. UI elements that people have to scroll to are among the most commonly missed.

bartread
0 replies
3h2m

Because people don't, that's why.

pvg
1 replies
3h1m

If you want to reach the mods just email hn@ycombinator.com

daveguy
0 replies
2h24m

Thank you for the advice. I will do that in the future.

paulddraper
0 replies
3h10m

Also, while we're at it:

"Nobody will be happier than I when this bottleneck (edit: the one in our code—not the world) is a thing of the past" [1]

HN plans to be multi-core?!?! A bigger scoop than OpenAI governance!

Anything more you can share?

[1] https://news.ycombinator.com/item?id=38351005

nickysielicki
4 replies
11h34m

Satya and Sam committed securities fraud with their late-Sunday "funding secured" ploy to protect the MSFT stock price. This was the obvious outcome. Sam had no intention of actually going through with it, and Satya was in no position to unilaterally commit to the type of funding he was implying.

They lied to protect the stock. That should be illegal. In fact, it is illegal.

computerex
1 replies
11h32m

I don't think this is actionable in any way, even if what you say were shown unequivocally to be true.

nickysielicki
0 replies
11h23m

What do you mean? It would be conspiring to commit bank and wire fraud; the SEC can totally act on that if they want to.

nmfisher
0 replies
10h16m

Yeah, I think there may well be an investigation into that. At best, he said something that was unequivocally untrue, and at worst it was an outright lie. That's blatant market manipulation.

TrackerFF
0 replies
10h1m

Short sellers in shambles right now.

lysecret
4 replies
8h17m

Fascinating. I see a lot of the "VC/MSFT has overthrown our NPO governing structure because of profit incentives" narrative.

I don't think this is what really happened at all. The reason this decision was made is that 95% of employees sided with Sam on this issue, and the board didn't explain themselves in any way at all. So it was Sam + 95% of employees + all investors against the board. In which case the board should lose (since they were only governing for themselves here).

I think in the end it was a good and fair outcome. I still think their governing structure is decent for solving the AGI problem; this particular board was just really bad.

r_thambapillai
0 replies
7h26m

Of course, the profit incentive also applies to all the employees (which isn't necessarily a bad thing; it's good to align the company's goals with those of the employees). But when the executives likely have tens of millions of dollars on the line, and many of the ICs likely have single-digit millions on the line as well, it doesn't seem exactly straightforward to view the employees as unbiased adjudicators of what's in the interest of the non-profit entity, which is supposed to be what's in charge.

It is sort of strange that our communal reaction is to say "well this board didn't act anything like a normal corporate board": of course it didn't, that was indeed the whole point of not having a normal corporate board in charge.

Whatever you think of Sam, Adam, Ilya, etc., the one conclusion that seems safe to reach is that in the end, the profit/financial incentives ended up being far more important than the NGO's mission, no matter what legal structure was in place.

jkaplan
0 replies
6h15m

1. Microsoft was heavily involved in orchestrating the 95% of employees to side with Sam -- through promising them money/jobs and through PR/narrative.

2. The profit incentives apply to employees too.

Bigger picture, I don't think the "money/VC/MSFT/commercialization faction destroyed the safety/non-profit faction" is mutually exclusive with "the board fucked up." IMO, both are true

greenie_beans
0 replies
7h52m

Can't wait to see what happens next time, when capital is on the opposite side of the 95% of employees.

campbel
0 replies
5h44m

I don't think the board was big enough, for starters. Of the folks on it, only one (Adam) had experience as a leader of a for-profit venture. Helen probably lacks the leadership background to make any progress pushing her priorities.

voiceblue
2 replies
6h56m

For some reason this reminds me of the Coke/New Coke fiasco, which ended up popularizing Coke Classic more than ever before.

Consumers were outraged and demanded their beloved Coke back – the taste that they knew and had grown up with. The request to bring the old product back was so loud that soon journalists suggested that the entire project was a stunt. To this accusation Coca-Cola President Don Keough replied on July 10, 1985:

    "We are not that dumb, and we are not that smart."
https://en.wikipedia.org/wiki/New_Coke

jdlyga
0 replies
4h5m

I tried New Coke when it was re-released for Stranger Things. It really is a lot better than Coca Cola Classic. It's a shame that it failed.

freedomben
0 replies
4h24m

That is one of the greatest lines of all time. Classic

thepasswordis
1 replies
7h21m

Yeah I don’t know. I think you’d be kind of nuts to build anything on their APIs anymore.

Sure I’ll keep using ChatGPT in a personal capacity/as search. But no way I’d trust my business to them

campbel
0 replies
5h46m

Working out nicely for MSFT then. You can use GPT-4 via Azure already.

superultra
1 replies
11h31m

I find it interesting that for all the talk from OpenAI staff that it was all about the people, and from Satya that MS has all the rights and knowledge and can jumpstart their own branch on a dime, it seems getting control of OpenAI proper was a huge priority.

Given that Claude sucks so bad, and given this week's events, I'm guessing that the ChatGPT secret sauce is not as replicable as some might suggest.

0xDEF
0 replies
9h43m

Bard is better than ChatGPT-3.5.

But GPT-4 is indeed in a class of its own.

roody15
1 replies
10h34m

“The company also agreed to revamp the board of directors that had dismissed him. OpenAI named Bret Taylor, formerly co-CEO of Salesforce, as chair and also appointed Larry Summers, former U.S. Treasury Secretary, to the board.”

Not looking good for the “Open” part of OpenAI.

otteromkram
0 replies
10h29m

Could have said the same thing once Microsoft got involved.

quietpain
1 replies
11h46m

Why is this subject giving me Silicon Valley season 2 flashbacks with every update?

seydor
0 replies
11h40m

The script of SV2 was given as training data to the AGI that has taken over.

minzi
1 replies
10h7m

I would be surprised if the original board’s reasons for caving in were not influenced by personal factors. They must’ve been receiving all kinds of threats from those involved and from random twitter extremists.

It is troubling because it shows that this "external" governance, meant to make decisions for the good of humanity, is unable to enforce its decisions. The internal employees were obviously swayed by financial gain as well. Honestly, I don't think I would behave differently were I in their shoes. However, this does definitively mean that they are a product- and profit-driven group.

I think that Sam Altman is dishonest and a depressing example of what modern Americans idealize. He has all these ideals he preaches but will happily turn on them if it suits his ego. On top of that, he is held up as some star innovator when in reality he built nothing himself. He just identified one potential technological advancement and threw money at it with all his billionaire friends.

Gone are the days of building things in a garage with a mission. Founders are no longer visionary engineers and designers. The path now is clear. Convince some rich folks you’re worthy of being rich too. When they adopt you into wealth you can start throwing shit at the wall until something sticks. Eventually something will and you can claim visionary status. Now your presence in the billionaire club is beyond reproach because you’re a “founder”.

InCityDreams
0 replies
7h49m

They must’ve been receiving all kinds of threats from those involved and from random twitter extremists.

Oooh, yeah. "Must have".

melvinmelih
1 replies
6h6m

“You could parachute Sam into an island full of cannibals and come back in 5 years and he'd be the king.” - Paul Graham

rsanek
0 replies
4h14m
xyst
0 replies
1h49m

The OpenAI board f'd around and found out the consequences of their poor decisions. The decision to backpedal from their previous position just shows the level of disconnect between these two entities.

If I were an investor, I would be scared.

wilde
0 replies
11h27m

But “Sam Altman, Microsoft PM” would have been a much funnier outcome

throwaway74852
0 replies
9h4m

So OpenAI's board is now exclusively white men, and predominantly tech insiders? Lovely to have such a diverse group behind this technology. Could this be more comical?

theGnuMe
0 replies
8h0m

Larry Summers is an interesting choice. Any ideas why? I know he was Sheryl Sandberg's mentor/professor, which gives him a tech connection. However, I've watched him debate Paul Krugman on inflation in some economics lectures, and it almost felt like Larry was out of his element, as in outgunned by Paul... but maybe he was having an off day, or it was a topic he's not an expert in. But I don't know the history, haven't read either of their books, and I am not an economist. It was just something I noticed... almost like he was out of touch.

That has nothing to do with AI though.

sys_64738
0 replies
9h44m

Why has OpenAI taken to poaching employees from M$ now?

rennsport_eth
0 replies
7h23m

I love you, but you are not serious people.

rceDia
0 replies
8h34m

The "giveaway" is the fact that "Microsoft is happy" with the return of Mr. Altman. Can't wait for the former board's tell-all story. Bets on: how the founder of a cutting-edge tech company wanted world peace and no harm, but outside capital forces steered him to the other "unfathomable riches" option. It happens.

rashidae
0 replies
4h8m

Could someone do a sentiment analysis from the comments and share it with all of us who can’t read all the 1,700+ comments?

pimpampum
0 replies
9h30m

So Altman started it and ended up winning it, clearly his coup. Sad how employees were duped into standing behind him.

orsenthil
0 replies
7h27m

What's even the lesson learnt here?

1. Keep doing your work, and focus on building your product.

2. Ignore the noise; go back to 1.

nomaD_
0 replies
8h59m

Hiring engineers at a $900K salary & pretending to be a non-profit does not work. Turns out, 97% of them wanted to make money.

Government should have banned big tech investment in AI companies a year ago. If they want, they can create their own AI but buying one should be off the table.

nojvek
0 replies
9h31m

What this proves is that OpenAI's interests are now entrenched in profit.

I’m assuming most of the researchers there probably realize there is a loooot of money to be made and they have to optimize for that.

They are deffo pushing the frontier of AI.

However, I hope OpenAI doesn't get to AGI first.

I don’t think it will be the best for all of humanity.

I’m scared.

nbzso
0 replies
2h57m

Stop dreaming about alignment. All bets are off. This is the start of the AI arms race. Think globally for a second. Yes, everybody wants to be a millionaire or billionaire. This is the current culture we are living in. Corporations have unprecedented power woven into governments, but governments still have a monopoly on violence. People cannot switch to the new abstraction layer (UBI, Social Rating) for two or five years. They will keep a consumer-oriented mindset before the option to have one is erased. Where do you think this is going? To a better Democracy? This is the Cold War V.2 scenario unfolding.

mlindner
0 replies
11h14m

Well that's disappointing. They might as well disband the entire concept of the non-profit as it's clearly completely irrelevant and powerless.

martin_a
0 replies
11h19m

What a total shitshow. Amazing.

macrael
0 replies
4h8m

What a delightful shit show. I don't even personally care whether Sam Altman is running OpenAI, but it brings me no end of schadenfreude to see a bunch of AI Doomers make asses of themselves. Effective Altruism truly believes that AI could destroy all human life on the planet, which is a preposterous belief. There are so many better things to worry about, many of which are happening right now! These people are not serious and should not hold serious positions of power. It's not hard to see the dangers of AI: replacing a lot of make-work that exists in the world, giving shoddy answers with high confidence, taking humans out of the loop of responsible decision making. But I cannot believe that it will become so smart that it becomes an all-powerful god. These people worship intelligence (hence why they believe that with infinite intelligence comes infinite power), but look what happens when they actually have power! Ridiculous.

kibwen
0 replies
7h17m

This will be remembered as the biggest waste of time and energy since the LK-99 fiasco.

jrflowers
0 replies
2h38m

This here is what we call a load-bearing “in principle”

jmyeet
0 replies
11h40m

I figured if Sam came back, the board would have to go as a condition. That's obvious. And deserved. The handling of this whole thing has been a very public clownshow.

Obviously, Microsoft has some influence here. That's no different to any other large investor. But the key factors are:

1. Lack of a good narrative from the board as to why they fired Sam;

2. Failure to loop in Microsoft so they're at least prepared on the communications front and feel like they were part of the process. The board can probably give them more details privately on why;

3. People leaving in protest speaks well of Sam;

4. The employee letter speaks well of Sam;

5. The interim CEO clown show and the lack of an all-hands immediately after speak poorly of the board.

jcutrell
0 replies
5h44m

I wonder what Satya will say here; will the AI CEO position there just evaporate?

jacquesm
0 replies
2h53m

49% stock (lower bound) + 90% of employees (upper bound) > board.

To be updated as more evidence rolls in.

j4yav
0 replies
11h15m

This has been a whirlwind, I feel like I've seen every single possible wrong outcome confidently predicted here, twice.

iteratethis
0 replies
9h3m

Sam's power was tested and turned out to be absolute.

Sam was doing whatever he wanted, got caught, and now can continue to do what he wants with even more backing.

incahoots
0 replies
6h30m

Cue the "it's a Christmas Miracle!"

iamleppert
0 replies
6h23m

So dangerous on so many levels. Just let him start his own AI group; competition is good!

Instead he will come away from this untouchable. He'll get to stack the board like he wanted to. Part of being on a board of directors is sticking to your decisions. They are weak and weren't prepared for the backlash over one person.

hackerlight
0 replies
4h55m

So is the OpenAI charter still in place? Once OpenAI reaches AGI, Microsoft won't be able to access the tech. Then what will happen to Microsoft when other commercial competitors catch up and also reach AGI one or two years later?

gsuuon
0 replies
6h4m

At least they'll be operating under the original charter - it sounds like the mission continues. Not sure about this new board but hard to imagine they'd make the same sort of mistake.

gongagong
0 replies
11h14m

Meta is looking like the Mother Teresa of large-corp LLM providers, which is crazy to say out loud (; ꒪ö꒪)

geniium
0 replies
8h8m

This was a nice ride. Nice story to follow

fredgrott
0 replies
10h51m

MS and OpenAI did not win here, but one of their competitors did...whoops.

Why did I say that? Look at the product releases by the competitors these past few days. Second, Sam pushing for AI chips implies that ChatGPT's future breakthroughs are hardware-bound. Hence, the road to AGI is not through ChatGPT.

evan_
0 replies
7h12m

What a waste of time

ecmascript
0 replies
11h54m

All these posts about OpenAI... are people really this interested in whatever happens inside one company?

donohoe
0 replies
9h23m

Larry Summers!?

dizzydes
0 replies
11h6m

D'Angelo is still there... there goes that theory.

diamondfist25
0 replies
6h29m

Adam D'Angelo keeping everyone straight on the mission of OpenAI. What a true boss in the face of the woke mob.

davidthewatson
0 replies
11h6m

The most interesting thing here is not the cult of personality battle between board and CEO. Rather, it's that these teams have managed to ship consumer AI that has a liminal, asymptotic edge where the smart kids can manipulate it into doing emergent things that it was not designed to do. That is, many of the outcomes of in-context learning could not be predicted at design time and they are, in fact, mind-blowing, magical, and likely not safe for consumption by those who believe that the machines are anywhere near the spectrum from consciousness to sentience.

dangerface
0 replies
10h7m

Keeping D'Angelo on the board is an obvious mistake; he has too many conflicting interests to be level-headed and has demonstrated that. The only people who benefited from all this are Microsoft and D'Angelo. Give it a year and we will see part 2 of all this.

Further, where is the public accountability? I thought the board was supposed to act in the interests of the public, but they haven't communicated anything. Are we all just supposed to pretend this never happened and that the board will now act in the public interest?

We need regulations to hold these boards, which hold so much power, accountable to the public. No reasonable AI regulations can be made until the public is included in a meaningful way; anyone who pushes for regulations without the public is just trying to control the industry and establish a monopoly.

dang
0 replies
4h32m

All: there are over 1800 comments in this thread. If you want to read them all, click More at the bottom of each page, or like this:

https://news.ycombinator.com/item?id=38375239?p=2

https://news.ycombinator.com/item?id=38375239?p=3

https://news.ycombinator.com/item?id=38375239?p=4 (...etc.)

corobo
0 replies
11h22m

The thing we should all take from this is that unions work :)

cbeach
0 replies
11h10m

Does anyone know which faction (e/acc vs decels) the new board members Bret Taylor and Larry Summers will be on?

One thing IS clear at this point - their political alignment:

* Taylor a significant donor to Joe Biden ($713,637 in 2020): https://nypost.com/2022/04/26/twitter-board-members-gave-tho...

* Summers is a former Democrat Treasury Secretary who has shifted leftwards with age: https://www.newstatesman.com/the-weekend-interview/2023/03/w...

causi
0 replies
10h52m

Kicking Sam out was a bad move. Begging him back is worse. Instead of having an OpenAI whose vision you disagree with, now we have an OpenAI with no vision at all that's simply blown back and forth.

carapace
0 replies
5h13m

So it's the Osiris myth?

bvan
0 replies
10h20m

All involved have clearly demonstrated a lack of credibility in self-governance and an inability to make big-boy decisions. All reassurances from now on will sound hollow.

bmitc
0 replies
7h9m

What a gigantic mess. Everyone looks bad in this: Altman, Microsoft, the OpenAI board, OpenAI employees, etc.

It also has confirmed that greed and cult of personality win in the end.

beepbooptheory
0 replies
6h39m

When the first CEO appeared on the earth he got tied to a cliff so the birds could eat him. It seems like that was a good call.

archsurface
0 replies
5h52m

I'm not American - I'm unclear on what all this fuss is about. From where I am it looks like some arbitrary company politics in a hyped industry, with a guy whose name I've seen mentioned on this site occasionally but who really comes across as just an SV or San Fran cult-of-personality type. Am I missing something? Is there some substance to this story, or is it just this week's industry soap opera?

alienicecream
0 replies
9h45m

High Street salesman takes over Frankenstein's lab. Can't wait to see what's going to happen next.

al_be_back
0 replies
10h55m

Losing the CEO must not push a significant number of your staff to throw hissy fits and jump ship - it doesn't instill confidence in investors, partners, and crucially customers.

As this event turned into a farce, it became evident that neither the company nor its key investors accounted much for the "bus factor," i.e. losing a key person threatened to destroy the whole enterprise.

For me this is a failure in Managing Risk 101.

account-5
0 replies
7h30m

Farce, plain and simple.

Uptrenda
0 replies
12h4m

Yep, this one's going in my cringe compilation.

Ruq
0 replies
4h6m

That fast, huh?

Pigalowda
0 replies
8h43m

Show's over, I guess. Feels like the ending of GoT. I'm not sure I even care anymore what happened to begin it all.

NorwegianDude
0 replies
10h14m

Why are people so interested in this? Why exactly was he fired? I did not get why when I read the news, so I find it strange that people care when they don't even know what it's about. Do we know for sure what this was/is about?

Mrirazak1
0 replies
8h23m

The Steve Jobs of our TikTok generation. He came back very quickly in comparison to the 12 years, but still.

EarthAmbassador
0 replies
9h49m

Larry effing Summers?!

Really?

Was Henry Kissinger unavailable?

DebtDeflation
0 replies
11h2m

We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.

Is Ilya off the board then?

Why is Adam still on?

Bret and Larry are good choices, but they need to get that board up to 10 or so people representing a balance of perspectives and interests very quickly.

ChoGGi
0 replies
8h23m

I'm sure that first meeting will be... Interesting.

ChatGTP
0 replies
12h4m

In my opinion, MS will neuter this product too; there is no way they're just going to have the public accessing tools which make their own software and products obsolete.

They will take over the board, and then steer it in some weird dystopian direction.

Ilya knows that, IMO; he was just more principled than Altman.

BryantD
0 replies
8h11m

We’re not gonna see it but I’d love to see Sam’s new contract and particularly any restraints on outside activities.