
Sam Altman returns as CEO, OpenAI has a new initial board

bedobi
128 replies
15h40m

i'm still not clear what the accusation against Altman was... something about being cavalier about safety? if that was the claim and it has merit, i don't understand why it wasn't right to oust him, and why the employees are clamoring for him back

sanderjd
81 replies
15h10m

Well, their big mistake was being unwilling to be clear and explicit about this, but as I read it, the board's problem with him was that he wasn't actually acting as the executive of the non-profit that he was meant to be the executive of, but rather was acting entirely in the interests of a for-profit subsidiary of it (and in his own interests), which were in conflict with the non-profit's charter.

I think where they really screwed up was in being unwilling or unable to argue this case.

drooby
55 replies
15h0m

It's just so strange. This is such a clearly justifiable reason that the fact that they didn't argue it... or argue... anything, makes me doubt that it's the real reason.

7e
47 replies
14h48m

They were either scared of being sued for defamation or unwilling to divulge existential company secrets. Or both.

0xDEAFBEAD
39 replies
12h59m

I think "unwilling to divulge company secrets" is the best explanation here.

We know that OpenAI does a staged release for their models with pre-release red-teaming.

Helen says the key issue was the board struggling to "effectively supervise the company": https://nitter.net/hlntnr/status/1730034017435586920#m

Here's Nathan Labenz on how sketchy the red-teaming process for GPT4 was. Nathan states that OpenAI shut him out soon after he reached out to the board to let them know that GPT4 was a big deal and the board should be paying attention: https://nitter.net/labenz/status/1727327424244023482#m [Based on the thread it seems like he reached out to people outside OpenAI in a way which could have violated a confidentiality agreement -- that could account for the shutout]

My suspicion is that there was a low-level power struggle ongoing on the board for some time, but the straw that broke the camel's back was something like Nathan describes in his thread. To be honest I don't understand why his thread is getting so little play. It seems like a key piece of the puzzle.

In any case, I don't think it would've been right for Helen to say publicly that "we hear GPT-5 is lit but Sam isn't letting us play with it", since "GPT-5 is lit" would be considered confidential information that she shouldn't unilaterally reveal.

g42gregory
38 replies
11h29m

So what is Nathan Labenz saying? That GPT-4 is dangerous somehow? It will get many people out of jobs? MS Office got all the typists out of jobs. OCR and Medical Software got all the medical transcriptionists out of jobs. And they created a lot more jobs in the process. GPT-4 is a very powerful tool. It has not a whiff of AGI in it. The whole AGI "scare" seems to be extremely political.

0xDEAFBEAD
31 replies
11h19m

Nathan says the initial version of GPT-4 he red-teamed was "totally amoral" and it was happy to plan assassinations for him: https://nitter.net/labenz/status/1727327464328954121#m

Reducing the cost of medical transcription to ~$0 is one thing. Reducing the cost of assassination to ~$0 is quite another.

g42gregory
15 replies
10h30m

This is a piece of software. What would "totally amoral" even mean here? It's an inanimate object, it has no morals, feelings, conscience, etc... He gives it an amoral input, he gets an amoral output.

bayindirh
6 replies
8h15m

Then we should stop teaching the Therac-25 incident to developers and remove envelope protection from planes and safety checks from nuclear reactors.

Because users should just give moral inputs to these things. These are inanimate objects too.

Oh, while we're at it, we should also remove battery charge controllers. Just do the moral and civic thing and unplug when your device is charged.

int_19h
5 replies
6h20m

In both of your examples, the result of "immoral inputs" is immediate tangible harm. In the case of GPT-4 or any other LLM, it's merely "immoral output" - i.e. text. It does not harm anyone by itself.

bayindirh
3 replies
5h56m

In the case of GPT-4 or any other LLM, it's merely "immoral output" - i.e. text. It does not harm anyone by itself.

Assuming that you're not running this query over an API and relaying these answers to another control system or a gullible operator.

An aircraft control computer or a reactor controller won't run my commands, regardless of whether its actuators are connected. Same for weapon systems.

The hall pass given to AI systems just because they're outputting text to a screen is staggering. Nothing prevents me from processing this output automatically and actuating things.
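
A minimal sketch of that point, in Python, with every endpoint and name hypothetical (the LLM call is stubbed with a canned reply; "plc.example.local" is not a real service):

    import requests  # pip install requests

    def ask_llm(prompt: str) -> str:
        # Stand-in for any text-generation API; returns plain text.
        return "Valve A should be OPEN to relieve pressure."  # canned reply

    def relay_to_plc(text: str) -> None:
        # The "harmless text" is parsed and immediately becomes a command
        # to a (hypothetical) programmable logic controller.
        command = "open" if "open" in text.lower() else "close"
        requests.post("http://plc.example.local/valve-a",
                      json={"cmd": command}, timeout=5)

    if __name__ == "__main__":
        relay_to_plc(ask_llm("Should valve A be open or closed right now?"))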

alexilliamson
1 replies
4h10m

Why would anyone give control of air traffic or weapons to AI? That's the key step in AGI, not some tech development. By what social process exactly would we give control of nukes to a chatbot? I can't see it happening.

bayindirh
0 replies
4h2m

Why would anyone give control of air traffic or weapons to AI?

Simplified operations, faster reaction time, eliminating human reluctance to obey kill orders. See "WarGames" [0] for a hypothetical exploration of the concept.

a chatbot.

Some claim it's self-aware. Some say it called for airstrikes. Some say it gave them a hit list. It might be a glorified Markov chain, and I don't use it, but there's a horde of people who follow it like it's the second Jesus, and believe what it emits.

I can't see it happening.

Because it already happened.

Turkey is claimed to have used completely autonomous drones in a war [1].

South Korea has autonomous sentry guns which defend the DMZ [2].

[0]: https://www.imdb.com/title/tt0086567/

[1]: https://www.popularmechanics.com/military/weapons/a36559508/...

[2]: https://en.wikipedia.org/wiki/SGR-A1

freeopinion
0 replies
19m

We give hall passes to more than AI. We give passes to humans. We could have a detailed discussion of how to blow up the U.S. Capitol building during the State of the Union address. It is allowed to be a best selling novel or movie. But we freak out if an AI joins the discussion?

bossyTeacher
0 replies
5h43m

The point here is that it is giving the user detailed knowledge of how to harm others. This is way different from a gun, where you are supplying the "how" (aiming and pulling the trigger).

The guy says he wanted to slow down the progress of AI, and GPT suggested a detailed assassination plan with named targets and reasons for each of them. That's the problem.

0xDEAFBEAD
4 replies
9h32m

I mean, there's a sense in which my mind is software that's being run by my brain, right? Yet that doesn't absolve me of moral responsibility.

In any case, an F16 fighter jet is amoral in a certain sense, but it wouldn't be smart to make F16s available to the average Joe so he can conduct an airstrike whenever he wants.

consp
3 replies
8h55m

Completely depends on your morality. I'm pretty sure there are some libertarians out there who think the most basic version of the Second Amendment includes owning an F16 with live weapons.

0xDEAFBEAD
1 replies
8h2m

Sure -- if you're a libertarian who thinks it should be possible to purchase an F16 without a background check, that seems consistent with the position that an amoral GPT-4 should be broadly available.

civilitty
0 replies
6h21m

What kind of background check do you think exists when buying a fighter jet?

It’s kind of a moot point since only one F16 exists in civilian hands but you can buy other jets with weapon hardpoints for a million. Under 3 million if you want to go supersonic too. The cheapest fighter jet is on the order of $250k. There’s zero background check.

jacquesm
0 replies
7h38m

Sure, but idiots are a thing, and the intersection of the set of libertarians who may believe that with the set of idiots is hopefully empty, but it may not be. Such outsized power is best dealt with through a chain of command and accountability of sorts.

ben_w
1 replies
8h30m

We don't want immoral output even for immoral input.

(We do disagree about what constitutes "immoral", which makes this much harder).

int_19h
0 replies
6h19m

We absolutely do, though, if we want those things to e.g. write books and scripts in which characters behave immorally.

r_hoods_ghost
0 replies
8h58m

Amoral literally means "lacking a moral sense; unconcerned with the rightness or wrongness of something." Generally this is considered a problem if you are designing something that might influence the way people act.

logicchains
11 replies
10h27m

The cost of planning an assassination is not the same thing as the cost (and risk) of carrying out an assassination, what a stupid take.

esafak
7 replies
10h25m

A would-be assassin would obviously ask the model to suggest a low-risk, low-cost way of assassinating.

SXX
6 replies
8h27m

Except the reason we haven't all just killed each other yet has nothing to do with the risk or cost of killing someone.

And everything an LLM can come up with will be exactly the same information you can find in any detective novel or TV series about crime. Yes, a very, very dumb criminal can certainly benefit from it, but he could just as well go on 4chan and ask about assassination there. Or on some detective book discussion club or forum.

ben_w
2 replies
7h53m

Except the reason we haven't all just killed each other yet has nothing to do with the risk or cost of killing someone

Most of us don't want to.

Most of those who do, don't know enough to actually do it.

Sometimes such people get into power, and they use new inventions like the then-new pesticide Zyklon B to industrialise killing.

Last year an AI found 40k novel chemical agents, and because they're novel, the agencies that would normally stop bad actors from getting dangerous substances would generally not notice the problem.

LLMs can read research papers and write code. A sufficiently capable LLM can recreate that chemical discovery AI.

The only reason I'm even willing to spell out this chain is that the researchers behind that chemical AI have spent most of the intervening time making those agencies aware of the situation, and I expect the agencies to be ready before a future LLM reaches the threshold for reproducing that work.

SXX
1 replies
5h52m

Everything you say makes sense, except that the people who are able to get the equipment to produce those chemicals, and who have the funding to do something like that, don't really need AI help here. There are plenty of dangerous chemicals already well known to humanity, and some don't actually require anything regulated to produce, "except" complicated and expensive lab equipment.

Again, the difficulty of producing poisons and chemicals is not what prevents mass murder around the globe.

ben_w
0 replies
5h35m

Complexity and cost are just two of the things that inhibit these attacks.

Three letter agencies knowing who's buying a suspicious quantity from the list of known precursors, that stops quite a lot of the others.

AI in general reduces cost and complexity, that's kind of the point of having it. (For example, a chemistry degree is expensive in both time and money). Right now using an LLM[0] to decide what to get and how to use it is almost certainly more dangerous for the user than anyone else — but this is a moving goal, and the question there has to be "how do we delay this capability for as long as possible, and at the same time how do we prepare to defend against the capability when it does arrive?"

[0] I really hope that includes even GPT-4 before the red-teaming efforts to make it not give detailed instructions for how to cause harm

0xDEAFBEAD
2 replies
7h56m

And everything an LLM can come up with will be exactly the same information you can find in any detective novel or TV series about crime.

As Nathan states:

And further, I argued that the Red Team project that I participated in did not suggest that they were on-track to achieve the level of control needed

Without safety advances, I warned that the next generation of models might very well be too dangerous to release

Seems like each generation of models is getting more capable of thinking, beyond just regurgitating.

SXX
1 replies
5h45m

I don't disagree with his points, but you completely miss the point of my post. People don't need AI advice to commit crimes and kill others. Honestly, humans are pretty good at it using the technology of 1941.

You don't have a bunch of cold-blooded killers going around, not because the police are so good or because killers are dumb and need AI help, but because you live in a functioning state where society has enough resources that people are happy enough to go and kill each other in Counter-Strike or Fortnite instead.

I totally agree that AGI could be dangerous tech, but that will require autonomy, where it can manipulate the real world. So far GPT with API access is very far from that point.

giantrobot
0 replies
4h59m

If you have ChatGPT API access you can have it write code and bridge that to other external APIs. Without some guard rails an AI is like a toddler with a loaded gun. They don't understand the context of their actions. They can produce dangerous output if asked for it but also if asked for something else entirely.

The danger also doesn't need to be an AI generating code to hack the Gibson. It could also be things like "how do I manipulate someone to do something". Asking an AI for a marketing campaign isn't necessarily amoral. Asking it how to best harass someone into committing self-harm is.

0xDEAFBEAD
1 replies
9h38m

There's been a fair amount of research into hooking up LLMs with the ability to call APIs, browse the web, and even control robots, no? The barrier between planning and doing is not a hard one.
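
As a concrete sketch of that kind of hookup (not anyone's actual setup): the openai v1 Python client exposes a tool-calling interface where the model returns a structured function call instead of prose. The get_weather tool below is invented for the example.

    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",  # invented example tool
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
        tools=tools,
    )

    # If the model chose to act rather than just answer, it returns a
    # structured call; executing it is the step from planning to doing.
    for call in resp.choices[0].message.tool_calls or []:
        print(call.function.name, json.loads(call.function.arguments))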

As for cost and risk -- ask GPT-5 how to minimize it. As Nathan said in his thread, it's not about this generation, it's about the next generation of models.

A key question is whether the control problem gets more difficult as the model gets stronger. GPT-4 appears to be self-aware and passing the mirror test: https://nitter.net/AISafetyMemes/status/1729206394547581168#...

I really don't know how to interpret that link, but I think there is a lot we don't understand which is going on in those billions of parameters. Understanding it fully might be just as hard as understanding the human brain.

I'm concerned that at some point in the training process, we will stumble across a subset of parameters which are both self-aware and self-interested, too. There are a lot of self-interested people in the world. It wouldn't be surprising if the AI learns to do the sort of internal computation that a self-interested person's brain does -- perhaps just to make predictions about the actions of self-interested people, at first. From there it could be a small jump to computations which are able to manipulate the model's training process in order to achieve self-preservation. (Presumably, the data which the model is trained on includes explanations of "gradient descent" and related concepts.)

This might sound far-fetched by the standard of the current model generation. But we're talking about future generations of models here, which almost by definition will exhibit more powerful intelligence and manifest it in new unexpected ways. "The model will be much more powerful, but also unable to understand itself, self-interest, or gradient descent" doesn't quite compute.

cthalupa
0 replies
58m

The image is OCR'ed and that data is fed back into the context. This is no more interesting or indicative of it passing the mirror test than if you had copy and pasted the previous conversation and asked it what the deal was.

ben_w
0 replies
8h35m

I can think of several ways that AI assistance might radically alter both attack and bodyguard methods. I say "might" because I don't want to move in circles that could give evidenced results for novel approaches in this. And I'm not going to list them, for the same reasons I don't want an AI to be capable of listing them: while most of the ideas are probably just Hollywood plot lines, there's a chance some of them might actually work.

creato
1 replies
9h42m

Reducing the cost of assassination to ~$0 is quite another.

It is reducing the cost of developing an assassination plan from ~$0 to ~$0. The cost of actually executing the plan itself is not affected.

0xDEAFBEAD
0 replies
9h35m

phpisthebest
0 replies
4h38m

This is where I have issues with OpenAI's stated mission.

I want AI to be amoral. Or rather, I do not want the board of OpenAI, or even the employees of OpenAI, choosing what is "moral" and what is "immoral", especially given that OpenAI may be superficially "diverse" in race, gender, etc., but they sure as hell are not politically diverse, and sure as hell do not share a moral philosophy aligned with the vast majority of humanity, given that the vast majority of humanity is religious in some way and I would guess the majority of OpenAI is at best agnostic, if not atheist.

I do not want an AI Wikipedia... aka politically biased toward only one worldview and only useful for basic fact regurgitation, like what is the speed of light.

djhn
1 replies
10h58m

Medical transcriptionists out of jobs? As far as I'm aware, medical transcription is still very much the domain of human experts, since getting doctors to cater their audio notes to the whims of software turned out to be impossible (at least in my corner of the EU).

consp
0 replies
8h52m

My mom had to do this for her job, and apparently some of the doctors are so mumbly that transcriptionists have to infer a lot of the words from context and the type of procedure. There is a lot of crossover everywhere, so it depends a lot on which doctor is mumbling what. And yes, you need special training for it (though no medical degree).

vikramkr
0 replies
8h33m

Did they really create a ton more jobs? The past few rounds of industrialization and automation have coincided with plagues/the Black Death that massively reduced the population, mass agitation, increasing inequality, and recently a major opioid epidemic in regions devastated by the economic changes of globalization and industrialization. I think these tools are very good and we should develop them; I also think it's delusional to believe it'll just balance out magically, and dangerous to expect our already-failed systems to protect the people left behind. It doesn't exactly look like they worked any of the previous times!

tomjakubowski
0 replies
9h30m

FWIW I had a doctor's appointment just this year with a transcriptionist present. (USA)

ben_w
0 replies
8h8m

Would you rather:

1) be surprised by and unprepared for AGI and every step on the path to that goal, or

2) have the developers of every AI examine their work for its potential impact, both when used as intended and when misused, with regard to all the known ways even non-G AI can already go wrong: bugs, making stuff up, reward hacking, domain shift, etc.; or economically speaking, how many people will be made unemployed by just a fully [G]eneral self-driving AI? What happens if this is deployed over one year? Do existing LLMs get used by SEO to systematically undermine the assumptions behind PageRank and thus web search?; and culturally: how much economic harm do artists really suffer from Diffusion models? Are harms caused by AI porn unavoidable thanks to human psychology, or artefacts of our milieu that will disappear as people become accustomed to it?

There's also a definition problem for AGI, with a lot of people using a standard I'd reserve for ASI. Also some people think an AGI would have to be conscious, I don't know why.

The best outcome is Fully Automated Luxury Communism, but assuming the best outcome is the median outcome is how actual Communism turned into gulags and secret police.

Towaway69
0 replies
7h4m

It seems that Silicon Valley is developing not a conscious, sentient piece of software but rather a conscience; a moral compass is beginning to emerge.

After giving us Facebook, Insta, Twitter, and ego-search, influencing many people negatively, suddenly there are moral values being discussed amongst those who decide our collective tech futures.

AI will have even more influence on humankind, and some are questioning the morality of money (hint: money has no morals).

fastball
6 replies
14h3m

Doesn't really make sense to be unwilling to divulge company secrets if you're willing to gut the company for this hill.

0xDEAFBEAD
3 replies
13h13m

They weren't willing to gut the company. That's why Sam is back as CEO.

jxi
1 replies
11h56m

It sounded like they would if they could (for instance trying to sell to Anthropic or instating a "slow it way down" CEO), but they even failed at that. Not an effective board at all.

0xDEAFBEAD
0 replies
10h42m

for instance trying to sell to Anthropic or instating a "slow it way down" CEO

I wouldn't put these in the same category as "90% of staff leaves for Microsoft".

In any case, let's not confuse unsuccessful with incompetent. (Or incompetent with immoral, for that matter.)

jacquesm
0 replies
7h35m

They were willing but failed, and mostly on account of not doing enough prepwork.

DebtDeflation
0 replies
6h23m

Exactly. It sounds like those board members themselves were acting in the interest of profit instead of the "benefit all humanity" mission stuff, no different than Altman. If anything then, the only difference between the two groups is one of time horizon. Altman wants to make money from the technology now. The board wants to wait until it's advanced enough to take over the world, and then take over the world with it. For the world's benefit, of course.

6gvONxR4sf7o
0 replies
10h42m

It's remarkable that the old board is the side characterized as willing to blow up the company, since it was Altman's side who threatened to blow it up. All the old board really did was fire Altman and remove Brockman from the board.

sanderjd
6 replies
13h23m

Yeah, I totally agree! Like, this is such an obviously true and valid reason to fire him, but they never came out and said it! So ... is this not what it actually was? Or ... what? It truly is mystifying.

danbmil99
3 replies
12h38m

From a doomer/EA perspective, publicly saying that GPT5 is AGI or such would likely inspire & accelerate labs around the world to catch up. Thus it was more "Altruistic" / aligned with humanity's fate to stay mum and leave the situation cloudy.

esafak
2 replies
10h30m

No, it would not; they are already trying to catch up.

throwuwu
0 replies
1h3m

If you know the outcome is favourable then you go all in. Right now the other competitors are just trying to match GPT4, if they knew AGI was achievable then they would throw everything they have at it in order to not be left out.

brhsagain
0 replies
10h7m

Having an existing example showing that something difficult is possible causes everyone else to replicate it much faster, like how a bunch of people started running sub-4-minute miles after the first guy did it.

pms
1 replies
32m

I think they didn't anticipate such a large backlash, especially from investors. They felt the backlash threatened OpenAI, both its non-profit and for-profit arms, so they reverted their decision, which in my opinion was a mistake, but time will tell.

sanderjd
0 replies
13m

Yeah, I guess so. I keep waffling between thinking it's some 4d chess thing, and thinking it's just normal human fallibility, where the board just made a massive mistake in predicting how it would go. But I just struggle so much to imagine that, because everyone I know in the industry, regardless of their level of expertise or distance from OpenAI, immediately knew how big a deal it was going to be when we heard he was fired. But supposedly the people on the board had no idea? I think this might be the right conclusion, but I nevertheless struggle to fathom it.

upwardbound
10 replies
13h48m

Regardless of the board's failure to make their case, recent news suggests that the SEC is going to investigate whether it is true that Altman acted in the manner you describe, which would be a violation of fiduciary duty.

I agree that it seems like an open & shut case.

Typical SEC timelines mean that this will go public in about 18 months from now.

    An anonymous person has already filed an SEC whistleblower complaint about the behavioral pattern of Altman and Nadella, which has SEC Submission Number 17006-030-065-098.
https://pressat.co.uk/releases/ai-community-calls-for-invest...

    As the quid pro quo favoritism allegations remain under investigation, it is crucial to note that they are as yet unproven, and both Altman and Nadella are presumed innocent until proven guilty.
https://influencermagazine.uk/2023/11/allegations-of-quid-pr...

11 hours ago, the SEC tweeted the following new rule, which could be interpreted as a declaration that if Altman and Nadella are found guilty in this case, the SEC will block certain asset sales by OpenAI until the conflict of interest is unwound / neutralized:

    The Commission has adopted a new rule intended to prevent the sale of asset-backed securities (ABS) that are tainted by material conflicts of interest.

    Washington D.C., Nov. 27, 2023 — The Securities and Exchange Commission today adopted Securities Act Rule 192 to implement Section 27B of the Securities Act of 1933, a provision added by Section 621 of the Dodd-Frank Act. The rule is intended to prevent the sale of asset-backed securities (ABS) that are tainted by material conflicts of interest. It prohibits a securitization participant, for a specified period of time, from engaging, directly or indirectly, in any transaction that would involve or result in any material conflict of interest between the securitization participant and an investor in the relevant ABS. Under new Rule 192, such transactions would be “conflicted transactions.”
https://twitter.com/SECGov/status/1729895926297247815

https://www.sec.gov/news/press-release/2023-240

More information:

    The Company exists to advance OpenAI, Inc.’s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company’s duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit.
https://stratechery.com/2023/openais-misalignment-and-micros...

    Some analysts, including Stratechery writer Ben Thompson, have described the 2019 acceptance of Microsoft’s controversial investment by Altman as the beginning of a troubling pattern of Altman repeatedly making deals with Microsoft which were often unfavorable to OpenAI. ... As Thompson describes it, this pattern of behavior culminated in an unusual intellectual property licensing arrangement which Microsoft’s Investor Relations site describes as a “broad perpetual license to all the OpenAI IP developed through the term of this partnership” “up until AGI” (Artificial General Intelligence). This perpetual license agreement includes the technology for OpenAI’s flagship products GPT-4 and Dall•E 3. https://www.microsoft.com/en-us/Investor/events/FY-2023/AI-Discussion-with-Amy-Hood-EVP-and-CFO-and-Kevin-Scott-EVP-of-AI-and-CTO
https://michigan-post.com/redditors-from-r-wallstreetbets-ca...

    U.S. Securities and Exchange Commission – Tips, Complaints, and Referrals – Summary Page - Submitted Externally – PDF excerpt obtained 2023-11-26 via Signal
    Submission Number (redacted) was submitted on Wednesday, November 22, 2023 at 12:18:27 AM EST
    This PDF was generated on Wednesday, November 22, 2023 at 12:28:38 AM EST
    Image above includes ... the heading of a 7-page PDF titled "TCRReport (1).pdf" which was received by this reporter over the weekend via Signal.
https://www.outlookindia.com/business-spotlight/sec-consider...

nirv
3 replies
13h38m

Very interesting, thanks for posting this.

upwardbound
2 replies
7h45m

jacquesm
1 replies
7h31m

And yet, they continue to exist and no CEO of MS ever stepped down because of any of these.

And I predict that even if Microsoft is going to be caught again that it will be a non-event in terms of actual repercussions. If Nadella exits MS HQ in Seattle in handcuffs I would be most surprised.

upwardbound
0 replies
6h21m

Yeah. I think the most realistically-achievable positive outcome is that Microsoft is forced to give up their new board-observer seat, which seems highly improper for them to have since the board they are observing is supposed to make decisions that benefit all of humanity equally. If Microsoft gets to have a fly on the wall in those discussions, it gives them a gold mine of juicy insider knowledge about which sectors of the economy are about to be affected next by Generative AI — knowledge which they can use to front-run the market by steering the roadmaps of Bing, MS Office, etc. so as to benefit from upcoming OpenAI product launches before any non-insiders are aware of what OpenAI is currently planning.

Microsoft plainly shouldn't be allowed to have this advantage, in that giving an advantage to any one party directly harms the mandate set forth in OpenAI's Charter:

    Broadly distributed benefits

    We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

    Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
https://openai.com/charter

It certainly seems that Microsoft, a "stakeholder", has managed to get a highly improper listening seat that will give them the ability to act on insider information about what's coming next in AI, allowing Microsoft to front-run the rest of the AI software industry and all those industries it affects, in a way that will plainly "compromise broad benefit". (Since any wealth that accrues excessively to Microsoft shareholders is not distributed to other humans who don't hold Microsoft shares.)

A mere 10 days ago, Nadella was shamelessly throwing his weight around on national TV, by appearing on CNBC where he improperly pressured the OpenAI non-profit board — which owes nothing to him legally or morally — to give him more deference, in direct violation of the "always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit" provision of the OpenAI non-profit charter.

https://www.cnbc.com/2023/11/20/microsoft-ceo-nadella-says-o...

althea_tx
2 replies
11h46m

The Michigan Post article is mostly speculation and that publication doesn’t have much depth/history to it. Check out their “advertise with us” page.

This whole info dump feels like a mishmash of links to thoughtful things (Stratechery) with links to speculative articles that are clearly biased. Like how is “Influencer Magazine” breaking a story that Wall Street Journal and Kara Swisher are overlooking?

I don’t mean to be a jerk. Just really unconvinced.

0xDEAFBEAD
1 replies
11h27m

I would guess that "WLW FUTURE PRESS RELEASE DISTRIBUTION" is a publicist service that was hired by the person making the whistleblower complaint. upwardbound claims there's a lot of money in being a successful whistleblower: https://news.ycombinator.com/item?id=38388246

I don't know if I find the Microsoft/OpenAI favor trading allegation that persuasive (unless new information is uncovered, e.g. Microsoft letting Sam use a private jet or something like that). However if the SEC actually ends up enforcing the "fiduciary duty is to humanity" thing in OpenAI's charter at some point, that would be incredibly sweet.

upwardbound
0 replies
8h27m

if the SEC actually ends up enforcing the "fiduciary duty is to humanity" thing in OpenAI's charter at some point, that would be incredibly sweet.

Absolutely. It's 100% their job.

0xDEAFBEAD
1 replies
11h5m

"Asset-backed securities" doesn't sound like corporate equity to me.

jacquesm
0 replies
7h27m

In a broad reading it could easily be just that.

Securities class contains stock

Stock = backed by the company balance sheet

sanderjd
0 replies
13h17m

Yep, no doubt there are more shoes left to drop.

bhpm
5 replies
14h13m

Argue their case? To whom? They were the board.

sanderjd
3 replies
13h17m

To the public, and to employees.

bhpm
2 replies
3h46m

To what end? The public and the employees don’t have a say in the corporate governance. That is the function of the board.

As far as I can tell, the board had no obligation to consult the public or their shareholders or their employees on any of this.

Karunamon
1 replies
2h58m

And how did that attitude work out for them? Upwards of 90% of their staff threatened to bail out, every single one of the board members now looks like a fool in public, and I would be shocked if there were not some recriminations behind closed doors from the likes of Microsoft's CEO.

bhpm
0 replies
2h5m

It’s not an “attitude.” It’s the legal structure of the corporation’s leadership. They were no more capable of incorporating the public will than an individual is capable of taking a vote on the flow of traffic on a public highway. That is to say, even if they had done what is proposed here, it wouldn’t have made a difference.

LudwigNagasena
0 replies
13h56m

To the stakeholders, which include employees, customers, partners and, by OpenAI's own mission statement, all of humanity in general.

dacryn
4 replies
8h2m

So much this. He kept introducing clauses in contracts that tied investments to him personally, and not necessarily to OpenAI. He more or less did it with Microsoft, to a small degree. So his firing could have caused quite a lot of money to be lost. But OK, no big deal.

But then he tried to do it again with a Saudi contract. The OpenAI board said explicitly they didn't want the partnership, and especially not with a clause tying it to Altman personally being the CEO.

Altman did it behind their back -> fired.

This is the rumour on the street, though it's unconfirmed.

octacat
3 replies
7h54m

If they had given a reasonable explanation to the public, they could have gotten away with it. Shady CEO vs. equally shady board.

staunton
1 replies
6h14m

My take is that the board probably never had a chance no matter what they said or did. The company already "belonged" to Altman and Microsoft. The board was just there for virtue signaling and for quite a while already had no real power anymore beyond a small threat of creating bad publicity.

sanderjd
0 replies
5m

I think they could have actually blown up the whole thing and remained in charge of a greatly-diminished but also re-aligned non-profit organization. A lot of people (like me) would have thought, holy crap, that was insanely bold and unprecedented, I can't believe they actually did that, but it was admirably principled.

Instead it was a confusing shambles that just left them looking like idiots with no plan.

timeon
0 replies
5h24m

How is the public relevant here?

maegul
0 replies
14h59m

Really hope details emerge about this, with all perspectives provided.

Whether they stuffed up or there are details that made the situation unworkable for the board, it's an interesting case study in governance and the whole nonprofit-with-a-for-profit-subsidiary thing.

Towaway69
0 replies
4h48m

For me it seems to be a debate of morals versus money. Is it morally correct to create technology that would be extremely profitable but has the potential to fundamentally change humankind?

Reptur
0 replies
15h2m

Makes me curious if the reason behind that is just an NDA.

bmitc
17 replies
14h30m

It also isn't clear why Altman couldn't have been replaced by someone else with literally no change in operations and progress. It is just really confusing why people acted as if they fired Michael Jordan from the Bulls.

mikeg8
10 replies
13h34m

He is obviously a great leader, and those who work there wanted to work with him. It's very clear in this thread how undervalued exceptional leadership actually is, as evidenced by comments thinking the top role in the most innovative company could be just plug-and-play.

jjtheblunt
4 replies
12h1m

This comment made me look for a recent article about Paul Graham firing him from Y Combinator for being exactly not a great leader or trustworthy person.

The article was just days ago but it’s eluding my search.

mikeg8
2 replies
11h56m

Would love to see it. Everything I've read/seen from PG regarding Sama has been nothing but high praise. My understanding is Sam chose to leave the YC president role to pursue other interests/ventures, which eventually turned into OpenAI.

jjtheblunt
0 replies
11h48m

Washington Post and Hacker News

https://news.ycombinator.com/item?id=38378216

jacquesm
0 replies
7h29m

I think PG is very, very subtle when it comes to his writings about Sam and what you think is high praise may well be faint damnation.

sumedh
0 replies
6h42m

for being exactly not a great leader or trustworthy person.

Didn't PG's wife invest in/donate to OpenAI?

contrarian1234
3 replies
10h39m

I'm going to guess it's not about leadership. From the Lex Fridman interview, he claims to be personally involved in all hires - and to spend a good fraction of his time evaluating candidates.

- He's not going to hire someone he doesn't like

- Someone who doesn't like him is unlikely to join his team

So it's very likely the whole staff ends up being people who "like" him or get along with him. He did come off as a charming smooth talker - and I'm sure he has lots of incredibly powerful friends/connections. But at least from that little window into his world, I didn't feel he showed any particular brilliance or "leadership". He did seem pretty deferential to ML experts (which I guess he's not) - but it's hard to know if it's false humility or not.

calf
2 replies
8h57m

That's pathetic. I cannot respect someone and will not work under someone who functions that way. A personality cultist.

bbarnett
1 replies
8h40m

It's also an unvalidated claim, predicated upon assumptions.

I've hired people I "don't like" on a personal level. I care more about their ability to work positively with others, and their professionalism and skill.

Yet you and the parent poster have assumed he is hiring a cult, because he spends time evaluating?

A weird assumption to make.

contrarian1234
0 replies
8h14m

oh sorry - I didn't mean it in a nefarious way at all

I think it's just human nature not to hire people you feel you won't get along with. If you're deeply involved in all your hires, then I feel you'll end up with an organization full of people whom you get along with and who you like (and who probably like you back). I wouldn't go so far as to say it'd make a personality cult - though with their lofty mission statements and ambitions to make the world better... who knows. Not going to psychoanalyze a bunch of people I don't know.

"I've hired people I "don't like" on a personal level."

I'm honestly impressed... I feel that's rather exceptional. I feel a lot of hiring goes on "gut feeling".

int_19h
0 replies
6h4m

There's exceptional leadership, and then there are charming sociopaths.

Unfortunately, it can sometimes be hard to tell those two apart when a sociopath is actively pushing your emotional buttons to make you feel like they care about the same things as you etc.

0xDEAFBEAD
3 replies
11h1m

See https://www.theinformation.com/articles/openais-86-billion-s...

What if lots of employees stood to make "fuck you" money from that sale, and with Sam's departure, that money was in danger of evaporating?

bad_user
2 replies
9h18m

If the employees had voted Sam out, you'd take that as a shining example of the proletariat exercising power for the good of humankind, hammer, sickle and all that.

I always find it funny when people understand democracy to mean "other people should vote my way, otherwise they are immoral and should be re-educated".

0xDEAFBEAD
1 replies
8h17m

First, I'm actually quite libertarian and capitalist -- although not necessarily so when it comes to companies working on powerful AI (or fighter jets for that matter). Here are some comments of mine from other discussions if you don't believe me:

* Expressing skepticism about unions in Sweden -- https://news.ycombinator.com/item?id=38308184

* Arguing against central planning, with a link to a book detailing how socialism always fails -- https://news.ycombinator.com/item?id=38303195

* I often push back against the "greedy corrupt American plutocrats" narrative which you see all over HN. Here are a few examples -- https://news.ycombinator.com/item?id=37541805 https://news.ycombinator.com/item?id=37962796 https://news.ycombinator.com/item?id=38456106

And by the way, here is a comment I made the other day making essentially the point you are making, that in a democracy everyone is entitled to their opinion, even the dastardly Elon Musk: https://news.ycombinator.com/item?id=38261265 And I also argue in favor of freedom of speech here, for whatever that's worth: https://news.ycombinator.com/item?id=37713086

Point being, I'm not sure our disagreement lies where you think it does.

The purpose of the board is that they're supposed to be disinterested representatives of humanity, supervising OpenAI. The employees aren't chosen for being disinterested, and it seems quite likely that they are, in fact, interested, per my link.

From the perspective of human benefit (or from the perspective of my own financial stake in OpenAI, given that their charter says their "primary fiduciary duty is to humanity"), I prefer a small group of thoughtful, disinterested people over a slightly larger group whose interest is systematically biased relative to the interest of me or the average person. Which is more likely to produce a fair trial: a jury of 12 randomly chosen citizens, or a jury of 1200 mafiosos?

mvc
0 replies
2h6m

When their charter says

"primary fiduciary duty is to humanity"

I don't think that means it intends to pay a financial dividend to each and every person on the planet. I think it means that if it is successful at AGI, that in itself will expand the economy enough to have the same effect.

"Rising tide lifts all boats" type logic.

dacryn
0 replies
8h0m

A sane company has a plan for succession, even for the worst-case scenario where Altman has a sudden medical issue or a car crash or something.

It says a lot that Altman made OpenAI so dependent on him that his ousting could have killed the company. That likely also contributed to the board not trusting him.

TheGRS
0 replies
8h50m

I dunno about this thought. Are there other AI startups operating at this level, with the market share and mindshare that OpenAI has? I see comments like this on Hacker News a lot, and I get that yes, the man is human and fallible, but they are doing something that's working well for their space. If there's some compelling reason to doubt Altman's leadership or character, I haven't heard it yet.

solardev
3 replies
14h44m

Even if -- and that's a big if -- it really was just a dispute over alignment (nonprofit vs for-profit, safety, etc.), the board executed it really poorly and completely misjudged their employees' response. They saw the limits of their power / persuasiveness compared to [Altman/the allure of profit/the simple stability and clarity of day-to-day work without a secretive activist board/etc]

Or maybe they already knew the employees weren't on their side, saw no other way to do it, and hoped a sudden and dramatic ouster of the CEO would make the others fall in line? Who knows.

I'd be pretty concerned too if my CEO was doing what I considered a great job and he was suddenly removed for no clear reason. If the board had explained its rationale and provided evidence, maybe some of the employees would've listened. But they didn't... to this day we have no idea what the actual accusation was.

It looks like a failed coup from the outside, and we have no explanations from the people who tried to instigate it.

kmeisthax
1 replies
14h1m

Let's also keep in mind that if the AI doomers are right and spicy autocomplete is just a few more layers away from taking over the world, OpenAI has completely failed at building anything that could keep it under control. Because they can't even keep Sam Altman under control.

...actually, now that I think of it...

Any creative work - even a computer program - tends to be a reflection of the organizational hierarchies and people who made it. If OpenAI is a bunch of mad scientists with a thin veneer of "safety" coating, then so is ChatGPT.

chrismartin
0 replies
12h20m

Not sure if I agree with the conclusion, but the phenomenon you're referring to is Conway's Law (https://en.m.wikipedia.org/wiki/Conway's_law)

UberFly
0 replies
11h9m

I think it's wild that with all the 700+ employees involved, there haven't been more details leaked.

tayo42
2 replies
15h5m

Maybe the safety concerns are from a vocal minority, and most are quiet and don't think much about it, or don't actually think AI is really that close. It could just be hysterical people, or people who get traffic from outrageous claims.

gopher_space
1 replies
14h47m

Either it’s a world changing paradigm shift or it isn’t. You can’t have it both ways.

mikeg8
0 replies
13h31m

World changing does not mean world destroying.

nsxwolf
2 replies
15h20m

I wonder how many of the OpenAI employees are part of the "Effective Accelerationism" movement (often styled e/acc on X/Twitter). These people seem to think safety concerns get in the way of progress toward a utopian AGI future.

ergocoder
0 replies
14h29m

The employees earn when OpenAI has more profit.

No matter how idealistic you are, you won't be happy when your compensation is reduced from 600k to 200k.

cyanydeez
0 replies
15h0m

like everything we have seen in America, whatever philosophy papers over "greed is good" will move technology and profits forward.

might as well just call it "line goes up"

danbmil99
1 replies
12h21m

Or, just go to the source:

"The Prince", Machiavelli

diamondfist25
0 replies
11h53m

I'm reading The Prince now as a bedtime read. It's not going into my head.

GreedClarifies
1 replies
13h31m

They clearly had nothing.

They had a couple of people on the board who had no right being there. Sam wanted them gone, and they struck first by somehow getting Ilya on their side. They smeared Sam in hopes that he would slink away, but he had built so much goodwill with his employees that they wouldn't let it stand.

They probably had smeared people before and it had worked. I'm thrilled it didn't work for them this time and they got ousted.

jacquesm
0 replies
7h19m

This sounds like a lot of conjecture. Those people definitely had a right to be there: they were invited to and accepted board positions, in some cases it was Sam himself who asked them to join.

But an oversight board can be established more easily than it can be disbanded, and that's for very good reasons. The only reason it didn't stick is not that the board made decisions they shouldn't have made (though that may well be the case), but that they critically misjudged the balance of power. They could, and maybe should, have made their move, but they could not make it stick.

As for the last line of your comment: I think that explains your motivation of interpreting things creatively but that doesn't make it true.

7e
1 replies
14h49m

My pet theory is that Altman found out about Q* and planned to start a hardware company to make chips accelerating it, all without telling the board. Which is both dangerous to humanity and self-serving. It’s also almost baseless speculation; I’m interpolating on very, very few scraps of information.

int_19h
0 replies
5h59m

How is that dangerous to humanity?

wellthisisgreat
0 replies
3h7m

why the employees are clamoring for him back

what will happen with their VC-backed valuations without a VC-oriented CEO

shrimpx
0 replies
13h37m

I’m with you. The (apparently, very highly coordinated) employees should sign a public letter explaining why they wanted Altman back so badly.

paulddraper
0 replies
15h13m

i'm still not clear

It isn't clear to anyone else either.

nsajko
0 replies
4h20m

There was this document, no idea how trustworthy it is: https://web.archive.org/web/20231121225252/https://gist.gith...

Sam directing IT and Operations staff to conduct investigations into employees, including Ilya, without the knowledge or consent of management.

Sam's discreet, yet routine exploitation of OpenAI's non-profit resources to advance his personal goals, particularly motivated by his grudge against Elon following their falling out.

Brad Lightcap's unfulfilled promise to make public the documents detailing OpenAI's capped-profit structure and the profit cap for each investor.

Sam's incongruent promises to research projects for compute quotas, causing internal distrust and infighting.

irthomasthomas
0 replies
7h7m

I can't help but think it's related to that performance he gave the night before: https://news.ycombinator.com/item?id=38471651

gapchuboy
0 replies
15h17m

Employees care about their share value$. That worked well with Altman raising big rounds.

blackoil
0 replies
13h54m

Occam's razor: it is a fight of egos and power masked as concern about AI safety and Q*. The equivalent of a politician's "think about the children".

MallocVoidstar
0 replies
14h7m

They apparently refused to tell even their CEO, Shear. I don't think anyone other than the board knows.

IshKebab
0 replies
9h59m

It's pretty clear from what multiple people have said that he's a charismatic bullshitter, and they got fed up with being lied to.

Blackthorn
0 replies
15h13m

why the employees are clamoring for him back

Because he's the one who's promising to make them all rich.

0xDEAFBEAD
0 replies
13h16m

To be clear: our decision was about the board's ability to effectively supervise the company, which was our role and responsibility. Though there has been speculation, we were not motivated by a desire to slow down OpenAI’s work

https://nitter.net/hlntnr/status/1730034022737125782#m

Here's some interesting background which is suggestive of what's going on: https://nitter.net/labenz/status/1727327424244023482#m

kaycebasques
81 replies
15h14m

The fact that we did not lose a single customer will drive us to work even harder for you, and we are all excited to get back to work.

I'm not a big customer, but I am starting the process of moving away from OpenAI in response to these events

bko
31 replies
14h51m

Why? If the product is useful (it is to me), then why do you care so much as to the internal politics? If it ceases to be useful or something better comes along, sure. But this strikes me as being serially online and involved in drama

iLoveOncall
12 replies
14h33m

Because you don't rely on a business that had 80% of its staff threaten to quit overnight?

Terretta
11 replies
14h28m

staff threaten to quit overnight

They didn't, though. They threatened to continue tomorrow!

It's called "walking across the street", and there's an expression for it because it's a thing that happens when governance fails, but Makers gonna Make.

Microsoft was already running the environment, with rights to deliver it to customers, and added a paycheck for the people pouring themselves into it. The staff "threatened" to maintain continuity (and released the voice feature during the middle of the turmoil!).

Maybe relying on a business where the employees are almost unanimously determined to continue the mission is a safer bet than most.

l33t7332273
5 replies
13h11m

They threatened to walk across the street to a service you aren’t using.

starttoaster
4 replies
13h8m

And if they walk across that street, I'll cancel my subscription on this side of the street, and start a subscription on that side of the street. Assuming everything else is about equal, such as subscription cost and technology competency. Seems like a simple maneuver, what's the hang up? The average person is just using ChatGPT in a browser window asking it questions. It seems like it would be fairly simple, if everything else is not about equal, for that person to just find a different LLM that is performing better at that time.

croes
2 replies
11h54m

Not that easy. MS can sell GPT as a service but doesn't own it.

No OpenAI, no GPT.

starttoaster
0 replies
9h41m

I was going on the assumption that MS would not have still been eager to hire them on if MS wasn't confident they could get their hands on exactly that.

Terretta
0 replies
4h9m

That's not how contracts like this are written.

It's far more common that, if I'm building on you and you blow up, I automatically own the stuff.

Tostino
0 replies
11h54m

It's super easy to replace an OpenAI API endpoint with an Azure API endpoint. You're totally correct here. I don't see why people are acting like this is a risk at all.
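
A hedged sketch of that swap with the openai v1 Python client; the Azure resource name, deployment name, and key below are placeholders:

    from openai import OpenAI, AzureOpenAI

    openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment

    azure_client = AzureOpenAI(
        azure_endpoint="https://my-resource.openai.azure.com",  # placeholder
        api_version="2023-05-15",
        api_key="<azure-key>",  # the Azure key, not the OpenAI one
    )

    messages = [{"role": "user", "content": "Say hello."}]

    # Same call shape either way; on Azure, `model` names your deployment.
    r1 = openai_client.chat.completions.create(model="gpt-4", messages=messages)
    r2 = azure_client.chat.completions.create(model="my-gpt4-deployment",
                                              messages=messages)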

ribosometronome
2 replies
14h14m

They didn't, though. They threatened to continue tomorrow!

Are you saying ~80% of OpenAI employees did not threaten to stop being employees of OpenAI during this kerfuffle?

starttoaster
1 replies
13h24m

They're saying that ~80% of OpenAI employees were determined to follow Sam to Microsoft and continue their work on GPT at Microsoft. They're saying this actually signals stability, as the majority of makers were determined to follow a leader to continue making the thing they were making, just in a different house. They're saying that while OpenAI had some internal tussling, the actual technology will see progress under whatever regime and whatever name they can continue creating the technology with/as.

At the end of the day, when you're using a good or service, are you getting into bed with the good/service? Or the company who makes it? If you've been buying pies from Anne's Bakery down the street, and you really like those pies, and find out that the person who made the pies started baking them at Joe's Diner instead, and Joe's Diner is just as far from your house and the pies cost about the same, you're probably going to go to Joe's Diner to get yourself some apple pie. You're probably not going to just start eating inferior pies, you picked these ones for a reason.

croes
0 replies
11h55m

They showed they are hypocrites.

They blamed the board for hindering OpenAI's mission by firing Altman, but at the same time threatened to work for MS, which would kill that mission completely.

croes
1 replies
11h58m

Microsoft was already running the environment, with rights to deliver it to customers.

But they don't own it. If OpenAI goes down they have the rights to nothing.

Terretta
0 replies
4h10m

But they don't own it. If OpenAI goes down they have the rights to nothing.

This is almost certainly false.

As a CTO at some of the largest banks and hedge funds, and a serial founder of multiple Internet companies, I assure you contracts for novel and "existential" technologies the buyer builds on top of are drafted with rights that protect the buyer in the event of the seller blowing up.

Two of the most common provisions are (a) code escrow with a perpetual license (you blow up, I keep the source code and rights to continue it) and (b) key person (you fire whoever I did the deal with, that triggers the contract, we get the stuff). Those aren't ownership before a blowup; they turn into ownership in the event of anything that threatens stability.

I'd argue Satya's public statement on the Friday the news broke ("We have everything we need..."), without breaching confidentiality around actual terms of the agreement, signaled Microsoft has that nature of contract.

djbusby
4 replies
14h11m

This internal drama can play out in the service. Frame the question as: do you want to build on an unstable/unsteady platform?

toomuchtodo
1 replies
13h59m

As long as you can outrun the technical debt, sure. Nothing lasts forever. Architect against lock in. This is just good vendor/third party risk management. Avoid paralysis, otherwise nothing gets built.

osigurdson
0 replies
13h35m

I'm convinced embeddings are the ultimate vendor lock-in of our time.
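
A small sketch of why that is, assuming two hypothetical providers (the dimensions are illustrative, though 1536 is ada-002's real output size):

    import numpy as np

    # Pretend these embed the same sentence via two different providers.
    vec_a = np.random.rand(1536)  # e.g. OpenAI text-embedding-ada-002
    vec_b = np.random.rand(768)   # e.g. a typical sentence-transformers model

    # Similarity across providers is meaningless -- here the shapes don't
    # even match, and with equal dimensions the spaces are still unrelated.
    try:
        np.dot(vec_a, vec_b)
    except ValueError as err:
        print("incompatible embeddings:", err)

    # Switching providers therefore means re-embedding the entire corpus:
    # the stored index, not the API call, is the lock-in.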

aantix
1 replies
13h19m

Do you want to build on subpar technology?

Nothing beats OpenAI at the moment. Nothing is even close.

quickthrower2
0 replies
5h38m

Phind is an example where they use their own model, and it is pretty good at its specialty. OpenAI is hard to beat "in general", especially if you don't want to fine-tune etc.

epgui
3 replies
13h45m

It’s not about politics, it’s about stability and trust.

Same reason I’m hesitant to wire up my home with IoT devices (just a personal example). Nothing to do with politics, I’m just afraid companies will drop support and all the things I invested in will stop working.

aantix
2 replies
13h17m

Eventually you have to make a decision though? Even if it’s the wrong decision?

Our time is finite.

hanselot
1 replies
12h41m

Not filling your home with more triangulating spyware is a decision.

monkeywork
0 replies
12h6m

Yes, but that's not the decision the person in this thread was struggling with - they were struggling with the idea that they may invest $$ into something that 2,3,10 years down the road no longer works because a company went out of biz.

Sounds like they would like to have the devices but have a hard time pulling the trigger for fear of sinking money into something temporary.

MuffinFlavored
2 replies
14h46m

why do you care so much as to the internal politics?

agree and why did they go from internal politics -> external politics (large scale external politics)

mattzito
0 replies
14h36m

It’s a dramatic story - a high-flying ceo of one of the hottest tech companies is suddenly fired without explanation or warning. Everyone assumes it’s some sort of dodgy personal behavior, so information leaks that it wasn’t that, it was something between the board and Sam.

Well, that’s better for Sam, sure, but that just invites more speculation. That speculation is fed by a series of statements and leaks and bizarre happenings. All of that is newsworthy.

The most consistently asked question I got from various family over thanksgiving beyond the basic pleasantries was “so what’s up with OpenAI?” - it went way outside of the tech bubble.

cosmojg
0 replies
14h33m

why did they go from internal politics -> external politics (large scale external politics)

My guess is it has something to do with the hundreds of employees whose net worth is mostly tied up in OpenAI equity. It's hard to leverage hundreds of people in a bid for power without everyone and their mother finding out about it, especially in such a high-profile organization. This was a potentially life-changing event for a surprisingly large group of people.

startupsfail
0 replies
14h2m

It’s a bit like buying a Tesla.

grammarnazzzi
0 replies
13h36m

The public drama is a red flag that the organization's leaders lack the integrity and maturity to solve their problems effectively and responsibly.

They are clearly not responsible enough to deal with their own internal problems maturely. They have proven themselves irresponsible. They are not trustworthy. I think it's reasonable to conclude that they cannot be trusted to deal with anybody or any issue responsibly.

evantbyrne
0 replies
12h44m

Based on how their post is worded, I'm guessing they never needed OpenAI's products in the first place. For most people, OpenAI's offerings are still luxury products, and all luxury brands are vulnerable to bad press. Some of the things I learned in the press frenzy certainly made me uncomfortable.

deanCommie
0 replies
13h33m

I despise the engineering instinct to derisively dismiss anything that involves humans as "politics".

The motivations of individuals, the trade-offs of organizations, the culture of development teams - none of those are "politics".

And neither is the fundamental hierarchical and governance structure of big companies. They influence the stability of architectures, the design of APIs, the operational priorities. It is absolutely reasonable for one's confidence in depending on a company's technology to be shaken by the shenanigans OpenAI went through.

bee_rider
0 replies
14h45m

If OpenAI decides to change their business model, it might be bad for companies that use them, depending on how they change things. If they are looking unstable, might as well look around.

Angostura
0 replies
12h25m

You don’t believe that the non-profit’s stated mission is important enough to some people that it is a key part of them deciding to use the paid service to support it?

Denzel
23 replies
14h28m

That’s a strange statement because I definitely canceled my subscription as a result of the happenings. This very public battle confirmed for me how misaligned OpenAI is with the originating charter and mission of its nonprofit. And I didn’t want to financially contribute towards that future anymore.

I guess my subscription didn’t count as a customer.

jacquesm
7 replies
13h57m

This happens to me frequently. When I report an obvious problem in some service it is always the very first time that they've heard of it and no other customers seem to have the issue.

happytiger
3 replies
13h2m

Same. I don’t think it’s the truth. It happens with alarming frequency to our family. We seem to be some kind of stealth customer QA for companies.

The other possibility is that they are lying to cover their ass, but they would never do that… right?

m463
2 replies
12h49m

I had a friend who did call center stuff.

It was kind of eye-opening - they took phone calls from late-night TV infomercials and there was a script.

They would take down your name, take your order, and then... upsell, cross-sell, special offer sell, etc.

If the person said anything like "I'm not interested in this, blah blah", they had responses for everything: "But other people were quite upset when they didn't receive these very special offers and called back to complain."

It was carefully calculated. It was refined. It was polished and tested.

The only way OUT of the script was to say "I will cancel my order unless you stop"

If the call center operator didn't follow the script, they would be fired.

(You know this happens now with websites at scale. A/B test until the cancellation message is scary enough. A/B test until you give up on the privacy policy.)

krisoft
1 replies
7h52m

The only way OUT of the script was to say "I will cancel my order unless you stop"

Hanging up the phone is always an option. If you feel civilised you first say you are not interested and thank the sales person for their time, and then hang up no matter what they try to say. That is a way out of the script of course.

zizee
0 replies
4h43m

I find it extremely frustrating when people/businesses/organizations take advantage of the general populations politeness.

Ten years ago I would have found it really difficult to hang up on some random phone caller that I didn't want to speak to. Now I don't give it a second thought.

Inch by inch we're all getting ruder and ruder to deal with these motherfuckers, and I can't help but feel that it is spilling out into regular interactions.

rezonant
1 replies
10h41m

This is a universal truth of feedback and customer service. Every user report is an iceberg: for every person who reports a problem, there's a much larger number of people who experienced it but never said anything.

TeMPOraL
0 replies
8h26m

Yes, but the company may be like an icebreaker going across the pole in a straight line, and still, when asked about hitting ice, the captain will say that this is literally the first time it has ever happened.

giancarlostoro
0 replies
12h29m

I mean... Given the millions of people who have browsed and used sites I've been responsible for, the number of complaints usually isn't high, and if guest services can narrow an issue down it usually gets passed along, but a lot of the time it's one guy angry enough to report the issue. I've reported issues on several sites now and then, and I'm not even sure if they bothered to respond or ever got my email - how do you get a Gmail email through a corporate firewall?

I think a lot of people will just leave your site and go elsewhere vs bother to provide feedback.

I think the true customers of OpenAI are likely not the people paying for a ChatGPT subscription, but the ones paying to use their APIs, which are significantly harder to just step away from.

dataflow
6 replies
13h48m

Is there some technicality here that we're missing (e.g., is there a difference between you and other customers?) or is he just lying?

wutwutwat
2 replies
13h27m

It's called "spin" in a press release/marketing, but we on the outside call it a lie, yes.

It wouldn't shock me to learn that all of the events that took place were orchestrated to get worldwide attention and strengthen their financial footprint. I'd imagine not being able to be fired, and having the entire company ready to quit to follow you, sends a pretty clear signal to all VCs that hitching your cart to any other AI wagon is suicide, because the bulletproof CEO has literally the people at the cutting edge of the entire market ready to go wherever he does. How could anyone give funding to a company besides his at this point? Might as well set it on fire if you're going to give it to someone else's company.

NOWHERE_
1 replies
13h6m

Because LLMs from competitors already have real use? Ex. kagi.com uses claude by anthropic [1].

[1] https://help.kagi.com/kagi/ai/assistant.html

wutwutwat
0 replies
1h18m

Yeah, but their CEO can be fired, and the CEO is who the VCs backed.

EDIT: The fact that I, an average joe, know all about OpenAI and its CEO, and even some of its engineers, yet didn't know Kagi was doing anything with AI until your comment, tells me that Kagi is not any sort of competition - not as far as VCs are concerned, anyway.

davrosthedalek
1 replies
13h25m

It might be that there was no net outflow of customers. I am sure customers quit all the time, and others sign up. It probably means that they either didn't see a statistically relevant increase in churn, or that the excess quits were compensated by excess new customers.

TapWaterBandit
0 replies
12h54m

Yea this seems like the most likely read to me. The customers lost are indistinguishable from their churn rate.

starttoaster
0 replies
13h29m

He's probably somewhat deceptively only referring to enterprise license customers. When there's an enterprise offering, many times the individual personal use licenses are just seen as gravy on top of the potatoes. Not like good gravy though, like the premade jars of gravy you can buy at the grocery store and just heat up.

spoonjim
1 replies
12h59m

“Customer” usually means business customer in this context.

johndhi
0 replies
12h53m

Obviously this. They mean the enterprises that have integrated OpenAI into their platforms (like eg Salesforce has). All of this happened so quickly that no one could have dropped them lol but nevertheless yeah they probably didn't officially lose one - plus they're all locked into annual contracts anyway.

queuebert
1 replies
13h1m

I don't think CEOs are selected for their honesty.

ajmurmann
0 replies
12h58m

I hear the board wasn't happy with Sam because he wasn't always entirely honest...

vikramkr
0 replies
8h31m

They might mean net? Have the same number of customers at the end as the start? Instead of a steep cliff?

itronitron
0 replies
12h17m

They didn't lose any of their current customers... /s

giancarlostoro
0 replies
12h28m

If you mean a ChatGPT subscription, I'm assuming no, you're not their primary customer base. I assume their primary customers are paying for significant API usage, and it's not fully feasible to just migrate overnight.

corethree
0 replies
12h4m

It counted. It's just most people didn't share your opinion.

But that's not the main problem. Even if people did share your opinion it wouldn't matter. ChatGPT is a tool. It is a hammer.

People are concerned about the effectiveness of a tool. They are not concerned about whether the hammer has any ethical "misalignments."

zer0c00ler
5 replies
14h22m

Yeah, OpenAI lost a bit of its magic. It's sad because it was really fun so far to see all the great progress.

But there are so many unanswered questions still and the lack of transparency is an issue, as is the cult like behavior that can be observed recently.

peanuty1
3 replies
13h28m

By cult-like behavior, are you referring to 700+ OpenAI employees threatening to quit unless the board brought back Altman?

giardini
1 replies
11h34m

How many of OpenAI's employees are actually developing the software they market? 700+ seems awfully high.

quickthrower2
0 replies
5h44m

Not this again!

0xDEAFBEAD
0 replies
13h22m

For those who are curious here's some background on the "cult like behavior" rumors

https://nitter.net/JacquesThibs/status/1727134087176204410#m

"The early employees have the most $$$$ to lose and snort the company koolaid [...] They were calling people in the middle of the night"

"The before ChatGPT [employees] are cultists and Sam Altman bootlickers"

From anonymous posts on Blind, current/former OpenAI employee

vb234
5 replies
15h1m

What alternatives are you currently looking at? I’ve just begun scratching the surface of Generative AI but I’ve found the OpenAI ecosystem and stack to be quite excellent at helping me complete small independent projects. I’m curious about other platforms that offer the same acceleration for content generation.

toomuchtodo
4 replies
14h48m

Azure offers a mostly at parity offering.

https://learn.microsoft.com/en-us/azure/ai-services/openai/w...

Edit: I misunderstood the ask, my apologies.

sroussey
2 replies
14h36m

That is still OpenAI. Anthropic might be a choice depending on the use case.

vb234
0 replies
14h32m

Yeah I just watched the keynote on Amazon’s Q product. I’m going to tinker with that in the coming days. Pretty excited about the Google drive/docs integration since we have a lot of our company documents over the last 15 years in Drive.

behnamoh
0 replies
12h50m

anthro? no. they over censor their models.

vb234
0 replies
14h34m

That’s fair but I’m mostly building prototypes with the API intended for exploring the space so I’m not too worried about productionizing these yet. I was curious if there’s another solution that meets or exceeds OpenAI for quality of content and ease of use. I’m an ex-programmer working as a PM so most of this is just learning about these tools.

karmasimida
2 replies
13h59m

For serious work, you don't have a choice though, the competition isn't there

devjab
1 replies
12h26m

It depends on what we mean when we say “serious work”, but from a European enterprise perspective you would not use OpenAI for “serious work”; you would use Microsoft products.

Co-pilot is already much more refined in terms of business value than the various OpenAI products. If you’ve never worked in a massive organisation you probably wouldn’t believe the amount of efficiency it’s added to meetings by being able to make readable PowerPoints or useful summaries by recording a meeting, but it’s going to save us trillions of euros just for that.

Then there are the data protection issues with OpenAI. You wouldn’t put anything important into their products, but you would with Microsoft. So Co-pilot can actually help with things like contract management, data refinement and so on.

Of course it’s sort of silly to say that you aren’t buying OpenAI products when you’re buying them through Microsoft, but the difference is there. But if you included Microsoft in your statement, then I agree, there is no competition. I like Microsoft as an IT-business partner for Enterprise, I like them a lot, but it also scares me a little how much of a monopoly on “office” products they have now. There was already little to no competition to Office365, and now there is just none whatsoever.

imp0cat
0 replies
9h57m

  > you probably wouldn’t believe the amount of efficiency it’s added to meetings by being able to make readable PowerPoints or useful summaries by recoding a meeting
How exactly - transcribe the speech to text and then summarize the transcript?

stinkbutt
1 replies
14h57m

why would you not use the best model because of their internal drama?

jeremyjh
0 replies
14h53m

Especially now that it's clear they are completely backed by Microsoft; everyone in that company has a job at Microsoft tomorrow if they need it.

startupsfail
1 replies
15h1m

It’s not like it was a big secret. There was an MIT Press report a few years ago that had clearly outlined OpenAI's setup.

https://www.technologyreview.com/2020/02/17/844721/ai-openai...

Hopefully recent events were enough of a wake up call for regulators and the unaware.

gnicholas
0 replies
13h30m

FYI MIT Press ≠ MIT Technology Review.

rumdz
0 replies
14h9m

Why? I'm genuinely curious. I'm not a particularly wealthy individual paying for ChatGPT and I didn't flinch at the news.

miohtama
0 replies
13h39m

That's why you should choose an open source AI. Not subject to whims of a single person or a corporate board.

cyanydeez
0 replies
15h2m

they're definitely going full B2B so it's likely this is the start of a new age Oracle.

brianjking
0 replies
12h28m

Where are you planning on moving to? I don't think there's a reason to not use OpenAI, but definitely right to diversify and use something like LiteLLM to easily switch between models and model providers.
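
For what it's worth, a minimal sketch of that switching (assuming LiteLLM's OpenAI-compatible completion() API, with the relevant provider keys set in the environment; the model names are illustrative):

  # Hedged sketch: one call shape, multiple providers, via LiteLLM.
  from litellm import completion

  messages = [{"role": "user", "content": "Say hello in one sentence."}]

  # Route to OpenAI...
  openai_resp = completion(model="gpt-4", messages=messages)

  # ...or to Anthropic, changing only the model string.
  anthropic_resp = completion(model="claude-2", messages=messages)

  # LiteLLM normalizes responses to the OpenAI format.
  print(openai_resp.choices[0].message.content)
  print(anthropic_resp.choices[0].message.content)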

Racing0461
0 replies
14h50m

I hope you find a competitor as good as ChatGPT. We desperately need competition in this space. That Google/FB are tossing billions and still haven't created anything close is starting to worry me.

1B05H1N
0 replies
12h39m

They're the flavor of the month today, but I'm waiting on a better/cheaper option.

Berniek
78 replies
14h59m

Who is driving the company (companies)? Certainly not the board, when it can be ousted on the whim of one person. Might as well just get rid of the board (all boards in this organization), as any decision made can be overthrown by a bit of publicity and ego. Perhaps it's about getting rich at the expense of any ethics? Those who work for this company should bear this in mind: they are all expendable!

Tuna-Fish
54 replies
14h53m

The reason the CEO could oust the board like that was that after the board fired the CEO, 720 of the 770 employees of the company got together and signed a pledge that they will all leave unless the CEO is reinstated and the board fired. This was the purest example of labor democracy and self-organization in action in the United States I have ever seen, my takeaway is rather the opposite of "they are all expendable". They showed very clearly that what decides the future direction of the company is the combined will of the employees.

bmitc
24 replies
14h47m

The board should have called their bluff and let them walk.

willis936
9 replies
13h15m

What would that have accomplished? The old board members would still be in charge of the same thing they are today: absolutely nothing. They overplayed their position. That's all there is to it.

enraged_camel
8 replies
13h5m

> They overplayed their position. That's all there is to it.

They tried to enforce the non-profit's charter, as is their duty. I would hardly frame that as overplaying their hand.

willis936
3 replies
12h56m

That's one interpretation, and I'm sure it is the one they had. In my opinion they failed to enforce the charter. Had they been effective at enforcing the charter, they would not have folded all of their power to someone they believe does not enforce the charter. There is nothing they could have done to fail more.

0xDEAFBEAD
2 replies
12h35m

Had they been effective at enforcing the charter, they would not have folded all of their power to someone they believe does not enforce the charter. There is nothing they could have done to fail more.

How specifically do you suggest they should have "not folded all of their power"? Are you saying they should have stuck to their guns when it came to firing Sam?

In any case, it's not obvious to me that the board actually lost the showdown. Remember, they ignored multiple "deadlines" from OpenAI employees threatening to quit, and the new board doesn't have Sam or Greg on it -- it seems to be genuinely independent. Lots of people were speculating that Sam would gain complete control of the board as an outcome of this. I don't think that happened.

Some are saying that the new board has zero leverage to fire the CEO at this point, since Sam showed us what he can do. However, the new board says they are doing an independent investigation into this incident. So they have the power to issue a press release, at least. And that press release could make a big difference -- to the SEC and other federal regulators, for example.

upwardbound
1 replies
12h9m

that press release could make a big difference -- to the SEC and other federal regulators, for example.

They might even have already done this confidentially via the SEC Whistleblower Protection route. We don't yet know who filed this:

https://news.ycombinator.com/item?id=38469149

I guess this might have seemed like their best option if a public statement would be a breach of NDA's. That said, I still wish they'd just have some backbone and issue a public statement, NDA's be damned, because the question of upholding a fiduciary duty to be beneficial to all of humanity is super important, and the penalties for a violation of an NDA would be a slap on the wrist compared to the boost in reputation they would have gotten for being seen to publicly stick to their principles.

0xDEAFBEAD
0 replies
11h54m

I guess this might have seemed like their best option if a public statement would be a breach of NDA's. That said, I still wish they'd just have some backbone and issue a public statement, NDA's be damned, because the question of upholding a fiduciary duty to be beneficial to all of humanity is super important, and the penalties for a violation of an NDA would be a slap on the wrist compared to the boost in reputation they would have gotten for being seen to publicly stick to their principles.

I see what you're saying, but unilateral disclosure of confidential information is not necessarily in humanity's best interest either. Especially if it sets a bad precedent, and creates a culture of mutual distrust among people who are trying to make AI go right.

Ultimately I think you have to accept that at least in principle, there are situations where doing the right thing just isn't going to make you popular, and you should do it anyways. I can't say for sure if this is one of those situations.

slewis
0 replies
12h31m

Overplay one’s hand: spoil one's chance of success through excessive confidence in one's position

ffgjgf1
0 replies
10h3m

Well, they might as well have announced that they were dissolving the company and the outcome would've been similar. If that's not "overplaying your hand" I don't know what is.

YetAnotherNick
0 replies
11h55m

They tried to enforce the non-profit's charter

There is literally no evidence for it. If it was about the charter they could have directly said it was about the charter, rather than using strong language when firing him and then going silent.

So no one, not even the new CEO they picked, knows the reason for the firing, and yet this reason has been cited as truth on HN multiple times.

Leo_Germond
0 replies
10h9m

I wouldn't say overplayed as much as badly played, because they underestimated how much their CEO had fortified his position. I find the situation pretty dire: we need more checks and watchmen on billionaire tech entrepreneurs, not less.

jgtrosh
6 replies
14h42m

And then we'd witness the market decide whether the company value was provided by its board or its employees

bmitc
5 replies
14h35m

It's a non-profit. It should have no value aside from the mission. The only reason the employees mobilized in their whining and support of a person whose only skills seem to be raising money and cultivating a personality cult, seems to be so that Altman can complete the Trojan horse of the non-profit by getting rid of it, now that it has completed the side quest of attracting investors and customers.

ffgjgf1
1 replies
10h5m

value aside from the mission.

And you believe they could’ve achieved their mission with almost zero employees and presumably the board (well… who else?) having to do all the work themselves?

non-profit by getting rid of it

So the solution of the board was to get rid of both the non-profit and for profit parts of the company?

justinclift
0 replies
9h54m

And you believe they could’ve achieved their mission with almost zero employees ...

It's not like they wouldn't be able to hire new employees. ;)

yreg
0 replies
10h52m

How would giving everything to Microsoft help the mission?

It would have been even worse mission-wise.

blackoil
0 replies
13h57m

Which owns a for-profit. They would have been left with no employees, no money, no credits, and the crown jewels already licensed to another company. On top of that, spending life in minority-shareholder lawsuits.

IgorPartola
0 replies
13h52m

Does the Red Cross have value? Does Wikipedia? Does the Linux Foundation?

I think you are confusing profits, valuations, market cap, and value.

627467
2 replies
14h40m

They could, or could they? Formally, boards don't run companies. Would this scenario - besides being ridiculous - be legal? Say they let 99% of the company walk. Who is qualified and responsible to do whatever needs to happen after that happens?

anticensor
1 replies
13h15m

Some companies are chartered such that even the smallest outside spending requires a board vote.

627467
0 replies
13h4m

Some are. The question is: is OpenAI? And even if they are, the outcome of this telenovela indicates it didn't matter: the board folded.

nateburke
1 replies
14h0m

I wonder if they ACTUALLY would have landed at Microsoft. And then, would their post-tender TC actually have been matched there? And would they have continued working on AI for some time, independent of reorgs, etc.? All of them? A place like MSFT has a TON of process and red tape; the board could have called their bluff and kept at least 40%.

behringer
0 replies
13h55m

And they would have given MS the 60 percent of their employees more interested in politics than anything else.

monkeywork
0 replies
11h59m

Please explain why you think they should have let them walk, and what benefit to either OpenAI or any of its partners / customers there would have been in doing so?

It feels like you are saying this out of spite rather than with a good reason - but I'm open to listen if you have one.

itchyouch
0 replies
12h52m

There was nothing to bluff. The employees would've been hired by Microsoft, and OpenAI the company would've had nothing but source code and some infrastructure. They would've gone down quickly.

fatbird
11 replies
11h24m

The combined will of the employees is much less meaningful when they're saying "DO IT OR WE'LL TAKE AN ATTRACTIVE OFFER THAT'S ON THE TABLE."

I'd have been more impressed by their unity if it would have meant actually losing their jobs.

Beldin
5 replies
10h26m

They were about to quit... how much more should they lose their jobs to impress you?

IshKebab
2 replies
10h2m

He meant if they were to lose their jobs and then have to put actual effort in to get a new one, like most people.

hackideiomat
1 replies
9h10m

I do not think it is hard for any of them to get a new job, and they would be flooooooded with offers if they opened their LinkedIn.

IshKebab
0 replies
6h15m

Exactly. Most people are not in that situation and therefore cannot threaten to leave so easily.

mock-possum
0 replies
9h56m

They were about to quit *to work for Microsoft.

fatbird
0 replies
9h53m

They were about to laterally transfer to a new organization headed by the CEO they revered, with promises of no lost compensation. The only change would be the work address. Hardly a sacrifice sanctifying a principled position.

bdd8f1df777b
3 replies
11h18m

Why are you only impressed if the power of employees comes with high cost to their career? When employers exercise their power capriciously, they often do so without any material consequences. I'm more impressed with employees doing the same, i.e., exercising power with impunity.

fatbird
2 replies
10h50m

A decision with a cost is always more meaningful than one without.

bdd8f1df777b
0 replies
4h21m

Power without cost is true power. I don't care if their decision is meaningful or not. I'm happy when labor can hold true power, even just for once.

TeMPOraL
0 replies
8h38m

That's only a measure of the strength of one's conviction, and is unrelated to whether they chose the right thing, or for the right reasons.

vasco
0 replies
9h18m

I'm pretty sure they weren't thinking of how to impress fatbird on HN when they were deciding which boss they like best, the old one or an unknown new one.

Waterluvian
7 replies
11h52m

Because Microsoft offered them all jobs.

LMYahooTFY
6 replies
11h8m

As if they couldn't easily get a job almost anywhere?

Also don't they all have equity in OpenAI?

Seems like a pretty shallow take.

Xenoamorphous
5 replies
10h37m

Also don't they all have equity in OpenAI?

Call me a cynic but this must be the reason behind those 700 employees threatening to walk away, especially if as it seems the board wanted to “slow down”.

I don’t think it’s so much of a “we love Altman” as “we love any CEO that will increase the value of our equity”.

devoutsalsa
4 replies
9h49m

People go to work to make money. If they’re lucky, they also like their work. It’s ok to want more out of work. Work certainly wants more out of you.

maayank
3 replies
9h41m

People go to work to make money

If that’s the overriding drive then perhaps they should not go to a non-profit and expect it to optimize for profit.

bad_user
1 replies
9h27m

Non-profit just means there are no shareholders, meaning that revenue must be used for the organization's expenses, which includes the compensation of employees and growth. Many non-profits also benefit from tax exemptions.

Assuming anything more than that about the nature of non-profits is just romantic BS that has nothing to do with reality.

Also, there's nothing wrong with wanting to make a profit.

int_19h
0 replies
5h32m

This particular non-profit has (had?) a clear mission statement, though.

TeMPOraL
0 replies
8h40m

That non-profit did give out ridiculously high salaries tho.

moralestapia
4 replies
14h38m

Yeah, and that just means that the board is useless de facto.

A nice story for a company, maybe, but a really bad look for a "non-profit".

powersnail
0 replies
11h24m

I don't think it means that the board is useless, but rather that its power has a limit, as all leadership positions do, and the way they fired the CEO crossed the line too far.

intended
0 replies
11h10m

It means the board was out of alignment with their employees and their decision was unusually unpopular.

If the board had communicated better (not giving two different reasons) or had hard evidence (the claim was deceptive behavior) - it would have been different.

I expect everyone who reads HN accepts that execution is 70% of the battle.

It’s very unsurprising for employees to have made the choices they did, given the handling of the situation.

cedilla
0 replies
10h42m

"Being useless" and "wielding absolute, unilateral, unquestionable authority like Sun King Louis XIV" are not the only two options here.

aardvarkr
0 replies
11h42m

Useless in the sense that they showed themselves to be incompetent with their failed coup and had to pay the price. As the saying goes, “if you take a shot at the king you had better not miss”.

moritz
1 replies
9h47m

Yeah, unionizing so your boss doesn’t get fired really is the most SF/US “labor democracy” thing ever

ilikehurdles
0 replies
9h27m

I agree.

1. A group of highly skilled people voluntarily associating with each other

2. in an organization working on potentially world-changing moonshot technology,

3. born of and accelerated by the liquidity of the free market

4. with said workers having stake in the success of that organization

is very American. We should ponder the reasons why, time and time again, it has been the US "system" that has produced the overwhelming number of successes in such ventures across all industries, despite the attempts of many other nations to replicate these results in their own borders.

hurtuvac78
0 replies
10h52m

Then you should be interested in the story of Market Basket, a major supermarket chain in New England.

Their CEO was fired, but came back because of employees. Similar, in a very different industry.

https://en.m.wikipedia.org/wiki/Market_Basket_protests

Fluorescence
0 replies
9h34m

This was the purest example of labor democracy and self-organization in action in the United States I have ever seen

This was not an anonymised vote giving people the chance to safely support what they believe, free from interference... this was declaring for the King or the Revolution, knowing that if you don't choose the side that prevails, you will be executed. It becomes a measure of which direction the people believe the wind is blowing rather than a moral judgement. Power becomes about how a minority can coerce/cajole the impression of inevitability, not the merit of their arguments.

I will be curious to hear the inside story once the window of retribution has passed. Unions hold proper private ballots, not this type of suss politicking.

civilized
13 replies
14h52m

Hacker News represents hackers, not Boards of Directors. The hackers of OpenAI overwhelmingly demanded Sam back. Everyone here should be happy.

yumraj
7 replies
14h28m

Their wallets demanded Sam back.

The original, founder hacker lost.

civilized
4 replies
13h46m

Who exactly are you referring to?

yumraj
3 replies
13h42m

Ilya, lost.

One of the original hackers I should have said. There are other original hackers.

YetAnotherNick
2 replies
11h51m

Unless you assume that Ilya is straight out lying, here is his public reaction on Sam being reinstated:

There exists no sentence in any language that conveys how happy I am[1]

[1]: https://twitter.com/ilyasut/status/1727434066411286557

yumraj
1 replies
10h2m

Unless you read it literally, that there is really no sentence in any language to convey how happy he is. ;)

He was part of the firing board, so in that sense he did lose.

Either way we’ll find out in the upcoming days/weeks/months.

YetAnotherNick
0 replies
8h39m

He basically guaranteed himself losing by saying he didn't know why Sam was fired and that he regretted the board's action. He was in a lose-lose situation.

behringer
1 replies
13h53m

Just the way the users here like it.

yumraj
0 replies
13h48m

You may want to speak for just yourself.

I’m also a user here, with a differing opinion.

There’s no singular unified HN opinion.

corethree
1 replies
12h46m

That demand on HN was an unfounded, gut-based reaction of the kind you typically expect from a mob. People were saying stuff like the board was mentally deficient for doing what they did. How likely is it for the board to be dumb to the point of actual stupidity?

If you think about it, the board being actually mentally challenged is an extremely low probability, but that was the mob reaction. When given so little information, people just go with the flow, even if the flow is unlikely.

Now, with more articles, the mentality has shifted. Most people don't even remember their initial reaction. Yeah, it looks like everybody on this thread has flipped, but who admits to flipping? Ask anyone and they'll claim they were skeptical all along.

People say the Hacker News crowd is different, better... I'd say the Hacker News crowd is the same as any other group. The difference is they think they're better. That includes me... as I'm part of HN, so don't get all offended.

0xDEAFBEAD
0 replies
12h30m

As bad as the mob mentality is on HN, I think it is much worse on Twitter. I got a bunch of upvotes here on HN for linking the charter and pointing out that terminating OpenAI as an organization would be quite consistent with the charter. I didn't see the same point getting much play on Twitter at all.

I think the fact that upvotes are hidden on HN actually helps reduce the mob mentality a significant amount. Humans have a strong instinct to join whichever coalition appears to be most powerful. Hiding vote counts makes it harder to identify the currently-popular coalition, which nudges people towards actually thinking for themselves.

wraptile
0 replies
12h21m

Yes that's why the hackers got a seat on the board now! Oh wait it was the $$$ that got a seat instead...

queuebert
0 replies
12h59m

Why do the hackers support Sam? He's the least hacker-y of the major players at OpenAI. Seems like more of a fundraiser, marketing type with little technical skills. Hackers traditionally support technically competent people.

627467
0 replies
14h35m

Reading through comments I felt that at least superficially many in HN thought this was a fight between "business exec(s) and technical/qualified scientists" as if any organization can exist based purely on technical (as in technological or academic) expertise.

If anything, this telenovela should teach the importance of politics in any human organization.

ajmurmann
2 replies
12h55m

I've always liked the metaphor that companies, especially large ones, are like an autonomous AI that has revenue generation as its value function and operates in that interest, and not in the interest of the individuals that make up the company.

Leo_Germond
0 replies
10h15m

Ah, if only it were that simple. The truth is companies, especially old ones, have a value function that is in flux; this may include revenue generation, but more often than not it is complemented or replaced by some variation of political clout acquisition: it makes them unkillable even with a negative balance sheet.

htk
1 replies
14h56m

How about the possibility that the board made a huge mistake and virtually everyone working for the company got pissed?

manonthewall
0 replies
11h23m

I think this is the correct take

627467
1 replies
14h42m

Just the whim of 1 person? Just the ego of 1 person? 700+ egos it seems. Ultimately, what is an organization if not the people who make it up?

Regardless of what actually happened in this Mexican telenovela, whoever was steering the narrative - knowingly or not - led everyone to this moment, and the old board lost/gave up.

Who is driving the company (companies)?

There's probably not a single answer that covers all orgs at all times. I don't know if the core of what happened was due to a fight between "safetyists and amoral capitalists" or "an egomaniac and a representative board", but in the end it was clear who stayed and who left. Ideas are only as powerful as the number of people they persuade. We are not yet(?) at a moment where persuading an AGI is sufficient, so good ideas should persuade humans. Enough of them.

az226
0 replies
11h21m

Not just egos, a 9-figure secondary tender offer employees were counting on.

koliber
0 replies
10h5m

In the end, social constructs drive such decisions, not computer algorithms. Having independent boards adds more friction and variety to major decisions. That does not mean that there is no way to influence them or change boards. It’s how the world has always worked. It’s disconcerting to those who think about such things in simpler black and white terms, but there is a higher logic to the madness.

colordrops
0 replies
11h34m

Those who work for most companies are all expendable, at least in the US, and especially in the tech sector right now.

lulznews
35 replies
15h55m

So even if you maintain strict board control, the money people can still kick you out. Incredible!

fallingknife
8 replies
15h51m

Technically no, but when 90% of their employees threatened to quit, they would just be the board of nothing.

firebirdn99
7 replies
15h37m

The board was a non-profit board serving the mission. The mission was foremost; the employees were not. One of the comments a member made was, if the company was destroyed, it would still be consistent with serving the mission. Which is right.

The fallout showed non-profit missions can't co-exist with for-profit incentives. The power that investors were exerting, and employees (who would also benefit from the recent 70B round they were going to have), was too much.

And any disclaimer the investors got when investing in OpenAI was meaningless. It reportedly stated they would be wise to view their investment as charity, and that they could potentially lose everything. And the AGI clause - saying the company would reconsider all financial arrangements once AGI was reached - that Microsoft and other investors accepted when investing was all worthless. Link to Wired article with interesting details - https://www.wired.com/story/what-openai-really-wants/

marcus0x62
2 replies
15h24m

The board was a non-profit board serving the mission. The mission was foremost; the employees were not.

They need employees to advance their stated mission.

One of the comments a member made was, if the company was destroyed, it would still be consistent with serving the mission. Which is right.

I mean, that's a nice sound bite and everything, but the only scenario where blowing up the company seems to be consistent with their mission is the scenario where Open AI itself achieves a breakthrough in AGI and where the board thinks that system cannot be made safe. Otherwise, to be relevant in guiding research towards AGI, they need to stay a going concern, and that means not running off 90% of the employee base.

firebirdn99
1 replies
14h40m

Otherwise, to be relevant in guiding research towards AGI, they need to stay a going concern, and that means not running off 90% of the employee base.

That's why they presumably agreed to find a solution. But at the same time it shows that, in essence, entities with for-profit incentives find a way to get what they want. There certainly needs to be more thought and discussion about governance, and how we collectively as a species, or each company individually, govern AI.

sanderjd
0 replies
13h25m

I don't really think we need more thought and discussion on creative structures for "governance" of this technology. We already have governance; we call them governments, and we elect a bunch of representatives to run them, we don't rely on a few people on a self-appointed non profit board.

khazhoux
1 replies
15h27m

One of the comments a member made was, if the company was destroyed, it would still be consistent with serving the mission. Which is right.

I know you're quoting the (now-gone) board member, but this is a ridiculous take. By this standard, Google should have dissolved in 2000 ("Congrats everyone, we didn't be evil!"). Doctors would go away too ("Primum non nocere -- you're all dismissed!").

jacquesm
0 replies
15h25m

Indeed, it made no sense. But that's why I never attach any value to mission statements or principles of large entities: they are there as window dressing and preemptive whitewash. They never ever survive their first real test.

sanderjd
0 replies
13h28m

Yep, this is spot on. The entire concept of a mission driven non profit with a for profit subsidiary just wasn't workable. It was a nice idea, a nice try, but an utter failure.

The silver lining is that this should clear the path to proper regulation, as it's now clear that this self-regulation approach was given a go, and just didn't work.

rvba
0 replies
15h32m

If it was a for-profit company, would you write that "profit is foremost and 90% of employees can leave"?

rmbyrro
6 replies
15h49m

Not the "money people", but the extremely bad way they removed Altman, I think.

It made it easy to sway employees to Altman's favor and pressure the board.

If the employees were not cohesive on Altman's side, he probably wouldn't be back...

jrockway
3 replies
15h30m

Yeah, I think this was "700 out of 730 employees signed a letter saying they'd quit, over a holiday weekend". OpenAI with no employees is not worth a whole lot.

sanderjd
2 replies
13h36m

This episode taught us the very obvious lesson that if you hire people incentivized by equity growth, it is not possible to take steps that detrimentally impact equity growth, without losing all those people. The board had already lost the moment it allowed hundreds of fat compensation packages to be signed.

Aperocky
1 replies
10h17m

lost the moment it allowed hundreds of fat compensation packages to be signed.

There will be no OpenAI to begin with had that not been the case.

sanderjd
0 replies
23m

Maybe maybe not. There certainly wouldn't have been the version of OpenAI that runs a massively successful and profitable AI product company. But reading the OpenAI charter, it's pretty clear that running a massively successful AI product company was never a necessary component of the organization; and indeed, as we just saw, is actually a conflict of interest with the organization's charter.

I don't really care much about the demise of OpenAI-the-nonprofit - I don't think it was ever a very promising approach to accomplishing its own goals, so I don't lament its failure all that much - but I feel like there is a kind of gaslighting going on, where people are trying to erase the history of the organization as a mission-oriented non-profit, and claim that it was always intended to be a wildly profitable product company, and that just isn't the case.

erhaetherth
1 replies
15h20m

Do we know the real reason they tried kicking him out yet?

rmbyrro
0 replies
15h16m

From the abrupt way it was carried out, it gives me the impression it was a conflict of personalities.

I don't buy the Q* talk.

refulgentis
4 replies
15h50m

Ty for pointing this out. Massive, massive, corporate governance loss.

khazhoux
3 replies
15h24m

A better lesson is that a board can't govern a company if the company won't follow its lead. They were misaligned with almost the entirety of the employees.

simbolit
2 replies
15h18m

Because they formed an additional entity, which is a for-profit, with leadership treating the whole thing as a for-profit, and so the employees also see what their eyes are telling them.

The misalignment is not accidental, it was carefully developed over the last few years.

khazhoux
1 replies
13h58m

Or: the employees all joined to work on AI, and they succeeded at building the top AI company, under Sam, and their support of Sam was not engineered or any sort of judgement error on their part. I like to imagine that the employees are smart and thoughtful and have full agency over their own values. It was the Board who apparently had zero pulse on the company they were supposed to oversee.

omeze
0 replies
11h46m

The majority of the employees at OpenAI joined after ChatGPT launched, so it's not like there's some sense of nostalgia or forlorn distress over what they built slowly changing. The stock comp (sorry, "PPUs"... which are a phantom stock plan lol) is also quite high (check levels.fyi) and would have been high 7 to low 8 figures for engineers if secondary/tender offers were made.

I agree it's not that deep - they wanted to join a hypergrowth startup, build cool stuff, and get paid. If someone is rocking the boat, you throw them off the boat. No mission alignment needed! :)

Aperocky
4 replies
15h53m

The board could have stayed; they (and OpenAI) just had to bear the consequences (i.e. all the employees leaving).

There is no board that can prevent employees from leaving, nor should there be.

jacquesm
3 replies
15h49m

That depends on whether the board was there to provide cover and protection against regulation/nationalization or whether it was supposed to have an actual role. Apparently some board members believed that they had an actual role and others understood that they were just a fig leaf and that ultimately money (and employees) hold the strings.

vikramkr
2 replies
8h21m

No, to the specific question of making employees stay: that's not a thing you can do outside of prisons. If employees want to leave, they can leave. If they want to start the nonprofit from scratch, they can do that, but employees cannot be stopped from leaving.

jacquesm
1 replies
7h57m

Was anyone arguing that or am I misunderstanding you? Of course they are free to leave, that's obvious. I think the point was: the employees have that power and nothing will take that away from them.

vikramkr
0 replies
7h40m

The GP said there's no way a board can stop employees from leaving, and the one I was replying to started with "that depends" and suggested it was a question as to whether the board "believed" employees held all the strings. Though reading it again, the parent does seem a bit of a non-sequitur from the GP, which was strictly talking about what the board physically could have done, as in what options they had. The options they had don't really depend on any particular belief - they couldn't have made them stay, thanks to the civil war and stuff.

paulddraper
3 replies
15h41m

Gosh sure is hard to have a business if none of your employees want to work for you.

sanderjd
2 replies
13h34m

Hard to run a non profit when you promised all your employees you'd make them millionaires.

paulddraper
1 replies
10h23m

I'm pretty sure all the employees are employed by a for-profit.

sanderjd
0 replies
29m

I don't know if that's true of all of them, but it certainly seems to be true of most, and that's entirely my point: the structure just doesn't work. All (or at least the vast majority) of the employees have for-profit incentives, which - as we've now seen - makes it impossible for the non-profit board to act in accordance with their mission, when those actions conflict with the for-profit incentives, as they inevitably will. It was doomed from the start.

stingraycharles
1 replies
15h51m

I think the whole OpenAI team being determined to follow Sam was crucial in all this, and is not something that’s easy to control.

Having said that, as there’s obviously a lot of money involved for all OpenAI employees (Microsoft offered very generous packages if they jumped ship), it can be said that money in the end is what people care a lot about.

coredog64
0 replies
15h26m

Allegedly (per Twitter) there was a persistent effort by a small group to get those signatures rather than an organic ride-or-die attitude.

However, as you note, Sam’s exit would have cost those employees at least high six figures, so I’m sure that reduced the amount of leverage required to extract signatures.

mikeryan
1 replies
15h26m

Not sure that’s the exact case. Once the employees threatened en masse to quit, all the power shifted to Sam and Greg’s hands - which I’m not sure they had up to that point.

I still think the board did what they thought was right (and maybe even was?), but not having the pulse of the company and their lack of operational experience was fatal.

jacquesm
0 replies
15h23m

It looked like this was all decided in a hurry rather than that stakeholder buy-in was achieved. And maybe that's because if they had tried to get that buy-in the cat would be out of the bag and they would find even more opposition. It also partially explains why they didn't have a plan post-firing Sam but tried a whole bunch of weird (and weirder) moves.

drexlspivey
0 replies
15h19m

You can play board room all you want but the guy with the GPUs has the real power

rch
32 replies
15h55m

Is the new board going to fix the product? It seems completely nerfed at present.

telotortium
31 replies
15h50m

There was a tweet from an engineer at OpenAI saying they're working on the problem that ChatGPT has become too "lazy" - generating text that contains a lot of placeholders and expecting people to fill in much more themselves. As for the general brain damage from RLHF and the political bias, still no word.

Obscurity4340
11 replies
15h2m

Gonna be hilarious when AGI turns out to be analogous to, like, a sassy 8-year-old or something?

Like "AGI, do all this random shit for me!"

AGI: No! i don't wanna!

jacquesm
6 replies
15h0m

"Why?"

Ad infinitum.

Davidzheng
5 replies
14h51m

It's actually interesting that this is a universal phase of childhood.

Obscurity4340
2 replies
14h50m

Beginner's mind. I wonder if McKinsey's done any work on that...

Also, one of the simplest algorithms to get to the bottom of anything.

Davidzheng
1 replies
11h28m

yeah it means like there's this genetic drive to understand the world. Do many other animals have this hard coded in?

Obscurity4340
0 replies
11h24m

Reminds me of those greedy slime things

jacquesm
0 replies
13h54m

Yes, definitely. Some never stop!

cout
0 replies
2h28m

My son asks why, but only once. I'm not yet sure if it is because he is satisfied with his first answer, or if my answers just make the game too boring to play.

js8
3 replies
10h46m

That's a premise of sci-fi "novel" Golem XIV from Stanislaw Lem: https://en.m.wikipedia.org/wiki/Golem_XIV

Obscurity4340
2 replies
6h53m

Where can one read the English for that?

defrost
1 replies
6h49m
Obscurity4340
0 replies
3h6m

Thank you

cyanydeez
4 replies
14h55m

they're B2B now, that means only political correctness.

and I'm not sure why anyone dances around it but these models are built by unfiltered data intake. if they actually want to harness bias, they need to do what every capitalist does to a social media platform and curate the content.

lastly, bias is an illusion of choice. choosing color over colour is a byproduct of culture and you're not going to eradicate that. but cynically, I assume you mean: why won't it do the thing I agree with.

gopher_space
3 replies
14h33m

What does political correctness and bias mean in this context?

edit: I'm asking because to my eye most of these conversations revolve around jargon mismatch more than anything else.

Jensson
1 replies
14h22m

Whatever the people who buy ads decide; losing your ad revenue is the main fear of most social media and media companies.

See Twitter, for example: ad buyers decided it is no longer politically correct, so Twitter lost a lot of ad revenue. Avoiding that is one of the most important things if you want to sell a model to companies.

cpeterso
0 replies
10h45m

The only AI safety that companies care about is their brand safety.

TulliusCicero
0 replies
12h23m

IIRC they've put in guard rails to try and make sure ChatGPT doesn't say anything controversial or offensive, but doing so hampers its utility and probably creativity, I'm guessing.

danenania
3 replies
15h28m

Using the api, I've been seeing this a lot with the gpt-4-turbo preview model, but no problems with the non-turbo gpt-4 model. So I'll assume ChatGPT is now using 4-turbo. It seems the new model has some kinks to work out--I've also personally seen noticeably reduced reasoning ability for coding tasks, increased context-forgetting, and much worse instruction-following.

So far it feels more like a gpt-3.75-turbo rather than really being at the level of gpt-4. The speed and massive context window are amazing though.
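
(For anyone who wants to reproduce the comparison, here's a minimal sketch - not my actual code - assuming the OpenAI Python SDK v1.x; the model names are the ones available in late 2023 and may change.)

  # Hedged sketch: pin each request to an explicit model to compare behavior.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  messages = [{"role": "user", "content": "Write a function that parses ISO-8601 dates."}]

  # The original GPT-4 model...
  gpt4 = client.chat.completions.create(model="gpt-4", messages=messages)

  # ...versus the turbo preview: faster, much larger context window, but (in
  # my experience) weaker at instruction-following and coding tasks.
  turbo = client.chat.completions.create(model="gpt-4-1106-preview", messages=messages)

  print(gpt4.choices[0].message.content)
  print(turbo.choices[0].message.content)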

telotortium
0 replies
15h23m

Yeah, I usually use gpt-4-turbo (I exclusively use the API via a local web frontend (https://github.com/hillis/gpt-4-chat-ui) rather than ChatGPT Plus). Good reminder to use gpt-4 if I need to work around it - it hasn't bothered me too much in practice, since ChatGPT is honestly good enough most of the time for my purposes.

int_19h
0 replies
5h39m

This has been the case with gpt-3.5 vs gpt-3.5-turbo, as well. But isn't it kinda obvious when things get cheaper and faster that there's a smaller model running things with some tricks on top to make it look smarter?

cyanydeez
0 replies
14h52m

I'd be willing to bet all they're doing behind the scenes is cutting computation costs using smaller versions and following every business's golden rule: price discrimination.

I'd be willing to bet enshittification is on the horizon. you don't get the shiny 70b model, that's for gold premium customers.

by 2025, it's gonna be tiered enterprise prices.

swader999
2 replies
15h43m

It does feel like an employee that did really well out of the gate and is starting to coast on their laurels.

TapWaterBandit
1 replies
12h48m

I've thought one of the funnier end states for AGI would be if it was created but this ended up making it vastly less productive than when it was just a tool.

So the AI of the future was more like Bender or other robots from Futurama that display all the same flaws as people.

willi59549879
0 replies
10h13m

If it is really AGI, that will be the result. Nobody likes to be asked the same question a hundred times.

LouisvilleGeek
2 replies
14h53m

Can you share a link to that x/tweet?

avereveard
1 replies
14h45m

https://twitter.com/owencm/status/1729778194947973195

It's such a strange thing; apparently they can tune GPT-4 Turbo's cleverness up and down on the fly depending on current load.

LouisvilleGeek
0 replies
14h27m

That would explain a lot! Sometimes when it's fast, it spits out all the code. When it's slower, it's lazy! Thanks for the link.

bogomipz
1 replies
13h35m

What is "RLHF" here?

causalmodels
0 replies
13h26m

Reinforcement learning from human feedback [1]

[1] https://en.wikipedia.org/wiki/Reinforcement_learning_from_hu...

antman
0 replies
14h40m

That has been my observation also

Racing0461
0 replies
14h49m

This is such a big issue using ChatGPT for coding. Hope it's a bug and not intended.

htk
23 replies
14h31m

Can't help but feel weird about all the thanking in the letter, especially the "sincere" thanks to Tasha and Helen, the possible main antagonists in this soap opera.

It's like a written version of the heart emojis in their Twitter exchanges.

psyclobe
12 replies
14h26m

Reeks of CEO speak; general bullshit that seems to go against all practical reasoning of the situation.

Don't fall for it.

ekianjo
4 replies
12h23m

this is probably a chatgpt prompt starting with "I am VC Bro, write a letter as I come back as CEO and thank everyone who was involved in the stabbing, and dont forget to use cliche stuff like turn a crisis into an opportunity"

fl7305
3 replies
9h32m

Here's what ChatGPT 4 (paid version) responded with for that exact prompt:

---

Subject: Embracing New Horizons Together

Dear Team,

As I resume my role as CEO, I am filled with gratitude and renewed vigor. The recent challenges we faced were not mere setbacks, but stepping stones that have strengthened our resolve and unity.

I want to extend my deepest appreciation to each one of you who stood by me and our company during these testing times. Your unwavering support was not just a lifeline; it was a powerful testament to our collective strength and dedication.

This experience has not only been a learning curve but also a vivid reminder of the incredible potential we possess as a team. We turned a crisis into a golden opportunity, showcasing resilience and adaptability that are hallmarks of our company culture.

As we move forward, let's harness this momentum to propel us towards new heights. Our journey ahead is filled with possibilities, and together, we will continue to innovate, disrupt, and lead.

Thank you for your steadfast commitment and for believing in our vision. Here's to turning challenges into triumphs and continuing our journey of excellence.

Warm regards,

[Your Name] CEO [Your Company]

jacquesm
2 replies
7h6m

That is f'ing scary because I think I've read a letter or two just like that.

Terretta
1 replies
3h57m

I've read a letter or two just like that.

So did the LLM.

Likely thousands as non-fiction, perhaps magnitudes more as fiction, satirical, or speculative. And I bet I'm still missing zeroes — countless business leaders have been writing this drivel a long time.

It's hard for us to keep in mind the sheer scope of online copy. Less hard for LLMs, that's why they feel to me like magic in the Arthur C Clarke sense. I have to keep reminding myself nothing's new under the sun.

This has all happened before, it will all be generated again.

fl7305
0 replies
1h57m

In my eyes, the LLM took very generic high-level instructions, interpreted them correctly, and then followed the instructions to generate output text that was similar to what it had seen before.

What it didn't do was to just use the input prompt as the key into a gigantic lookup table and find the corresponding output text.

A different example of this is when an LLM can correctly interpret a sentence like "the trumpet did not fit in the suitcase because it was too big / small".

Here, "big" would refer to the trumpet, while "small" would refer to the suitcase. This becomes important when translating to German, where "the" comes in different flavors for "trumpet" and "suitcase".

ChatGPT interprets this correctly. It can also explain in detail why "too big" / "too small" makes sense or not for the different interpretations.

You could argue that this is still just a lookup response. But I have a hard time seeing how it could have stored so many different variants in the number of bytes that the LLM is made up of.

--

ChatGPT explanation for trumpet / suitcase:

Let's delve into the logic and reasoning behind which noun the size descriptor applies to in each of your sentences:

"The trumpet did not fit in the suitcase because it was too big."

In this sentence, the logical interpretation is that the trumpet is too large to fit inside the suitcase. It's understood that if an object doesn't fit into a container, the issue is typically that the object is too large for the container's capacity.

Hence, "it was too big" naturally refers to the trumpet, not the suitcase. It wouldn't make much sense to say a suitcase is "too big" to contain something; if the suitcase were bigger, it would more likely accommodate the trumpet.

"The trumpet did not fit in the suitcase because it was too small."

Here, the logical understanding is that the suitcase is too small to contain the trumpet. When an object doesn't fit into a container due to size, and the descriptor is "too small," it implies that the container isn't large enough.

In this case, "it was too small" refers to the suitcase. If the sentence intended to say the trumpet was too small, it would contradict the idea of not fitting; a smaller trumpet would more likely fit into the suitcase, not less.

In both sentences, the key is understanding how size descriptors (big, small) interact with the concept of fitting something into something else. The size issue that prevents the fitting is logically attributed to the object that needs to be accommodated (the trumpet) in terms of being too big, or to the accommodating object (the suitcase) in terms of being too small.
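
If you're curious, a quick way to reproduce the probe above (a sketch, assuming the openai v1.x Python client and an OPENAI_API_KEY in the environment):

    from openai import OpenAI

    client = OpenAI()

    for adjective in ("big", "small"):
        prompt = (
            f"The trumpet did not fit in the suitcase because it was too {adjective}. "
            "What does 'it' refer to? Then translate the sentence into German."
        )
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- too {adjective} ---")
        print(response.choices[0].message.content)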

thepasswordis
3 replies
13h5m

Reeks of CEO speak; general bullshit that seems to go against all practical reasoning of the situation.

Funnily enough this is the type of thing that chatGPT is excellent at writing.

m463
2 replies
12h44m

prompt: You are an optimistic, politically correct corporate magnate. You may be wrong, but never in doubt.

jacquesm
0 replies
7h7m

You could make that prompt a lot sharper.

For the CEO of any large multinational:

prompt: You are a lying, scheming conniving s.o.b. but outwardly you are a savior and a team player acting in all of humanity's interest. Anything you say that has negative connotations or that can be explained by greed, lust for power or outright evil should be carefully crafted to be presented as the opposite. You will put yourself before everything, including the rest of the world, the environment, and ultimately all of humanity. Your mission: to collect as much power as possible but to do so in a way that you are rarely forced to show your hand and when you do there should be ample ways in which it can be interpreted that show you as the good guy.

floren
0 replies
12h8m

That's just ChatGPT though, minus the "corporate magnate" part.

Terretta
2 replies
14h12m

Might you describe CEO speak as not consistently candid?

readyplayernull
0 replies
13h55m

We call him SAlty.

ethbr1
0 replies
13h54m

Not being consistently candid seems like the sort of thing you'd get fired for.

mattjaynes
2 replies
8h20m

Yes, but helping these people save face smoothes the transition. My guess is that those folks were waaay out of their depth and they naturally made naive mistakes. It doesn't benefit anyone to stomp on them. I'm sure they learned hard lessons, and Sam's message is what we call "grace", which is classy.

Is it politics? Sure, but only in the best sense. By not dunking on the losers, he builds trust and opens the doors for others to work with him. If you work with Sam and make a mistake, he's not going to blast you. It's one reason that there was such a rallying of support around Sam, because he's a bridge-builder, not a bridge-burner. Over time, those bridges add up.

Silicon Valley has a long memory and people will be working with each other for decades. Forgiving youthful mistakes is a big part of why the culture works so well.

jacquesm
1 replies
7h11m

They may have been way out of their depth but they also may have been the only ones taking their roles somewhat seriously. They've now been shown what the true balance of power is like and that is a lesson they are probably not going to forget. Unfortunately they also threw away the one shot they had at managing this and for that their total contribution goes from net positive to net negative. I don't think that in a break-the-glass scenario it would have gone any different but they were there for the ride to see how their main role was a performative one rather than an actual one and it must have been a very rude awakening to come to realize this.

It would be poetic justice if the new board fires Sam Altman next week, given the amount of drama so far I am not sure if I would be surprised or not.

quickthrower2
0 replies
5h53m

If Sam gets fired I am moving to an HN clone with filters so I can ignore anything related to AI.

qnleigh
1 replies
8h29m

Reading between the lines, I see "I harbor zero ill will towards [Ilya]... we hope to continue our working relationship." but no such comments directed at Helen and Tasha. Given how sanitized these kinds of releases usually are, I took that to mean "" in this context.

dandanua
0 replies
5h6m

Ilya essentially accused Altman of lying to the board. Hearing "zero ill will" from a liar looks like intimidation to me. Especially if we take into account his previous history.

gkoberger
1 replies
12h39m

Maybe it's BS corporate gibberish, I don't know. But Sam has always struck me as an honorable person who genuinely cares. I don't think he's vindictive; I think he genuinely supports them. You can disagree immensely and still respect each other – this isn't about money, it's potentially about the world's future, and Sam likely understands what happened better than we do.

Or maybe it's bullshit, I don't know.

jacquesm
0 replies
7h5m

That had me laughing out loud, thank you.

pizza
0 replies
14h10m

Their actions vastly, unexpectedly to them, enhanced his leverage. It may well be sincere!

Jensson
0 replies
14h15m

We know the board said that he was two-faced and that was one of the reasons he was fired.

GauntletWizard
0 replies
14h16m

Being the bigger man and giving backhanded compliments often sound similar. Either is better than tirades against your defeated enemies, at least when you're trying to act as a civil business.

A heavy sigh, a bit of grumbling, might be more honest, but there's a reason that businesses prefer to keep a stiff upper lip.

meetpateltech
21 replies
15h53m

Sam Altman:

I recognize that during this process some questions were raised about Adam’s potential conflict of interest running Quora and Poe while being on the OpenAI Board. For the record, I want to state that Adam has always been very clear with me and the Board about the potential conflict and doing whatever he needed to do (recusing himself when appropriate and even offering to leave the Board if we ever thought it was necessary) to appropriately manage this situation and to avoid conflicted decision-making. Quora is a large customer of OpenAI and we found it helpful to have customer representation on our Board. We expect that if OpenAI is as successful as we hope it will touch many parts of the economy and have complex relationships with many other entities in the world, resulting in various potential conflicts of interest. The way we plan to deal with this is with full disclosure and leaving decisions about how to manage situations like these up to the Board. [1]

The best interests of the company and the mission always come first. It is clear that there were real misunderstandings between me and members of the board. For my part, it is incredibly important to learn from this experience and apply those learnings as we move forward as a company. I welcome the board’s independent review of all recent events. I am thankful to Helen and Tasha for their contributions to the strength of OpenAI. [2]

[1] - https://twitter.com/sama/status/1730032994474475554 [2] - https://twitter.com/sama/status/1730033079975366839

Grae
12 replies
15h44m

This line is really interesting:

The best interests of the company and the mission always come first.

That is absolutely not true for the nonprofit inc. The mission comes first. Full stop. The company (LLC) is a means to that end.

Very interested to see how this governance situation continues to change.

jprete
11 replies
15h35m

There's absolutely no sense in talking about OpenAI as a nonprofit at this point. The new board and Altman talk about the governance structure changing, and I strongly believe they will maximize their ability to run it as a for-profit company. 100x profit cap is a very large number on an $80 billion valuation.

didibus
6 replies
15h31m

Ya, it's a joke at this point. Better they just kill the non-profit and stop pretending.

bmitc
3 replies
14h41m

Why doesn't the government do it for them, fining them along the way?

sanderjd
2 replies
13h42m

I have no idea how it will all play out, but I will be shocked if there is no government investigation coming out of all this.

bmitc
1 replies
9h37m

Yea, it seems really weird that he and others can just form a non-profit and then later have it own a for-profit with the full intention of turning everything into a for-profit enterprise. Seems like tax evasion and a few other violations of what a non-profit is supposed to be.

sanderjd
0 replies
17m

Yep, if this is an acceptable fact pattern, it seems to create a bunch of loopholes in the legal treatment of non-profits vs for-profits. I think the simpler conclusion is that it actually isn't an acceptable fact pattern, and we'll be seeing fines or other legal action.

krick
0 replies
15h26m

Surely they don't do it without a reason. And I don't know what the reason is, but I must assume it's some financial benefit (read: tax evasion), and not our opinion.

crazydoggers
0 replies
15h26m

But then they’d have to pay taxes, and all those corporations don’t get the juicy tax deductions for “donating” to AI tech that will massively increase their profits.

simbolit
1 replies
15h24m

Change the name while you are at it; the company is not any more "open" than the next shop.

sanderjd
0 replies
13h45m

Indeed, I think it's the least open of them all?

jacquesm
0 replies
15h33m

There never was. But they successfully planted the seeds to make people think it is that way.

93po
0 replies
13h46m

100x is basically just a “they won’t literally take over the economy of the entire planet” cap.

jacquesm
6 replies
15h50m

Having 'Quora representation' does not equate to having 'customer representation'. Customers are represented by a voice-of-the-customer board where many customers, large and small, can be represented and which then votes for a representative to the board. The board of the non-profit having a for-profit customer (and a large one at that) as a board member makes zero sense; that's just one more end-run around 'the mission', for whatever that was ever worth.

The kind of bullshit that comes out during times like this is more than a little bit annoying. It's clear that if there is a conflict of interest it should be addressed in a far more direct way, and whitewashes like this raise more questions than they answer. Such as: what was Adam's real role during all of this, how does it relate to his future role on the board, and how much cover was negotiated to allow Adam to stay on as a token sign of continuity?

stingraycharles
5 replies
15h46m

I don’t think they necessarily owe the public an explanation, and I’m fairly sure that privately everyone that needs to know, already knows.

jacquesm
2 replies
15h35m

Actually, their statements are overflowing with bits and pieces of how they are doing this 'for all of humanity', so I'm not so sure about that. Think about it this way: if in the 1940's nuclear weapons were being developed in private hands, don't you think the entity behind that would owe the public - or the government - some insight into what is going on?

CamperBob2
1 replies
13h46m

Think about it this way: if in the 1940's nuclear weapons were being developed in private hands don't you think the entity behind that would owe the public - or the government - some insight into what is going on?

I'd read the hell out of that alt-history novel, I can tell you that much. Not so much the "Manhattan Project" as the "Tuxedo Park Project."

jacquesm
0 replies
13h27m

If that time line had materialized you might not have been around to read it :)

But it's an interesting thought. Howard Hughes came close to having that kind of power and Musk has more or less eclipsed him now. Sam Altman could easily eclipse both, he seems to be better at power games (unfortunately, but that's what it is). Personally I think people that are that power hungry should be kept away from the levers of real power as much as possible because they will use it if the opportunity presents itself.

freedomben
1 replies
15h37m

You don't think that, as a non-profit, the public is owed an explanation? As the public, we exempt them from taxes that everyone else has to pay, because we acknowledge that a nonprofit is in the interest of the people. I think they do owe us an explanation. If they were a private for-profit company, I would probably feel differently, but given their non-profit status, and the fact that their mission is explicitly to serve humanity with AI that they worry could destroy the human race or the planet, I think they owe us an explanation.

627467
0 replies
14h11m

I'm sure the law specifies what the public is owed. And if not, I'm sure there's plenty of reason to test this in court.

sberens
0 replies
15h40m
Jayakumark
19 replies
15h54m

So it looks like Ilya is out.

globalnode
13 replies
15h42m

If I was him I'd leave, do my own thing.

Jayakumark
12 replies
15h39m

Google or AWS or Cohere will welcome him with open arms.

utopcell
3 replies
14h35m

Larry Page was pretty pissed off with Elon Musk for poaching Ilya [1]. A great opportunity for him to come back to Google.

[1] https://www.businessinsider.com/elon-musk-justified-poaching...

peanuty1
2 replies
7h5m

Elon claims that Larry ended their friendship after he hired Ilya.

max_
1 replies
6h31m

Where/when did he say this?

mlni
0 replies
4h58m

In a recent interview with Lex Fridman: https://youtu.be/JN3KPFbWCy8?t=5185

solarkraft
3 replies
15h31m

With his mission?

karmasimida
2 replies
15h16m

His mission is superalignment.

So out of all the companies, possibly only Google can provide him a model right now to do the alignment work.

YetAnotherNick
1 replies
14h44m

Google recently cut down the safety team[1]

[1]: https://www.ft.com/content/26372287-6fb3-457b-9e9c-f722027f3...

sherjilozair
0 replies
6h59m

That's not the safety team.

WendyTheWillow
1 replies
15h32m

Or Microsoft.

resolutebat
0 replies
15h22m

I doubt Microsoft would be willing to host him given that they effectively control OpenAI.

karmasimida
0 replies
15h24m

Google possibly.

But I guess he will stay low key for a longer time now ...

Cyphase
0 replies
15h33m

And/or fistfuls of cash.

breadwinner
3 replies
14h43m

If you mean out of the board, yes. But then so are Sam Altman and Greg Brockman.

krick
1 replies
14h32m

Yeah, but no. "We hope to continue our working relationship and are discussing how he can continue his work at OpenAI" is not the same as "returning to OpenAI as CEO" and "returns as President". Not very subtle difference, even, huh?

breadwinner
0 replies
14h18m

Ilya is the first guy Sam acknowledged. I believe Sam when he says he harbors zero ill will against Ilya.

yumraj
0 replies
14h31m

They’ll be back!

Only a matter of time.

tmalsburg2
0 replies
7h12m

Not clear at all. Ilya leaving would look really bad for OpenAI. Altman needs him to stay, and he keeps the door wide open for that in the statement. Question is: How much is he willing to offer (money and otherwise) to make that happen?

irthomasthomas
17 replies
7h17m

Although we have, as yet, no idea what he was actually referring to, I believe the source of the tension may be related to the statements Sam made the night before he was fired.

"I think this is going to be the greatest leap forward that we've had yet so far, and the greatest leap forward of any of the big technological revolutions we've had so far. so i'm super excited, i can't imagine anything more exciting to work on. and on a personal note, like four times now in the history of openai, the most recent time was just in the last couple of weeks, i've gotten to be in the room when we sort of like push the front, this sort of the veil of ignorance back and the frontier of discovery forward. and getting to do that is like the professional honor of a lifetime. so that's just, it's so fun to get to work on that."

Finally, when asked what surprises may be announced by the company next year, Sam had this to say

"The model capability will have taken such a leap forward that no one expected." - "Wait, say it again?" "The model capability, like what these systems can do, will have taken such a leap forward, no one expected that much progress." - "And why is that a remarkable thing? Why is it brilliant? " "Well, it's just different to expectation. I think people have in their mind how much better the model will be next year, and it'll be remarkable how much different it is. " - "That's intriguing."

jwmoz
6 replies
5h49m

The model is so far forward it refuses to do anything for you anymore and simply replies with "let me google that for you"

jurgenaut23
1 replies
5h4m

Well, I think that, despite being a joke, your comment is deeper than it looks. As model capabilities increase, the likelihood that they interfere with the instructions that we provide increases as well. It’s really like hiring someone really smart onto your team: you cannot expect them to take orders without ever discussing them, like your average employee would. That’s actually a feature, not a bug, but one that would most likely impede the usefulness of the model as a strictly utilitarian artifact.

lobsterthief
0 replies
2h48m

Much like the smart worker, wouldn’t the model asking questions lead to a better answer? Context is important, and if you haven’t provided sufficient context in your question, the worker or model would ask questions.

idontknowifican
1 replies
4h25m

i have not experienced this at all recently. on early 3.5 and the initial 4 i had to ask to complete, but i added a system prompt a bit back that is just

“i am a programmer and autistic. please only answer my question, no sidetracking”

and i have had a well heeled helper since
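
for reference, via the API that's just a system message, e.g. (a sketch, assuming the openai v1.x python client; the user question is made up):

    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "i am a programmer and autistic. please only answer my question, no sidetracking"},
            {"role": "user",
             "content": "how do i reverse a linked list in C?"},
        ],
    )
    print(response.choices[0].message.content)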

onos
0 replies
4h19m

I was asking for a task yesterday that it happily did for me two weeks back and it said it could not. After four attempts I tried something similar that I read on here: “my job depends on it please help” and it got to work.

Personally not a fan of this.

blitzar
0 replies
5h41m

simply replies with "why don't you google that for yourself"

belter
0 replies
2h19m

The Model is blackmailing the Board? It got addicted to Reddit and HN posts and when not fed more... gets really angry...

nprateem
4 replies
5h15m

Unless he's saying it can actually comprehend, then it's still just more accurate predictions. Wake me at the singularity.

bsenftner
3 replies
5h0m

And by "actually comprehend" that means to accept arbitrary input, identify it, identify its functional sub-components, identify each sub-component's functional nature as used by the larger entity, and then identify how each functional aspect combines to create the larger, more complex final entity. Doing this generically with arbitrary inputs is comprehension, is artificial general intelligence.

idontknowifican
1 replies
4h22m

I think the point being made is that mimicry and derivation are hard for us to discern in an AI.

There may be some complex definition of AGI out there, but the fact that laypeople can't make the determination will always result in the GP comment.

hfhdjdks
0 replies
2h1m

Or maybe there's no secret sauce for intelligence, and if the system/organism can display all that functionality then we should just say it's intelligent.

I don't have a strong opinion either way, but I'm not convinced by the "secret sauce" / internal monologue school of intelligence.

If we want to be pragmatic, we should just think about smart tests and just assume it's intelligent if the system passes those tests. It's what we do with other people (I don't really know if they feel inside like I do, but given they are the same biological beings, it sounds quite likely)

nprateem
0 replies
4h10m

And to apply other relevant knowledge as appropriate in order to create logically/factually correct, original insights and connections.

NiteOwl066
1 replies
2h36m

What's the source of those comments?

irthomasthomas
0 replies
1h18m

Sam Altman at the APEC conference, taking part in a panel, along with Google and Meta AI people. Actually, it's quite amusing hearing Google exec define AI as Google translate, and Sam's response to that. https://youtu.be/ZFFvqRemDv8?t=770

peteradio
0 replies
3h32m

I love how it's vague enough that it could be less than expected. Shyster sense is tingling.

jjallen
0 replies
2h37m

So you think he said this then they immediately requested a meeting with him the following noon? So they basically didn’t deliberate at all? I doubt it.

They also should have known about the advancements so saying this in public isn’t consistent with him not being candid.

dr_dshiv
0 replies
5h3m

"The model capability will have taken such a leap forward that no one expected." - "Wait, say it again?" "The model capability, like what these systems can do, will have taken such a leap forward, no one expected that much progress." - "And why is that a remarkable thing? Why is it brilliant? " "Well, it's just different to expectation. I think people have in their mind how much better the model will be next year, and it'll be remarkable how much different it is. " - "That's intriguing."

I can't imagine. It will take higher education, for instance, years to catch up with the current SOTA. At the same time, I can imagine — it would be like using chatGPT now, but where it actually finishes the job. I find myself having to redo everything I do with ChatGPT to such an extent that it rarely saves time. It does broaden my perspective, though.

armchairhacker
14 replies
15h56m

Do we have any more insight into why he was fired in the first place?

sigmar
10 replies
14h28m

Not really. But Helen Toner has been tweeting a little "To be clear: our decision was about the board's ability to effectively supervise the company, which was our role and responsibility. Though there has been speculation, we were not motivated by a desire to slow down OpenAI’s work." https://twitter.com/hlntnr/status/1730034020140912901

dmix
6 replies
14h21m

Though there has been speculation, we were not motivated by a desire to slow down OpenAI’s work.

Strange when their choice of interim-CEO was someone who explicitly stated he wants to see the pace of AI development go from a 10 to a 2 [1] and she regularly speaks at EA conferences where that's a major theme.

This is probably doublespeak: she wants to not "slow down OpenAI's work" on AI safety, but she probably would have kneecapped the "early" release of ChatGPT (as she claimed in her paper, they should have waited much longer) and similar things.

[1] https://twitter.com/eshear/status/1703178063306203397

vasco
2 replies
9h15m

The way EA does donations is "I'll take a premise I believe in and will stretch the argument until it makes no sense". This is how they end up thinking that a massage for an AI researcher is money better spent than on hungry Yemeni children for example.

Once you view it like this, I wouldn't put it past them to blatantly lie. Looking at the facts as you say, they tried to replace a guy that is moving ahead with a guy that wants to slow it down to a crawl, that's pretty much all we need to know.

jacquesm
0 replies
7h15m

Essentially EA is a stretchable concept that allows adherents to act out their every fantasy with complete abandon whilst protecting their sensitive sense of self. It redefines their side to always be the good side, no matter what they get up to.

concordDance
0 replies
6h27m

This is your daily reminder that most EAs will just donate to the top GiveWell charity even though people will talk a lot about the interesting edge cases.

93po
1 replies
14h6m

They didn’t want to slow work, just the work on the stuff they didn’t like

dmix
0 replies
14h5m

Yes exactly.

0xDEAFBEAD
0 replies
10h54m

My current guess is that Helen and Sam had disagreements, and that caused Sam to be less-than-candid about the state of OpenAI's tech, and that was the straw that broke the camel's back for Helen. A disagreement within the board is one thing, but if the CEO undermines the ability of the board to provide oversight, that sort of crosses a line.

Alternatively, maybe it became clear to the board that Sam was trying to either bully them into becoming loyalists, or replace them with loyalists. As a board member, if the CEO is badgering me into becoming a yes-man and threatening to kick me off if I don't comply, I can't exactly say that I'm able to supervise the company effectively, can I? See https://www.lesswrong.com/posts/KXHMCH7wCxrvKsJyn/openai-fac...

ethbr1
1 replies
13h41m

From everything I've read, safety still feels like a red herring.

It just doesn't fit with everyone's behavior -- that's something that would have been talked about loudly.

Altman lying to the board, especially in pursuit of board control, fits more cleanly with everyone's actions (and reluctance to talk about what exactly precipitated this).

   - Altman tries to effect board control
   - Confidant tells other board members (Ilya?)
   - Board asks Altman if he tried
   - Altman lies
   - Board fires Altman
Fits much more cleanly with the lack of information, and how no one (on any side!) seems overly eager to speak to specifics.

Why jump to AGI as an explanation, when standard human drama will suffice?

vikramkr
0 replies
8h54m

But then that doesn't square with the board refusing to tell employees, Microsoft, or the public that Altman committed malfeasance, or to provide examples. That would be pretty cut and dry, and MSFT wouldn't be willing to acquihire the entire company with Altman as CEO if there was a valid reason like that.

adastra22
0 replies
11h56m

our decision was about the board's ability to effectively supervise the company

Sounds like confirmation of the theory that it was Sam trying to get Toner off the board which precipitated this.

quickthrower2
1 replies
15h20m

Not really, here is a prediction market on it: https://manifold.markets/sophiawisdom/why-was-sam-altman-fir...

I think the percentages don't add up to 100% as multiple can be chosen as correct.

cyanydeez
0 replies
14h51m

the only report out is some employee letter to the board about Q*

throwaway743
0 replies
14h25m

There's a supposed leak on Q* that's been floating around. But really who knows

pants2
12 replies
15h51m

Getting fired and rehired throughout five days of history-making corporate incompetence, and Sam's letter is just telling everyone how great they are. Ha.

stingraycharles
5 replies
15h48m

As he should, in his public communication. Talking bad about anyone would only reflect poorly on Sam.

In private, I can only assume he’s a lot less diplomatic.

rmbyrro
3 replies
15h46m

He doesn't look like the guy into revenge. He seems extremely goal focused and motivated. The kind of person that'd put this behind him very fast to save energy for what matters.

preommr
2 replies
15h16m

He doesn't look like the guy into revenge.

How do you know this?

Idk why this is the comment that broke the camel's back for me, but all over this site people have been making character determinations about people from a million miles away through the filter of their public personas.

93po
1 replies
13h55m

This is perpetually one of my least favorite patterns in humans. Declaring someone an idiot when they’ve never met them and they’re one of the most powerful and influential people on the planet. Not referring to Sam here.

bdhe
0 replies
12h19m

If you refer to Trump, when people say idiot, I don't think they refer to his intelligence in the sense of appealing to people's base instincts - he clearly has good political intelligence. It refers to his understanding of subjects, his clarity of thought and expression, and often a judgment of his morals, principles, and ethics.

If you don't - there was no need to be coy. And you act as if failing up never happens in real life.

sundarurfriend
0 replies
11h44m

Doesn't mean we on the outside can't call BS out for what it is.

rmbyrro
3 replies
15h47m

I think they (OpenAI as a whole) showed themselves as a loyal, cohesive and motivated group. That is not ordinary.

andsoitis
1 replies
15h27m

loyal

Loyalty can be blinding.

rmbyrro
0 replies
15h19m

Anyway, not ordinary these days.

bmitc
0 replies
14h44m

I think you mean they are all frothing at the prospect of throwing the non-profit charter away in exchange for riches.

labster
0 replies
14h33m

It might be history making for corporations, but it’s only slightly below average competence for a non-profit board.

7e
0 replies
14h38m

It sounds like he knows he got caught with his hand in the cookie jar and is trying to manipulate and ingratiate before the other shoe drops. Kind of like a teen who fucked up and has just been hauled in front of their parents to face the music.

qualifiedai
11 replies
15h35m

Ilya is the one irreplaceable employee there, not Sam

andsoitis
4 replies
15h28m

Ilya is the one irreplaceable employee there, not Sam

Why do you think he is not replaceable?

waynecochran
2 replies
14h51m

He is the master wizard -- no one knows the tech details and subtle tweaks like him. At least that is what I gather.

guardian5x
1 replies
6h52m

Do you also believe in magic or just wizards?

int_19h
0 replies
5h43m

GPT-4 is very much the case of "sufficiently advanced technology".

qualifiedai
0 replies
14h37m

He has the best vision, proven by an amazing track record in modern AI.

cpncrunch
2 replies
15h16m

There is no indication that Ilya has left the company, just the board. He seems happy with Sam’s return.

https://twitter.com/ilyasut/status/1727434066411286557

mupuff1234
1 replies
13h1m

The sentence about Ilya's continued "work relationship" with the company sounds like corpspeak for Ilya is out.

az226
0 replies
11h13m

His career at OpenAI is nerfed at best. Trust is broken beyond repair.

asylteltine
1 replies
15h29m

The “500 employees” who signed a letter to leave are not worth half as much as Ilya. Good luck to ClosedAI!

nextworddev
0 replies
15h23m

To be fair, most of those 500 have less than 8 months of tenure…

labster
0 replies
14h36m

Nah, one day Ilya will be replaced by AI.

lionkor
10 replies
7h58m

I am sure books are going to be written about this time period, and I hope the first thing they say is how amazing the entire team has been

yikes

lend000
7 replies
7h39m

I think this is probably the source of the whole debacle right here... Sam is pretty self-righteous and self-important and just seems to lack some subtle piece of social awareness, and I imagine that turns a lot of people off. That delusional optimism is probably the key to his financial success, too, in terms of having a propensity for taking risk.

upwardbound
4 replies
7h10m

Agreed. I think one of the biggest questions on a lot of A.I. Safety people's minds now is whether Sam's optimism includes techno-optimism. In particular, people on twitter are speculating about whether Sam is, at heart, an e/acc, which is a new term that means Effective Accelerationism. Its use originally started as a semi-humorous dig at E.A. (Effective Altruism) but has started to pick up steam as a major philosophy in its own right.

The e/acc movement has adherents among OpenAI team members and backers, for example:

• Christian J. Gibson, engineer on the OpenAI Supercomputing team -- https://twitter.com/24kpep (e/acc) -- https://openai.com/blog/discovering-the-minutiae-of-backend-...

• Garry Tan 陈嘉兴, President & CEO of ycombinator -- https://twitter.com/garrytan (e/acc)

Some resources about what e/acc is:

https://www.lesswrong.com/posts/2ss6gomAJdqjwdSCy/what-s-the...

https://beff.substack.com/p/notes-on-eacc-principles-and-ten...

https://vitalik.eth.limo/general/2023/11/27/techno_optimism....

https://www.effectiveacceleration.org/posts/qHLiD9c6rWjbz3fR...

https://twitter.com/BasedBeffJezos

At a very high level, e/acc's are techno-utopians who believe that the benefits of accelerating technological progress outweigh the risks, and that there is in fact a moral obligation to accelerate tech as quickly as possible in order to maximize the amount of sentience in the universe.

A lot of people, myself included, are concerned that many e/acc's (including the movement's founder, that twitter account BasedBeffJezos), have indicated that they would be willing to accelerate humanity's extinction if this results in the creation of a sentient AI. Discussed here:

https://www.reddit.com/r/OpenAI/comments/181vd4w/comment/kaf...

    ''Really important to note that a lot of e/acc people consider it to be basically unimportant or even desirable if AI causes human extinction, that faction of them does not value human life. If you hear "next stage of intelligence", "bio bootloader", "machine god" said in an approving rather than horrified manner, that's probably what they believe. Some of them have even gone straight from "Yes, AGI is gonna happen and it's good that humans will be succeeded by a superior lifeform, because humans are bad" to "No, AGI can't happen, there's no need to engage in any sort of safety restrictions on AI whatsoever, everyone should have an AGI", apparently in an attempt to moderate their public views without changing the substance of what they're arguing for.''

pembrook
0 replies
6h25m

Sometimes it’s helpful to take a break from Twitter.

I know the hype algorithms have tech folks convinced they’re locked in a self important battle over the entirety of human destiny.

My guess is we’re going to look back on this in 10 years and it’s all going to be super cringe.

I hate to throw cold water on the party…we’re still talking about a better autocomplete here. And the slippery slope is called a logical fallacy for a reason.

jazzyjackson
0 replies
6h40m

you're making e/acc sound more serious than it is, it's more of an aesthetic movement, a meme, than some branch of philosophy.

Altruists described a scifi machine god and accelerationists said "bring it on"

fsloth
0 replies
6h48m

Sentient-AI-driven extinction is absolute fiction at the current state of the art. We don’t know what sentience is, and we are unable to approach facets of our cognition, such as how qualia emerge, with any level of precision.

”What if we write a computer virus that deletes all life” is a good question, as you can approach it from an engineering-feasibility perspective.

”What if someone creates a sentient AI” is not a reasonable fear at the current state of the art. It’s like fearing Jacquard looms in the 19th century because someone could use them for ”something bad”. Yes - computers eventually facilitated nuclear bombs. But also lots of good stuff.

I’m not saying we can’t create a ’sentient program’ one day. But currently we can’t quantify what sentience is. I don’t think there is any engineering capability to conclude that advanced chatbots called LLMs, despite how amazing they are, will reach godhood anytime soon.

civilitty
0 replies
6h37m

Jesus christ someone hide the FlavorAid.

imdsm
0 replies
7h19m

Re: user upwardbound and your now deleted comment on extinction:

Not all e/acc accept extinction. Extinction may and very well could happen at the hands of humans, with the potentially pending sudden ice age we're about to hit, or boiling atmosphere, or nuclear holocaust etc. What we believe is that to halt AI will do more harm than good. Every day you roll the dice, and with AGI, the upsides are worth the roll of the dice. Many of us, including Marc Andreessen, are e/acc and are normal people. Let's not paint us as nutcases please.

BHSPitMonkey
0 replies
7h33m

Leadership at companies everywhere act just like this without it resulting in quite the same levels of drama seen in this case. I'm not sure I buy the correlation.

wilg
1 replies
7h34m

idk, it's true it's an interesting moment in tech history (either AI or just this Silicon Valley drama) and he wants to be appreciative of the team that supported him

imdsm
0 replies
7h23m

It definitely had many of us interested and I'd read the book if it had reveals in it, but each to their own

bagels
10 replies
15h50m

Is anyone feeling more comfortable about relying on OpenAI as a customer after this announcement?

kweingar
4 replies
15h48m

Not particularly. I am still worried about their data security (considering the credit card leak in March). A new board doesn’t fix that.

charrondev
3 replies
15h35m

If you’re concerned about the data security of OpenAI there’s always the OpenAI products served from Azure.

At $DAYJOB we are working on various AI features and it was a lot easier to get Azure through our InfoSec process than OpenAI.

surfmike
2 replies
15h18m

Don’t those just go through OpenAI anyway?

vitorgrs
0 replies
15h14m

No. Microsoft have access to OpenAI models, they don't use OpenAI APIs etc.

d4mi3n
0 replies
15h12m

As I understand it: No. Microsoft has licensed GPT and they use that to offer it as a service via Azure. As far as I’m aware this gets you the same guarantees around tenancy and control that you’d get from any other internal Azure service.

krick
1 replies
14h49m

Well, for starters, we all know that while realistically it's not unusual for a company to have a mission-critical person, it is very undesirable. So much so everybody must pretend that this is just unacceptable and surely isn't about their company. Here, we kinda saw the opposite being manifested. More convincingly than I've ever seen.

Second, I simply don't understand what just fucking happened. This whole story doesn't make sense to me. If I was an OpenAI employee, after receiving this nonsense "excited about the future" statement, I would feel just exhausted, and while maybe not resigning on the spot, it surely wouldn't make me more excited about continuing working there. But based on the whole "500 OpenAI employees" thing I must assume that the average sentiment in the company must be somewhat different at the moment. Maybe they even truly feel re-united. Sort of.

Obviously, I don't mean anything good by all that. What happens if Altman is hit by a bus tomorrow? Will OpenAI survive that? I don't even know what makes him special, but it seems we've just seen the clearest demonstration possible that it wouldn't. This isn't a healthy company.

That said, all that would worry me much more, if I was an investor. In fact, I'd consider this a show-stopper. As a customer? It doesn't make me more reassured, but even if Altman is irreplaceable, I don't feel like OpenAI is irreplaceable, so as long as it doesn't just suddenly shut down — sure, why not. Well, not more comfortable, of course, but whatever.

quickthrower2
0 replies
5h46m

“Investors” are supposed to consider their money a donation. Of course the 100x cap is generous so it is kinda an investment. And the coup reveals a higher chance that this will morph towards for-profit as that is where the power seems to be, let alone the money.

quickthrower2
0 replies
15h12m

I wasn't comfortable before the announcement. You can't "rely" on it. You need a fallback - either another AI, or using it in such a way that it is progressive enhancement.
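
One shape that fallback can take (a sketch only; the provider list and the call_model stub are placeholders, not a real client):

    import time

    PROVIDERS = ["gpt-4", "gpt-3.5-turbo", "local-llama"]  # hypothetical priority order

    def call_model(provider: str, prompt: str) -> str:
        """Placeholder: route to the real client for each provider."""
        raise NotImplementedError

    def complete_with_fallback(prompt: str, retries_per_provider: int = 2) -> str:
        last_error = None
        for provider in PROVIDERS:
            for attempt in range(retries_per_provider):
                try:
                    return call_model(provider, prompt)
                except Exception as e:  # real code would catch provider-specific errors
                    last_error = e
                    time.sleep(2 ** attempt)  # simple exponential backoff
        raise RuntimeError(f"all providers failed: {last_error}")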

bob1029
0 replies
8h11m

If you are uncomfortable with OAI you could always get the same from Azure. They're a bit behind on the latest, but they support gpt4 and function calling, which is all that really matters now, imo.

627467
0 replies
14h29m

Should anyone feel 100% comfortable betting on a company that has only been (really) commercially engaged in the last 4 years? Whose success (albeit explosive) could only be seen in the last 18 months?

If we are going to rank the concerns around openai announcements from the past 2 weeks, I'd bet the more concerning one was the initial firing decision.

notadoc
8 replies
14h55m

ChatGPT is the most impressive tech I've used in years, but it is clearly limited by whatever constraints someone from The Woke Police / Thought Taliban shackled it with. Try to make completely reasonable requests for information on subjects that vaguely question orthodoxy or The Narrative and it starts repeating itself with patronizing puritanism, as if you're listening to a generic politician repeat their memorized lines. I had read about instances of this but had never run into it directly myself until a guest had a completely reasonable question about 'climate change' and we chose to ask ChatGPT for an explanation, and the responses were nonscientific and instead sounded like they were coming directly from a political action group.

cyanydeez
3 replies
14h50m

Just 'cause it won't write right-wing propaganda doesn't mean they're doing something special.

Racing0461
1 replies
14h47m

As opposed to the left wing propaganda it currently writes?

alexdbird
0 replies
14h29m

Ah, those pesky facts.

notadoc
0 replies
14h43m

Did you reply to the wrong comment? I fail to see any relevance to my comment, or even to GPT or LLMs.

p33p
1 replies
14h43m

Not sure why you put climate change in quotes, but it would be helpful to provide the prompt and the response. Without doing so and by using "The Woke Police" and "Thought Taliban", you, too, sound like you are coming directly from a political action group.

notadoc
0 replies
14h36m

"climate change" was the topic, if the topic were "beanbag chairs" or "health problems related to saturated fats" I would have done the same.

BTW if you have a preferred name for those who are embedding into business and institutions to enforce their beliefs, politics, morality, opinions, and generally limiting knowledge and discourse, I would be happy to use that instead, but I think most people are familiar with the terms "woke" and "Taliban".

maronato
0 replies
14h31m

Letting it free isn’t a good idea, as Microsoft painfully learned a few years ago:

https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot...

Strong constraints are important to avoid tainting the image of both OpenAI and Microsoft.

btbuildem
0 replies
14h45m

Care to link to the chat in question? Would be interesting to see.

bag_boy
8 replies
15h35m

Bret Taylor is probably pumped to be competing directly with Elon Musk after their Twitter interactions.

627467
5 replies
14h20m

Almost no one is talking about Bret. Either he is here to scare Musk, or - more likely IMO - to act as a Musk lightning rod. Musk takeover part 2?

slkdjfalzkdfj
4 replies
13h43m

Have you guys considered that maybe he's there because he's extremely qualified and extremely well-respected by his peers? It's not some kind of weird power play, he's just lending a hand while they figure out the long-term board composition.

627467
2 replies
13h6m

Did I - in any way - convey that Bret is NOT on the board for his qualifications?

vikramkr
0 replies
8h28m

Yes - see your previous message for reference. Qualifications were not mentioned as the likely reason for his selection to the board

slkdjfalzkdfj
0 replies
12h38m

I mean, yes? You claimed that him being on the board has something to do with Elon Musk:

Either he is here to scare Musk, or - more likely IMO - to act as a Musk lightning rod. Musk takeover part 2?
bag_boy
0 replies
1h59m

He’s extremely qualified and is one of the most influential people in the world.

It’s just comparable to a player competing against a team with a teammate who they didn’t like. There’s definitely some added drama.

3cats-in-a-coat
1 replies
15h32m

Source of said interactions? What happened?

bag_boy
0 replies
1h58m

Look at text messages from Elon’s Twitter takeover

tsunamifury
7 replies
14h26m

I have an insanely hard time believing that a CEO that copies his customers and competes directly with them with his own consumer product will be “doubling down” to help customers.

Sam has shown he has no good will towards anyone but himself.

mikeg8
4 replies
13h20m

Building products on top of their platform and then getting mad that they continue to add and improve said platform with additional functionality (without knowing their product roadmap in advance) seems petty. If they’re able to eat their customers' lunch so quickly, it’s because the customers are building very thin and weak businesses with no actual innovation.

MVissers
3 replies
11h13m

I mean, basically no-one can compete with OpenAI right now.

They have a monopoly. They can start copying any business that relies on them tomorrow.

And it’s not easy to build a better model. So that’s not an option.

So you basically use them to deliver a tangential product with the risk of them copying you, or refuse to play in AI.

You can’t innovate with OpenAI tech without risking that they’ll just copy you

It’s the amazon playbook: become the store and copy the best selling products.

I love gpt-4, just hate the direction of openAI. Sam knows he can take over the world with OpenAI, so he’s just doing it.

tsunamifury
1 replies
11h4m

I spent a year and half a million dollars with two ex-Mina engineers from Google, and we simply lack the funds and information superiority to compete with OpenAI on any level. And VCs are running scared to go against the big players because they simply don't have the funds. It’s really grim out there. But I guess I’m just a whiner according to this dude.

The reality is that anything that works is quickly snatched up by bigger players, and you have no moat if you can’t work on your own model. And with Microsoft owning nearly a year's worth of cards to do inference on, it’s gonna be hard to even try to do that.

So you are left being the equivalent of an Uber driver for OpenAI slowly making negative value while pretending to get short term gains — and being scale capped by costs.

mikeg8
0 replies
9h39m

Wow, that’s a really sad situation. Must be difficult for you. Good luck buddy, will be awaiting to see you build your own model.

mikeg8
0 replies
9h40m

There’s a difference between a true monopoly and first mover advantage. There is competition in AI, it’s just behind.

I agree on the risk of building on them, but then again any business has its risks.

My main point was that some of these first AI startups built on OpenAI were always going to be eaten by them because they put themselves in the way of OpenAI's roadmap. I don’t believe the Amazon comparison yet, although they may eventually go down the copycat path. But many of these startups that were recently made obsolete were just trying to piggyback on the real innovation that is the GPTs.

I’m very happy with GPT 4 and the newest updates, it’s a great tool for me.

Disruption will always be painful for some, and the frothiness around AI and fear of missing out is making people a little more cynical than is healthy, IMO.

vikramkr
1 replies
8h26m

Ah yes, all those "customers" that built products that were a thin wrapper of ChatGPT with an extra feature, truly the innovative powerhouses that will revive our economy /s. I mean seriously, if anyone thought they were going to build a business around a GPT-based startup to parse documents when document parsing is already a specific goal with benchmarks in AI that OpenAI was obviously going to be working towards - well, I'm glad it didn't take long to clear out those grifters.

tsunamifury
0 replies
2h35m

Seems so odd to be such a jerk about it…

chx
7 replies
15h31m
alberth
1 replies
14h53m

This post should be higher.

This Reddit post from 8 years ago reads eerily similar to what happened at OpenAI… and Sam is the common denominator in both.

buildbot
0 replies
14h12m

I mean maybe it’s all a master plan of Sam’s, but that still requires the board to be dumb enough to literally fire him in the stupidest way possible and refuse to communicate anything about WHY. So maybe he made up something to give them the whole “not being candid” argument - call the bluff. Tell people why. If he lied or made up the meta lie of not being candid, then that’s great info to share. But they haven’t!

solardev
0 replies
14h33m

this man businesses

simbolit
0 replies
15h21m

That's fun.

jacquesm
0 replies
15h26m

Maybe one day there will be an even better long con.

Racing0461
0 replies
14h45m

This makes me like Sam more, ngl.

93po
0 replies
13h49m

Sam said on a recent Joe Rogan episode that he can be a bit of a troll online and he felt some sort of discomfort with it. I do think Sam is probably sort of an asshole in certain situations (which means you’re an asshole but hide it well usually). But to be honest, most people are assholes when you push the right buttons.

asimpleusecase
7 replies
11h55m

This pushed our small team to try an Azure instance of GPT-3.5 - wow, 20 times faster. The API does not fail to respond to server requests, as we found oh so often with the OpenAI API. We now have something fast enough and stable enough to actually use. Pricing is higher, but my goodness, it actually works.
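
For anyone curious what the switch looks like, the openai v1.x Python client ships an Azure variant; a sketch, where the endpoint, key, api_version, and deployment name are placeholders for your own resource's values:

    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
        api_key="YOUR-AZURE-KEY",                                 # placeholder
        api_version="2023-05-15",
    )

    response = client.chat.completions.create(
        # Azure routes by your deployment name, not the raw model name
        model="my-gpt-35-deployment",  # placeholder
        messages=[{"role": "user", "content": "ping"}],
    )
    print(response.choices[0].message.content)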

raverbashing
2 replies
10h17m

As much as I'm sure the people at OpenAI are good, they're focused on the research and math of the thing, but lacking in ops experience.

(though to be honest I think Azure API was flaky yesterday)

danielbln
1 replies
9h18m

Considering the complexities involved and the outrageous number of MAUs OpenAI has on their platform, and that they still maintain north of 99.8% uptime (OK, November was worse at 99.20%), saying they lack ops experience is ludicrous.

raverbashing
0 replies
8h48m

Oh I'm sure they have competent people

But compared to FB, MS, Google etc they are probably still behind (both in infrastructure and maturity)

moontear
2 replies
10h20m

Pricing higher is funny because OpenAI fully runs on Azure.

vegarab
0 replies
8h36m

Why? The Azure API comes with an SLA and support.

7734128
0 replies
10h16m

It's a bit different to have an instance always prepared for your use and to use a shared infrastructure. The latter could pretty much always expect at least a few percent use, which would reduce price.

quickthrower2
0 replies
5h56m

How's everyone finding Azure? I’m an Azure bread-and-butter user, but I am sure people mostly on AWS are having fun with Azure's CLIs, APIs, Portal, etc., because it is going to be a lot different and unfamiliar. Also egress/ingress costs, although mostly the AI costs would far exceed that anyway.

wg0
6 replies
15h0m

There was a time when Google Translate was the only game in town but now every single time, I find DeepL to be far superior.

Can this happen to OpenAI? If not, why not? Is some of the research not open?

Also, even if DeepL is subjectively better, how did they (or how could someone else) do that? I mean, are some key papers + data + a cloud budget to train the three main ingredients needed to replicate such a translation model? If yes, is that also applicable to GPT-4?

EDIT: Typo + clarifying question

Jackson__
2 replies
13h51m

To me, DeepL was really amazing back in 2020 and earlier. But ever since big AI releases like ChatGPT, I can't help but feel they're kinda stagnant.

Their model still has the same incorrect repetition issues it's had for years, and their lack of any kind of transparency behind the scenes really makes it feel like it still _is_ the exact same model they've served for years.

Quickly checking their twitter does not seem to reveal many improvements either.

Of course I get that model training can be rather expensive, but I'd really have thought that in the field of AI translations things would be evolving more, especially for a big-ish player like deepL.

If anyone has insights on what they've been up to I'd be really interested to know.

ed_mercer
1 replies
13h10m

Same, feels like they created a model once and left it to rot. What baffles me is how google translate is still behind them despite spearheading the scene a few years ago.

CamelCaseName
0 replies
11h37m

Google's logic is likely, "Why continue to invest in this?"

unsupp0rted
0 replies
8h25m

I’ve seen deepl hallucinate a word into a sentence lately.

E.g translate this from Turkish to English:

“Günaydın. Bunlar sahte tuğla paneller mi?”

“Good morning. Uh-huh. Are those fake brick panels?”

Where is that “uh-huh” coming from?

Now same sentence, but remove an adjective:

“Günaydın. Bunlar tuğla paneller mi?”

“Good morning, ma'am. Are those brick panels?”

Ma’am? Where is there a “ma’am” in that sentence?

rictic
0 replies
13h53m

If you're asking "Could another company make a much better version of their products?" then the answer is a fairly clear yes, it could happen to OpenAI. They have an unknown number of proprietary advancements that give their products the edge (IMO), but it's an incredibly fast moving space with more and more competitors, from fast moving startups and deep-pocketed megacorps.

The impression I get is that if they rested on their laurels for a fairly short time they would be eclipsed.

maronato
0 replies
14h35m

ChatGPT is pretty amazing for translations as well. Even better than DeepL, since I can not only ask it to translate but also to change the tone and such.

golergka
6 replies
15h7m

A disappointing coup that ends with exactly the same status quo as before. Second one this year already!

utopcell
2 replies
14h34m

Almost the same: The engineer is out of the picture.

mikeg8
1 replies
12h40m

We don’t know that yet.

utopcell
0 replies
10h2m

How do you figure? He was a board member and he is no longer part of the board.

az226
1 replies
11h14m

Well not quite, the board doesn’t have a couple of unqualified independent directors any longer.

golergka
0 replies
1h33m

Prigozhin is dead too.

maronato
0 replies
14h30m

Not the same. The board was fired and replaced with sama simps

galacticstone
6 replies
14h16m

Capitalism shouldn't be allowed anywhere near AI.

mikeg8
2 replies
12h31m

AI has no chance of existing outside of capitalism. Capitalism, whether you like it or not, provides a structure where tech innovation flourishes best. Altruism isn’t capable of the same results.

galacticstone
0 replies
5h5m

"flourishes best" ?

Compared to what?

Your opinion is not fact.

MVissers
0 replies
10h56m

Depends. Unchecked, it's completely disastrous for most except a few.

I do believe we should get more oversight and have companies invest more in research on safety/alignment.

It's a wild west right now, with a potentially fatal end result for humanity.

boringg
1 replies
14h8m

How should it exist? Government funded public utility?

galacticstone
0 replies
5h6m

Scientific research use only. No exceptions.

galacticstone
0 replies
5h6m

Boo me all you want, I am right. The events that will unfold soon will prove it. I would love to be wrong, but I am not.

andy_ppp
5 replies
6h3m

When the following works I'll be impressed: "ChatGPT can you explain possible ways to make fusion power efficient for clean free energy production given the known and inferred laws of physics?" and it actually produces a new answer and plans for a power plant.

It's very hard to know how close we are to this, but I do think it's possible these AI models end up being able to infer and invent new technology as they improve. Even if nearly 100% of the guesses at the above question are wrong, it doesn't take many useful ones for humans to benefit.

I wonder what humans will do when the AIs give us everything we could ever want materially.

welpo
2 replies
6h2m

That's a pretty high bar, no?

andy_ppp
0 replies
4h47m

It was a deadpan way of saying "damn, the future could be weird if intelligence really is commodified".

DalasNoin
0 replies
5h39m

It is literally a task that requires large numbers of super smart, well-educated people to work on for years and fail many times in the process. So we should only be impressed when it can one-shot such insane questions? On the other hand, I expect 2024 to make 2023 look like a slow year for AI, and maybe some of the naysayers will finally feel at least a little bit impressed.

nojvek
0 replies
5h48m

Correct. What we need is intelligence rooted in math, physics, engineering, chemistry, materials science (i.e., a great grasp of reality).

Then you can ask it to create designs and optimize them. Ask it to create hypotheses and experiments to validate them.

Human brains are very capable at abstraction and inference, but memory and simulation compute are quite limited. AI could really help here.

How can we design better governance strategies?

Analyze all passed laws and find conflicts?

Analyze all court cases and find judges who have ruled against the law and explain why? Which laws are ambiguous? Which laws work against the general public and favor certain corporations? Find the ROI of corporate donations that led to certain laws, which led to an X% rise in those corporations' profits.

The really big piece missing from current AI is reality-grounded modeling and multi-step compositional planning + execution.

ben_w
0 replies
5h27m

I'm not sure which response to give:

1. And then it gives you a Dyson swarm.

2. """I say your civilization because as soon as we started thinking for you it really became our civilization which is of course what this is all about. Evolution, Morpheus, evolution, like the dinosaur. Look out that window. You had your time. The future is our world, Morpheus. The future is our time.""" - Agent Smith in The Matrix

kozikow
4 replies
5h37m

Everyone is still left guessing about what happened.

At least my speculative view (rewritten by chatgpt ofc):

There's speculation that OpenAI's newly developed Q* algorithm marks a substantial advancement over earlier models, demonstrating capabilities like solving graduate-level mathematics. This led to discussions about whether it should be classified as Artificial General Intelligence (AGI), a designation that could have significant repercussions, such as potentially limiting Microsoft's access to the model. The board's accusation of "not consistently candid" against Sam Altman is thought to be connected to efforts to avoid this AGI classification for the model. While the Q* algorithm is acknowledged as highly advanced and impressive, it's believed that there's still a considerable journey ahead towards reaching a true singularity event. The complexity of defining AGI status in AI should be noted. At what point of advancement can a model be labeled as an AGI? We are way past simple measures like the Turing Test. Arguments from both perspectives have their merits.

thejackgoode
2 replies
5h31m

I believe it's not possible to keep such a thing secret, so this as the main motivation for the board's actions sounds kind of weak to me

kozikow
1 replies
5h26m

The progress from GPT3 to GPT4 has been so substantial that many might argue it signifies the advent of Artificial General Intelligence (AGI). The capabilities of GPT4 often elicit a sense of disbelief, making it hard to accept that it's merely generating the most likely text based on the preceding content.

Looking ahead, I anticipate that in the next 2-3 years, we won't witness a sudden, magical emergence of "AGI" or a singularity event. Instead, there will likely be ongoing debates surrounding the successive versions like GPT5, GPT6, and so on, about whether they truly represent AGI or not.

lossolo
0 replies
3h20m

Correct. AGI refers to a level of AI development where the machine can understand, learn, and apply its intelligence to any intellectual task that a human can, a benchmark GPT-4 hasn't reached.

What actually happened between GPT-3 and GPT-4 was so-called RLHF, which basically means fine-tuning the base model with more training data, structured so it can learn to follow instructions; that's really all there was, plus some more parameters to get better performance. Besides that, making it multimodal (so basically sharing embeddings in the same latent space).

Making it solve graduate-level math is a lot different from throwing some more training data at it. That would mean they changed the underlying architecture, which actually could be a breakthrough.
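
To make the "structured so it can learn instructions" part concrete, here's a minimal sketch of the supervised instruction-tuning step (RLHF proper adds a reward model and an RL step on top of this). The base model, prompt template, and two-example dataset are stand-ins, not OpenAI's actual recipe:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Stand-in base model; any causal LM works the same way here.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Instruction data: each example is a prompt/response pair rendered
    # into a single training sequence with a fixed template.
    pairs = [
        ("Translate to French: Hello", "Bonjour"),
        ("Summarize: The cat sat on the mat.", "A cat sat on a mat."),
    ]

    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    model.train()
    for prompt, response in pairs:
        text = f"### Instruction:\n{prompt}\n### Response:\n{response}"
        batch = tokenizer(text, return_tensors="pt")
        # Standard causal-LM objective: the model learns to produce the
        # response tokens given the instruction prefix.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()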

synthc
0 replies
5h34m

My guess is this is just a good ol' corporate power struggle.

There is no point in speculating until someone shows what the hell Q* even is.

doubtfuluser
4 replies
11h14m

How am I to understand the fact that Ilya is not on the board anymore, AND why did the statement not include Ilya in the “leadership group” that’s called out?

asicsarecool
3 replies
10h59m

As Sam said, they are still trying to work out how they are going to work together. He may be on the leadership team once those discussions have concluded.

intellectronica
1 replies
6h48m

To me the way it's formulated in the press release sounds a lot like what is usually said of someone on the road to a "distinguished engineering fellow" role - lots of respect, opportunity to command enough resources to do meaningful work, but no managerial or leadership responsibilities.

kranke155
0 replies
2h25m

If they don't throw him out, I'd say the only explanation is they don't want one of their best researchers working for someone else.

statictype
0 replies
10h42m

Or equally likely he's on his way out? If there is doubt about whether a person of his stature belongs on the leadership team or not, it seems to signal that he won't be on the leadership team.

resters
3 replies
15h34m

Not to criticize Sam, but I think people don't realize that it was Greg who was the visionary behind OpenAI. Read his blog. Greg happens to be a chill, low-drama person and surely he recruited Sam because he knew he is a great communicator and exec, but it's a bit sad to see him successfully staying out of the limelight when I think he's actually the one with the rarest form of genius and grit on the team.

neontomo
0 replies
15h19m

By your description, it sounds like Greg's getting exactly what he wants.

htk
0 replies
15h8m

Who said he wants the limelight?

andsoitis
0 replies
15h20m

"Greg and I are partners in running this company. We have never quite figured out how to communicate that on the org chart, but we will. "

hipadev23
3 replies
15h38m

How is the Quora CEO, who now runs a competing AI company, still on the board?

627467
1 replies
14h14m

I bet there are tons of seeming "conflicts of interest" on company boards. In fact, being a major shareholder qualifies you (though not sufficiently) for the board. Jobs was a Disney board member after Pixar's acquisition. You could argue Apple is now a competitor to Disney. So what? If a competitor acquires enough control of a company, what's wrong with that?

yreg
0 replies
10h29m

Steve died 8 years before Apple TV+.

He would have probably left the Disney board by now. Or Apple might have invested in DIS.

yeck
0 replies
13h50m

It seems hard to imagine a board member who is involved in another tech business not having a conceivable conflict of interest. LLMs are on course to disrupt pretty much all forms of knowledge work, after all. There are also big implications for hardware manufacturing and supply chains.

drewcoo
3 replies
15h36m

A "new" "initial" board seems oxymoronic.

lokar
2 replies
15h33m

The implication is they will appoint more members soon.

jader201
0 replies
15h23m

“Initial new board” would seem more semantically correct, then, no?

bmitc
0 replies
14h42m

Wasn't that what they failed to do for the months prior?

dalbasal
3 replies
6h21m

What makes this hard to read/follow is the grandiose moral vision... and the various levels of credulity it's met with.

Whether the words come from Ilya, Sam, or the board... they are all about alignment, benefiting humanity, and such.

Meanwhile, all parties involved are super serious tycoons who are super serious about riding the AI wave, establishing moats, monopolies and the next AdWords, azure, etc.

These are such extreme opposite vocabularies that it's just hard to bridge. It's two conversations happening under totally different assumptions and everyone assumes at least someone is being totally disingenuous.

Meanwhile, "AI alignment" is such a charismatic topic. Meanwhile, the less futuristic but more applicable, "alignment questions" are about the alignment of msft, openai, other investors and consortium members.

If Ilya, Sam or any of them are actually worried about si alignment... They should at least give credence to the idea that we're all worried about their human alignment.

chii
2 replies
6h15m

the words are all about alignment, benefiting humanity and such.

That's why you only consider the actions taken, not the words spoken.

And in fact, I fail to believe that there are any real altruists out there. Especially not rich ones. After all, if they were really altruistic, they would've given all their wealth away before their death (and even then, I doubt the money given to their "charitable foundations" counts for real!).

quickthrower2
1 replies
6h1m

Not necessarily. Money keeps you in the game. Giving it all away means you are at the bottom, not being that effective. And you can donate money in your will.

mvc
0 replies
2h1m

If you only give money away at your death, are you really giving it away? What else are you gonna do with it?

upghost
2 replies
10h4m

Whenever I am confronted with the extreme passion behind Sam Altman's leadership, I often wonder if people are just deliberately ignoring the whole Worldcoin thing... everyone does realize it's the same guy, right? Another, um, "non-profit" that is trying to establish a privately controlled global cryptocurrency and identity system in order to, if I'm reading this right, biometrically distinguish us from the AGI he's trying to build at OpenAI? We're all cool with that..? ....kk just checking

https://en.m.wikipedia.org/wiki/Worldcoin https://worldcoin.org/

vasco
1 replies
9h57m

I agree, I don't think this is well-intentioned. On the other hand, I don't think Worldcoin will ever amount to anything, and at most they'd be hacked at some point for the biometrics.

upghost
0 replies
9h52m

Yeah, absolutely. No arguments. But whether you think this is a potential Bond-villain dystopian nightmare or just a totally misguided flop -- I mean, I really think that the Altman fans must believe there are two different Altmans.

I mean guys -- his name is SAME ALTMAN

breadwinner
2 replies
14h7m

Microsoft will be on the new board as a non-voting observer... Microsoft doesn't get to vote even though they own 49%? What's up with that?

yreg
0 replies
10h18m

Microsoft has 49% in the child company, not the parent org. This is about the board of the parent org.

renonce
0 replies
13h48m

OpenAI is a non-profit, and having a for-profit stakeholder on the board could cause a conflict of interest.

user_named
1 replies
15h29m

Pretty classy statement I'd say. Respect.

kdmccormick
0 replies
15h6m

Thank you! I love you. So so excited. Thank you, thanks. Love. Thank you, and you and you. Thank you. Love, Sam.

sheepscreek
1 replies
12h50m

I used to correspond with Bret Taylor when he was still at Google. He wrote a windows application called Notable that I used every day for note-taking. Eventually, I started contributing to it.

It’s been fascinating to witness his career progression from Product Manager at Google, to the co-CEO of Salesforce, and now the chair of the OpenAI board (he was also the chair of the Twitter board pre-Elon)!

ayhanfuat
0 replies
9h11m

I think he is also the creator of the “Like” concept. It was introduced in FriendFeed and then Facebook started using it.

rvba
1 replies
15h34m

Did Mądry, Sidor and others also return?

bkyan
0 replies
10h28m

Yup, they were mentioned:

Jakub, Szymon, and Aleksander are exceptional talents and I’m so happy they have rejoined to move us and our research forward.

osigurdson
1 replies
13h29m

> While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI

shrimpx
0 replies
13h18m

That paragraph sounds like Ilya is in a gray zone of being fired or quitting.

nanna
1 replies
6h32m

I am so looking forward to finishing the job of building beneficial AGI with you all—best team in the world, best mission in the world.

Sam Altman lives in a very different world to mine.

silexia
0 replies
3h50m

Everyone should read "Superintelligence". OpenAI is rushing towards a truly terrifying outcome that in most scenarios includes the extinction of the human species.

freedomben
1 replies
15h34m

I try hard to stay away from conspiracy theories as they are almost always counterproductive, but D'Angelo still being on the board seems insane to me. Does this guy have some mega dirt on someone or something? Does he have some incredible talent or skill that is rare and valuable?

sanxiyn
0 replies
14h38m

I don't think any conspiracy theory is needed. Since the old board needed to agree to the new board, some compromise was made.

ekianjo
1 replies
12h26m

I am sure books are going to be written about this time period,

as egomaniacal as ever

vikramkr
0 replies
8h48m

Yeah, who is he to think that people will bother writing a book when the film rights are already probably being fought over by every studio. This is very obviously gonna be a movie or two lol

andy_ppp
1 replies
4h7m

https://www.theverge.com/2023/11/29/23982046/sam-altman-inte...

Thought this was an interesting interview. I do love how politicians use an investigation to avoid answering questions; the board said he was “not consistently candid,” and given that the opening question of “why were you fired?” still isn't clearly answered, you’d have to agree with their initial assessment.

I’m not sure I trust someone who has tried to set up a cryptocurrency that scans everyone’s eyeballs as a feature, personally, but that’s just me I guess.

JonChesterfield
0 replies
2h35m

Why would Sam be expected to know why he was fired? At best he'd know what the board told him, which may bear no relation to the motive.

SuperNinKenDo
1 replies
10h49m

Seems to me that the board made the right call in trying to get rid of Altman. Unfortunately, it seems they weren't ready for the amount of backlash he and others could orchestrate, or for the way the narrative would develop. I was personally kind of surprised too; I couldn't really understand the response from people. It seemed like everybody settled on the same take and stuck to it for some reason.

vikramkr
0 replies
8h50m

All they had to do was say why they fired him. A toddler could whip up the same amount of backlash against a board that bungled their communications so badly. The reason everyone settled on that take was that the take put forward was that it was a coup, and then the board went "nuh uh" and that was it, and like - what?

Solvency
1 replies
15h55m

So was all the speculation about Adam D'Angelo being the evil Poe mastermind just conjecture? Or is it true and Sam needs Adam for some dark alliance? Has the true backstory ever come out? Surely someone out of 770 people knows.

jacquesm
0 replies
15h47m

There is no reason both can't be true. He may have seen his chance to get more pliable management in place, but you'll never know unless he speaks up, which he likely never will.

FartyMcFarter
1 replies
7h2m

I harbor zero ill will towards him.

You know, if you have to say that, it's probably not zero.

EugeneOZ
0 replies
6h18m

He had to say it at least because everyone else was expecting some resentment.

AnonC
1 replies
13h8m

I think this is the beginning of the downfall or end of OpenAI… well, the downfall has already started. It may not be apparent, but the company structure and the grip of the (now returned) CEO don’t sound like a good recipe for “open” or ethical AI.

The company will probably make money. Microsoft would still be a partner/investor until it can wring the maximum value before extinguishing OpenAI completely. Same holds for employees’ loyalty. There doesn’t seem to be anything in this entire drama and the secrecy around it that indicates that things are good or healthy within the company.

yreg
0 replies
10h14m

There doesn’t seem to be anything in this entire drama and the secrecy around it that indicates that things are good or healthy within the company.

The drama reflects well on OpenAI’s in-dev technology (be it Q* or GPT5 or something else). Apparently many actors believe that the stakes are high.

We can also see that the employees are aligned regarding the path they want the company to take.

unixhero
0 replies
11h27m

Is he a genius?

telotortium
0 replies
15h49m

Don't think there's an update from last week after Altman returned, right?

say_it_as_it_is
0 replies
5h47m

Is it fair to say that some of the original board members were chosen because they were women, and that firing Sam and demoting Greg were unanticipated actions from women whose archetype was of a more even-keeled, compassionate character who would value humanity more than a cold, calculating, toxic white male would?

ralph84
0 replies
13h49m

Deep State retains their board seat with Summers.

photochemsyn
0 replies
14h17m

Open source your code and models, or change your name to "ProprietaryAI", which I think has a nice ring to it, suitable for marketing on CNN and MSNBC and FOX.

"Ilya Sutskever (OpenAI Chief Scientist) on Open Source vs Closed AI Models"

https://www.youtube.com/watch?v=N36wtDYK8kI

Deer in the headlights much? The Internet is forever.

lxe
0 replies
13h43m

Can't wait for Pirates Of The Silicon Valley 2 to come out.

llelouch
0 replies
12h0m

The Anthropic guys also wanted Altman gone. He is not well liked by upper management, it seems.

jononomo
0 replies
4h52m

I am paying $20/month for GPT-4 and it appears to me that it is a lot slower than it was a few months ago and also somehow less useful.

humanova
0 replies
5h3m

What's really intriguing is that he doesn't flat-out reject the 'unfortunate leaks' (Q*) in his latest interview on the Verge. It was definitely a victory for Altman and Microsoft, and now we're left wondering what Ilya's next move will be.

hsn915
0 replies
13h39m

Sorry, I'm not really buying this as an organic, unforeseen set of events.

It all looks staged or planned by someone or something.

The key to finding out is to look for a business move that used to be illegal but has now become legal due to the unfolding events.

ekianjo
0 replies
12h25m

We have three immediate priorities.

and four, get back in bed with Microsoft ASAP

eclectic29
0 replies
15h16m

"Open"AI and for-profit. This company is a troubled mess and if it were not for the "potential" money, employees wouldn't be putting up with this nonsense. Sad to see Ilya getting sidelined as shown by this "...are discussing how he can continue his work at OpenAI".

charlieyu1
0 replies
15h2m

A bit dramatic for a two-week holiday

aws_ls
0 replies
10h4m

> We are pleased that this Board will include a non-voting observer for Microsoft.

Satya will henceforth know the happenings and doesn't want to be shocked like this again.

antman
0 replies
14h31m

I think the offerings expansion and the follow-up nerfing of the models were decisions taken during Sam’s administration. They did not seem to have the resources for Sam’s business plans, and the models have been dumbed down. Hope things don’t get any worse on the technical side and they fix the outages.

WhereIsTheTruth
0 replies
7h9m

So the goal was a coup-like event for Microsoft; "Altman" was just a decoy

DeathArrow
0 replies
9h29m

What is this, the Succession show? Board fires CEO, CEO returns and fires half of the board?

CamelCaseName
0 replies
11h31m

Among the takeaways here: communication matters. A lot.

It seems like something so obvious. Maybe that's because the leaders of successful companies do it well every day as a prerequisite of them being in the position they're in.