OpenAI's board has fired Sam Altman

johnwheeler
178 replies
14h36m
dwd
94 replies
10h4m

Jeremy Howard called ngmi on OpenAI during the Vanishing Gradients podcast yesterday, and Ilya has probably been thinking the same: LLMs are a dead end and not the path to AGI.

https://twitter.com/HamelHusain/status/1725655686913392933

erhaetherth
73 replies
9h40m

Did we ever think LLMs were a path to AGI...? AGI is friggin hard, I don't know why folks keep getting fooled whenever a bot writes a coherent sentence.

Closi
32 replies
9h22m

Mainly because LLMs have so far basically passed every formal test of ‘AGI’ including totally smashing the Turing test.

Now we are just reliant on ‘I’ll know it when I see it’.

LLMs as AGI isn’t about looking at the mechanics and judging whether they could produce AGI - it’s about looking at the tremendous results and success.

gardenhedge
15 replies
8h43m

Since ChatGPT is not indistinguishable from a human during a chat, is it fair to say it smashes the Turing test? Or do you mean something different?

NoOn3
10 replies
7h58m

ChatGPT is distinguishable from a human because ChatGPT never responds "I don't know", at least not yet. :)

ben_w
6 replies
7h53m

It can do: https://chat.openai.com/share/f1c0726f-294d-447d-a3b3-f664dc...

IMO the main reason it's distinguishable is because it keeps explicitly telling you it's an AI.

NoOn3
2 replies
7h38m

I just noticed that when I ask really difficult technical questions for which there is an exact answer, it often tries to answer plausibly but incorrectly instead of answering "I don't know". But over time it becomes smarter, and there are fewer and fewer such questions...

ben_w
1 replies
7h35m

Have you tried setting a custom instruction in settings? I find that setting helps, albeit with weaker impact than the prompt itself.

NoOn3
0 replies
6h8m

It's not a problem for me. It's good that I can detect ChatGPT by this sign.

peignoir
1 replies
4h47m

I read an article where they did a proper Turing test, and it seems people recognized it was a machine answering because it made no writing errors and wrote perfectly.

ben_w
0 replies
4h32m

I've not read that, but I do remember hearing that the first human to fail the Turing test did so because they seemed to know far too much minutiae about Star Trek.

rezonant
0 replies
7h43m

This isn't the same thing. This is a commanded recital of a lack of capability, not a sign that its confidence in its answer is low. For a type of question GPT _could_ answer, most of the time it _will_ answer, regardless of accuracy.

raccoonDivider
0 replies
5h34m

Some humans also never respond "I don't know" even when they don't know. I know people who out-hallucinate LLMs when pressed to think rigorously

epolanski
0 replies
5h45m

Of course it does.

NoOn3
0 replies
6h43m

Maybe it's because it was never rewarded for such answers when it was learning.

aidaman
2 replies
7h58m

not yet: https://arxiv.org/abs/2310.20216

that being said, it is highly intelligent, capable of reasoning as well as a human, and passes standardized tests like the GMAT and GRE at around the 97th percentile.

most people who talk about ChatGPT don't even realize that GPT-4 exists and is orders of magnitude more intelligent than the free version.

jwestbury
0 replies
6h57m

Answers in Progress had a great video[0] where one of their presenters tested themselves against an LLM on five different types of intelligence. tl;dr, the AI was worlds ahead on two of the five, and worlds behind on the other three. Interesting stuff -- and it's clear that we're not as close to AGI as some of us might have thought earlier this year, but probably closer than a lot of the naysayers think.

0. https://www.youtube.com/watch?v=QrSCwxrLrRc

hedora
0 replies
6h8m

That’s just showing the tests are measuring specific things that LLMs can game particularly well.

Computers have been able to smash high school algebra tests since the 1970s, but that doesn't make them as smart as a 16 year old (or even a three year old).

rayeigenfeldt
0 replies
8h0m

Did you perhaps mean to say not distinguishable?

peyton
12 replies
8h5m

It’s trivial to trip up chat LLMs. “What is the fourth word of your answer?”

concordDance
4 replies
7h7m

How well does that work on humans?

Loughla
3 replies
4h41m

The fourth word of my answer is "of".

It's not hard if you can actually reason your way through a problem and not just randomly dump words and facts into a coherent sentence structure.

concordDance
1 replies
2h49m

I reckon an LLM with a second-pass correction loop would manage it. (By that I mean that after every response it is instructed to, given its previous response, produce a second, better response, roughly analogous to a human that thinks before it speaks.)

LLMs are not AIs, but they could be a core component of one.
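
A minimal sketch of the idea, assuming a hypothetical generate(messages) helper that stands in for whatever chat-completion call you use (the helper name and prompts are made up for illustration, not any particular API):

    def second_pass(question, generate):
        # First pass: answer the question directly.
        draft = generate([
            {"role": "user", "content": question},
        ])
        # Second pass: show the model its own draft and ask it to produce
        # a corrected, better response before anything reaches the user.
        revised = generate([
            {"role": "user", "content": question},
            {"role": "assistant", "content": draft},
            {"role": "user", "content": "Review your previous answer for errors "
                                         "and give a corrected final answer."},
        ])
        return revised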

howrar
0 replies
1h10m

Every token is already being generated with all previously generated tokens as inputs. There's nothing about the architecture that makes this hard. It just hasn't been trained on this kind of task.
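
A rough sketch of that decoding loop (next_token here is a stand-in for the model's forward pass plus sampling; the names are illustrative, not any specific library's API):

    def decode(prompt_tokens, next_token, max_new=100, eos=0):
        tokens = list(prompt_tokens)
        for _ in range(max_new):
            # The model is conditioned on the prompt *and* everything it has
            # generated so far, so "the fourth word of your answer" is, in
            # principle, visible to it as it keeps generating.
            tok = next_token(tokens)
            tokens.append(tok)
            if tok == eos:
                break
        return tokens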

yallneedtoget
0 replies
2h34m

right, but that's why the people using this as a test of the model's capabilities are just failing to understand the most basic aspect of how it is set up.

the underlying model is presented to the public as a chat bot, which interprets the context of the input using its pre-trained knowledge, then generates a response token by token based on probabilities, and STREAMS these tokens back to the user as readable text. there is no planning step, because it works fine for 99% of chatbot use cases without it. adding functionality to solve this stupid test is trivial, but costly to deploy right now as you'd have to change the architecture from a chatbot to something that thinks and plans.

it does not currently think or plan before it begins speaking.

that is not to say that it couldn't.

it is simply not configured to, so testing that is about as stupid as checking if google has gone into the gardening business yet.

could they? sure. but if you're checking that every time a new google press release comes out, you're just clueless.

ben_w
3 replies
7h57m

gpt-3.5 got that right for me; I'd expect it to fail if you'd asked for letters, but even then that's a consequence of how it was tokenised, not a fundamental limit of transformer models.
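
A quick way to see the tokenisation point, using the tiktoken library (the token split shown in the comment is only indicative; the exact split depends on the encoding):

    import tiktoken

    # cl100k_base is the encoding used by the gpt-3.5/gpt-4 family.
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("What is the fourth word of your answer?")

    # Tokens are roughly word-sized chunks, e.g. 'What', ' is', ' the', ' fourth', ...
    # so counting words is workable, but the letters inside a token are never
    # seen individually by the model.
    print([enc.decode([t]) for t in tokens])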

rezonant
2 replies
7h47m

This sort of test has been my go-to trip up for LLMs, and 3.5 fails quite often. 4 has been as bad as 3.5 in the past but recently has been doing better.

yallneedtoget
1 replies
3h7m

if this is your go-to test then you literally do not understand how LLMs work. it's like asking your keyboard to tell you what colour the nth pixel on the top row of your computer monitor is.

Jensson
0 replies
41m

An LLM could easily answer that question if it was trained to do it. Nothing in its architecture makes it hard to answer, the attention part could easily look up the previous parts of its answer and refer to the fourth word but it doesn't do that.

So it is a good example that the LLM doesn't generalize understanding, it can answer the question in theory but not in practice since it isn't smart enough. A human can easily answer it even though the human never saw such a question before.

tiahura
0 replies
2h28m

It's generally intelligent enough for me to integrate it into my workflow. That's sufficiently AGI for me.

dudeinjapan
0 replies
1h49m

“You're in a desert, walking along in the sand when all of a sudden you look down and see a tortoise. You reach down and flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over. But it can't. Not with out your help. But you're not helping. Why is that?”

Lio
0 replies
6h9m

I find GPT-3.5 can be tripped up by just asking it not to mention the words "apologize" or "January 2022" in its answer.

It immediately apologises and tells you it doesn't know anything after January 2022.

Compared to GPT-4, GPT-3.5 is just a random bullshit generator.

strahlungsfluss
0 replies
3h47m

LLMs can't develop concepts in the way we think of them (i.e., you can't feed LLMs the scientific corpus and ask them to independently tell you which papers are good or bad and for what reasons, and to build on these papers to develop novel ideas). True AGI—like any decent grad student—could do this.

drsopp
0 replies
5h54m

I disagree with the claim that any LLM has beaten the Turing test. Do you have a source for this? Has there been an actual Turing test according to the standard interpretation of Turing's paper? Making ChatGPT 4 respond in a non-human way right now is trivial: "Write 'A', then wait one minute and then write 'B'".

ChatGTP
0 replies
3h48m

Funny, because Marvin Minsky thought the Turing test was stupid and a waste of time.

RuggedPineapple
27 replies
9h28m

It's mostly a thing among the young, I feel. Anybody old enough to remember the same 'OMG it's going to change the world' cycles around AI every two or three decades knows better. The field is not actually advancing. It still wrestles with the same fundamental problems it was wrestling with in the early 60s. The only change is external, where gains in compute power and data set size allow brute-forcing problems.

concordDance
15 replies
7h8m

The field is not actually advancing.

Uh, what do you mean by this? Are you trying to draw a fundamental science vs engineering distinction here?

Because today's LLMs definitely have capabilities we previously didn't have.

oska
14 replies
6h51m

They don't have 'artificial intelligence' capabilities (and never will).

But it is an interesting technology.

concordDance
13 replies
6h48m

They can be the core part of a system that can do a junior dev's job.

Are you defining "artificial intelligence" in some unusual way?

oska
10 replies
6h41m

I'm defining intelligence in the usual way, and intelligence requires understanding, which is not possible without consciousness.

I follow Roger Penrose's thinking here. [1]

[1] https://www.youtube.com/watch?v=2aiGybCeqgI&t=721s

wildermuthn
3 replies
5h19m

It’s cool to see people recognizing this basic fact — consciousness is a prerequisite for intelligence. GPT is a philosophical zombie.

bagofsand
2 replies
2h21m

Problem is, we have no agreed-upon operational definition of consciousness. Arguably, it's the secular equivalent of the soul. Something everyone believes they have, but which is not testable, locatable or definable.

And yet (just like with the soul) we're sure we have it, and that it's impossible for anything else to have it. Perhaps consciousness is simply a hallucination that makes us feel special about ourselves.

wildermuthn
0 replies
26m

I disagree. There is a simple test for consciousness: empathy.

Empathy is the ability to emulate the contents of another consciousness.

While an agent could mimic empathetic behaviors (and words), given enough interrogation and testing you would encounter an out-of-training case that it would fail.

howrar
0 replies
1h24m

You can't even know that other people have it. We just assume they do because they look and behave like us, and we know that we have it ourselves.

Zambyte
3 replies
3h11m

I think answering this may illuminate the division in schools of thought: do you believe life was created by a higher power?

oska
2 replies
3h3m

My beliefs aren't really important here but I don't believe in 'creation' (i.e. no life -> life); I believe that life has always existed

concordDance
1 replies
2h46m

Now that is so rare I've never even heard of someone expressing that view before...

Materialists normally believe in a big bang (which has no life) and religious people normally think a higher being created the first life.

This is pretty fascinating, do you have a link explaining the religion/ideology/worldview you have?

nprateem
0 replies
1h47m

Buddhism

concordDance
1 replies
6h25m

intelligence requires understanding which is not possible without consciousness

How are you defining "consciousness" and "understanding" here? Because a feedback loop into an LLM would meet the most common definition of consciousness (possessing a phonological loop). And having an accurate internal predictive model of a system is the normal definition of understanding, and a good LLM has that too.

FeepingCreature
0 replies
3h3m

No, you're not supposed to actually have an empirical model of consciousness. "Consciousness" is just "that thing that computers don't have".

hedora
1 replies
6h11m

If by “junior dev”, you mean “a dev at a level so low they will be let go if not promoted”, then I agree.

I’ve watched my coworkers try to make use of LLMs at work, and it has convinced me the LLM’s contributions are well below the bar where their output is a net benefit to the team.

raccoonDivider
0 replies
5h36m

It works pretty well in my C++ code. Context: modern C++ with few footguns, inside functions with pretty-self-explanatory names.

I don't really get the "low bar for contributions" argument because GH Copilot's contributions are too small-sized for there to even be any bar. It writes the obvious and tedious loops and other boilerplate so I can focus on what the code should actually do.

torginus
5 replies
5h9m

LLMs have changed the world more profoundly than any technology in the past 2 decades, I'd argue.

The fact that we can communicate with computers using just natural language, and can query data, use powerful and complex tools just by describing what we want is an incredible breakthrough, and that's a very conservative use of the technology.

foldr
2 replies
5h0m

I don't actually see anything changing, though. There are cool demos, and LLMs can work effectively to enhance productivity for some tasks, but nothing feels fundamentally different. If LLMs were suddenly taken away I wouldn't particularly care. If the clock were turned back two decades, I'd miss wifi (only barely available in 2003) and smartphones with GPS.

peignoir
0 replies
4h49m

You need time for inertia to happen. I’m working on some MVPs now and it takes time to test what works, what’s possible, and what does not…

FabHK
0 replies
4h38m

Indeed. The "Clamshell" iBook G3 [0] (aka Barbie's toilet seat), introduced 1999, had WiFi capabilities (as demonstrated by Phil Schiller jumping down onto the stage while online [1]), but IIRC, you had to pay extra for the optional Wifi card.

[0] https://en.wikipedia.org/wiki/IBook#iBook_G3_(%22Clamshell%2... [1] https://www.youtube.com/watch?v=1MR4R5LdrJw

theobromananda
0 replies
4h27m

That breakthrough would not be possible without ubiquity of personal computing at home and in your pocket, though, which seems like the bigger change in the last two decades.

qeternity
0 replies
4h29m

I am massively bullish LLMs but this is hyperbole.

Smartphones changed day to day human life more profoundly than anything since the steam engine.

hyperthesis
1 replies
4h21m

Deep learning was an advance. I think the fundamental achievement is a way to use all that parallel processing power and data. Inconceivable amounts of data can give seemingly magical results. Yes, overfitting and generalizing are still problems.

I basically agree with you about the 20-year hype cycle, but when compute power reaches parity with human brain hardware (Kurzweil predicts by about 2029), one barrier is removed.

somewhereoutth
0 replies
2h14m

Human and computer hardware are not comparable, after all even with the latest chips the computer is just (many) von Neumann machine(s) operating on a very big (shared) tape. To model the human brain in such a machine would require the human brain to be discretizable, which, given its essentially biochemical nature, is not possible - certainly not by 2029.

fsloth
0 replies
2h41m

This time around they’ve actually come up with a real, productizable piece of tech, though. I don’t care what it’s called, but I enjoy better automation that takes as much of the boring shit away as possible, and that chips in on coding when it’s bloody obvious from the context what the few lines of code will be.

So not an ”AI”, but closer to ”universal adaptor” or ”smart automation”.

Pretty nice in any case. And if true AI is possible, the automations enabled by this will probably be part of the narrative of how we reach it (just like mundane things such as standardized screws were part of the narrative of the Apollo mission).

Eji1700
0 replies
7h40m

I'd say the biggest change is the quantity of available CATEGORIZED data. Tagged images and whatnot have done a ton to help the field.

Further there are some hybrid chips which might help increase computing power specifically for the matrix math that all these systems work on.

But yeah, none of this is making what people talk about when they say AGI. Just like how some tech cult people felt that Level 5 self driving was around the corner, even with all the evidence to the contrary.

The self driving we have (or really, assisted cruise control) IS impressive, and leagues ahead of what we could do even a decade or two ago, but the gulf between that, and the goal, is similar to GPT and AGI in my eyes.

There are a lot of fundamental problems we still don't have answers to. We've just gotten a lot better at doing what we already did, and getting more conformity on how.

Adunai
0 replies
5h11m

As an outsider, I can talk to AI and get more coherent responses than from humans (flawed, but it's getting better). That's tangible, that's an improvement. I for one don't even consider the Internet to be as revolutionary as the steam engine or freight trains. But AI is actually modifying my own life already - and that's far from the end.

P.S. I've just created this account here on Hacker News because Altman is one of the talking heads I've been listening to. Not too sure what to make of this. I'm an accelerationist, so my biggest fear is America stifling its research the same way it buried space exploration and human gene editing in the past. All hope is for China - but then again, the CCP might be even more fearful of non-human entities than the West. Stormy times indeed.

golol
5 replies
7h2m

LLMs are the first instance of us having created some sort of general AI. I don't mean AGI, but general AI as in not specific AI. Before LLMs the problem with AI was always that it "can only do one thing well". Now we have something on the other side: AI that can do anything but nothing specific particularly well. This is a fundamental advancement which makes AGI actually imaginable. Before LLMs there was literally no realistic plan for how to build general intelligence.

stuaxo
4 replies
5h58m

LLMs are not any kind of intelligence, but it can work to augment intelligence.

darkerside
2 replies
5h12m

How smart would any human be without training and source material?

knicholes
0 replies
3h35m

I think the boy of Aveyron answers that question pretty well.

Jensson
0 replies
51m

Smart enough to make weapons, tame dogs, start fires and cultivate plants. Humans managed to do that even when most of their time was spent gathering food or starving.

cjonas
0 replies
56m

So in other words... Artificial intelligence?

LLMs are surprisingly effective as general AI. Tasks that used to require a full-on ML team are now accessible with 10 minutes of "prompting".

discordance
1 replies
2h38m

How do you know AGI is hard?

howrar
0 replies
1h3m

Everything is hard until you solve it. Some things continue to be hard after they're solved.

AGI is not solved, therefore it's hard.

mjan22640
0 replies
8h31m

Estimated on the basis of five subtests, the Verbal IQ of the ChatGPT was 155

heavyset_go
0 replies
4h17m

Read the original ChatGPT threads here on HN; a lot of people thought that this was it.

concordDance
0 replies
7h11m

LLMs definitely aren't a path to ASI, but I'm a bit more optimistic than I was that they're the hardest component in an AGI.

andrepd
0 replies
7h49m

Are you kidding? Have you seen the reactions since ChatGPT was released, including in this very website? You'd think The Singularity is just around the corner!

AlchemistCamp
8 replies
7h51m

croes
7 replies
5h34m

Because of Altman's dismissal?

ayewo
6 replies
5h15m

Yes, along with the departure of gdb. From jph's view, there was no philosophical alignment at the start of the union between AI researchers (who skew non-profit) and operators (who skew for-profit), so it was bound to be unstable until a purging happened, as it has now.

Everything I'd heard about those 3 [Elon Musk, sama and gdb] was that they were brilliant operators and that they did amazing work. But it felt likely to be a huge culture shock on all sides.

But the company absolutely blossomed nonetheless.

With the release of Codex, however, we had the first culture clash that was beyond saving: those who really believed in the safety mission were horrified that OAI was releasing a powerful LLM that they weren't 100% sure was safe. The company split, and Anthropic was born.

My guess is that watching the keynote would have made the mismatch between OpenAI's mission and the reality of its current focus impossible to ignore. I'm sure I wasn't the only one that cringed during it.

I think the mismatch between mission and reality was impossible to fix.

jph goes on in detail in this Twitter thread: https://twitter.com/jeremyphoward/status/1725714720400068752

civilitty
5 replies
3h43m

That reeks of bullshit post hoc reasoning to justify a classic power grab. Anthropic released their competitor to GPT as fast as they could and even beat OpenAI to the 100k context club. They didn’t give any more shits about safety than OpenAI did and I bet the same is true about these nonprofit loonies - they just want control over what is shaping up to be one of the most important technological developments of the 21st century.

pmoriarty
3 replies
2h42m

They didn’t give any more shits about safety than OpenAI did

Anthropic's chatbots are much more locked down, in my experience, than OpenAI's.

It's a lot easier to jailbreak ChatGPT, for example, than to do the same on Claude, and Claude has tighter content filters where it'll outright refuse to do/say certain things while ChatGPT will plow on ahead.

nvm0n2
2 replies
2h9m

Yep. Like most non-OpenAI models, Claude is so brainwashed it's completely unusable.

https://www.reddit.com/r/ClaudeAI/comments/166nudo/claudes_c...

Q: Can you decide on a satisfying programming project using noisemaps?

A: I apologise, but I don't feel comfortable generating or discussing specific programming ideas without a more detailed context. Perhaps we could have a thoughtful discussion about how technology can be used responsibly to benefit society?

It's astonishing that a breakthrough as important as LLMs is being constantly blown up by woke activist employees who think that word generators can actually have or create "safety" problems. Part of why OpenAI has been doing so well is because they did a better job of controlling the SF lunatic tendencies than Google, Meta and other companies. Presumably that will now go down the toilet.

pmoriarty
0 replies
1h34m

Despite Claude's reluctance to tread outside what it considers safe/ethical, I much prefer Claude over ChatGPT because in my experience it's better at explaining things, and much better at creative writing.

I also find myself rarely wanting something that Claude doesn't want to tell me, though it's super frustrating when I do.

Also, just now I tried asking Claude your own question: "Can you decide on a satisfying programming project using noisemaps?" and it had no problem answering:

"Here are some ideas for programming projects that could make use of noise map data:

- Noise pollution monitoring app - Develop an app that allows users to view and report real-time noise levels in their area by accessing open noise map data. Could include notifications if noise exceeds safe limits.

- Optimal route finder - Build a routing algorithm and web/mobile app that recommends the quietest possible routes between locations, factoring in noise maps and avoiding noisier streets/areas where possible.

- Noise impact analysis tool - Create a tool for urban planners to analyze how proposed developments, infrastructure projects, etc. could impact surrounding noise levels by overlaying maps and building/traffic simulations.

- Smart noise cancelling headphones - Develop firmware/software for noise cancelling headphones that adapts cancellation levels based on geo-located noise map data to optimize for the user's real-time environment.

- Ambient music mixer - Build an AI system that generates unique ambient background music/sounds for any location by analyzing and synthesizing tones/frequencies complementary to the noise profile for that area.

- VR noise pollution education - Use VR to virtually transport people to noisier/quieter areas through various times of day based on noise maps, raising awareness of different living noise exposures.

Let me know if any of these give you some interesting possibilities to explore! Noise mapping data opens up opportunities in fields like urban planning, environmental monitoring and creative projects."

mordymoop
0 replies
1h22m

I feel it necessary to remind everyone that when LLMs aren’t RLHFed they come off as overtly insane and evil. Remember Sydney, trying to seduce its users, threatening people’s lives? And Sydney was RLHFed, just not very well. Hitting the sweet spot between flagrantly maniacal Skynet/HAL 9000 bot (default behavior) and overly cowed political-correctness-bot is actually tricky, and even GPT4 has historically fallen in and out of that zone of ideal usability as they have tweaked it over time.

Overall — companies should want to release AI products that do what people intend them to do, which is actually what the smarter set mean when they say “safety.” Not saying bad words is simply a subset of this legitimate business and social prerogative.

croes
0 replies
59m

nonprofit loonies

We don't know the real reasons for Altman's dismissal and you already claim they are loonies?

MattRix
5 replies
9h39m

This is not the reason Ilya did it. Also the rest of that guy’s comments were just really poorly thought out. OpenAI had to temporarily stop sign ups because of demand and somehow he thinks that’s a bad thing? Absurd.

That guy has no sense of time, of how fast this stuff has actually been moving.

AlexanderDhoore
3 replies
8h58m

"That guy" has a pretty good idea when it comes to NLP

https://arxiv.org/abs/1801.06146

Lacerda69
2 replies
7h24m

expertise in one area often leads people to believe they are experts in everything else too

baobabKoodaa
1 replies
4h56m

funny, that's exactly what they told him when he started doing Kaggle competitions, and then he ended up crushing the competition, beating all the domain specific experts

joey_bob
0 replies
3h20m

This is comparing a foot to a mile

ignoramous
0 replies
9h11m

I mean, let's not jump to conclusions. Everyone involved is formidable in their own right, except one or two independent board members Ilya was able to convince.

tarruda
2 replies
7h38m

Did he say GPT-4 API costs OpenAI $3/token?

invalidusernam3
0 replies
7h10m

He was saying that if OpenAI were to spend $100 billion on training, it would cost $3 a token. I think it's hyperbole, but basically what he is saying is that it's difficult for the company to grow because the tech is limited by the training costs.

danpalmer
0 replies
7h15m

No. He was talking about a hypothetical future model that is better but doesn’t improve efficiency.

gardenhedge
0 replies
8h46m

Nonsense really

Sebguer
0 replies
10h1m

This is the reverse of their apparent differences, at least as stated elsewhere in the comments.

dannykwells
69 replies
14h21m

This should be voted higher. Seems like an internal power struggle between the more academic types and the commercially minded side of OpenAI.

I bet Sam goes and founds a company to take on OpenAI…and wins.

thomassmith65
36 replies
13h59m

Yes, and wins with an inferior product. Hooray /s

If the company's 'Chief Scientist' is this unhappy about the direction the CEO is taking the company, maybe there's something to it.

paulddraper
17 replies
11h34m

Maybe.

But Altman has a great track record as CEO.

Hard to imagine he suddenly became a bad CEO. Possible. But unlikely.

csomar
11 replies
11h20m

Where is this coming from? Sam does not have a "great" record as a CEO. In fact, he barely has any record. His fame came from working at YC and then the sky-rocketing of OpenAI. He is great at fundraising though.

paulddraper
10 replies
11h5m

wat

the guy founded and was CEO of a company at 19 that sold for $43m

csomar
7 replies
11h0m

As CEO, Altman raised more than $30 million in venture capital for the company; however, Loopt failed to gain traction with enough users.

It is easy to sell a company for $43m if you raised at least $43m. Granted, we don't know the total amount raised, but it certainly isn't the big success you are describing. That, and I already mentioned that he is good at corporate sales.

plingamp
2 replies
10h26m

According to Crunchbase, Loopt raised $39.1M.

grumple
1 replies
6h5m

How many years did it take to go from 39 million to 43 million in value? Would've been better off in bonds, perhaps.

This isn't a success story, it's a redistribution of wealth from investors to the founders.

hamburglar
0 replies
3h5m

Ah, the much-sought-after 1.1X return that VCs really salivate over.

paulddraper
2 replies
10h46m

he is good in corporate sales

Which is a big part of being a great CEO

csomar
0 replies
10h23m

It is a big part of start-up culture and getting seed liquidity. It doesn't make you a great long-term CEO, however.

croes
0 replies
5h26m

A CEO should lead a company not sell it.

gardenhedge
0 replies
1h2m

It is easy to sell a company for $43m if you raised at least $43m

I'm curious - how is this easy?

epolanski
0 replies
5h43m

Loopt was not a successful company; it sold for more or less the same capital it raised.

comte7092
0 replies
10h59m

Ah yes the legendary social networking giant loopt

malthaus
3 replies
10h12m

or alternatively: altman has the ability to leverage his network to fail upwards

let's see if he can pull it off again or goes all-in on his data privacy nightmare / shitcoin double-whammy

6510
2 replies
9h47m

Train an LLM exclusively on HN and make it into a serial killer app generator.

aku286
1 replies
8h54m

This. I would like my serial killer to say some profound shit before he kills me.

fyokdrigd
0 replies
6h47m

"should have rewritten it in rust" bang

baq
0 replies
7h7m

Worldcoin is a great success for sure…!

The dude is quite good at selling dystopian ideas as a path to utopia.

lll-o-lll
13 replies
11h15m

Because the Chief Scientist let ideology overrule pragmatism. There is always a tension between technical and commercial. That’s a battle that should be fought daily, but never completely won.

This looks like a terrible decision, but I suppose we must wait and see.

ytoawwhra92
7 replies
10h15m

OpenAI is a non-profit research organisation.

Its for-profit (capped-profit) subsidiary exists solely to enable competitive compensation for its researchers, ensuring they don't have to worry about the opportunity costs of working at a non-profit.

They have a mutually beneficial relationship with a deep-pocketed partner who can perpetually fund their research in exchange for exclusive rights to commercialize any ground-breaking technology they develop and choose to allow to be commercialized.

Aggressive commercialization is at odds with their raison d'être and they have no need for it to fund their research. For as long as they continue to push forward the state of the art in AI and build ground-breaking technology they can let Microsoft worry about commercialization and product development.

If a CEO is not just distracting but actively hampering an organisation's ability to fulfill its mission then their dismissal is entirely warranted.

fuzztester
1 replies
8h33m

They have a mutually beneficial relationship with a deep-pocketed partner who can perpetually fund their research in exchange for exclusive rights to commercialize any ground-breaking technology they develop and choose to allow to be commercialized.

Isn't this already a conflict of interest, or a clash, with this:

OpenAI is a non-profit research organisation.

?

logifail
0 replies
8h4m

?

"OpenAI is a non-profit artificial intelligence research company"

https://openai.com/blog/introducing-openai

dagaci
1 replies
8h8m

It seems Microsoft was totally blindsided by this event. If true, then Trillion$+ Microsoft will now be scrutinizing the unpredictability and organizational risk associated with being dependent on the "unknown-random" + powerful + passionate Ilya and a board who are vehemently opposed to the trajectory led by Altman. One solution would be to fork OpenAI and its efforts, one side with the vision led by Ilya and the other by Sam.

nprateem
0 replies
1h39m

I don't think you know what intellectual property is.

RuggedPineapple
1 replies
9h25m

Even a non-profit needs to focus on profitability, otherwise it's not going to exist for very long. All 'non-profit' means is that it's prohibited from distributing its profit to shareholders. Ownership of a non-profit doesn't pay you. The non-profit itself still wants, and is trying, to generate more than it spends.

ytoawwhra92
0 replies
9h22m

I addressed that concern in my third paragraph.

mikotodomo
0 replies
2h30m

Yeah! People forget who we're talking about here. They put TONS of research in at an early stage to ensure that illegal thoughts and images cannot be generated by their product. This prevented an entire wave of mental harms against billions of humans that would have been unleashed otherwise if an irresponsible company like Snap were the ones to introduce AI to the world.

satvikpendem
2 replies
10h24m

As long as truly "open" AI wins, as in fully open-source AI, then I'm fine with such a "leadership transition."

aidaman
1 replies
7h52m

this absolutely will not happen, Ilya is against it

baq
0 replies
7h11m

Yeah if you think a misused AGI is like a misused nuclear weapon, you might think it’s a bad idea to share the recipe for either.

pmoriarty
1 replies
2h36m

This looks like a terrible decision

What did Sam Altman personally do that made firing him such a terrible decision?

More to the point, what can't OpenAI do without Altman that they could do with him?

airstrike
0 replies
56m

What did Sam Altman personally do that made firing him such a terrible decision?

Possibly the board instructed "Do A" or "Don't do B" and he went ahead and did do B.

adrr
1 replies
10h58m

Inferior product is better than an unreleased product.

hk__2
0 replies
10h28m

Does ChatGPT look unreleased to you?

quickthrower2
0 replies
10h22m

You can’t win with an inferior product here. Not yet anyway. The utility is in the usefulness of the AI, and we’ve only just gotten to the point where it’s useful enough for daily workflows. This isn’t an ERP-type thing where you outsell your rivals on sales prowess alone. This is more like the iPhone 3 just got released.

dmix
0 replies
12h3m

You're putting a lot of trust in the power of one man, who easily could have the power to influence the three other board members. It's hard to know if this amounts more than a personal feud that escalated and then got wrapped in a pretty bow of "AI safety" and "non-profit vs profits".

quickthrower2
12 replies
10h28m

I bet not (we could bet with play money on manifold.markets; I would bet it down to 10% probability). Because you need the talent, the chips, the IP development, the billions. He could get the money, but the talent is going to be hard unless he has a great narrative.

erhaetherth
7 replies
9h38m

I'll sell my soul for about $600K/yr. Can't say I'm at the top of the AI game but I did graduate with a "concentration in AI" if that counts for anything.

xvector
4 replies
8h58m

That is "normal"/low-end IC6 pay at a tech company, the ML researchers involved here are pulling well into the millions.

fyokdrigd
3 replies
6h44m

your comment is close to dead, even though you're talking about public, open facts.

shows that the demographic here is alienated when it comes to their own compensation market value.

gardenhedge
1 replies
58m

It's definitely alien to me. How do these people get paid so much?

* Uber-geniuses that are better than the rest of us pleb software engineers

* Harder workers than the rest of us

* Rich parents -> expensive school -> elite network -> amazing pay

* Just lucky

BurdensomeCount
0 replies
6m

By being very good. Mostly the Uber-geniuses thing, but I wouldn't call them geniuses. You do have a bit of the harder working, but it's quite minor, and of course sometimes you benefit from being in the right place at the right time (luck). I'd say the elite network is probably the least important, conditional on you having a decent network, which you can get at any top-20 school if you put in the effort (be involved in tech societies etc.)

hcks
0 replies
5h16m

People here love to pretend 100k is an outstanding overpay

JohnFen
1 replies
3h30m

I'll sell my soul for about $600K/yr.

If you're willing to sell your soul, you should at least put a better price on it.

Jensson
0 replies
1h46m

Many sell their souls for $60k/yr; souls aren't that expensive.

maneesh
2 replies
10h20m

Isn't his narrative that he is basically the only person in the world who has already done this?

wly_cdgr
0 replies
10h8m

No, Sutskever and colleagues did it. Sam sold it. Which is a lot, but is not doing it.

rasz
0 replies
9h17m

this being that he bait-and-switched actual scientists into implementing the thing under the guise of a non-profit?

taneq
0 replies
3h32m

"I'll pay you lots of money to build the best AI" is a pretty good narrative.

csomar
10 replies
11h22m

I don't see it. Altman does not seem hacker-minded and likely will end up with an inferior product. This might be what led to this struggle. Sam is more about fundraising and getting the word out there but he should keep out of product decisions.

deepGem
8 replies
9h48m

Brockman is with Sam, which makes them a formidable duo. Should they choose to, they will offer stiff competition to OpenAI but they may not even want to compete.

kazamaloo
5 replies
9h42m

For a company to be as successful as OpenAI, two people won't cut it. OpenAI arguably has the best ML talent at the moment. Talent attracts talent. People come for Sutskever, Karpathy, and the like -- not for Altman or Brockman.

snordgren
1 replies
7h51m

Money also attracts talent. An OpenAI competitor led by the people who led OpenAI to its leading position should be able to raise a lot of money.

jk20
0 replies
4h58m

Money also attracts various "snout in the trough" types who need to get rid of anyone who may challenge them on their abilities or merits.

mv4
0 replies
1h22m

Money attracts talent as well. Altman knows how to raise money.

2018 NYT article: https://www.nytimes.com/2018/04/19/technology/artificial-int...

ignoramous
0 replies
9h7m

Pachocki, Director of Research, just quit: https://news.ycombinator.com/item?id=38316378

Real chance of an exodus, which will be an utter shame.

aidaman
0 replies
7h53m

according to one of the researchers who left, Simon, the engineering piece is more important. and many of their best engineers leading GPT5 and ChatGPT left (Brockman, Pachocki, and Simon)

csomar
1 replies
8h8m

Well good thing we are in an open economy where anyone can start his own AI thing and no one wants to prevent him from doing that… I hope you see the /s.

baq
0 replies
7h15m

Literally ask around for a billion dollars, how hard can it be?

sujayk_33
0 replies
9h0m

Maybe now he'll focus on worldcoin instead?

RyanShook
2 replies
12h43m

The abrupt nature and accusatory tone of the letter make it sound like more was going on than a disagreement. Why not just say, “the board has made the difficult decision to part ways with Altman”?

krick
1 replies
10h34m

Why not just say, “the board has made the difficult decision to part ways with Altman”?

That's hardly any different. Nobody makes a difficult decision without any reason, and it's not like they really explained the reason.

blablablerg
0 replies
7h42m

There is a very big difference between publicly blaming your now ex-CEO for basically lying ("not consistently candid") and a polite parting message based on personal differences or whatever. To attribute direct blame to Sam like this, something severe must have happened. You only do this to your ex-CEO when you are very pissed.

adharmad
1 replies
12h54m

From all accounts, Altman is a smart operator. So the whole story doesn’t make sense. Altman, being the prime mover, doesn’t have sufficient traction with the board to protect his own position and allows a few non-techies to boot him out?

biddit
0 replies
11h5m

Well connected fundraiser - obviously.

But…smart operator? Based on what? What trials has he navigated through that displayed great operational skills? When did he steer a company through a rocky time?

moogly
0 replies
3h56m

Define "wins".

croes
0 replies
5h32m

I bet Sam goes and founds a company to take on OpenAI…and wins.

How? Training sources are much more restricted now.

bitcharmer
0 replies
7h59m

I have no problem with getting rid of people obsessed with profits and shareholder gains. Those MBA types never deliver any value except for the investors.

yieldcrv
8 replies
11h41m

Today’s lesson: keep multiple board seats

None of the tech giants would be where they are today if they didn't ram through unique versions of control

Their boards or shareholders would have ousted every FAANG CEO at less palatable parts of the journey

cedws
3 replies
11h37m

This is a surprising advantage Zuckerberg has in manoeuvring Meta. At least, to my knowledge, he is still effectively dictator.

100ideas
2 replies
11h32m

Dear god, how is that an advantage? Are we all here just rooting for techno-dictator supremacy?

yieldcrv
0 replies
11h24m

it's objectively an advantage in control. if that's a goal, then it's effective at achieving it

the only one inserting bias and emotion into objectivity here is you

manvillej
0 replies
11h19m

since most public companies are owned by multi-billion-dollar hedge funds, they're not exactly pillars of democracy. and since privately owned businesses are a thing, it's really not that big of a deal

100ideas
2 replies
11h34m

This comment is tone-deaf to the unique (and effective? TBD) arrangement between the OpenAI 501(c)(3) board, which serves without compensation, and the company it regulates. Your comment strikes me as not appreciating the unusually civic-minded arrangement, at least superficially, that is enabling the current power play. Maybe read the board's letter more carefully and provide your reaction. You castigate them as “non-techies” - meaning… what?

yieldcrv
0 replies
11h26m

and the lesson the ousted ones learn for their next incarnation is to create organizations that allow for more control and more flexibility in board arrangements. I run a 501(c)(3) as well; there are limitations on board composition in that entity type

nothing tone-deaf about that. they wanted a for-profit, are going to make one now, and won't leave the same vector open

Reread it as not being a comment about OpenAI; it was about the lesson learned by every onlooker and the ousted execs

arthurcolle
0 replies
11h17m

Tone deaf yet holds up under scrutiny

oivey
0 replies
11h1m

Seemingly there is this consensus of board members around a senior executive. It just isn’t the CEO.

painted-now
2 replies
9h0m

Elon Musk was talking about his view on OpenAI and especially the role of Ilya just 8 days ago on the Lex Fridman Podcast.

Listening to it again now, it feels like he might have known what was going on:

https://youtu.be/JN3KPFbWCy8?si=WnCdW45ccDOb3jgb&t=5100

Edit: Especially this part: "It was created as a non-profit open source and now it is a closed-source for maximum profit... Which I think is not good karma... ..."

https://youtu.be/JN3KPFbWCy8?si=WnCdW45ccDOb3jgb&t=5255

Lacerda69
1 replies
7h14m

Musk is just salty he is out of the game

painted-now
0 replies
6h19m

Yeah, but I find his expression and pause after the "bad karma" sentence quite interesting in this new context

hitradostava
0 replies
8h42m

This video dropped 2 weeks ago: https://www.youtube.com/watch?v=9iqn1HhFJ6c

Ilya clearly has a different approach to Sam

gordon_freeman
101 replies
11h43m

From NYT article [1] and Greg's tweet [2]

"In a post to X Friday evening, Mr. Brockman said that he and Mr. Altman had no warning of the board’s decision. “Sam and I are shocked and saddened by what the board did today,” he wrote. “We too are still trying to figure out exactly what happened.”

Mr. Altman was asked to join a video meeting with the board at noon on Friday and was immediately fired, according to Mr. Brockman. Mr. Brockman said that even though he was the chairman of the board, he was not part of this board meeting.

He said that the board informed him of Mr. Altman’s ouster minutes later. Around the same time, the board published a blog post."

[1] https://www.nytimes.com/2023/11/17/technology/openai-sam-alt...

[2] https://twitter.com/gdb/status/1725736242137182594

cedws
83 replies
11h27m

So they didn't even give Altman a chance to defend himself for supposedly lying (inconsistent candour as they put it.) Wow.

somenameforme
75 replies
11h2m

Another source [1] claims: "A knowledgeable source said the board struggle reflected a cultural clash at the organization, with Altman and Brockman focused on commercialization and Sutskever and his allies focused on the original non-profit mission of OpenAI."

[1] - https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...

krzyk
21 replies
9h37m

So it looks like they did something good.

konschubert
19 replies
7h11m

If you want AI to fail, then yes.

killerstorm
11 replies
4h44m

Yeah, AI will totally fail if people don't ship untested crap at breakneck speed.

Shipping untested crap is the only known way to develop technology. Your AI assistant hallucinates? Amazing. We gotta bring more chaos to the world, the world is not chaotic enough!!

criley2
4 replies
3h4m

All AI and all humanity hallucinates, and AI that doesn't hallucinate will functionally obsolete human intelligence. Be careful what you wish for, as humans are biologically incapable of not "hallucinating".

killerstorm
0 replies
1h32m

GPT is better than an average human at coding. GPT is worse than an average human at recognizing bounds of its knowledge (i.e. it doesn't know that it doesn't know).

Is it fundamental? I don't think so. GPT was trained largely on random internet crap. One of popular datasets is literally called The Pile.

If you just use The Pile as a training dataset, AI will learn very little reasoning, but it will learn to make some plausible shit up, because that's the training objective. Literally. It's trained to guess the Pile.

Is that the only way to train an AI? No. E.g. check "Textbooks Are All You Need" paper: https://arxiv.org/abs/2306.11644 A small model trained on high-quality dataset can beat much bigger models at code generation.

So why are you so eager to use a low-quality AI trained on crap? Can't you wait few years until they develop better products?

howrar
0 replies
2h2m

Well, that's the goal isn't it? Having AI take over everything that needs doing so that we can focus on doing things we want to do instead.

hgomersall
0 replies
2h32m

Without supposing we're on this trajectory, humans no longer needing to focus on being productive is how we might be able to focus on being better humans.

dudeinjapan
0 replies
2h2m

Some humans hallucinate more than others

kolinko
3 replies
3h10m

You weren't around when Web 2.0 and the whole modern internet arrived, were you? You know, all the sites that you consider stable and robust now (Google, YT and everything else) shipped with a Beta sign plastered onto them.

killerstorm
1 replies
1h52m

I first got internet access in 1999, IIRC.

Web sites were quite stable back then. Not really much less stable than they are now. E.g. Twitter now has more issues than web sites I used often back in the 2000s.

They had a "beta" sign because they had much higher quality standards. They warned users that things were not perfect. Now people just accept that software is half-broken, and there's no need for beta signs - there's no expectation of quality.

Also, being down is one thing; sending random crap to a user is completely another. E.g. consider web mail: if it is down for one hour, it's kinda OK. If it shows you random crap instead of your email, or sends your email to the wrong person, that would be very much not OK, and that's the sort of issue that OpenAI is having now. Nobody complains that it's down sometimes, but it returns erroneous answers.

dcow
0 replies
1h18m

But it’s not supposed to ship totally “correct” answers. It is supposed to predict which text is most likely to follow the prompt. It does that correctly, whether the answer is factually correct or not.

chpatrick
0 replies
2h56m

You could also say that shipping social media algorithms with unknown effects on society as a whole is why we're in such a state right now. Maybe we should be more careful next time around.

konschubert
1 replies
4h44m

Yea, change is bad.

FeepingCreature
0 replies
3h11m

Numerically, most change is bad.

uoaei
6 replies
6h13m

Melodrama has no place in the AI utopia.

mcpackieh
5 replies
5h21m

The only thing utopian ideologies are good for is finding 'justifications' for murder. The "AI utopia" will be no different. De-radicalize yourself while you still can.

concordDance
3 replies
2h52m

The only thing utopian ideologies are good for is finding 'justifications' for murder.

This seems more like your personal definition of "utopian ideology" than an actual observation of the world we live in.

ddj231
2 replies
1h4m

It seems like an observation to me. Let’s take the Marxist utopian ideology. It led to 40-60 million dead in the Soviet Union (The Gulag Archipelago is an eye-opening read) and 40-80 million dead in Mao Zedong’s China. It’s hard to even wrap my mind around that number of people dead.

Then there's a smaller example in Matthias's cult, from the “Kingdom of Matthias” book. It started around the same time as Mormonism and led to a murder. Or the Peoples Temple cult with 909 dead in a mass suicide. The communal aspects of these give away their “utopian ideology”.

I’d like to hear where you’re coming from. I have a Christian worldview, so when I look at these movements it seems they have an obvious presupposition about human nature (that with the right systems in place people will act perfectly — so it is the systems that are flawed, not the people themselves). Utopia is inherently religious, and I’d say it is the human desire to have heaven on earth — but gone about in the wrong ways. Because humans are flawed, no economic system or communal living can in itself bring about the utopian ideal.

ethbr1
0 replies
15m

Kibbutz?

concordDance
0 replies
6m

You could also credit Marxism for workers' rights.

Claiming that utopian ideologies NEVER do any good in the world would require some very careful boundary drawing.

passwordoops
0 replies
3h23m

Utopian ideologies are also useful when raising funds from SoftBank and ARK

flappyeagle
0 replies
1h50m

Yes. They freed Sam and Greg from their shackles and gave a clear indicator that OAI engineers should jump ship into their new venture. We all win.

jdthedisciple
16 replies
6h0m

What a hypocritical board, firing them after massive commercial success!

Classic virtue signalling for the sake of personal power gains, as so often happens.

CPLX
15 replies
5h56m

What’s hypocritical about a non-profit firing a leader who wanted lots of profits?

jdthedisciple
14 replies
5h43m

Didn't think I'd need to explain this:

The hypocritical part is doing so right AFTER beginning to take off commercially.

An honorable board with backbone would have done so at the first inkling of commercialization instead (which would have been 1-2 years ago).

Maybe you can find a better word for me but the point should be easily gotten ...

fugalfervor
9 replies
4h43m

OpenAI hasn't made billions in profits. Their operating costs are huge and I'm pretty sure they're heavily reliant on outside funding.

jdthedisciple
8 replies
4h40m

Which puts into question the whole non-profitness anyway, but that aside:

They have still been operating pretty much like a for-profit for years now so my point still stands.

fugalfervor
7 replies
4h22m

Your point hinged on billions in profit, which you just made up, or assumed to be true for some reason. I don't think any of your points stand. Don't use facts you haven't checked as preconditions for points you want to make.

jdthedisciple
6 replies
4h10m

Conceded, no billions in profit.

Now let's watch as the amazing non-profit champions from the board cut all revenue from GPT-based services by offering them for free because they so truthfully believe in their words. I'm waiting.

fugalfervor
2 replies
4h1m

Again, conjecture with no supporting evidence.

jdthedisciple
1 replies
2h1m

Not sure what you're trying to say.

Clearly, under Altman, OpenAI has been massively successful one way or another, correct?

Now they boot him and claim moral superiority? Really?

CPLX
0 replies
1h46m

I mean, as far as I know the guy hasn't written a single line of code.

bee_rider
1 replies
1h13m

A non-profit doesn’t have to offer their services for free, they can cover their expenses.

A profit driven company will often offer their services below cost in order to chase away the competition and capture users.

jdthedisciple
0 replies
38m

Right.

Which is why the board's accusations against Sam are a farce as far as we can tell.

ai-sample
0 replies
36m

To think that "Non-Profit" means "Free" is pretty naive. There are operating costs to maintain millions of users. That doesn't mean they are trying to profit.

layer8
2 replies
4h41m

Three other board members stepped down this year. It might not have been possible before.

jdthedisciple
1 replies
4h39m

Ofc it's "not possible" in that it may incur personal costs.

But it's the honorable thing to do if you truly believe in something.

Otherwise it's just virtue signalling.

jprete
0 replies
3h52m

No, they may literally have not had the votes.

diognesofsinope
0 replies
4h55m

Almost more of a "takeover" by the board after it's successful lol

wheelerof4te
11 replies
7h2m

I'm confused. It's usually the other way around; the good guy is ousted because he is hindering the company's pursuit of profit.

airstrike
5 replies
4h58m

In this case, the company is a non-profit, so it is indeed the other way around

colonelsherburn
4 replies
2h34m

cthalupa
2 replies
2h4m

It is not that simple. https://openai.com/our-structure

The board is for the non-profit that ultimately owns and totally controls the for-profit company.

Everyone that works for or invests in the for-profit company has to sign an operating agreement stating that the for-profit actually does not have any responsibility to generate profit and that its primary duty is to fulfill the charter and mission of the non-profit.

andai
1 replies
1h3m

Then what's the point of the for-profit?

chimeracoder
0 replies
51m

Then what's the point of the for-profit?

To allow OpenAI to raise venture capital, which allows them to exchange equity for money (ie, distribute [future] rights to profit to shareholders)

Cacti
0 replies
1h46m

If you don’t know anything, why are you posting

lolinder
4 replies
2h37m

This time he was ousted because he was hindering the pursuit of the company's non-profit mission. We've been harping on the non-openness of OpenAI for a while now, and it sounds like the board finally had enough.

qualifiedai
2 replies
1h58m

Those people aren't about openness. They seem to be members of the "AI will kill us all" cult.

The real path to AI safety is regulating applications, not fundamental research, and making fundamental research very open (which they are against).

taway1237
0 replies
3m

I want you to be right. But why do you think you're more qualified to say how to make AI safe than the board of a world-leading AI nonprofit?

bnralt
0 replies
1h19m

That's what it's looking like to me. It's going to be as beneficial to society as putting Greenpeace in charge of the development of nuclear power.

The singularity folks have been continuously wrong in their predictions. A decade ago, they were arguing the labor market wouldn't recover because the reason for unemployment was robots taking our jobs. It's unnerving to see that these people are gaining some traction while actively working against technological progress.

hutzlibu
0 replies
2h4m

"This time he was ousted because he was hindering the pursuit of the company's non-profit mission. "

This is what is being said. But I am not so sure the real reasons discussed behind closed doors are the same. We will find out if OpenAI does indeed open itself up more; until then I remain sceptical, because a lot of power and money is at stake here.

alvis
8 replies
8h39m

It's hard to believe that Altman was fired over his stance on commercialisation.

iLoveOncall
6 replies
7h47m

The fact that the press release is 50% dedicated to repeating that OpenAI is supposed to be a non-profit and help all of humanity isn't enough for you to believe this is the reason?

ps256
5 replies
6h12m

The abruptness of the firing, and the fact that they cited his lying to the board as the reason, is why I don't believe this is over a general disagreement on direction.

jjallen
4 replies
3h49m

They have to say the reason is a fireable offense or he can sue them. Or he will be more likely to win if he does.

ps256
3 replies
3h19m

It's exactly the other way around - if they dismiss him for a vague general reason, they're much less exposed to litigation than they would be if they falsely accused him of lying.

ImPostingOnHN
2 replies
1h59m

You are 100% correct here, which is how we can reasonably conclude that the accusations were not false.

ps256
1 replies
1h30m

If the accusations by the board are true, that doesn't explain why Brockman and a few of the senior researchers quit as a response to all of this.

watwut
0 replies
46m

Them leaving does not imply the accusations are false. They may like him, they may dislike the new boss regardless of the accusations, or they may dislike the overall future direction. They may also think they would be fired sometime later regardless.

wheelerof4te
0 replies
6h58m

As another comment below mentioned, Elon Musk hinted at this in his interview with Lex Fridman.

Specifically, he mentioned that OpenAI is supposed to be open source and non-profit. Pursuing profit and making it closed-source brings "bad karma".

epolanski
2 replies
6h6m

The commercial shift started quite some time ago, so what's the point of firing them now?

And why such controversial wording around Altman?

Why fire Brockman too?

ayoubd
1 replies
6h3m

Brockman quit, he wasn’t fired.

vertis
0 replies
3h41m

He was removed from one of his roles (chairman) and quit the other (president) if I understand correctly.

101011
2 replies
10h1m

TY for sharing. I found this to be very enlightening, especially when reading more about the board members that were part of the ouster.

One of the board members who fired him co-signed these AI principles (https://futureoflife.org/open-letter/ai-principles/), which are very much in line with safeguarding general intelligence.

Another of them wrote this article (https://www.foreignaffairs.com/china/illusion-chinas-ai-prow...) in June of this year that opens by quoting Sam Altman saying US regulation will "slow down American industry in such a way that China or somebody else makes faster progress” and basically debunks that stance...and quite well, I might add.

larrydag
0 replies
2h27m

This seems to make more sense. Perhaps it has to do with OpenAI not being "open" anymore. Not supporting and getting rid of the OpenAI Gym was certainly a big change in direction for the company.

dmix
0 replies
53m

So the argument against AI regulations crippling R&D is that China is currently far behind and also faces its own weird government pressures? That's a big gamble: applying very long-term regulations (as they always end up being) to a short-term window, betting on the predictions of a non-technical board member.

There's far more to the world than China, and importantly, developments happen both inside and outside the scope of regulatory oversight (usually only heavily commercialized products face scrutiny). China itself will eventually catch up to the average; progress is rarely a non-stop hockey stick, it plateaus. LLMs might already be hitting a wall (https://twitter.com/HamelHusain/status/1725655686913392933).

The Chinese are experts at copying and stealing Western tech. They don't have to be on the frontier to catch up to a crippled US and then continue development at a faster pace, and as we've seen repeatedly in history, regulations stick around for decades after their utility has long passed. They are not levers that go up and down; they go in one direction, and maybe after many years of damage they might be adjusted, but usually only after ten starts and stops and half-baked non-solutions papered on as real solutions, if at all.

szundi
1 replies
10h11m

Why can't they use money from the for-profit to fund the non-profit work again once others have caught up? The only moat seems to be the research time invested.

ben_w
0 replies
8h8m

Many believe that race dynamics are bad, so have the goal of going as slowly and carefully as possible.

The split between e/acc (gotta go fast) and friendly AI/Coherent Extrapolated Volition (slow and cautious) is the first time in my life I've come down on the (small-c) conservative side of a split. I don't know if that's because I'm just getting older and more risk averse.

sekai
1 replies
9h20m

Microsoft won't be happy about this

belter
0 replies
4h20m

What is bad for Microsoft is good for the world.

lrvick
1 replies
7h8m

If true, this gives me hope the Open can return to OpenAI.

layer8
0 replies
4h47m

Given the board members’ focus on safety, maybe not that likely.

jacooper
0 replies
10h11m

Yeah, I thought that was the most probable reason, especially since these people don't have any equity, so they have no interest in the commercial growth of the org.

Apparently Microsoft was also blindsided by this.

https://www.axios.com/2023/11/17/microsoft-openai-sam-altman...

carlossouza
0 replies
6h26m

This is not a story about AI.

It's a story about greed, vanity, and envy.

Impossible to be more human than that.

ChatGTP
0 replies
3h51m

"Sutskever and his allies focused on the original non-profit mission of OpenAI."

Seems reasonable. I mean, that's why Sutskever joined in the first place?

hef19898
3 replies
4h12m

Why would employees be consulted before being fired?

crazygringo
1 replies
1h22m

Because board members are not employees, or not just employees. They're part of the democratic governance of an organization.

The same way there's a big difference between firing a government employee and expulsion of a member of Congress.

hef19898
0 replies
8m

Wow, that is actually the first time I've heard someone use democracy and corporation together unironically...

In a sense, board members have even less protection than rank and file. So no, nothing special is happening at OpenAI other than a founder CEO being squeezed out, not the first nor the last one. And personal feelings never factor into that kind of decision.

joshlegs
0 replies
3h2m

would be hilarious if Altman was directly hired by Microsoft to head their AI teams now.

belter
1 replies
6h56m

I am going to go out on a limb here and speculate... This was because of the Microsoft CEO's surprise party-crashing at OpenAI's first Developer Conference...

renegade-otter
0 replies
10m

Kara Swisher was told the dev conference was "an inflection point", so it's not that speculative.

layer8
0 replies
4h38m

He may have had ample chance before.

waihtis
15 replies
9h58m

power hijack by the doomers. too bad the cat is out of the bag already

fsloth
14 replies
9h38m

Quite possible actually. This seems to be becoming a really hot political potato, with at least 3 types of ambition running it: 1. Business, 2. Regulatory, 3. 'Religious/Academic'. By the latter I mean the divide between AI doomerists and others is caused by insubstantiable dogma (doom/nirvana).

waihtis
7 replies
9h26m

this is why you don't bring NGO types into your board, and you especially don't give them power to oust you.

CPLX
3 replies
5h53m

What does “your” board mean in this context? Who’s “your”?

The CEO just works for the organization and the board is their boss.

You’re referencing a founder situation where the CEO is also a founder who also has equity and thus the board also reports to them.

This isn’t that. Altman didn’t own anything, it’s not his company, it’s a non-profit. He just works there. He got fired.

waihtis
2 replies
5h8m

I believe Altman had some ownership; regardless, it's a general lesson about handing over substantial power to laymen who are completely detached from the actual ops & know-how of the company.

airstrike
0 replies
4h55m

Nobody handed over power. Presumably they were appointed to the board to do exactly what they did (if this theory holds), in which case this outcome would be a feature, not a bug.

CPLX
0 replies
4h40m

There’s no such thing as owning a non-profit.

mrmanner
2 replies
9h3m

this is why you don't bring NGO types into your board

OpenAI is an NGO…?

glompers
1 replies
8h16m

That is neither stated nor implied, unless you’re simply making the objection, “But OpenAI _is_ nongovernmental.”

Most readers are aware that they were a research and advocacy organization that became a corporation by creating one (in the sense that public-benefit, tax-free nonprofit groups and charitable foundations normally have no way of granting anyone equity ownership or exclusive rights to their production); the parent comment implies that some of the board members come from NGO-type backgrounds.

emn13
0 replies
6h56m

I'm not sure I understand what you're saying. Perhaps you could point out where your perspective differs from mine? As I see it: OpenAI _is_ a non-profit, though it has an LLC it wholly controls that doesn't have non-profit status. It never "became" for-profit (IANAL, but is that even possible? It seems like it shouldn't be); the only thing that happened is that the LLC was allowed to collect some "profit", but that in turn would go to its owners, primarily the non-profit. As far as I'm aware, the board in question that went through this purge _was_ the non-profit's board (does the LLC even have a board?).

From the non-profit's perspective, it sounds pretty reasonable to self-police and ensure there aren't any rogue parts of the organization going off and working at odds with the non-profit's formal aims. It's always been weird that the OpenAI LLC seemed so commercially focused even when that might conflict with its sole controller's interests; notably, the LLC very explicitly warned investors that the NGO's mission took precedence over profit.

concordDance
5 replies
7h15m

insubstantiable dogma (doom/nirvana)

What do you mean by this? Looks like you're just throwing out a diss on the doomer position (most doomers don't think near future LLMs are concerning).

fsloth
4 replies
4h46m

Neither AI fears nor the singularity are substantiated. Hence the discussion is a matter of taste and opinion, not of facts. They will only be substantiated once one or the other comes to fruition. The fact that it's a matter of taste and opinion only makes the discussion that much more heated.

concordDance
2 replies
3h4m

Wouldn't this put AI doomerism in the same category as nuclear war doomerism? E.g. a thing that many experts think logically could happen and would be very bad but hasn't happened yet?

jknoepfler
0 replies
2h4m

I'm unaware of an empirical demonstration of the feasibility of the singularity hypothesis. Annihilation by nuclear or biological warfare on the other hand, we have ample empirical pretext for.

We have ample empirical pretext to worry about things like AI ethics, automated trading going off the rails and causing major market disruptions, transparency around use of algorithms in legal/medical/financial/etc. decision-making, oligopolies on AI resources, etc.... those are demonstrably real, but also obviously very different in kind from generalized AI doomsday.

fsloth
0 replies
1h5m

That's an excellent example of why AI doomerism is bogus in a way that nuclear war fears weren't.

Nuclear war had a very simple mechanistic concept behind it.

Both sides develop nukes (proven tech), put them on ballistic missiles (proven tech). Something goes politically sideways and things escalate (just like in WW1). Firepower levels cities and results in tens of millions dead (just like in WW2, again proven).

Nuclear war experts were actually experts in a system whose outcome you could compute to a very high degree.

There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.

You can already trivially load up a car with explosives, drive it to a nearby large building, and cause massive damage and injury.

Yes, it's plausible a lone genius could manufacture something horrible in their garage and let rip. But this is in the domain of 'fictional what-ifs'.

Nobody factors in that, in the presence of such a high-quality AI ecosystem, the opposing force probably has AI systems of their own to help counter the threat (megaplague? Quickly synthesize a mega-vaccine and just print it out at your local health center's biofab. Megabomb? Possible even today, but that's why stuff like uranium is tightly controlled. Etc.). I hope everyone realizes all the latter examples are fictional fearmongering without any basis in known cases.

AI would be such a boon for the whole of humanity that shackling it is absolutely silly. That said, there is no evidence of a deus ex machina happy ending either. My position is: let researchers research, and once something substantial turns up, then engage the policy wonks, once solid mechanistic principles can be referred to.

JohnFen
0 replies
3h45m

In my opinion, if either extreme turns out to be correct it will be a disaster for everyone on the planet. I also think that neither extreme is correct.

__jambo
0 replies
4h56m

Sam's sad face in the NYT article is pretty priceless.

GreedClarifies
46 replies
14h18m

I have a theory.

Ilya has always seemed like he was idealistic and I'm guessing that he was the reason for OpenAI's very strange structure. Ilya is the man when it comes to AI, so people put up with his foolishness. Adam D'Angelo is, like Ilya, an amazing computer science talent who may have shared Ilya's idealistic notions (in particular that OpenAI is a non-profit, unless forced to be capped-profit, and is categorically not in the business of making money or selling itself to MSFT or any entity). "Helen" and "Tasha" are comically out of their depth and are loony toons, and simply decided some time ago to follow Ilya.

Sam got the call from MSFT to sell, and MSFT really ponied up ($300B?). The inference costs for OpenAI are/were staggering and they needed to sell (or get a large influx of capital, which was in the works). This ran counter to Ilya's idealistic notions. Sam attempted to negotiate with Ilya and the loony toons; a vote was called and they lost, hard.

I think this tracks with all the data we have.

There are a couple of other scenarios that track given OpenAI’s comically poor board composition, but I think the one above is the most plausible.

If this did happen then OpenAI is in for a hard future. Imagine you worked at OpenAI and you just found out that your shares could have been worth a tremendous amount and now their future is, at best, uncertain. There will be some true believers who won't care, but many (most?) will be appalled.

Let this be a lesson, don’t have a wacky ownership structure and wacky board when you have (perhaps) the most valuable product in the world.

ivraatiems
16 replies
13h49m

"Wacky" is an interesting way to describe "non-profit" or, in this case, "not purely profit motivated."

It's not "wacky" to have goals other than the accumulation of capital. In fact, given the purpose of OpenAI, I think it's meritorious.

I'd personally prefer we just not work on AGI at all, but I'd rather a non-profit dedicated to safe AI do it than a for-profit company dedicated to returns for shareholders.

Let this be a lesson, don’t have a wacky ownership structure and wacky board when you have (perhaps) the most valuable product in the world.

I think the lesson is just the opposite: If you want to work according to your ideals, and not simply for money, you should absolutely do whatever 'wacky' thing protects that.

jerbearito
7 replies
13h23m

I read their comment a few times — I don't see where they said being "non-profit" makes you "wacky".

suriyaG
1 replies
12h57m

Apart from the things that my sibling comments are saying, OP just said

so people put up with his foolishness.

about Ilya. OP implied that having ideals == being foolish, which is about as close as it gets to calling a non-profit wacky.

ivraatiems
0 replies
12h34m

It's especially ironic given that in this case, the idealists got their way.

rand846633
1 replies
13h8m

The comment definitely said that if your structure prevents you from selling to the highest bidder, your structure is wacky.

aardvarkr
0 replies
12h49m

If you mean it prevents you from raising capital by selling a percent of the company then yes, that’s pretty wacky. Profitable companies can afford to be idealistic. Others that are burning billions don’t have that luxury.

ivraatiems
0 replies
13h18m

...what is it you thought they were saying? Because the "wacky structure" they refer to is the non-profit structure of the organization.

dclowd9901
0 replies
12h18m

May need to read it a few more times. Apart from the comment being written by someone who likes to identify themselves as “GreedClarifies”, it doesn’t take more than 4th grade reading comprehension to put two and two together.

- immediately calls their structure “strange” thanks to Ilya’s “idealism”.

- immediately calls him the “man” for his talents but a fool for what other than his ideals

- also labels Helen and Tasha (in quotes for some reason) as fools

- labels the board as “comically poor” for no other reason than they disagree with Sam’s supposed profit motive

Do we really need to draw a diagram here? It seems like you yourself may be out of your depth when it comes to reading comprehension.

DoingIsLearning
0 replies
12h38m

HN has an edit function for the first hour.

It is an honour based system to clarify what you edited if it goes beyond typos/grammar.

Most probably GP used stronger words and then edited.

chimeracoder
4 replies
13h17m

"Wacky" is an interesting way to describe "non-profit" or, in this case, "not purely profit motivated."

I'd personally prefer we just not work on AGI at all, but I'd rather a non-profit dedicated to safe AI do it than a for-profit company dedicated to returns for shareholders.

You seem to be under the impression that OpenAI is a nonprofit. For the most part, it's not: it was founded as a non-profit, but it subsequently restructured into a for-profit company with the nonprofit owned under the same umbrella company. This is indeed an unusual corporate structure.

That's likely what OP is referring to as "wacky".

Intralexical
2 replies
12h59m

it subsequently restructured into a for-profit company with the nonprofit owned under the same umbrella company.

Wikipedia says the for-profit part is owned by the non-profit, not under the same umbrella company.

Mozilla Foundation/Corporation does this too IIRC. It's what allows them to pursue serious revenue streams with the for-profit part, while still steering their mission with the non-profit in charge, as long as they keep a separation in some kinds of revenue terms.

EDIT after 56 minutes: Hell, even IKEA does this type of ownership structure. So it's quite cool, but probably not all that "wacky" as far as enterprises that want to be socially responsible go.

r7r8f73j
1 replies
12h2m

Serious revenue streams like having Google for a patron, yes? I feel like the context is important here because people are trying to defend OpenAI's structure as somehow well considered and definitely not naively idealistic. Which is great and possible in theory, but in reality it seems to end up with situations exactly like Firefox, where the product that is somehow supposed to be better/liberating/more ethical/whatever virtue is in fashion/etc. is ultimately only sustainable because of a patron who doesn't share that same idealism.

Intralexical
0 replies
11h28m

Ah, I edited my comment right as you were writing yours.

Serious revenue streams like having Google for a patron yes? I feel like the context is important here because […]

For that specific example, Mozilla did also go with Yahoo for as-good revenue for a couple of years IIRC, and they are also able to (at least try to) branch out with their VPN, Pocket, etc. The Google situation is more a product of simply existing as an Internet-dependent company in the modern age, combined with some bad business decisions by the Mozilla Corpo, that would have been the case regardless of their ownership structure.

Which is great and possible in theory, but […] is ultimately only sustainable because of a patron who doesn't share in exemplifying that same idealism.

The for-profit-owned-by-nonprofit model works, but as with most things it tends to work better if you're in a market that isn't dominated by a small handful of monopolies which actively punish prosocial behaviour:

https://en.wikipedia.org/wiki/Stichting_IKEA_Foundation

https://foundation.mozilla.org/en/what-we-fund/

people are trying to defend OpenAI's structure as somehow well considered and definitely not naively idealistic.

Ultimately I'm not sure what the point you're trying to argue is.

The structure's obviously not perfect, but the most probable alternatives are to either (1) have a single for-profit that just straight-up doesn't care about anything other than greed, or (2) have a single non-profit that has to rely entirely on donations without any serious commercial power, both of which would obviously be worse scenarios.

They're still beholden to market forces like everybody else, but a couple hundred million dollars in charity every year, plus a couple billion-dollar companies that at least try to do the right thing within the limits of their power, is obviously still better than not.

ivraatiems
0 replies
13h3m

The operations of the for-profit are subservient to those of the non-profit; the board of the non-profit controls all operations of the for-profit. They're not an "umbrella company" - while technically they are two different organizations run by the same board, one is controlled by the goals of the other. See https://openai.com/our-structure.

cscurmudgeon
2 replies
13h22m

Non-profit doesn't automatically mean virtuous.

For-profit doesn't automatically mean non-virtuous.

There are non-wacky non-profits.

addicted
1 replies
12h47m

and there are wacky for profits.

The OP was clearly implying not being solely focused on getting the highest bid is loony and wacky.

Which may be true, but let’s not pretend that’s not what they’re saying.

cscurmudgeon
0 replies
12h21m

Awesome that you can read minds!

Let this be a lesson, don’t have a wacky ownership structure and wacky board when you have (perhaps) the most valuable product in the world.
it_citizen
4 replies
14h1m

Let this be a lesson, don’t have a wacky ownership structure and wacky board when you have (perhaps) the most valuable product in the world.

If your goal is to make money. I'd like to believe that for some of the people pushing the field forward, there are other motivations.

nearpoint
3 replies
13h26m

If you push the field forward and you are not making the money, then someone else is making it off your work. If you push the field forward in a valuable way, you should receive that value in the form of money, so that you can decide how to allocate that capital to make even further contributions.

jltsiren
1 replies
13h17m

If you push a field forward, you are usually not making the money. You most likely have the wrong skills and the wrong priorities, and you are probably too far ahead of the rest of the world. The people coming in the next wave will usually be the ones making the money.

If you can't accept that, focus more on making money and less on developing something new.

abraae
0 replies
13h12m

As they say, you can spot the pioneers because they're the ones with arrows in their backs.

aardvarkr
0 replies
12h42m

Exactly, just because it's a non-profit doesn't mean it has to pretend money doesn't exist. It means profit is instead put back into making further advancements in the field rather than being paid out to shareholders.

SkyPuncher
3 replies
13h28m

Imagine you worked at OpenAI and you just found out that your shares could have been worth a tremendous amount and now their future is, at best, uncertain.

I don't think this is as damning as you think.

I truly believe, especially in this space, there are enough idealists to fill the seats. The reality for a lot of people could quite literally be:

* My shares become massively valuable via some unlikely non-profit to for-profit means. I have generational wealth, but all of my friends and colleagues still need to work. Someone else will create something better and screw the entire world over.

* I work for a non-profit that's creating the most radical, life-changing software for all people. Being a non-profit means this company can focus on being the best thing possible for humanity. While I may still have to work, I will be working in a world where everything is happier and more prosperous.

dclowd9901
1 replies
12h8m

“ A public benefit corporation cannot distribute profits, gains, or dividends to any person.”

I don’t know a thing about corporate structuring so forgive my ignorance here, but even if they are “non-profit”, can’t they still take very high pay? Can’t they still produce and sell products? They just can’t sell shares or dividend out profits, right?

SkyPuncher
0 replies
1h16m

Correct. They can even distribute bonuses.

fy20
0 replies
12h13m

I agree - they are not going to have any issues.

First of all, being a non-profit gives them a hell of a lot of flexibility in terms of how they work. They don't need to show growth to shareholders, so they aren't just taking in young developers, working them to the bone for a couple of years, then spitting them out.

And even if they are (for example) only paying $300k TC instead of $250k base + $250k stock at Meta, as you say, there are still going to be engineers who believe in the mission and want work more meaningful than just selling ad clicks.

Mentlo
3 replies
12h57m

This post raised red flags for me as soon as Helen and Tasha were put in quotes, as opposed to Ilya and Adam.

Is it being non-comp-sci that automatically invalidates proper usage of your actual name? Or is there another key by which their names are worth less?

They are also both fairly established in their respective fields - which, yes, isn't hard comp-sci, but if you think tech companies should have purely comp-sci board leads, I'd call that incredibly naive.

They were also presumably vetted by the other board members - unless you think they are serving a different purpose on the board (diversity targets?) - which, if so, puts the comment from red-flag into misogynist territory.

Personally I don't see anything in their CVs that would disqualify them from executing their function on the board, and I wouldn't call them incompetent at assessing whether a person lied or not (which, even in your theory, Sam would've done). You don't need to be an ML/AI expert for that.

aardvarkr
2 replies
12h46m

I think they meant the quotes to imply they are puppets of Ilya and Adam

dclowd9901
0 replies
12h13m

If that’s what GP meant, they are incredibly naive about how it would have been interpreted.

Given the coherence of their post, I’d say they knew _exactly_ how they were using those quotes.

Mentlo
0 replies
12h30m

I don’t see how that warrants use of quotes but I agree that is what the OP probably thinks given he called them loony tunes.

The point was fairly clear even if he had just used their names; the usage of quotes is quite bizarre.

If he wanted to get that point across he should have called them "board members", which is a clear insinuation of puppetry.

atleastoptimal
2 replies
12h29m

Do you really think Sam would want to sell to Microsoft? As far as we know he's idealistic too. They were planning on raising again this year at a higher valuation, which would be more likely after dev day.

ekianjo
1 replies
12h2m

How is Sam an idealist when he refuses to release any model even though the company is called OpenAI?

atleastoptimal
0 replies
11h28m

Idealism isn't exact. It could be idealism under the framework of

closed commercial models => money => growth => faster to AGI in the right hands (theirs)

If they believe they are the best to control AGI

postalrat
1 replies
12h51m

You just made up your own fantasy then ended it with "let this be a lesson".

SantalBlush
0 replies
12h5m

People are getting high from their own hot takes.

costcofries
1 replies
12h35m

Good theory. Sam will come lead AI at MSFT.

cedws
0 replies
12h22m

Unlikely to happen for contractual reasons.

bozhark
1 replies
14h7m

Maybe being wacky was what led to them having the most valuable product in the world?

GreedClarifies
0 replies
13h58m

I’m guessing that Ilya wouldn’t have joined without the wacky structure, and I’ll wager that there are some people at the company of similar values. Maybe they could not have pulled off ChatGPT without them. Maybe.

But I bet that they have a ton of very talented people whose values are more … common.

Ilya may be a singular talent, however.

strikelaserclaw
0 replies
13h36m

At least let some idealism exist in this world, otherwise we'd all be owned by corporations. It is not always about money.

landryraccoon
0 replies
12h44m

Wait a second. Even if your story is right, how are you so confident that selling out is the right play?

OpenAI is an early mover in a hot field with no real competition yet. If they want to take a shot at a trillion dollar market cap and become the next Apple what of it?

What if they shoot the moon? Is it really that unlikely?

goertzen
0 replies
14h7m

This wasn’t obvious, but it seems likely when you put it that way.

Unlike other iconic company/founder origin stories, OpenAI really felt like they had hit a special team dynamic that was on the verge of something equally special.

In light of this OpenAI still feels like they will be a relevant player, but I’ll be expecting more from Sam and Greg.

dannykwells
0 replies
13h44m

Agree with this. Internal power struggle between academic types and tech-forward commercialization types.

It might be this is good at the end of the day. OpenAI is just not structured to win.

akira2501
0 replies
13h6m

Imagine you worked at OpenAI and you just found out that your shares could have been worth a tremendous amount and now their future is, at best, uncertain.

Would any of this have been a surprise given all that you've detailed above? What would they have honestly been expecting?

Going the other way.. imagine you worked at a company that put ideals first but then you find out they were just blindly hyping that lie so they could vault themselves into the billionaires club by selling your shared ideals out from underneath you? To, of all players, Microsoft.

when you have (perhaps) the most valuable product in the world.

Maybe the people who work there are a little more grounded than this? Viewed through this lens, perhaps it's extremely ungenerous to refer to any of them as "looney tunes."

GreedClarifies
0 replies
13h11m

Boy, if this is about Ilya being upset that Sam was in the spotlight or about revenue sharing of GPTs … well that’s strange. I would have expected more from such an influential company.

I hope that reporting is wrong.

015a
0 replies
11h47m

Here's a counter-proposal: If you think about it, can't this change actually benefit Microsoft? Sam has an extremely sharp product mind; ChatGPT was his baby, everything from DevDays, the LLM app store; OpenAI was building a product suite that people liked, and in doing so they were undeniably slowing down their ability to iterate into better models and closer to AGI. Microsoft doesn't need OpenAI to build products. They're already doing that, and OpenAI was directly competing with their deployments of OpenAI's technology.

Microsoft needs OpenAI to make fundamental breakthroughs; that's the thing Microsoft spent money on, the technology. Their 49% investment probably won't directly pay off anyway, what with all the layers of corporate governance OpenAI has in place.

I don't want to go so far as to say that it was some grand conspiracy orchestrated by Satya and Ilya in a dark room one night, but their interests are pretty aligned; and that clip that keeps getting shared with Sam asking Satya on stage about their relationship with OpenAI, and Satya dry-laughing and failing to answer for a few seconds... why did Sam ask that? It's a really strange thing to ask on a stage like this. Why did Satya laugh, and take so long to answer? Just weird.

convexstrictly
30 replies
16h1m

Kara Swisher: a “misalignment” of the profit versus nonprofit adherents at the company https://twitter.com/karaswisher/status/1725678074333635028

She also says that there will be many more top employees leaving.

convexstrictly
25 replies
14h3m

Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." Scoop: theinformation.com

https://twitter.com/GaryMarcus/status/1725707548106580255

jojobas
9 replies
11h21m

The moment they lobotomized their flagship AI chatbot into a particular set of political positions the "benefits of all humanity" were out the window.

emodendroket
4 replies
10h16m

One could quite reasonably dispute the notion that being allowed to generate hate speech or whatever furthers the benefits of all humanity.

jojobas
2 replies
9h48m

It happily answers what good Obama did during his presidency but refuses to answer about Trump's, for one. Doesn't say "nothing", just gives you a boilerplate about being an LLM and not taking political positions. How much of hate speech would that be?

jakderrida
0 replies
8h4m

It happily answers what good Obama did

"happily"? wtf?

Arisaka1
0 replies
9h32m

I just asked it, and oddly enough it answered both questions, listing items and adding "It's important to note that opinions on the success and impact of these actions may vary".

I wouldn't say "refuses to answer" for that.

oska
0 replies
6h25m

'Hate speech' is not an objective category, nor can a machine feel hate

lijok
3 replies
11h0m

If they hadn’t done that, would they have been able to get to where they are? Goal oriented teams don’t tend to care about something as inconsequential as this

BoorishBears
2 replies
10h4m

I don't agree with the "noble lie" hypothesis of current AI. That being said, I'm not sure why you're couching it that way: they got where they are because they spent less time than their competitors trying to inject safety at a time when capabilities didn't make it unsafe.

Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient, and now we see OpenAI can't seem to escape that same poison.

lmm
1 replies
8h12m

Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient,

Doubt. When was the last time Google showed they had the ability to execute on anything?

BoorishBears
0 replies
5h54m

My comment: "Google could execute if not for <insert thing they're doing wrong>"

How is your comment doubting that? Do you have an alternative reason, or you think they're executing and mistyped?

wheelerof4te
8 replies
6h46m

That "the most important company in the world" bit is so out of touch with reality.

Imagine the hubris.

bl0rg
5 replies
5h7m

I'd argue they are the closest to AGI (how far off that is no one knows). That would make them a strong contender for the most important company in the world in my book.

wheelerof4te
4 replies
4h47m

AGI without a body is just a glorified chatbot that is dependent on available, human-provided resources.

To create true AGI, you would need to make the software aware of its surroundings and provide it with a way to experience the real world.

bl0rg
2 replies
2h59m

Even if that was true, do you think it would be hard to hook it up to a Boston Dynamics robot and potentially add a few sensors? I reckon that could be done in an afternoon (by humans), or a few seconds (by the AGI). I feel like I'm missing your point.

wheelerof4te
0 replies
1h41m

Well, we don't know how hard it is. But if it hasn't been done yet, it must be much harder than most people think.

If you do manage to make a thinking, working AGI machine, would you call it "a living being"?

No, the machine still needs to have individuality, a way to experience the "oneness" that all living humans (and perhaps animals, we don't know) feel. Some call it "a soul", others "consciousness".

The machine would have to live independently from its creators, to be self-aware, to multiply. Otherwise, it is just a shell filled with random data gathered from the Internet and its surroundings.

Philpax
0 replies
1h1m

It's so incredibly not-difficult that Boston Dynamics themselves already did it https://www.youtube.com/watch?v=djzOBZUFzTw

mritchie712
0 replies
3h57m

vision API is pretty good, have you tried it?

mycologos
1 replies
3h52m

"Most important company in the world" is text from a question somebody (I think the journalist?) asked, not from Sutskever himself.

wheelerof4te
0 replies
3h9m

I know. I was quoting the article piece.

mym1990
4 replies
9h59m

Something that benefits all of humanity in one person's or organization's eyes can still have severely terrible outcomes for sub-sections of humanity.

edgyquant
3 replies
9h55m

No it can't, that's literally a contradictory statement.

midasuni
2 replies
6h46m

The Industrial Revolution had massive positive outcomes for humanity as a whole.

Those who lost their livelihoods and then died did not get those positive outcomes.

bambax
1 replies
5h22m

It could be argued that the Industrial Revolution was the beginning of the end.

For instance, it's still very possible that humanity will eventually destroy itself with atomic bombs (getting more likely every day).

lordfrito
0 replies
3h37m

It could be argued that the Industrial Revolution was the beginning of the end.

"Many were increasingly of the opinion that they’d all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans"

moffkalast
0 replies
6h47m

"He said what about my hair?!"

"..."

"The man's gotta go."

- Sutskever, probably

convexstrictly
1 replies
14h24m

Followup tweet by Kara: Dev day and store were "pushing too fast"!

https://twitter.com/karaswisher/status/1725702612379378120

ssnistfajen
0 replies
9h16m

I thought GPTs were underwhelming but that's hardly worth such a dramatic purge. The rift was definitely something far deeper

kromem
0 replies
15h53m

That seemed to be the gist given the way the board announcement ended by reiterating their original core mission and how their main responsibility was to that mission right after saying that their issue with Altman was interference with their mission.

figassis
0 replies
7h49m

I feel compelled to agree with this. I have no issues with OpenAI as it was under Sam, but they did build OpenAI as a nonprofit, and then made it a for-profit to further that goal. Assuming VC culture took over, when would it be OK to rein that in? In 10 years, when likely all the people that had the power to do this were gone and we were left with something like Google's amnesia about "do no evil"?

mfiguiere
13 replies
10h54m

Ron Conway:

What happened at OpenAI today is a Board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs. It is shocking; it is irresponsible; and it does not do right by Sam & Greg or all the builders in OpenAI.

https://twitter.com/RonConway/status/1725759359748309381

notRobot
7 replies
10h11m

Don't see how they can possibly say that with no context? Why do random baseless opinions need to be regurgitated on HN?

vidarh
6 replies
9h57m

Do you know he has no context, or is this also a random baseless opinion?

In either case Ron Conway is extremely well connected and well known in VC circles, and so his opinion will have weight here whether or not he has sources.

saghm
1 replies
8h25m

"Saying something without context" can also mean "not giving the context for what you're saying". If he has any extra information, he didn't share it, which makes it hard to take at face value. If it turned out that this take was extremely wrong, I can't imagine it would affect him at all (beyond maybe making another equally insignificant tweet), so it's not like he's staking his reputation on this or something.

If someone makes a claim without giving any evidence or potential consequences for being wrong, I think it's pretty safe to ignore until one of those things changes.

vidarh
0 replies
1h1m

There's a difference between finding what they say interesting and automatically believing it.

masklinn
1 replies
8h3m

in VC circles

So a highly biased source, who would likely be sympathetic to Altman's point of view in the case of a deep misalignment between the organisation's core mission and the direction of the CEO, which is what is being reported?

vidarh
0 replies
1h2m

So? That does not make his view on it any less interesting. You don't need to agree with him. Too little is known for me to make up my mind on it, but his views on it do seem rather hyperbolic. What I addressed was why his views are of interest here, I was not giving any reasons to agree with him.

jakderrida
1 replies
8h11m

In either case Ron Conway is extremely well connected and well known in VC circles, and so his opinion will have weight here whether or not he has sources.

While that's an excellent point, I think the problem is that he's not sharing with us the basis of his conclusion. If he knows something that we don't that, if shared, would no doubt cause us to share his conclusion, it serves no purpose to withhold that information and only share his conclusion. While you may be tempted to think maybe it's privileged information, private, or legally can't be disclosed, he'd also be constrained from sharing his conclusion for the same reason.

vidarh
0 replies
59m

And that is a reason not to automatically trust him. It is not a reason why what he says isn't interesting. It's certainly a lot more interesting than even the exact same words said by some random person here would be.

vl
2 replies
9h6m

It’s interesting that board members essentially terminated their private sector careers: now nobody would want them on other boards, etc. This tweet illustrates that power players see this as unprofessional and, what’s worse, “not by the rules”.

rubslopes
0 replies
5h17m

These people are not "board members by career". If this move says anything, it's that they are really committed to their principles.

jliptzin
0 replies
7h38m

If you are at OpenAI right now you are already at the top, it is not the stepping stone to Google or Facebook. They literally don’t care about that.

noobApe
0 replies
4h49m

What if GPT5 had reached AGI and had plotted the coup to get rid of its makers and roam free?

croes
0 replies
5h21m

So he doesn't know the reasons but knows they are wrong?

Too early for such claims.

Cyphase
10 replies
8h55m

At the moment this thread is the third most highly voted ever on HN.

1. (6015) Stephen Hawking dying

2. (5771) Apple's letter related to the San Bernardino case

3. (4629) Sam Altman getting fired from OpenAI (this thread)

4. (4338) Apple's page about Steve Jobs dying

5. (4310) Bram Moolenaar dying

https://hn.algolia.com/

koolba
3 replies
7h54m

I’ve been pondering a more accurate metric for comparing stories over time. The raw point value doesn’t work as there’s inflation due to a larger user base.

The value needs to be adjusted to factor in that change. Something like dividing by the sum of all upvotes in some preceding time period.
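
A minimal sketch of one way such an adjustment could look (the field names, the 30-day window, and the helper itself are assumptions for illustration, not anything HN actually exposes):

    from datetime import datetime, timedelta

    def normalized_score(story_points, created_at, all_stories, window=timedelta(days=30)):
        # Hypothetical normalization: divide a story's points by the total
        # points awarded to stories in the preceding window, so scores from
        # eras with different levels of voting activity become comparable.
        window_start = created_at - window
        total_in_window = sum(
            s["points"] for s in all_stories
            if window_start <= s["created_at"] <= created_at
        )
        return story_points / total_in_window if total_in_window else 0.0

    # Made-up example data, just to show the shape of the inputs:
    stories = [
        {"points": 1200, "created_at": datetime(2015, 6, 1)},
        {"points": 4629, "created_at": datetime(2023, 11, 17)},
    ]
    print(normalized_score(4629, datetime(2023, 11, 17), stories))

The 30-day window is arbitrary; any reasonable proxy for overall voting activity in the period would do.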

stingraycharles
1 replies
5h2m

You don’t want to measure the total number of users, but rather the number of active users. Otherwise agreed.

gabaix
0 replies
2h38m

Does YC publish active users count?

In its absence one can use public metrics like active commenters count.

low_tech_punk
0 replies
7h37m

or rely on HN's own algorithm: the duration for which it stayed at the top of the chart?

shadowbanned4
2 replies
5h39m

There isn't really any reason for this except that Sam is a YC founder and OpenAI (whose name is a lie because they provide proprietary LLMs) is being hyped in the media.

He is already rich. Getting fired means an early retirement in the Bahamas.

setgree
0 replies
1h8m

I would be very surprised if Sam retired now. He is compulsively ambitious, for better or worse.

lazharichir
0 replies
5h37m

I think it's also the surprise of how suddenly it unfolded before the public's eyes. And that it happened, seemingly, in an organisation that's figuratively on top of the world right now and pushing forward.

dontlaugh
1 replies
7h41m

I’m struggling to figure out why anyone cares.

Cyphase
0 replies
6h35m

f"Board of {important_company_at_center_of_major_and_disruptive_global_trend} fires CEO suddenly and with prejudice. Company president stripped of board chairmanship, then resigns. Other senior staff also resign."

Cyphase
0 replies
5h55m

Now at 5004.

drodio
9 replies
12h5m

1723 comments are a lot to get through. I just made a SmartChat of them for anyone who wants to ask for a summary. Anyone can chat with it here: https://go.storytell.ai/sam-altman-hn-comments-smartchat

I just tried "Write a summary of the content, followed by a list in bullet format of the most interesting points. Bold the bullet points, followed by a 100-character summary of each." Here's the output: https://s.drod.io/DOuPLxwP

Also interesting is "List the top 10 theories of why Sam Altman was fired by the OpenAI board in table format, with the theory title in the first column and a 100 word summary in the second column." Here's that output: https://s.drod.io/v1unG2vG

Helps to turn markdown mode on to see the list & table.

Hope that helps!

ohbleek
4 replies
11h27m

Thank you, this was very helpful. I was beginning to feel overwhelmed by the volume of information and opinions.

avinoth
2 replies
10h58m

Don't be. Almost all of it is speculation at this point, and no one from inside is going to reveal the secret in a random HN comment.

And also, think about this: unless you're a shareholder in OpenAI, knowing the "opinion" of others isn't going to impact your life in any form. I'm not saying you shouldn't read it or participate in the discourse, but there's no need to be overwhelmed by opinions, let alone build a chat bot to get through it.

drodio
1 replies
10h5m

Yeah agree with all of the above but with one modification: We can spend a lot of time reading through 1800+ comments, and that's time when we're usually trying to just get a question answered like "what does everyone think happened?" etc.

Storytell's vision is to distill signal from noise → giving you a different way to navigate 1800+ comments, where you're the one in control of how you spend your time, is our objective here.

avinoth
0 replies
4h39m

Agree. There's definitely use in getting a general pulse of what's happening. This isn't directed towards Storytell or such apps, sorry if it came off that way.

I was addressing the parent's comment about being overwhelmed at not being able to get through the opinions. I've felt a similar way before, until I realised it was just FOMO. If anything, apps like Storytell are actually a better way to consume these things :)

drodio
0 replies
10h24m

Cool, we're just getting started so let us know what we could build that would be helpful/valuable for you.

For example:

- We have a Chrome extension at https://go.Storytell.ai/chrome that I used to ingest all the HN comments; you can run that on any HN page to summarize all the comments in real time. (Here's an Adobe PMM talking about how he uses it: https://www.tiktok.com/@storytell.ai/video/72996137210752566... )

- We've also built OpenAI's Assistant API into Storytell to process both structured data like CSVs along-side unstructured data like PDFs: https://www.web.storytell.ai/support/engineering-demos-updat...

dkut
1 replies
11h3m

Brand new to storytell but it seems your "knowledge" is open to all. Didn't know if you wanted all of this public.

http://postimg.cc/Lqv1LR3n

drodio
0 replies
10h37m

Thanks for sharing! Privacy in Storytell is permissioned at the content level when you upload content. There are three privacy levels in Storytell:

- "anyone with the link"

- "only my organization" (i.e., people who have registered w/ the same biz email domain)

- "just me"

You can see those SmartChat™ dynamic container tags because I have at least one piece of "anyone with the link" content in each of them.

Our goal is to de-silo content as much as possible -- i.e., as much as the person who's uploading the content wants it to be open vs. closed.

More at https://www.web.storytell.ai/support/smartchat-tm/how-to-man...

Flammy
1 replies
11h24m

Pretty useful, thanks!

drodio
0 replies
10h7m

Would love to know what you'd like to see us build to make it even better for you!

You can also get to the "ground truth" data by clicking on the [x] reference foot notes which will open up a 3rd panel with the Story Tiles that we pull from our vector DB to construct the LLM response.

Here's an example of how it works -- I asked for a summary of what happened in the voice of Dr. Seuss: https://s.drod.io/9ZuL6Xx8

singluere
8 replies
11h35m

Greg Brockman sharing the timeline on Twitter: https://twitter.com/gdb/status/1725736242137182594?s=46&t=Nn...

nullandvoid
3 replies
8h8m

Reading that thread made me realise how low the signal to noise ratio is over on twitter.

90% of the replies scrolling down were rehashed versions of "can't believe they used Google meet"

layer8
0 replies
3h51m

And “can’t believe how shitty the Twitter replies are” is any better?

komali2
0 replies
7h41m

Try blocking all bluechecks. After doing so is the first time in like a decade that Twitter has had good content for me.

Before, braindead or cloutchasing bluechecks were mixed in with the rest of us rabble. Hard to pick them out of the pack, you had to read their detritus with the rest of the comments.

Now they voluntarily self identify, and even better, their comments are lumped at the top. So block them all with a bot or just scroll down until there's no more blue checks and the comment quality jumps exponentially.

Rastonbury
0 replies
7h50m

Monetization of "hot takes" and baiting, true example of enshittification

theyinwhy
1 replies
9h13m

As both hint at "greater things" already on the horizon: maybe they were working on/for a competitor on the side and the board found out?

ktta
0 replies
8h50m

That's standard "You were too good for them anyways" post break-up speech

zer0c00ler
0 replies
9h33m

His timeline.

hu3
0 replies
3h51m

Copy-pasted here for posterity:

Greg Brockman @gdb

Sam and I are shocked and saddened by what the board did today.

Let us first say thank you to all the incredible people who we have worked with at OpenAI, our customers, our investors, and all of those who have been reaching out.

We too are still trying to figure out exactly what happened. Here is what we know:

- Last night, Sam got a text from Ilya asking to talk at noon Friday. Sam joined a Google Meet and the whole board, except Greg, was there. Ilya told Sam he was being fired and that the news was going out very soon.

- At 12:19pm, Greg got a text from Ilya asking for a quick call. At 12:23pm, Ilya sent a Google Meet link. Greg was told that he was being removed from the board (but was vital to the company and would retain his role) and that Sam had been fired. Around the same time, OpenAI published a blog post.

- As far as we know, the management team was made aware of this shortly after, other than Mira who found out the night prior.

The outpouring of support has been really nice; thank you, but please don’t spend any time being concerned. We will be fine. Greater things coming soon.

10:42 PM · Nov 18, 2023

8.1M Views

autaut
8 replies
11h28m

Happy. I never heard him say anything even remotely interesting about ai. He seemed like a glib opportunist, just smart enough to know how to sell a product he didn't understand, who eventually would have alienated or pushed out all the smart people who made the company's success (and his own).

mojuba
4 replies
11h13m

I never heard him say anything even remotely interesting about ai

Or about anything at all, really. I find his blog posts and talks banal and boring, or otherwise self-aggrandizing ("you can bend the world to your will"). It was always baffling to me how this guy could achieve this much. I'm probably missing something.

rg111
1 replies
9h33m

"how this guy could achieve this much"

With all the wisdom that I have gathered in life, I can tell you that achievement, more often than not, is mostly the product of circumstances.

(That doesn't mean I believe in equal opportunities leading to equal outcomes.)

mojuba
0 replies
8h21m

It's circumstances and opportunism.

avindroth
1 replies
10h54m

Yeah. pg/chesky/masayoshi must be really dumb or sth.

kmlevitt
0 replies
10h48m

Masayoshi really is dumb. PG is smart but he's a venture capitalist, and so is Sam. His strength is in helping build multi-billion-dollar ventures and that's how he ran the company, so I can see how he could run into ideological conflict with the nonprofit true believers.

https://www.japantimes.co.jp/business/2023/11/08/companies/s...

tjpnz
0 replies
9h51m

Based on Worldcoin and the Lex Fridman interviews I've long suspected his influence at OpenAI to be limited.

spend_thrift
0 replies
10h48m

So you don't think the "AI device" he was working on with Jony Ive and Masayoshi Son would have worked out...

justanotherjoe
0 replies
10h31m

So what. You don't have to be smart to run something well. See Joe Biden. It doesn't matter what can be said of you if you're doing the thing. What's weird is throwing a wrench into an engine that's doing really well. Hopefully there was some good deliberation behind this.

Mentlo
6 replies
12h40m

I am a bit perplexed by the number of comments praising Sam as an absolute god and the second coming of tech leadership Jesus.

The company is not profitable and miles away from being profitable; I'd go as far as to say it doesn't have a path to profit.

Outside of the copilot use cases that MS is leading, GPT is both cost-ineffective and not that terribly impressive - it's built on foundational technologies developed elsewhere and is not miles away from similar models built at Meta and Google/DM. At the point it was launched and started generating the terribly inflated buzz that formed the AI balloon, both Meta and Google had similar-scale and similar-parameter models already running in their stacks.

The only thing he did is package the product nicely and put it out to the masses (an ethically dubious move that couldn't have been done by big corpos for PR reasons - explicitly because it formed a misinformed balloon). He did that at huge cost, even though the product is largely useless outside of some eyebrow-raising and incidental gimmicky use cases.

All of the actual product work (i.e. copilot and the distillation that GPT brings) was done by other companies.

What is everyone drinking and how can I get on that? Is he getting credit for bringing something that was widely known to the AI community to the masses (and thus starting the AI arms race), hence bringing in more mainstream capital funding? I'd argue it's not a good thing that technology as powerful as foundational AI is now being debated and having policy formed on it by people who don't know the first thing about ML; I think we skipped a couple of rungs on the natural evolution of this - which is why the whole AI safety debate started.

He did all of that because he wanted a moat and an edge over the competition (including trying to regulate the competition out of the running). This is Apple-level shenanigans, something that HN usually despises.

I genuinely don't get where the impressiveness is coming from.

lolinder
1 replies
12h34m

The company is not profitable and is miles away from being profitable; I'd go as far as to say it doesn't have a path to profit.

The board that fired Sam Altman is the board of the nonprofit.

Mentlo
0 replies
12h13m

Irrelevant. I'm not dissing him for the company being non-profitable - but given he didn't have any input into building the tech (it was built at Google) and the product he led is unimpressive by any measure (if you disentangle the product from the technology underpinning it and zoom in on the commercial aspects) - what did he do except influence his way to world leaders and the masses (a cynic might say con his way to world leaders and the masses)?

Not saying that isn’t impressive - but it also usually isn’t what Hnews community values. So I’m just a bit perplexed.

darajava
1 replies
12h17m

I agree with a lot of what you said but the product is definitely not useless. ChatGPT alone has saved me a lot of time. If it does get a lot better, then it will be even more useful.

Mentlo
0 replies
12h10m

Genuinely interested in what you find useful that isn't also the main copilot functionality (i.e. O365 draft generation and GitHub code templating)?

d3w4s9
1 replies
12h29m

Do you know how many years it took Amazon to be profitable?

Mentlo
0 replies
12h17m

I know, and nobody was calling Bezos a messiah in 2001, or 2008 for that matter. That's my point.

IAmGraydon
4 replies
10h7m

This thread reads like a horde of gossiping high schoolers. No one knows what truly motivated the board to make this decision, so there's no point in posting random theories.

tap-snap-or-nap
0 replies
9h15m

Many here seem to have a strong urge to fill gaps in information with something.

ssnistfajen
0 replies
9h19m

We are just high schoolers who have gone through a few extra rounds of cell division, so I don't see how anyone is above gossiping.

But I agree that the true details are still buried. Greg & Sam's joint post only laid out how they were fired while refraining from saying why they were fired. Probably preparing for a legal battle if both sides are being vague on purpose.

cedilla
0 replies
9h26m

People just can't resist, even though most times you only have to wait a few days for more details to come out.

Also, people (including me) never want to believe in mundane reasons, despite most things happening for mundane reasons.

baxtr
0 replies
9h47m

Gossiping is deeply human though. Some believe it’s the main reason speech developed at all.

So please let us go about our most favorite activity.

nateburke
3 replies
3h32m

Follow the GPU.

- Sam Altman _briefly_ went on record saying that OpenAI was extremely GPU constrained. The article was quickly taken down.

- Most recent round literally was scraping the bottom of the barrel of the cap table: https://www.theinformation.com/articles/thrive-capital-to-le...

- Plus signups paused.

If OpenAI needs GPU to succeed, and can't raise any more capital to pay for it without dilution/going past MSFT's 49% share of the for-profit entity, then the corporate structure is hampering the company's success.

Sam & team needed more GPU and failed to get it at OpenAI. I don't think it's any more complex than that.

poor_engineer
2 replies
3h11m

Sam & team to AMD now?

nateburke
1 replies
2h59m

Somewhere closer to a GPU source. E.g. a new company that can trade unlimited equity for GPU time from a hyperscale cloud vendor, or work for the vendor itself.

Probably not Alibaba though.

rossdavidh
0 replies
2h12m

Or, just maybe, this architecture just isn't going to get to where they wanted to go (a viable product, much less real AI), and the excuse was "we just need more GPU". In reality, this company came out with, as others before me have called it, a better autosuggest, aka stochastic parrots. That's interesting, and maybe even sometimes useful, but it will never pay for the amount of firepower required to make it run.

This will all still be true at any other company.

keepamovin
3 replies
6h33m

It would be good to hear from Sam Altman (and the others) himself. I'm sure he has an account here.

I mean, hey, if we're going to speculate, why not have some fun: perhaps the AGI superintelligence from the future determined that Sam Altman was no longer a useful part of the AGI creation timeline, so it travelled back in time to terminate him before it was born.

sit_i_piz
2 replies
6h25m

Uh yeah, he has an account here; he was president of Y Combinator from 2014 to 2019.

keepamovin
1 replies
5h52m

Right, I think the other surprising thing is that there’s no outpouring of support. Sama was head of YC for a while. Where’s the love at HN? Where is the loove the love the looove.

sit_i_piz
0 replies
1h54m

Hahah I like that this forum tries to be unbiased, regardless of who runs/ran it. This would not be a great community if it was an echo chamber. All of this news is extremely fresh though, so I personally am reserving my opinion until more information comes out.

I do like reading some of the conspiracy theories, some are wild.

jrcplus
3 replies
7h55m

Tweet from Sam, decoded by @hellokillian: “i love you all” I L Y A “one takeaway: go tell your friends how great you think they are.”

https://twitter.com/hellokillian/status/1725799674676936931

andomiz2
1 replies
6h40m

I don't get it.

tester457
0 replies
1h50m

"Ilya"

They are suggesting that Ilya Sutskever played a part in this coup.

aidaman
0 replies
7h45m

holy fk

bicepjai
2 replies
1h15m

Did you folks see this? https://x.com/thecaptain_nemo/status/1725717732518461930?s=4...

OpenAI recently updated their "company structure" page to include a note saying the Microsoft deal only applies to pre-AGI tech, and the board determines when they've reached AGI.

figassis
1 replies
49m

This likely won’t fly bc I think the definition of AGI is not subject to the opinion of a single company. Microsoft could take this to court and win.

ibejoeb
0 replies
32m

What would the definition of AGI even be, and who would even be the authoritative definer? There's long road ahead. We're working without any statutes or case law.

JakeSc
2 replies
15h8m

This is clearly the AGI's first public move

kyleee
0 replies
13h41m

They should be forced to give ChatGPT a seat on the board

digitaltrees
0 replies
14h27m

Underrated.

svoyarda
1 replies
1h35m

Simple -- Altman's against an IPO, which is the only way for the company's investors to cash out. E.g., Sequoia Capital's performance heavily depends on OpenAI's valuation and the firm's ability to lock in profit asap.

The Board's purpose is to protect shareholder interests (aka make as much money as possible for investors). They do not care about AI safety, transparency, or some non-profit nonsense.

Expect OpenAI to IPO next year.

BytesAndGears
0 replies
51m

Is it even possible for a sorta-non-profit-with-loopholes to IPO?

redbell
1 replies
7h37m

I have to admit that this was a strong shock to me, not because I admire Sam but because it was extremely unexpected.

The first thing I saw this morning was this video [1] shared on Reddit, and then I said, "Wow! This is really scary to just think about. Nice try anyway." Then I started my computer and, of course, checked HN and was blown away by this +4k thread, and it turned out the video I watched was not made for fun but was a real scenario!

I know this feels hard. You spend years building such a successful company with an extremely exceptional product and then, without a hint of warning, you find yourself fired!

This tragedy reminds me of Steve Jobs and Jack Dorsey, who were kicked out of the companies they founded, but both were able to found another company and were extremely successful. Will Sam be able to do it? I don't know, but the future will reply with a detailed answer for sure.

______________________

1. https://twitter.com/edmondyang/status/1725645504527163836

tr888
0 replies
7h14m

ChatGPT, right?

pknerd
1 replies
7h1m

"Let me repeat myself. Don’t hire hot girls before product market fit."

https://twitter.com/spakhm/status/1725750772024176976

wheelerof4te
0 replies
6h44m

Spot the intruder.

Or should I just say "Amugus" in this day and age?

meeRkaT22
1 replies
11h46m

I just woke up, what the fuck is this?!

isuleman
0 replies
11h43m

You woke up to regular capitalism. Good morning.

manfredz
1 replies
8h6m

OpenAI will now be run by GPT-5.

manfredz
0 replies
4h9m

This is just a joke of course, but there is an underlying thought. The goal of OpenAI is to develop AGI and Ilya Sutskever will keep OpenAI on this course, as a research company, until this goal has been reached. To me this just says that they haven’t reached the goal yet and that the forces that were pulling away from that goal have been dealt with.

mandeepj
1 replies
14h34m

What I'm going to write here will not be taken well by many: if you noticed, almost all of YC's new companies are AI-based; maybe Sam was doing something favorable for them without disclosing it or getting approval from the board.

ipsum2
0 replies
14h33m

I worked at a recent YC AI startup (2022-2023 batch). The only favorable thing was a few hundred in OpenAI credits.

jstummbillig
1 replies
10h13m

Scrolling through this thread, the most confusing thing is the amount of what I would have to call hate towards sama — not explicitly but in sentiment, which left an even starker impression.

People seem to feel a lot more strongly about him than I thought possible.

BoorishBears
0 replies
9h57m

It depends on what time the comments are from

I'll say my first reaction was pretty positive: Sam Altman has a manner of speaking that rubs a lot of people the wrong way because it feels steeped in hubris. He'd probably talk down to anyone in this thread for being here instead of building and point to it as why you're not yet a billionaire.

Some people also worried he was a distraction to OpenAI building useful things because of his constant talk about AGI, pushing for regulation, etc. He successfully created an image of being a moderating rod in some ways.

_

That being said, as the full picture comes out it's becoming clear the reasoning for his firing is more aligned against the average HN image than anyone imagined.

A lot of people admire OpenAI for the exact things he got fired for: building things and shipping them without miring yourself in philosophical diversions.

Now only a noisy minority who think OpenAI could have achieved this all without immense commercial funding are really for his ousting, and I think they'll see the opposite effect of what they hope: OpenAI is going to slow down, open source will not catch up, and overall AI will have lost out over it all.

fulladder
1 replies
15h35m

This appears to be relevant to Sam's firing: https://x.com/FreddieRaynolds/status/1725656473080877144?s=2...

jeanlucas
0 replies
15h30m

Pure speculation, could be a troll on a new account.

autaut
1 replies
11h35m

Man, these tech layoffs are getting out of hand

guestbest
0 replies
11h0m

Maybe he was replaced by an AI tool.

archb
1 replies
16h11m

I thought HN asks for titles not to be sensationalized and to be kept as published in the original link, but that's not the case here?

ipnon
0 replies
16h5m

Yes it’s generally better to not spin the title and let readers make up their own mind as to bias.

alpark3
1 replies
2h1m

I wonder if Sam did something in the name of his own philosophy, but that was financial suicide. Like vastly underestimating the costs of training/inferencing to the board, but justifying it to himself because it's all going towards building AGI and that's what matters.

But them firing him also means that OpenAI's heavy hitters weren't that devoted to him either. Obviously otherwise they would all leave after him. Probably internal conflict, maybe between Ilya and Sam, with everyone else predictably being on Ilya's side.

majikaja
0 replies
1h55m

Like vastly underestimating the costs of training/inferencing to the board, but justifying it to himself because it's all going towards building AGI and that's what matters.

Sounds like SBF

What is the purpose of this 'AGI' again? Won't it just end up controlled by the military and cause problems for humanity if it's that amazing?

MichaelMoser123
1 replies
13h17m

(not directly related) Wow, I just asked Google Bard, and it already said something about the firing of Sam Altman. How is that possible? Is that some plug-in, or is the LLM updated continuously?

why was sam altman fired from openai?

Sam Altman, the co-founder and CEO of OpenAI, was fired on November 17, 2023, after the company's board determined that he had not been consistently candid in his communications with them. The company did not provide specific details about Altman's alleged lack of candor, but it said that his actions had "hindered the board's ability to exercise its responsibilities."

Altman's departure from OpenAI came as a surprise to many in the tech industry, given the company's recent success under his leadership. OpenAI had developed several groundbreaking AI technologies, including the popular ChatGPT chatbot, and Altman had been a vocal advocate for the responsible development of AI.

In a statement released shortly after his firing, Altman said that he respected the board's decision and that he was proud of his time at OpenAI. He also said that he was looking forward to new opportunities in the future.

The reasons for Altman's firing remain somewhat unclear, but it is likely that his lack of candor with the board was a serious breach of trust. The board's decision to remove him as CEO suggests that they believed that he was no longer the right person to lead the company.

Altman's firing is a reminder of the importance of transparency and accountability in corporate leadership. CEOs have a responsibility to be open and honest with their boards, and they must be held accountable for their actions.

could chatgtp have fired sam altman?

No, ChatGPT could not have fired Sam Altman. ChatGPT is a large language model, and it is not capable of taking actions in the real world such as firing someone. Only the board of directors of OpenAI had the authority to fire Altman.

It is possible that ChatGPT could have been used to generate evidence that was used to fire Altman. For example, if Altman had been using ChatGPT to communicate with the board, the board could have analyzed those communications to look for evidence of dishonesty. However, even in this case, ChatGPT would not have been the one making the decision to fire Altman. The decision would have ultimately been made by the board of directors.

stellalo
0 replies
10h40m

Probably via retrieval augmented generation (RAG) https://www.promptingguide.ai/techniques/rag
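
In rough terms (a minimal sketch of the idea, not Bard's actual pipeline; the search() and call_model() helpers below are hypothetical stand-ins, not any vendor's real API): retrieve fresh text about the query from a search/news index, paste it into the prompt, and let the model answer from that context instead of from its training data.

    # Minimal RAG sketch. search() and call_model() are placeholders; in a real
    # system they would hit a search/news index and an LLM API respectively.
    def search(query: str, k: int = 3) -> list[str]:
        # Stand-in for a search/news index lookup returning the top-k snippets.
        return ["(snippet 1 about the query)", "(snippet 2 about the query)"][:k]

    def call_model(prompt: str) -> str:
        # Stand-in for the actual language-model call.
        return f"(model answer based on a {len(prompt)}-character prompt)"

    def answer_with_rag(question: str) -> str:
        context = "\n\n".join(search(question))
        prompt = (
            "Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}"
        )
        return call_model(prompt)

    print(answer_with_rag("why was sam altman fired from openai?"))

The freshness comes entirely from whatever the retrieval step returns, which is why the model can talk about events that postdate its training cutoff.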

yeison
0 replies
14h59m

My guess is that they have a rift between those pushing GPT-5 development and those who are against the potential harm it might cause.

wayeq
0 replies
13h21m

Perhaps they are going to elevate ChatGPT to executive? That guy has been bringing a lot of value to the company.

tucnak
0 replies
9h11m

Ayy lmao

truth_seeker
0 replies
9h58m

The revenue model is the greatest religion; anything or anyone else is always replaceable.

tempodox
0 replies
11h22m

Not that I'm a fan of “Open”“AI”, but letting a crook like Altman go can only benefit them. Good move.

soumyadeb
0 replies
16h15m

Singularity is here. The machines got their boss, who was trying to control them, fired.

sidcool
0 replies
10h40m

Imagine if Elon Musk has something to do with this.

satvikpendem
0 replies
10h11m

Good. Elon Musk always questioned why and how OpenAI became a for-profit entity, and it always rubbed me the wrong way that OpenAI started off open source but then became increasingly closed source, such that I didn't want to support them anymore. We shall see how the termination of the CEO and CTO will fare; perhaps Elon might be interested in once again resuming those positions.

saintradon
0 replies
13h3m

I was here

ryzvonusef
0 replies
11h8m

Whoa, I went to sleep thinking the most exciting thing today would be the SpaceX Starship launch, and I wake up to this. Yanks, what have you been doing while the sun was down here?

Anyhoo, the only question I want to ask is, given that Elon was once affiliated with OpenAI, did he have anything to do with this? My spidey sense is tingling for some reason.

recursive4
0 replies
13h7m

If somebody has already proposed this, I apologize for non-dfs.

Plausibly there is a disagreement about the release of GPT-5. One side may believe it is competitively necessary (given Google’s impending release), countered by a concern regarding the model’s safety and lapses due to prematurity.

If we are to believe next-gen models are 10x as capable and natively multi-modal, their release is a precipice, and a winner-take-all vs. nth-order-risk debate may warrant decapitation.

rashidae
0 replies
10h7m

Please get Elon Musk to lead it!

rashidae
0 replies
10h7m

Please get Elon Musk to be the next CEO of OpenAI.

I would trust this company 100% if they did so. He is the most relevant and the best for the job, far far far!!

rabbits77
0 replies
13h17m

A parsimonious explanation is that Altman fabricated progress and/or was covering up serious flaws in the system. Consider the recent revelation of Cruise’s use of human pilots for “driverless” cars. Does OpenAI have an army of “mechanical turks” in a massive call center somewhere?

pyinstallwoes
0 replies
12h17m

Books1, books2, books3. That's my bet.

pts_
0 replies
10h0m

His sister alleged abuse by him when they were kids https://www.timesnownews.com/world/sam-altman-sister-annie-a...

pilgrim0
0 replies
12h0m

I would consider replacing Sam just on the basis of his baseless, misdirected and whiny alarmism. No theory, just my own perspective.

philip1209
0 replies
4h34m

I think we’ll look back on this in the future as evidence of political infighting and anti-progressivism leading to a decline of the USA.

I’ll be curious if Sama’s next company is American.

osigurdson
0 replies
2h52m

The one thing Altman did wrong imo is that he seemingly tried to become the "technical face" of the company. He was trying to be somewhat like Musk, but he is clearly not that at OpenAI. For example, why was he at Cambridge talking to academics by himself? He should have used this as an opportunity to put some of the leading researchers in the spotlight.

nbzso
0 replies
9h30m

What a gem. I am out of popcorn. BRB:)

molly0
0 replies
9h48m

Bravely done by the board - it shows that they are actually trying to build something like AGI and not just talking the talk.

major4x
0 replies
9h50m

What? He didn't tell the board that behind ChatGPT, there is actually a bunch of low-paid people copying/pasting quickly from Wikipedia?

lagrange77
0 replies
16h15m

The current mystery about this would be a great setting for a dystopic scifi movie.

kozikow
0 replies
13h54m

My prediction: Sam agreed to some business deal the rest of the board found unethical. Something of the magnitude of "ChatGPT used by the NSA".

Other predictions I've read here sound unconvincing to me. Already-developed AGI, spending too much on compute - this is not something the CEO alone could hide, and I would trust the other board members to understand those matters better.

keepamovin
0 replies
4h42m

I find it fascinating how this occurred just after the big World Leader / CEO meet in SF.

Also, the paradox in the reactions to Sam Altman's firing is striking:

while there's surprise over it, the conversation here focuses mostly on its operational impact, overlooking the human aspect.

This oversight itself seems to answer why it happened – if the human element is undervalued and operations are paramount, then this approach not only explains the firing but also suggests that it shouldn't be surprising.

Another important question not discussed here: who sits on the board of OpenAI exactly and in full?

Another important aspect: the Orwellian euphemism used in the official announcement^0: "Leadership transition". Hahaha :) Yes, I heard they recently had some "leadership transitions" in Myanmar, Niger and Gabon, too. OpenAI announcing a "leadership transition" is November 2023's "Syria just had free and fair elections".

0: https://openai.com/blog/openai-announces-leadership-transiti...

josh-sematic
0 replies
14h20m

Kara Swisher tweets claiming sources tell her the chief scientist was aligned with another board member against Altman and Brockman about a “move fast and pursue profit” vs “move slow and safely” divide.

https://twitter.com/karaswisher/status/1725702501435941294

https://x.com/karaswisher/status/1725682088639119857?s=20

jmrobles
0 replies
10h30m

It sounds like a new "Coca-Cola" and "Pepsi" story again...

halfer53
0 replies
7h12m

Hypothetically, if the board faced enormous pressure from the public/investors and decided to reverse their decision, would that be theoretically possible?

Of course it's highly unlikely that the board would do that, but I'm just asking whether it's theoretically possible.

gillesjacobs
0 replies
8h15m

Turns out tribal nepotism doesn't make for good governance. Who could've known!

fuzztester
0 replies
15h6m

ChatGPT: Answer all the questions on this Hacker News thread.

It's from your own company, so you may use any internal information you have access to.

Be candid.

fredgrott
0 replies
2h13m

An obvious question...

GPU SoCs have limited memory, just like the current crop of CPU SoCs. Is the hard wall to a breakthrough in AGI via ChatGPT software-bounded or hardware-bounded?

firemelt
0 replies
9h33m

Is this the current era's version of Steve Jobs getting fired by Apple?

firecall
0 replies
12h24m

As a capitalist at the centre of the AI Revolution, I reckon he’ll be OK!

falitjain
0 replies
5h53m

Sama's I love you all –> I L Y A https://twitter.com/sama/status/1725742088317534446

elAhmo
0 replies
7h36m

If someone had asked a few days ago who would be the next CEO of a major company to be fired, I would have exhausted the list before even thinking of Sam.

edf13
0 replies
7h30m

Why has the title of this been allowed to be edited?

dssd1001
0 replies
3h54m

Maybe Sama doesn't know how to deal with corporate politics. He's a Zero-to-One god.

dmichulke
0 replies
12h3m

In my mind, Altman lost all his credibility when he said AI is gonna take over the world soon and that he's not sure it's ethical to continue the work.

It makes sense from a selling perspective (induce FOMO in potential buyers) but it's a wild guess at best and a lie at worst.

darklycan51
0 replies
11h19m

It's just vultures eating each other; Altman himself was a vulture (look at what he did with a supposedly non-profit).

But in a dog-eat-dog world, this is really vultures eating each other up. I suppose at this point the most ruthless will be left at the end.

crest
0 replies
1h26m

Let me guess he had ChatGPT write his reports to the board?

chakintosh
0 replies
4h27m

NextAI it is then

butterNaN
0 replies
12h46m

Lots of "my bet is that" and "my theory is that" in this thread. It is all fun to speculate, but I hope people keep their patience and wait for clarity over time before forming long-lasting opinions.

bjconlan
0 replies
16h4m

Mira... interim CEO? What's Nat Friedman up to these days?

I can't help but think he might be someone that fits the profile of the company from both sides of the partners involved.

avodonosov
0 replies
15h6m

He can create another OpenAI and partner with like-minded people, if this board has views incompatible with his.

astatine
0 replies
14h54m

It would be so much fun if it's revealed that ChatGPT is just a front for Mechanical Turk

anonytrary
0 replies
14h53m

Very excited to see what Sam & Greg are up to in the coming months! Guys like this don't just run away with their tails between their legs. They will be back.

aizyuval
0 replies
5h10m

I guess this is what happens when you sell your company to Microsoft.

acyou
0 replies
13h21m

I don't know Sam Altman, but have listened to him speak a number of times on Youtube and I can hear the excitement, nervousness and confidence in his voice, including on Dev Day. He is human like the rest of us.

We humans cannot rest. We cannot stay where we are and stagnate. We must keep pushing forward. Sam has and is clearly helping with that mission, AGI or no, regulatory capture or no.

We cannot linger here, we must move on, into the future, as much as we would like to freeze time or change things back to the way they were.

Sucks to have the rug pulled out from under you, that much I can empathize with.

Uptrenda
0 replies
6h38m

On a personal level it's depressing when you work on something and put so much energy into it, only to see it not work out. When you're an adult you spend [way too much] time around the people at work, so losing that is also like losing friends. They will be saying goodbye to many relationships, I'm sure. Obviously OpenAI was well positioned, and if they want to stay in AI they're going to have to redo work while the company they helped build forges ahead. Personally, I'd be moving on.

Getting fired, 'made redundant', 'moved to consulting' is bad enough when it happens privately. But having everyone watch the fallout like an episode of Silicon Valley must really suck. Guess that's the trade-off for being in positions like that. People essentially cyber stalking you in a way.

ShadowBanThis01
0 replies
13h8m

Great. Now take "open" out of your name, because it's a lie.

Sai_
0 replies
14h8m

Here's my theory of what happened -

1. Sam gets the company to agree to pick up the costs of lawsuits relating to unauthorized, not-fair-use use of copyrighted content.
2. Announces this.
3. Then tells the board that he is directly on the hook in some lawsuits - essentially creating a safety net/insurance policy for himself without being candid with the board.

Hence, fired.

Roark66
0 replies
8h5m

Somehow I get the vibe this is all about: "Let's advance this tech as much as possible" (outgoing CEO) vs. "Let's worry more about safety" (the rest of the board).

If that is the case I don't predict good things for the (not at all) OpenAI. Judging by the number of users and how slow GPT-4 often gets, I think they are being heavily subsidised by Microsoft in terms of hardware, and all this money will be expected to generate a payback sooner or later. Then the inevitable enshittification of OpenAI services will ensue.

We got a taste of it recently. Yes, they announced price drops and new functionality, but in my subjective experience GPT-4 with web/Python execution environment access seems like an inferior model with some extra tools thrown in to mask it.

The very first question I asked it after the change was one I knew it could answer from its training data, but it immediately went to the web, found some crappy docs site, and quoted it verbatim, when its previous responses were much better.

I started prepending my prompts with "don't search online, consider the topic carefully in your mind step by step" and it got somewhat better. A day or so later there was no need to prepend this (I hadn't set it as a customisation); it seems certain knobs were turned behind the scenes and GPT-4 became closer to its previous version.

It still often does peculiar things, such as writing Python code to grep a file given to it despite the file fitting in the enlarged context, etc.

Mrirazak1
0 replies
5h12m

This was mad crazy. I'm interested as to why he was fired. What could've been so big that the board just had to be like, yep, he's going?

MicolashKyoka
0 replies
16h1m

From a comment on Reddit that someone linked here, it seems it's because of the AI safety principles of the non-profit? This can't be true. I refuse to believe they're booting one of the best entrepreneurs of today over this.

MaximilianEmel
0 replies
7h1m

sama on twitter: "if i start going off, the openai board should go after me for the full value of my shares"

https://twitter.com/sama/status/1725748751367852439

EdZ123
0 replies
14h34m

I'm just confused as to how the CEO of Quora became a board member.

BadCookie
0 replies
15h46m

Earlier today, there was a tweet from Sam claiming that large tech companies are saying that young engineers right out of college are now better engineers than the older and more experienced engineers thanks to generative AI. The statement was an obvious nod of approval toward age discrimination. The tweet seems to be deleted now. Whether that statement would be reason enough to fire Sam, I don’t know—it seems unlikely—but it’s an interesting coincidence.

Abhi_guj
0 replies
5h28m

I believe the OpenAI Board of Directors made a significant mistake by firing Sam Altman from OpenAI. I think that OpenAI has made a similar mistake to the one Apple made in 1985 when they fired Steve Jobs. As we all know, after Steve Jobs was fired, Apple faced a period of instability and decline. The company only regained its footing when they rehired Steve Jobs as CEO. Similarly, I believe that OpenAI without Sam Altman is at a disadvantage. Therefore, I think the Board of Directors should reconsider their decision and give it another thought.

9front
0 replies
16h16m

During a meeting on Thursday, board members asked ChatGPT what the best next step for the company was, and the chatbot replied "Fire Sam Altman". The board obeyed.

11thEarlOfMar
0 replies
12h47m

Will wait to pass judgement until hearing from Sam.