
Ilya Sutskever "at the center" of Altman firing?

convexstrictly
74 replies
14h16m

Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity."

Scoop: theinformation.com

https://twitter.com/GaryMarcus/status/1725707548106580255

peyton
49 replies
14h3m

Very unprofessional way to approach this disagreement.

anonymouskimmer
20 replies
14h0m

How so? It's just another firing and being escorted out the door.

janejeon
15 replies
13h56m

The wording is very clearly hostile and aggressive, especially for a formal statement, and it makes very clear that they are burning all bridges with Sam Altman. It is also very clear that 1. it was done extremely suddenly, and 2. with very little notice or discussion with any other stakeholder (e.g. Microsoft being completely blindsided, not even waiting 30 minutes for the stock market to close, doing this shortly before the Thanksgiving break, etc.).

You don't really see any of this in most professional settings.

clnq
9 replies
13h24m

It is quite gauche for a company to burn bridges with their upper management. This bodes poorly for ever hoping to attract executives in the future. Even Bobby Kotick got a more graceful farewell from Activision Blizzard, where they tried to clear his name. A graceful exit is simply prudent business.

Certainly, this is very immature. It wouldn't be out of place in HBO's Succession.

Whether what happened is right or just in some sense is a different conversation. We could speculate on what is going on in the company and why, but the tactlessness is evident.

ribosometronome
4 replies
12h53m

Whether it's right or just in some sense is a different conversation.

Surely whether it's "mature" is the same conversation? I'm failing to see how anyone thinks turning a blind eye to, like, decades of sexual impropriety and major internal culture issues, to the point the state takes action against your company, is "mature". Like, under what definition?

clnq
3 replies
12h31m

Mature, as in the opposite of ingenuous. It does no good to harm a company further. Kotick did enough damage, he left, all that needed to be said about him was said, tirelessly. Every effort to get him to offer some reparations - expended.

So what was there to gain from the company speaking ill of their past employee? What was even left to say? Nothing. No one wants to work in an organization that vilifies its own people. It was prudent.

I will emphasize again that the morality of these situations is a separate matter from tact. It may very well be that doing what is good for business does not always align with what is moral. But does this come as a surprise to anyone?

We can recognize that the situation is not one-dimensional and not reduce it to such. The same applies to the press release from OpenAI - it is graceless, that much can be observed. But we do not yet know whether it is reprehensible, exemplary, or somewhere in between in the sense of morality and justice. It will come out, in other channels rather than official press releases, like in Bobby's case.

watwut
2 replies
8h10m

Mature, as in the opposite of ingenuous

To put it in an exaggerated way, maturity should not imply sociopathy or complete disregard for everything.

Obviously I am referring here to the Kotick situation. But a definition under which it is immature to tell the truth and mature to enable powerful bad players is a wrong definition of maturity.

clnq
1 replies
4h54m

I respect your belief that maturity involves elevating morality above corporate sagacity. It is noble.

watwut
0 replies
2h19m

I am not even demanding something super noble from mature people. I am fine with the idea that mature people make compromises. I do not expect managers to be saint-like fighters for justice.

But when people use "maturity" as an argument for why someone must be an enabler and should not do the morally or ethically right thing, it gets irritating. Conversely, calling people "immature" because they did not act in the most self-serving, sleazy way is ridiculous.

ryandrake
3 replies
9h24m

People get fired all the time: suddenly, too. If I got fired by my company tomorrow, they wouldn't treat me with kid gloves, they'd just end my livelihood like it was nothing. I'd probably find out when I couldn't log in. Why should "upper management" get a graceful farewell? We don't have royalty in the USA. One person is not inherently better than another.

insanitybit
0 replies
1h30m

Because no one cares if you get fired, but people really care if a CEO gets fired. The scope of a CEO's responsibilities is near-global across the company; firing them is a serious action. Your scope as an engineer is, typically, extremely small by comparison.

This isn't about being better at all.

disgruntledphd2
0 replies
5h52m

Because upper management have more power than you or I. If either of us were fired, it's unlikely to be front page news all over the world.

It sucks, but that's the world we live in, unfortunately.

clnq
0 replies
4h43m

Why should "upper management" get a graceful farewell

Injustices are done to executives all the time. But airing dirty laundry is not sagacious.

fsckboy
3 replies
13h43m

boards give reasons for transparency, and they said he had not been fully candid.

You are interpreting that as hostile and aggressive because you are reading into it what other boards have said in other disputes and whatever you are imagining. But if the board learned some things not from Altman that it felt it should have learned from Altman, "less than candid" is a completely neutral way to describe it, and voting him out is not an indication of hostility.

Would you like to propose some other candid wording the board could have chosen, a wording that does not lack candor?

janejeon
2 replies
13h38m

You are interpreting that as hostile and aggressive because you are reading into it

Uhh no, I'm seeing it as hostile and aggressive because the actual verbiage was hostile and aggressive, doubly so in the context of a formal corporate statement. You can pass the text into an NLP sentiment analyzer and it, too, will come to the same conclusion.
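
For instance, a minimal sketch of that kind of check, using NLTK's off-the-shelf VADER analyzer (just one option, not necessarily the best tool for this; the sentence scored is the actual line from OpenAI's announcement):

    # Minimal sketch: score the board's statement with an off-the-shelf
    # sentiment analyzer (NLTK's VADER; one option among many).
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

    statement = (
        "Mr. Altman's departure follows a deliberative review process by "
        "the board, which concluded that he was not consistently candid "
        "in his communications with the board."
    )
    analyzer = SentimentIntensityAnalyzer()
    print(analyzer.polarity_scores(statement))
    # -> dict with 'neg', 'neu', 'pos' and a 'compound' score in [-1, 1]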

It is also very telling that you are being sarcastic and demeaning in your remarks toward someone who wasn't even replying to you, which might explain why you saw the PR statement differently.

racketcon2089
1 replies
10h15m

When you look at the written word and find yourself consistently imputing hostile, aggressive, sarcastic, and demeaning intent that no one but you sees, a thoughtful person would begin to introspect.

janejeon
0 replies
5h17m

Again, I'm not sure why you and the other person are so out for blood and keep trying to make it personal, but you can feed the text into an NLP model or ChatGPT and co., and even the machines will tell you the actual wording is aggressive.

DonHopkins
0 replies
10h49m

The wording is very clearly hostile and aggressive

At least we can be sure that ChatGPT didn't write the statement, then.

Otherwise the last paragraph would have equivocated that both sides have a point.

jdminhbg
3 replies
13h57m

It's not "just another firing," the statement accused Altman of lying to the board. Either he did and it's a justified extraordinary firing, or he didn't and it's hugely unprofessional to insinuate he did.

chasd00
0 replies
13h55m

Oh man, the lawyers have to be so happy, I bet they can hardly count.

anonymouskimmer
0 replies
13h52m

I read that word as meaning not forthcoming, more so than actively lying. But I don't read many firing press releases.

adastra22
0 replies
13h16m

Hugely unprofessional and a billion dollar liability.

strikelaserclaw
18 replies
13h58m

When two people have different ideologies and neither is willing to backdown or compromise, one person must "go".

peyton
12 replies
13h44m

There’s no indication that any sort of discussion took place. Major stakeholders like Microsoft appear uninformed.

strikelaserclaw
8 replies
13h41m

in a power struggle, you have to act quickly

fsckboy
7 replies
13h35m

I don't think it's that dramatic. In a board meeting, you have to act while the board is meeting. They don't meet every day, and it's a small rigamarole to pull a meeting together, so if you're meeting... vote.

vanjajaja1
3 replies
13h20m

are you suggesting they brought up a vote on a whim at a board meeting and acted on it the same day?

fsckboy
2 replies
13h6m

no, I was replying to a comment that said it was a power struggle in which the board needed to act quickly before they lost power.

The board may very well have met for this very reason, or perhaps it was at this meeting that the lack of candor was found or discussed, but to hold a board meeting there is overhead, and if the board is already in agreement at the meeting, they vote.

It only seems sudden to outsiders, and that suddenness does not mean a "night of the long knives".

lazide
1 replies
11h23m

How would the board have lost power?

fsckboy
0 replies
8h2m

that's what i'm saying, it was not a power struggle. I shouldn't have to make the other guy's argument for him...

dekhn
2 replies
12h55m

One imagines in this case the current board discussed this in a non-board context, scheduled a meeting without inviting the chair, made quorum, and voted, then wrote the PR and let Sam, Greg, and HR know, then released the PR. Which is pretty interesting in and of itself; maybe they were trying to sidestep Roko or something.

lsaferite
1 replies
12h21m

Not inviting the full board would likely be against the rules. Every company I've been part of has it in the bylaws that all members have to be invited. They don't all have to attend, but they all get invited.

dekhn
0 replies
11h26m

sure. he could have been invited, but also not attended.

vineyardmike
2 replies
12h54m

Basically half the point of this is that Microsoft isn’t a stakeholder. The board clearly doesn’t care or is actively hostile to the idea of growing “the business”. If they didn’t know then that they weren’t a stakeholder, they know now.

MS owns a non-controlling share of a business controlled by a nonprofit. MS should have prepared for the possibility that their interests aren't adequately represented. I'm guessing Altman is very persuasive and they were in a rush to make a deal.

peyton
1 replies
10h18m

Microsoft is a stakeholder. It’s absurd to suggest otherwise. The entire stakeholder concept was invented to encompass a broader view on corporate governance than just the people in the boardroom.

vineyardmike
0 replies
6h49m

This is a non profit dedicated to researching AI with the goal of making a safe AGI. That’s what the mission is. Sama starts trying to make it a business, restructures it to allow investors, of which MSFT is a 49% owner. He gets ousted and they tell Microsoft afterwards.

It’s questionable how much power Microsoft has as a shareholder. Obviously they have a vested interest in OpenAI. What is in question is how much interest the new leaders have in Microsoft.

If I had a business relationship with OpenAI that didn’t align with their mission I would be very worried.

smoldesu
2 replies
13h57m

Or you introduce an authoritative third party that mediates their interactions. This feels like it wouldn't be a problem if so many high-ranking employees didn't feel so radically different about the same technology.

oivey
0 replies
13h44m

Altman’s job was to be a go between for the business and engineering sides of the house. If the chief engineer who was driving the company wasn’t going to communicate with him anymore, then he wouldn’t serve much of a purpose.

fsckboy
0 replies
13h47m

when did a board or CEO ever introduce an authoritative 3rd party to mediate between them? the board is the authoritative 3rd party.

bushbaba
0 replies
12h51m

There are more graceful ways to do this, though.

TeMPOraL
0 replies
9h19m

You've summed up AI X-risk in a single sentence.

(I.e. an AGI would be one of the two people here.)

mochomocha
4 replies
13h45m

If you know anything about Ilya, it's definitely not out of character.

peyton
2 replies
10h27m

Having read up on some background, I'm not sure I want this guy in charge of any kind of superintelligence.

dragonwriter
1 replies
10h2m

Well, I definitely wouldn't want Altman in charge of any superintelligence, so "I'm not sure" would be an improvement, if I expected an imminent superintelligence.

TeMPOraL
0 replies
9h20m

What if - hear me out - what if the firing is the doing of an AGI? Maybe OpenAI succeeded and now the AI is calling the shots (figuratively, though eventually maybe literally too).

ariym
0 replies
10h15m

what are you referring to

rdtsc
2 replies
13h46m

It was actually a great move. Unusual, but it goes with the mission and nonprofit idea. I think it was designed to draw attention and stir controversy on purpose.

kcb
1 replies
13h28m

Is it a winning move though? The biggest loser in this seems to be the company that was bankrolling their endeavor, Microsoft.

rdtsc
0 replies
9h43m

At this stage, no publicity is bad publicity. If they really believe they are in it to change the future of humanity, and the kool-aid got to their heads, might as well show it off by stirring some controversy.

Microsoft is bankrolling them but OpenAI probably can replace Microsoft easier than Microsoft can replace OpenAI.

hindsightbias
0 replies
9h46m

Not if the AGI was making the decision. It's a bit demanding to think the Professionalism LLM module isn't a bit hallucinatory in this age. Give it a few more years.

smharris65
9 replies
12h42m

Some insider details that seem to agree with this: https://www.reddit.com/user/Anxious_Bandicoot126/

dimal
2 replies
3h11m

Who is u/Anxious_Bandicoot126? Is there any reason to think this is actually a person at OpenAI and not some random idiot on the internet? They have no comment history except on this issue. Seems like BS.

hello_moto
1 replies
36m

No comment history except on this issue...

That's either 100% fishy or 100% insider.

Either BS or the person is an insider, no in-between.

ShamelessC
0 replies
0m

Is this sarcasm? The burden is on the person with the supposed claim to show they are trustworthy and reputable. What you're saying is basically "the coin shows heads 50% of the time, therefore there's a 50% chance they're an insider".

jzl
1 replies
10h43m

Wow, all the comments and responses to that person's comments are a gold mine. Not saying anything should be taken as gospel, either from that poster or the people replying. But certainly a lot of food for thought.

tucnak
0 replies
5h39m

Reads like LessWrong fan fiction.

ipaddr
1 replies
10h43m

Based on the number of comments in that time period, that is probably a fake insider.

marvin
0 replies
7h22m

Altman risking his role as CEO of the new industrial revolution for a book deal is implausible.

seydor
0 replies
8h21m

We can't trust what we read. But last year's "Altman World Tour", where he met so many world leaders, felt a bit over the top, and maybe it got into his head.

leobg
0 replies
9h8m

This was about stopping a runaway train before it flew off a cliff with all of us on board. Believe me, the board and I gave him tons of chances to self-correct. But his ego was out of control.

Don't let the media hype fool you. Sam wasn't some genius visionary. He was a glory-hungry narcissist cutting every corner in some deluded quest to be the next Musk.

That does align with Ilya’s tweet about ego being in the way of great achievements.

And it does align with Sam’s statements on Lex’s podcast about his disagreements with Musk. He compared himself to Elon’s SpaceX being bullied by Elon’s childhood heroes. But he didn’t seem sad about it - just combative. Elon’s response to the NASA astronauts distrusting his company’s work was “They should come visit and see what we’re doing”. Sam’s reaction was very different. Like, “If he says bad things about us, I can say bad things about him too. It’s not my style. But maybe I will, one day”. Same sentiment as he is showing now (“if I go off the board can come after me for the value of my shares”).

All of that does paint a picture where it really isn’t about doing something necessary for humanity and future generations, and more about being considered great. The odd thing is that this should get you fired, especially in SF, of all places.

nradov
6 replies
12h44m

The funny thing is that so far OpenAI has made zero demonstrable progress toward building a true AGI. ChatGPT is an extraordinary technical accomplishment and useful for many things, but there is no evidence that scaling up that approach will get to AGI. At least a few more major breakthroughs will probably be needed.

dr_dshiv
2 replies
11h13m

AGI is about definitions. By many definitions, it’s already here. Hence MSR’s “sparks of AGI” paper and Eric Schmidt’s article in Noema. But by the definition “as good or better than humans at all things”, it fails.

nradov
1 replies
11h7m

That "Sparks of AI" paper was total garbage, just complete nonsense and confirmation bias.

Defining AGI is more than just semantics. The generally accepted definition is that it must be able to complete most cognitive tasks as well as an average human. Otherwise we could just as well claim that ELIZA was AGI, which would obviously be ridiculous.

dr_dshiv
0 replies
1h47m

What specifically made it “garbage” to you? My mind was blown, if I'm honest, when I read it.

How do you compare ELIZA to GPT-4?

mensetmanusman
0 replies
12h39m

It’s impossible to predict.

No one predicted feeding LLMs more GPUs would be as incredibly useful as it is.

anon291
0 replies
11h3m

The funny thing is that so far OpenAI has made zero demonstrable progress toward building a true AGI. ChatGPT is an extraordinary technical accomplishment and useful for many things, but there is no evidence that scaling up that approach will get to AGI.

How can you honestly say things like this? ChatGPT shows the ability to sometimes solve problems it's never explicitly been presented with. I know this. I have a very little-known Haskell library. I have asked ChatGPT to do various things with my own library, which I have never written about online and which it could never have seen before. I regularly ask it to answer questions others send to me. It gets it basically right. This is completely novel.

It seems pretty obvious to me that scaling this approach will lead to the development of computer systems that can solve problems they've never seen before. Especially since it was not at all obvious from smaller transformer models that these emergent properties would come about by scaling parameter sizes... at all.

What is AGI if not problem solving in novel domains?

MVissers
0 replies
12h24m

No one knows, which makes this a classic scientific problem. Which is what Ilya wants to focus on, which I think is fair, given this aligns with the original mission of OpenAI.

I think it’s also fair that Sam starts something new with a for-profit focus from the get-go.

jdminhbg
2 replies
14h4m

So basically a confirmation, but with a slight disagreement on the vocabulary used to describe it.

davorak
1 replies
11h35m

I read it as Ilya Sutskever thinking the move stands on good non-profit governance grounds, and that does not match what "coup" often means: an unlawful seizure of power, or maybe an unprincipled/unreasonable seizure of power.

Ilya Sutskever seems to think this is a reasonable, principled move to seize power that is in line with the non-profit's goals and governance, but does not seem to care too much if you call it a coup.

username332211
0 replies
8h17m

That's just spin. Which coup hasn't been a "reasonable and principled move to seize power" according to its orchestrator?

Do you think Napoleon or Pinochet made speeches to the effect of "Yes, it was a completely unprincipled power-grab, but what are you going to do about it, lol?"

resource0x
1 replies
13h45m

No one in this company is "consistently candid" about anything.

ProllyInfamous
0 replies
13h21m

Yes, but Ilya is on the Board of Directors; and Sam is currently unemployed (although: not for long).

cm2012
0 replies
14h0m

Huge scoop.

anon291
0 replies
11h6m

Realistically, this reflects more poorly on Sutskever. No one wants to work with a backstabber. It's one thing to be like 'well we had disagreements so we decided to move on.' However the board claimed Altman lied. If it turns out the firing was due to strategic direction, no one would ever want to work with Sutskever again. I certainly would not. That's an incredibly defamatory statement about a man who did nothing wrong, other than have a professional disagreement.

rcpt
43 replies
14h21m

Wait, this is just a corporate turf war? That's boring, I already have those at work.

reducesuffering
42 replies
14h19m

No, this move is so drastic because Ilya, the chief scientist behind OpenAI, thinks Sam and Greg are pushing so hard on AGI capabilities, ahead of alignment with humanity, that it threatens everyone. 2/3 of the other board members agreed.

Don’t shoot the messenger. No one else has given you a plausible reason why Sama was abruptly fired, and this is what a reporter said of Ilya:

‘He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay area. They were leaving the room, saying, “Holy shit.”

The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.’

https://www.aipanic.news/p/what-ilya-sutskever-really-wants

gnulinux
21 replies
13h59m

Haha yeah no, I don't believe this. They're nowhere near AGI, even if it's possible at all to get there with the current tech we have, which is unconvincing. I don't believe professionals who work in the biggest AI labs are spooked by GPT. I need more evidence to believe something like that, sorry. It sounds a lot more like Sam Altman lied to the board.

aidaman
15 replies
13h40m

GPT 4 is not remotely unconvincing. It is clearly more intelligent than the average human, and is able to reason in the exact same way as humans. If you provide the steps to reason through any concept, it is able to understand it at human capability.

GPT 4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT 4 is human-level intelligence.
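
For what it's worth, the "provide the steps" pattern looks something like this with the OpenAI Python client (a sketch only; the system prompt and the puzzle are illustrative, not anything canonical):

    # Sketch of step-by-step ("chain of thought" style) prompting: ask the
    # model to state each intermediate conclusion before answering.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Work through problems step by step, stating each "
                        "intermediate conclusion before the final answer."},
            {"role": "user",
             "content": "A bat and a ball cost $1.10 together. The bat costs "
                        "$1.00 more than the ball. What does the ball cost?"},
        ],
    )
    print(resp.choices[0].message.content)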

cscurmudgeon
5 replies
13h30m

Sorry. Robust research says no. Remember, people thought ELIZA was AGI too.

https://arxiv.org/abs/2308.03762

If it were really AGI, there wouldn't even be ambiguity or room for comments like mine.

CamperBob2
3 replies
12h51m

As if most humans would do any better on those exercises.

This thing is two years old. Be patient.

cscurmudgeon
1 replies
11h9m

This comparison again lol.

As if most humans would do any better on those exercises.

That's not the point. If you claim you have a machine that can fly, you can't get around proving it by saying "mOsT hUmAns cAnt fly", as if the machine not flying were therefore irrelevant.

This thing either objectively reasons or it doesn't. It is irrelevant how well humans do on those tests.

This thing is two years old. Be patient.

Nobody is cutting off the future. We are debating the current technology. AI has been around for 70 years. Just open any history book on AI.

At various points since 1950, the gullible masses have claimed AGI.

CamperBob2
0 replies
8h52m

At various points since 1950, the gullible masses have claimed AGI.

Who's claiming it now? All I see is a paper slagging GPT4 for struggling in tests that no one ever claimed it could pass.

In any case, if it were possible to bet $1000 that 90%+ of those tests will be passed within 10 years, I'd be up for that.

(I guess I should read the paper more carefully first, though, to make sure he's not feeding it unsolved Hilbert problems or some other crap that smart humans wouldn't be able to deal with. My experience with these sweeping pronouncements is that they're all about moving the goalposts as far as necessary to prove that nothing interesting is happening.)

smoldesu
0 replies
9h6m

Transformer-based LLMs are almost a half-decade old at this point, and GPT-4 is the least efficient model of its kind ever produced (that I am aware of).

OpenAI's performance is not and has never been proportional to the size of their models. Their big advantage is scale, which lets them ship unrealistically large models by leveraging subsidized cloud costs. They win by playing a more destructive and wasteful game, and their competitors can beat them by shipping a cheaper competitive alternative.

What exactly are we holding out for, at this point? A miracle?

iamnotafish2
0 replies
13h6m

It’s not AGI. But I’m not convinced we need a single model that can reason to make super powerful general-purpose AI. If you can have a model detect where it can’t reason and pass off tasks appropriately to better methods or domain-specific models, you can get very powerful results. OpenAI is already on the path to doing this with GPT.
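
A toy sketch of that hand-off idea (everything here is invented for illustration: detect a task the model is weak at, here plain arithmetic, and route it to a better tool):

    # Toy hand-off: route arithmetic to a calculator, everything else to
    # the language model. All names are invented for illustration.
    import re

    def llm_answer(prompt: str) -> str:
        return f"[model's best guess for: {prompt}]"  # stand-in for a model call

    def calculator(expr: str) -> str:
        if not re.fullmatch(r"[0-9+\-*/(). ]+", expr):  # restrict eval to arithmetic
            raise ValueError("not a plain arithmetic expression")
        return str(eval(expr))

    def answer(prompt: str) -> str:
        m = re.search(r"[0-9+\-*/(). ]{3,}", prompt)
        if m and any(op in m.group(0) for op in "+-*/"):
            return calculator(m.group(0).strip())  # hand off to the domain tool
        return llm_answer(prompt)

    print(answer("What is 12 * (3 + 4)?"))  # -> 84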

morsecodist
2 replies
13h11m

These models can't even form new memories beyond the length of their context windows. It's impressive but it is clearly not AGI.
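
To make that concrete, here is a toy sketch (my own illustration, with a crude made-up token heuristic) of how a chat history gets trimmed to a fixed window; whatever falls out is gone unless it's restated or stored externally:

    # Toy sketch: "memory" limited to a fixed context window. Older
    # messages silently fall out once the budget is exceeded.
    CONTEXT_WINDOW = 4096  # tokens; the number is illustrative

    def rough_token_count(text: str) -> int:
        return max(1, len(text) // 4)  # crude ~4 chars/token heuristic

    def trim_history(messages: list[str], budget: int = CONTEXT_WINDOW) -> list[str]:
        kept, used = [], 0
        for msg in reversed(messages):  # newest first
            cost = rough_token_count(msg)
            if used + cost > budget:
                break  # everything older than this is forgotten
            kept.append(msg)
            used += cost
        return list(reversed(kept))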

MVissers
1 replies
12h36m

Neither can you without your short-term memory system. Or your long-term memory system in your hippocampus.

People who have lost those abilities still have a human level of intelligence.

morsecodist
0 replies
12h5m

Sure, people with aphasia lose the ability to form speech at all, but if ChatGPT responded unintelligibly every time, you wouldn't characterize it as intelligent.

staticman2
1 replies
13h23m

Fascinating. What do you make of the fact GPT 4 says you have no clue what you are talking about?

postalrat
0 replies
13h14m

How does it feel, knowing you are arguing against a GPT-4 bot?

SkyPuncher
1 replies
13h36m

The only thing GPT 4 is missing is the ability to recognize it needs to ask more questions before it jumps into a problem.

When you compare it to an entry level data entry role, it's absolutely AGI. You loosely tell it what it needs to do, step-by-step, and it does it.

dekhn
0 replies
12h23m

This sort of property ("loosely tell it what it needs to do, step-by-step, and it does it.") is definitely very exciting and remarkable, but I don't think it necessarily constitutes AGI. I would say instead it's more an emergent property of language models trained on extremely large corpora that contain many examples that, in embedding space, aren't that far from what you're asking it to do.

I don't think LLMs have really demonstrated anything interesting around generalized intelligence, which, although a fairly abstract concept, can be thought of as being able to solve truly novel problems outside their training corpora. I suspect there still needs to be a fair amount of work improving the model design itself, the training data, and even the mental model of ML researchers, before we have systems that can truly reason in a way that demonstrates their generalized intelligence.

lossolo
0 replies
13h11m

Well, if it's so smart, then maybe it will finally learn to count someday.

https://chat.openai.com/share/986f55d2-8a46-4b16-974f-840cb0...

haolez
0 replies
13h20m

I kind of agree, but at the same time we can't be sure of what's going on behind the scenes. It seems that GPT-4 is a combination of several huge models with some logic to route requests to the most apt models. Maybe an AGI would make more sense as a single, more cohesive structure?

Also, the fact that it can't incorporate knowledge at the same time as it interacts with us kind of limits the idea of an AGI.

But regardless, it's absurdly impressive what it can do today.

spacemadness
2 replies
13h14m

We barely understand the human brain, but sure we’re super close to AGI because we made chat bots that don’t completely suck anymore. It’s such hubris. Are the tools cool? Undoubtedly. But come down to earth for a second. People have lost all objectivity.

totallywrong
0 replies
12h34m

I've been watching this whole hype cycle, completely horrified, from the sidelines. Those early debates right here on HN with people genuinely worried about an LLM developing consciousness and taking control of the world. Senior SWEs fearing for their jobs. And now we're just throwing the term AGI around like it's imminent.

MVissers
0 replies
12h38m

Objectively speaking, we’re talking exponential growth in both compute and capabilities year over year.

Do you have any data that shows that we’ll plateau any time soon?

Because if this trend continues, we’ll have superhuman levels of compute within 5 years.
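
Back-of-the-envelope version of that claim (the roughly six-month doubling time for frontier training compute is an outside estimate, and the assumption here):

    # Compound-growth arithmetic behind "if this trend continues".
    doubling_time_years = 0.5   # assumed doubling time for training compute
    years = 5
    growth = 2 ** (years / doubling_time_years)
    print(f"~{growth:.0f}x more compute in {years} years")  # -> ~1024x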

skwirl
0 replies
13h12m

I don't believe professionals who work in the biggest AI labs are spooked by GPT.

Then you haven't been paying any attention to them.

cm2012
0 replies
13h38m

It's like a religion for these people.

moralestapia
9 replies
14h15m

Bull. Shit.

OpenAI and its people are there to maximize shareholder value.

This is the same company that went from "non-profit" to "jk, lol, we are actually for-profit now". I still think that move was not even legal but rules for thee not for me.

They ousted sama because it was bad for business. Why? We may never know, or we may know next week, who knows? Literally.

sainez
6 replies
13h53m

It seems you are conflating OpenAI the non-profit, with OpenAI the LLC: https://openai.com/our-structure

moralestapia
5 replies
13h42m

No, that's the whole point, "AI for the benefit of humanity" and whatnot turned out to be a marketing strategy (if you could call it that).

lucubratory
4 replies
13h2m

That is what Ilya Sutskever and the board of the non-profit have effectively accused Sam Altman of in firing him, yes.

moralestapia
3 replies
12h52m

???

Source?

lucubratory
2 replies
12h34m

Kara's reporting on motive:

https://twitter.com/karaswisher/status/1725678074333635028?t...

Kara's reporting on who is involved: https://twitter.com/karaswisher/status/1725702501435941294?t...

Confirmation of a lot of Kara's reporting by Ilya himself: https://twitter.com/karaswisher/status/1725717129318560075?t...

Ilya felt that Sam was taking the company too far in the direction of profit seeking, more than was necessary just to get the resources to build AGI. Every bit of selling out puts more pressure on OpenAI to produce revenue and work for profit later, and risks AGI being controlled by a small powerful group instead of everyone. After OpenAI Dev Day, evidently the board agreed with him - I suspect Dev Day is the source of the board's accusation that Sam did not share with complete candour. Ilya may also care more about AGI safety specifically than Sam does - that's currently unclear, but it would not surprise me at all based on how they have both spoken in interviews. What is completely clear is that Ilya felt Sam was straying so far from the mission of the non-profit, safe AGI that benefits all of humanity, that the board was compelled to act to preserve the non-profit's mission. Expelling him and re-affirming their commitment to the OpenAI charter is effectively accusing him of selling out.

For context, you can read their charter here: https://openai.com/charter and mentally contrast that with the atmosphere of Sam Altman on Dev Day. Particularly this part of their charter: "Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."

moralestapia
1 replies
12h25m

I saw those tweets as well. Those are rumours at this point.

The only thing that is real is the PR from OpenAI and the "candid" line is quite ominous.

sama brought the company to where it is today; you don't kick out someone that way just because of misaligned interests.

I'm on the side that thinks sama screwed up badly, putting OpenAI in a (big?) pickle, and that breaking ties with him asap is how they're trying to cover their ass.

lucubratory
0 replies
12h10m

They're not rumours; they are reporting from the most well-known and credible tech journalist in the world. The whole point of journalism and her multi-decade journalistic career is that when she reports something like that, we can trust that she has verified with sources who would have actual knowledge of events that it is the case. We should always consider the possibility that her sources were wrong, but that's incredibly unlikely now that Ilya gave an all-hands meeting (which I linked) that confirmed a majority of this reporting.

reducesuffering
1 replies
14h8m

OpenAI and its people are there to maximize shareholder value

Clearly not, as Sama has no equity, and a board of four people with little, if any, equity just unilaterally decided to upend their status quo and assured money printer, to the bewilderment of their $2.5T 49% owner, Microsoft.

moralestapia
0 replies
13h47m

as Sama has no equity

Yeah and he got sacked.

aliston
2 replies
13h35m

ChatGPT blew the doors open on the AI arms race. Without Sam leading the charge, we wouldn't have an AI boom. We wouldn't have Google scrambling to launch catch up features. We wouldn't have startups raising 100s of millions, people talking about a new industrial revolution, llama (2), all the models on hugging face or any of the other crazy stuff that has come about in the past year.

Was the original launch of ChatGPT "safe?" Of course not, but it moved the industry forward immensely.

Swisher's follow up is even more eyebrow raising: "The developer day and how the store was introduced was in inflection moment of Altman pushing too far, too fast. My bet: He’ll have a new company up by Monday."

What exactly from the demo day was "pushing too far?" We got a DALL-E API, a larger context window, and some cool stuff to fine-tune GPTs. I don't really see anything there that is too crazy... I also don't get the sense that Sam was cavalier about AI safety. That's why I am so surprised that the apparent reason for his ousting appears to be a boring, old, political turf war.

My sense is that there is either more to the story, or Sam is absolutely about to have his Steve Jobs moment. He's also likely got a large percentage of the OpenAI researchers on his side.

m_ke
1 replies
13h24m

ChatGPT was definitely not some visionary project led by Sam. They had a great LLM in GPT-3 that was hard to use because it wasn't instruction tuned, so the research team did InstructGPT and then took it even further and added RLHF to turn it into a proper conversational bot. The UI was a hacky interface on top of it that definitely got way more popular than they expected.

aliston
0 replies
13h18m

I don't know if it was led by Sam, and don't dispute that it may have been "hacky," but there is no denying it was a visionary project.

Yes, other companies had similar models. I know Google, in particular, already had similar LLMs, but explicitly chose not to incorporate them into its products. Sam / OpenAI had the gumption to take the state of the art and package it in a way that it could be interacted with by the masses.

In fact, thinking about it more, the parallels with Steve Jobs are uncanny. Google is Xerox. ChatGPT is the graphical OS. Sam is...

drcode
1 replies
13h58m

Let's not put the cart before the horse

Even if they say this was for safety reasons, let's not blindly believe them. I am on the pro safety side, but I'm gonna wait till the dust settles before I come to any conclusions on this matter.

Exoristos
0 replies
13h26m

Let's not confuse ChatGPT "safety" with the meaning of the word in most contexts. This is about safely keeping major media of all kinds aligned with the messaging of the neoliberal political party of your locale.

cactusplant7374
1 replies
14h9m

They aren’t anywhere close to AGI. It’s a joke at this point.

mcmcmc
0 replies
14h5m

An ego battle really

swatcoder
0 replies
13h57m

From what's come out so far, it reads more like he thinks they're pushing too hard too fast on commercialization, not AGI. They're chasing profit opportunities at market instead of fulfilling the board's non-profit mission to build safe AGI.

LZ_Khan
0 replies
13h47m

I trust Ilya a hell of a lot more than Altman. Ilya is a scientist through and through, not a grifter.

1letterunixname
0 replies
13h51m

This is hand-wringing moral panic by nontechnical people who fear the unknown and don't understand AI/DL/ML/LLMs. It's shriekingly obvious no one sane will intentionally build "SkyNet", nor can they for decades.

SilverSlash
29 replies
13h45m

My biggest question is: If Sam Altman starts a new company by next month, and he and Greg Brockman know all the details of how GPT-4/5 work, then what will this mean for OpenAI's dominance and lead?

gkanai
10 replies
13h33m

Even if sama and gdb raised $10B by early 2024, all of the GPU production capacity is already allocated years out. They'd have to buy some other company's GPUs at insane markups. And that's only on the hardware side.

andrewstuart
5 replies
13h28m

Jensen will take Sam's calls in a heartbeat and personally ensure he has what he needs.

reactordev
2 replies
13h8m

Exactly. Entire city blocks will be cleared for Sam. Anything he needs. Just give him a road.

manyoso
1 replies
12h39m

Pure hero worship based on nothing. Dude got himself fired and the board accused him of lying.

reactordev
0 replies
11h39m

I agree; however, he'll have no problem finding another vehicle.

threestar3
0 replies
9h36m

He can't. That capacity is sold. He is not going to get his company sued for a breach of contract for a personal favor.

sbrother
0 replies
12h28m

As will Lisa Su. This is going to be quite a ride.

icelancer
3 replies
13h20m

Jensen/CoreWeave/Lambda/etc will ensure sama gets what he needs.

user432678
2 replies
12h42m

Yeah, and then they will do what? Type in the training data from memory? Run stolen Python scripts? How exactly is this hardware supposed to be used?

icelancer
1 replies
12h38m

What do you think Brockman did as co-founder of OpenAI, exactly?

threestar3
0 replies
9h37m

Things have changed a lot. Companies have locked down their data a lot in the last year, e.g. Reddit and Twitter.

Even if things hadn't changed, OpenAI has been building their training set for years. It is not something they can just whip up overnight.

cj
4 replies
13h37m

In other words, if SamA did it once, would $50 billion in funding enable him to do it a second time?

strikelaserclaw
1 replies
12h19m

Well to be considered a genius in the ranks of Steve Jobs, you need to succeed more than once. If he can't do it a second time, then he'd be known as the guy who fails upward.

cocacola1
0 replies
11h19m

Well, to be considered a genius like Steve jobs, you eventually need to return to the company you left – or were ousted from – when it's on the precipice of defeat and then proceed to turn it around.

manyoso
0 replies
12h38m

Or maybe he was a "manager" who took the credit

blovescoffee
0 replies
12h5m

He's not going to get $50 billion in funding

oivey
3 replies
13h19m

Wasn’t he more of a business guy while Ilya was the engineer? I really doubt a random VC guy is going to really know much about the specific, crucial details the engineering team knows.

naremu
2 replies
12h28m

You know, I'm sure Sam Altman is a really smart guy for real.

But to be honest, the impression I've gathered is that he's largely a darling to big Y Combinator names, which led him quite rapidly, dick first, into the position he's found himself in today: a self-proclaimed prepper who starts new crypto coins (post-Dogecoin, even), talks about how AI that isn't his AI should be regulated by the government, and makes vague analogies about his AI being "in the sky", all while turning a goal formerly announced as non-profit into a for-profit LLC that overtly reminds everyone at every turn that it takes no liability, do not sue.

I'm not really sure to be surprised, or entirely unsurprised.

I mean, he probably knows more code than Steve Jobs? But I suppose GPT probably knows more code than he does. Maybe he really is using the GeniePT as his guide throughout life on the side.

strikelaserclaw
0 replies
12h22m

Apparently Sam's idol growing up was Steve Jobs so this checks out.

oivey
0 replies
11h51m

I’m sure he’s not a dumb guy, just disposable relative to OpenAI’s engineering team. I doubt he’s a Jobs-like, indispensable visionary, either.

capableweb
3 replies
13h35m

Is OpenAI's current success attributed more to its excellent business and startup management, or does it stem from its superior technology and research that surpasses what others have developed?

MVissers
2 replies
12h6m

Both IMO.

The first leads to attracting world-class talent that can do the second. Until you go off the rails and the second kicks you out, it seems.

capableweb
1 replies
5h33m

You don't think already starting with world-class talent (Sutskever, Karpathy, Zaremba, and more being part of the founding team) led to OpenAI being able to get more world-class talent, rather than world-class talent joining because of Altman?

hello_moto
0 replies
27m

Yeah. I don't care who Altman is, because he isn't the technical leader from a researcher's perspective.

Altman is a CEO golden boy for techbros.

ryanSrich
1 replies
10h37m

I think Sam and Greg could build something similar to what ChatGPT is today, and maybe even get close to GPT-4, but going beyond that seems like a stretch. Ilya is really the one that’s needed, and clearly he does not see eye to eye with Sam. Another world-class AI researcher at the level of Ilya would have to step in, and I’m not even sure that person exists.

slekker
0 replies
10h27m

I think Karpathy could qualify

summerlight
0 replies
13h3m

I don't think that specific knowledge means that much. The landscape is changing at a crazily fast pace. 3~4 years ago, Google was way ahead in terms of LLMs but has become an underdog after bleeding talent. It's even worse for that hypothetical new company. It needs at least several months to implement GPT-4-like models, and by that time Sam will have lost most of his advantages. And we don't know whether the new company will have a large enough pool of world-class talent to push the technology to be competitive. To win the competition again, Sam would need more than just some internal knowledge about GPT-4 or whatever models.

dragonwriter
0 replies
9h48m

If Sam Altman starts a new company by next month, and he and Greg Brockman know all the details of how GPT-4/5 work, then what will this mean for OpenAI's dominance and lead?

Well, if two top level officers dismissed from top posts at OpenAI go and take OpenAI's confidential internal product information and use it to try and start a new, directly competing, company, it means that OpenAI's lawyers are going to be busy, and the appropriate US Attorney's office might not be too far behind.

anon291
0 replies
11h0m

We all know, essentially, how GPT-4/5 work. You can easily run a GPT-capable model with a few GPUs in the cloud. The secret sauce is the training data, which OpenAI owns.
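
As a rough sketch of what running such a model looks like with the open-source stack (the model named here is just one openly available example, not anything of OpenAI's; assumes the transformers and accelerate packages):

    # Sketch: load and run an open chat model across available GPUs.
    # Requires the `transformers` and `accelerate` packages.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # example open model
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer("Explain what a context window is.",
                       return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))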

convexstrictly
26 replies
14h26m

Followup tweet by Kara:

Dev day and store were "pushing too fast"!

https://twitter.com/karaswisher/status/1725702612379378120

woeirua
21 replies
14h9m

This isn’t believable. You don’t fire a CEO and put out a press release accusing them of lying over Dev day. Unless he told them he wasn’t going to announce it and then did.

brigadier132
15 replies
13h59m

Reading about Ilya, it seems like he is fully bought into AI hysteria.

strikelaserclaw
11 replies
13h57m

he seems like a more credible source than random people with no real ML experience

brigadier132
5 replies
13h17m

Why do you believe "real ml experience" qualifies someone to speculate about the impact of what is currently science fiction technology on society?

chpatrick
4 replies
13h4m

It's rapidly turning into science fact, unless you've been living under a rock the last year.

brigadier132
3 replies
12h44m

Science fiction or not, saying this person's opinion matters more because they have a better understanding of how it works is like saying automotive engineers should be considered experts on all social policy regarding automobiles.

Also it's not "rapidly turning into fact". There are still massive unsolved problems with AGI.

chpatrick
2 replies
12h32m

I think the guy running the company that's gotten closest to AGI, one of the top experts in his field, knows more about what the dangers are, yes. Especially if they have something even scarier that they're not telling people.

brigadier132
1 replies
12h21m

There is no secret "scary" AGI hidden in their basement. Also, speculating about the "damage" true AGI could cause is not that difficult and does not require a PhD in ML.

chpatrick
0 replies
12h12m

How would we know? They sat on GPT4 for 8 months.

reducesuffering
4 replies
13h25m

That’s the funny thing, isn’t it? Hinton, Bengio, and Sutskever, the chief scientist behind OpenAI's tech, all have strong opinions one way, but HN armchair experts handwave it away as fear mongering. Reminds me of climate change deniers. People just viscerally hate staring down upcoming disasters.

rchaud
1 replies
13h14m

Not surprising when you consider the volume of posts on GPT threads hand-wringing about "free speech" because the chatbot won't use slurs.

mardifoufs
0 replies
11h15m

The only hand wringing is coming from white privileged liberals from SV who absolutely cannot fathom that the rest of the world does not want them to control what AI can and cannot say.

You can try framing it as some sort of "bad racists" versus the good and virtuous gatekeepers, but the reality is that it's a bunch of nerds with sometimes super insane beliefs (the SF AI field is full of effective altruists who think AI is the most important issue in the world, and weirdos in general) that will have an outsized control over what can and can't be thought. It's just good old white saviorism but worse.

Again, saying "stop caring about muh freeze peach!!" just doesn't work coming from one of the most privileged groups in the entire world (AI techbros and their entourage). Not when it's such a crucial new technology.

mardifoufs
0 replies
11h26m

And I can cite tons of other AI experts who disagree with that. Even the people you listed have a much more nuanced opinion compared to the batshit insane AI doomerism that is common in some circles. So why compare it to climate change, which has an overwhelming scientific consensus? That's quite a dishonest way to frame the debate.

ianbutler
0 replies
13h5m

Yann LeCun is a strong counter to the doomerism, as one example. Jeremy Howard is another. There are plenty of high-profile, distinguished researchers who don't buy into that line of thinking. None of them are eschewing safety concerns that take into account the realities of how the technology can be used, but they aren't running the "AI will kill us all" line up the flagpole.

wmf
2 replies
13h55m

Did he buy into it after he took the money from Microsoft? Because it seems like there's no turning back after that point.

hollerith
1 replies
13h22m

That must be it because obviously no knowledgeable person could honestly come to believe that the technology is dangerous.

wmf
0 replies
13h0m

I think he's telling the truth about his beliefs. But if he always believed that AI is dangerous they should have never done the Microsoft deal.

yieldcrv
1 replies
14h5m

I don't get the impression that everyone involved was that mature, given Greg's tweet.

I get the clout everyone has, but this was supposed to be a non-profit that had already been, in effect, coup d'état-ed into a for-profit that grew extremely quickly into uncharted territory.

This isn't a multi-decade-old Fortune 500 company with a mature C-suite and board; it just masquerades as one with a stacked deck, which apparently is part of the problem.

Cacti
0 replies
13h40m

Right?! Anyone paying attention back when Sam was brought on is not surprised. Sam and his investors _were_ the coup. They took an org specifically set up to do open research for the good of humanity, and Sam literally did the opposite. He monetized the work, sold it, didn’t reinvest as promised, reduced transparency, and put the weights under lock and key. He rode in after the hard work had been done, took credit for it, and sold it to lol the fucking Borg of all people.

And many people here who should know better fell for it.

aliston
1 replies
13h23m

This is actually a semi-plausible angle. Given Sam's personality, I could see a scenario where there was disagreement about whether something in particular would be announced at demo day. He may have told some people he would keep it under wraps, but ended up going forward with it anyway.

I don't understand how that escalates to the point that he gets fired over it, though, unless there was something deeper implied by what was announced at demo day.

Edit: There's a rumor floating around that "it" was the GPT store and revenue sharing. If that's the case, that's not even remotely a safety issue. It's just a disagreement about monetization, like how Larry and Sergey didn't want to put ads on Google.

woeirua
0 replies
12h9m

It’s not a big enough issue for a normal board to fire the CEO over. Now maybe Ilya made a power play as a result, but that would be insane.

lumost
0 replies
13h55m

An alternate possibility would be that OpenAI faces core technical challenges delivering Dev Day features/promises. If deals were signed, they could be forced to deliver even if the board et al. weren't aligned on investment.

skepticATX
2 replies
13h57m

A GPT app builder is pushing too fast for Ilya?

SilverSlash
1 replies
13h51m

Not the app builder but the app store and revenue sharing, if rumors are to be believed.

silenced_trope
0 replies
13h17m

That doesn't pass the smell test.

Those seem like implementation details; really strange.

noahjk
0 replies
14h18m

Seems unreasonable to make such a powerful decision over feature releases. Doesn't pass the smell test to me.

andrewstuart
26 replies
13h51m

If Ilya, unknown-to-anyone, thinks people prefer him to Altman, he has another thing coming. I'm not an Altman fanboy, but anyone can see Altman is a rockstar, and right or wrong, that matters HUGELY to OpenAI.

If it's truly about a power play then this will be undone pretty quick, along with the jobs of the people who made it happen.

Microsoft has put a vast fortune into this operation and if Satya doesn't like this then it will be changed back real fast, Ilya fired and the entire board resign. That's my prediction.

rirarobo
13 replies
13h38m

Sorry, what do you mean by "unknown-to-anyone"?

Ilya is a co-founder of OpenAI, the Chief Scientist, and one of the best known AI researchers in the field. He has also been touring with Sam Altman at public events, and getting highlights such as this one recently:

https://youtu.be/9iqn1HhFJ6c

andrewstuart
8 replies
13h27m

Altman is virtually a household name. Relative to that - Ilya is unknown.

rirarobo
3 replies
12h27m

I see, thanks for clarifying. I agree that Ilya is relatively lesser known publicly, but in the grander scheme of things I don't think Altman is really that well known either.

I mean, anecdotally, most non-tech friends and family I know probably have heard of ChatGPT, but they don't know any of the founders or leadership team at OpenAI.

On the other hand, since I work in the field, all of my AI research friends/colleagues would know Ilya's work, and probably think of Sam more as a business guy.

In that sense, as far as attracting and maintaining AI researcher talent, I think it's arguable that people would prefer Ilya to Sam.

andrewstuart
2 replies
12h2m

> in the grander scheme of things I don't think Altman is really that well known either.

Wall Street Journal front page, top item right this minute: "Sam Altman Is Out at OpenAI After Board Skirmish"

Times Of London front page, right this minute: "Sam Altman sacked by OpenAI after directors lose confidence in him"

The Australian front page, right now: "OpenAI pushes out co-founder Sam Altman as CEO"

MSNBC front page, right now: "OpenAI says Sam Altman exiting as CEO, was 'not consistently candid' with board"

That's his name right there, front page news around the world - they assume people know his name, that's why they put it there.

rirarobo
0 replies
10h16m

Like I said, I agree that Sam Altman, relatively speaking, is better known than Ilya Sutskever to the general public. Although, as other users have replied, this isn't necessarily the same as being a household name.

In any case, I feel like we largely agree, so I'm confused as to why your reply focused solely on this small detail, in a rather condescending manner, while missing my larger point about retaining and attracting AI talent.

meepmorp
0 replies
10h35m

How do those headlines assume readers know who Sam Altman is? All of them tell you the company he was fired from and half tell you he was CEO. If anything, they assume the reader doesn't know who he is.

If I asked my mom who Sam Altman was, she'd have no idea. Most of my friends wouldn't either, even some who work in tech. Having one's name in headlines isn't the same as being a household name.

totallywrong
0 replies
12h22m

I couldn't have told you the name of anyone at OpenAI until this news, and I come in here every day.

strikelaserclaw
0 replies
12h34m

Altman is NOT a household name. ChatGPT is in the western world to some extent.

staticman2
0 replies
13h0m

Altman is a heck of a lot less famous than ChatGPT, so if fame is the issue, OpenAI seems fine?

bart_spoon
0 replies
10h48m

That seems like an incredibly foolish measure of credibility. Donald Trump and Taylor Swift have far greater name recognition than Altman, and yet they aren’t going to be leading the AI revolution.

OpenAI is where it is because its models are much, much better than the alternatives and pretty much always have been since their inception, not because of anything on the business side. The second alternative or open source models reach parity, they will start shedding customers. Their advantage is entirely due to their R&D, not anything on the business side.

adastra22
3 replies
12h5m

Which has very little to do with OpenAI’s success. It’s not enough to make a new technology, as too many tech-focused entrepreneurs have found out. You have to find product-market fit, manage suppliers and customers, and negotiate deals.

rirarobo
2 replies
11h31m

Typically I would agree, but in the case of OpenAI, they were themselves blindsided when their free conversational LLM demo, ChatGPT, went viral less than a year ago now.

It is a rare counter-case, where a tech-focused research demo, without any clear "product-market fit, suppliers, or customers", became a success almost overnight, to the surprise of its own creators.

The early days were people playing around with ChatGPT just to see what it could do. All the market fit, fine tuning, and negotiation of deals came later.

Of course, OpenAI capitalized on that initial success very skillfully, but Ilya was the critical, world-renowned AI researcher who had a lot to do with enabling OpenAI's initial success.

adastra22
1 replies
11h28m

Of course, OpenAI capitalized on that initial success very skillfully

That’s the key point there. Without leadership talent to capitalize on success, technical advances are for naught.

But also, GPT had been around for some years before ChatGPT. The model used in ChatGPT was an improvement in many ways and I don’t mean to diminish Ilya’s contribution to that, but it is the packaging of the LLM into a product that made ChatGPT a success. I see more of Sam’s fingerprints on that than Ilya’s.

rirarobo
0 replies
10h33m

Agreed, both were critical for their success, the underlying LLM technology, and the vision and leadership to package the tech into ChatGPT.

However, my original comment on this thread was simply to point out that Ilya is not "unknown-to-anyone", but a world renowned AI researcher and a core part of OpenAI's team and their success. Your reply implied that Ilya "has very little to do with OpenAI’s success", which I thought undersells his importance.

dannykwells
7 replies
13h47m

Completely agree. Sam is on the phone with Satya as we speak.

The alternative is that Sam goes in-house to MS, who already have all the weights of GPT-4, and builds again, though constrained by any existing charter.

andrewstuart
6 replies
13h43m

The most likely outcome at this stage IMO is Sam will start a new thing with a huge equity stake and just do it again.

dinobones
5 replies
13h5m

Sam is not an engineer. He can't do it without Ilya or Ilya-lite. And there are like 4 of those in the world.

andrewstuart
3 replies
12h58m

>He can't do it without Ilya or Ilya-lite.

The wrongest thing I've read on HN for a long while.

The world has a lot more smart people in it than you realise, and Sam's rockstar profile gives him direct access to them.

dinobones
1 replies
12h38m

How many people do you think are capable of pushing the state of the art in AI research?

There is a massive amount of tooling and infrastructure involved. You can't just get some Andrew Ng Coursera guy off the street and buy 50,000 H100s at your local Fry's Electronics. I wouldn't be surprised if there aren't even enough GPUs in the world for Altman to start a competitor in a reasonable amount of time.

I stand by my number, there are like 4 people in the world capable of building OpenAI. That is, a quality deep learning organization that pushes the state of the art in AI and LLMs.

Maybe you can find ~1,000 people in the world who can build a cheap knock-off that gets you to GPT-3 (pre-instruct) performance after about two years. But even that is no trivial effort.

dannykwells
0 replies
12h15m

Did you see Greg Brockman also quit? Who do you think has contributed more to the OpenAI code base?

bart_spoon
0 replies
10h43m

I agree that it's far from a one-man show at OpenAI, but on the other hand, megacorps full of many of the smartest, best-compensated research scientists and engineers haven't been able to touch OpenAI at this point, even with much greater resources. There is a significant advantage that OpenAI has built for themselves with their research and development.

bushbaba
0 replies
12h45m

And Ilya can't do anything without lots of funding, which is given on the premise of future profits.

Imnimo
1 replies
13h37m

People don't know who Ilya is?

andrewstuart
0 replies
13h24m

Do a survey of ordinary smart people you know: ask if they have heard of Sam Altman. Then ask if they know Ilya.

davorak
0 replies
11h48m

If it's truly about a power play then this will be undone pretty quick, along with the jobs of the people who made it happen.

The board made this decision to fire Altman, and they are the captain of the ship.

if Satya doesn't like this then it will be changed back real fast, Ilya fired and the entire board resign. That's my prediction.

MS does not own OpenAI; if the board does not want Satya to have a say, Satya does not have a say. MS/Satya could throw lawyers at the issue and try to find a crack where the board has violated the law and/or their own rules. The key is they can try, but MS/Satya have no immediate levers of power to enforce their will.

1letterunixname
0 replies
13h48m

It sounds like someone neutral from MSFT leadership needed to moderate and bring people together before the wheels fell off.

Bjorkbat
25 replies
13h40m

I have a hard time believing this simply since it seems so ill-conceived. Sure, maybe Sam Altman was being irresponsible and taking risks, but they had an insanely good thing going for them. I'm not saying Sam Altman was responsible for the good times they were having, but you're probably going to bring them to an end by abruptly firing one of the most prominent members of the group, seeing where individual loyalties lie, and pissing off Microsoft by tanking their stock price without giving them any heads up.

I mean, none of this would be possible without insane amounts of capital and world class talent, and they probably just made it a lot harder to acquire both.

But what do I know? If you can convince yourself that you're actually building AGI by making an insanely large LLM, then you can also probably convince yourself of a lot of other dumb ideas too.

hilux
11 replies
13h32m

Reading between lots of lines, one possibility is that Sam was directing this "insanely good thing" toward making lots of money, whereas the non-profit board prioritized other goals higher.

Bjorkbat
7 replies
13h22m

Sure, I get that, but to handle a disagreement over money in such a consequential fashion just doesn't make sense to me. They must have understood that to arrive in a position where they have to fire the CEO with little warning is going to have profound consequences, perhaps even existential ones.

015a
6 replies
12h28m

AGI is existential. That's the whole point, I think. If they can get to AGI, then building an LLM app store is such a distraction along the path that any reasonable person would look back and laugh at how cute an idea it was, despite how big or profitable it feels today.

js8
3 replies
10h15m

It's a distraction only if you are not an effective altruist. To build AGI (so that all humans can benefit) you need money, so this was a way to make money so that it could FINALLY be spent on the goal of AGI. /s

I think the next AGI startup should perhaps try the communist revolution route, since the capitalist-based one didn't pan out. After all, Lenin was a pioneer in effective altruism. /s

timeon
2 replies
7h0m

Can '/s' after straw man sneak the message across?

js8
1 replies
5h57m

I am strawmanning effective altruism in the same way that effective altruism strawmans just plain old altruism.

OJFord
0 replies
3h51m

Ha, that's brilliantly put. I think the fundamental idea of EA is perfectly sound, but then instead of just being basic advice (for when 'doing altruism') it's somehow a cult?

lazide
0 replies
11h21m

None of that would explain why they accused him of lying to them.

disgruntledphd2
0 replies
5h49m

Sure, but you need money for compute to get to AGI, so selling stuff is a well accepted way of getting money.

justanotherjoe
0 replies
6h32m

You can definitely make the two goals work together. The only way for OpenAI to make money is to bring more powerful AI to everyone. What would a focus on making less money even mean? You just don't do that?

icelancer
0 replies
13h23m

Destined to repeat the failures of PARC.

anon291
0 replies
11h2m

Then you say 'the board has decided to part ways with Sam due to strategic disagreement', not 'he wasn't candid'. Not being candid can be a crime.

lotsofpulp
5 replies
12h57m

and pissing off Microsoft by tanking their stock price

When did Microsoft’s stock price tank?

https://finance.yahoo.com/quote/MSFT/

bushbaba
4 replies
12h48m

See after hours. Looks like down ~1.5%

lotsofpulp
1 replies
12h36m

That does not qualify as tanking. Stock prices move that much all the time.

https://www.google.com/finance/quote/MSFT:NASDAQ

OJFord
0 replies
6h56m

It's a >40B hit to market cap, supposedly caused by news from a company they've invested afaict $11B in.

I wouldn't call it 'tanking' either, but it's definitely not run of the mill, did make them rush out a statement on their commitment to investment and working with OpenAI.

meepmorp
0 replies
11h23m

It looks like it's only down 0.97% in after-hours trading.

astrange
0 replies
9h9m

Doesn't matter unless it lasts a lot longer than a day.

cuuupid
3 replies
13h19m

Tbh this reads a lot like Ilya thinking he's Tony Stark and his (still impressive) language model is somehow the same as an Iron Man suit. Which is arrogance to the point of ignorance; reality isn't that romantic.

I can only hope this doesn’t turn into OpenAI trying to gatekeep multimodal models or conversely everyone else leaving them in the dust.

Cacti
1 replies
12h52m

You seem to have confused the two. Sam's entire reason for being there was to decrease transparency, make open research proprietary, and monetize it.

MVissers
0 replies
12h13m

Don't forget regulatory capture: lobbying Congress to decrease competition so only the deepest pockets can work on these things.

rirarobo
0 replies
12h49m

It's possible this will have the opposite effect.

Sam was the VC guy pushing gatekeeping of models and building closed products and revenue streams. Ilya is the AI researcher who believes strongly in the nonprofit mission and open source.

Perhaps, if OpenAI can survive this, they will actually be more open in the future.

ribosometronome
0 replies
12h49m

I mean, none of this would be possible without insane amounts of capital and world class talent, and they probably just made it a lot harder to acquire both.

By seemingly siding with staff over the CEO's desire to go way too fast and break a lot of things? I'd think that world-class talent hearing they might be able to go home at night, because the CEO isn't intent on having Skynet deployed tomorrow but next week instead, is more appealing than not.

dannykwells
0 replies
13h35m

Best response on this yet.

Cacti
0 replies
13h2m

Sometimes smart people make stupid decisions. It’s really that simple.

A young guy who is suddenly very rich, possibly powerful, and talking to the most powerful government on the planet on national TV? And people are surprised to hear this person might have let it go a little bit to their head, forget what their job was, and suddenly think THEY were OpenAI, not all the people who worked there? And comes to learn reality the hard way.

What’s to be surprised about? It’s the goddamned most stereotypically human, utterly unsurprising thing about this and it happens all. the. time.

A lot of people here really struggle with the idea that smart people are not inherently special and that being smart doesn’t magically absolve you from making mistakes or acting like a shithead.

gizmo
17 replies
14h11m

Ousting sama and gdb over something as petty as a simple strategy disagreement is totally unprofessional. sama got accused of serious misconduct. Even if he was too eager to commercialize OpenAI's tech, that doesn't come close to justifying this circus act.

dragonwriter
4 replies
10h16m

Ousting sama and gdb over something as petty as a simple strategy disagreement

A fundamental inability to align on what the mission set out in the charter of a 501(c)(3) charity means in real-world terms is not "a simple strategy disagreement"; moreover, the existence of a factional dispute over that doesn't mean that there wasn't serious specific misconduct in the context of that dispute over goals.

Sai_
3 replies
8h57m

The board questioned his “candid”-ness. This was not a difference of opinion on strategy.

tremon
1 replies
4h3m

Unless the board perceived his actions to be more in line with a different strategy than communicated.

kristianc
0 replies
3h22m

Yes, but the “candid” part carries the additional implication that he lied to make them think that.

dragonwriter
0 replies
1h12m

Candidness is a behavior question; the stories about what has been summarized as a difference of strategy (which, IMO, underestimates the fundamental difference being described) seem to be providing context for what is described as long-running internal tension that ultimately led to the firing, not whatever behavior may have been the proximate cause.

Mistletoe
4 replies
13h44m

Don’t you think it’s more likely you don’t know the whole story yet?

chipgap98
3 replies
13h29m
weare138
0 replies
8h29m

If the allegations concerning Sam are true then this could all be for damage control. It is in OpenAI's best interest that the information isn't released to the public, and it's in Sam's best interest to keep his mouth shut about it if the allegations are true. The timing and abruptness of everything is highly suspicious. Even Microsoft was out of the loop on this, which again is very strange if this was just an issue over corporate strategy and vision.

seattle_spring
0 replies
9h44m

Did you mean to link to a different tweet? I don’t see how what you linked “basically confirms” literally anything related to this. Can you spell it out for those of us that aren’t reading literally every rumor and gossip that’s popped up in the last 12 hours?

dragonwriter
0 replies
10h10m

There are plenty of indications about the nature of the disagreement, but that doesn't tell you what conduct did or did not occur as factions (including the one whose leading members have been ousted) sought to win the dispute.

jprete
3 replies
12h43m

Strategy disagreements are absolutely central reasons to fire executives.

KerrAvon
2 replies
12h10m

But you don't accuse them of lying on the way out because you have a strong disagreement. That's a guaranteed ticket to a very expensive lawsuit.

Either there’s more to it or the board is staffed by very naive people.

jprete
0 replies
1h39m

You’re right! I presume he materially misled them about lots of small product decisions, and the dev-day announcements were the last straw.

dragonwriter
0 replies
10h12m

But you don't accuse them of lying on the way out because you have a strong disagreement.

You do if part of the way they attempted to win the internal power struggle resulting from the disagreement was lying to the board, to keep actions that lacked majority support from being thwarted.

MVissers
2 replies
12h32m

This is not petty, it’s the integral mission of the company, the reason it was founded, the reason it got investors and the reason that many of the most brilliant scientists in the world work there.

They started as a non-profit ffs.

brysonreece
1 replies
11h19m

And they still are. OpenAI consists of two parts: a non-profit entity which owns the IP, and the obvious commercialization-focused subsidiary of the company.

My question is: what was stopping both parties here from pursuing parallel paths? Have the non-profit/research-oriented arm continue to focus on solving AGI, backed by the funds raised from their LLM offerings. Were potential roadmaps really that divergent?

I had always assumed this was their internal understanding up until now, since at least the introduction of ChatGPT subscriptions.

edgyquant
0 replies
9h27m

Because it really seems like the for-profit side was building towards a Microsoft acquisition.

throwaxcvxvzxcv
16 replies
13h33m

These Kara Swisher tweets align extremely closely with the following comment from pseudonymous Reddit user Anxious_Bandicoot126, posted 4 hours ago: https://www.reddit.com/r/OpenAI/comments/17xoact/comment/k9p...

I feel compelled as someone close to the situation to share additional context about Sam and company.

Engineers raised concerns about rushing tech to market without adequate safety reviews in the race to capitalize on ChatGPT hype. But Sam charged ahead. That's just who he is. Wouldn't listen to us.

His focus increasingly seemed to be fame and fortune, not upholding our principles as a responsible nonprofit. He made unilateral business decisions aimed at profits that diverged from our mission.

When he proposed the GPT store and revenue sharing, it crossed a line. This signaled our core values were at risk, so the board made the tough decision to remove him as CEO.

Greg also faced some accountability and stepped down from his role. He enabled much of Sam's troubling direction.

Now our former CTO, Mira Murati, is stepping in as CEO. There is hope we can return to our engineering-driven mission of developing AI safely to benefit the world, and not shareholders.

---

The entire Reddit thread is full of interesting posts from this apparently legitimate pseudonymous OpenAI insider talking candidly.

adastra22
6 replies
13h20m

Or that Reddit post which has been circulating widely was the source for Kara’s tweets.

mvkel
4 replies
13h16m

Mmmmm no. Kara is a bonafide journalist. Her sources are v real and authenticated for the things she shares.

fsckboy
3 replies
13h14m

Authentication means finding a second source. This Reddit comment could be primary; armed with that, it's not hard for Kara to find somebody inside to say "yes, accurate." That is real journalism.

mvkel
2 replies
12h42m

Not according to Kara Swisher it isn't.

https://x.com/maggienyt/status/1578074773174771712?s=46&t=k_...

I know it's convenient to dUnK on journalism these days but this is Kara Fucking Swisher. Her entire reputation is on the line if she gets these little things wrong. And she has a hell of a reputation

mvkel
0 replies
12h26m

Update: now confirmed by the parties involved

fsociety
0 replies
9h59m

She is the reputation.

bitexploder
0 replies
13h14m

I think the phrasing in the comment you replied to is perfect. "Aligns" is correct; it doesn't really say which direction information flowed. Anyway, it is interesting how deeply embedded tech journalists get in these issues. She does seem to be pretty legitimate.

k2xl
4 replies
13h12m

That account actually sounds like it is generating text via ChatGPT: zero spelling mistakes, and it keeps things super vague.

rmwaite
0 replies
12h58m

You've gotta be kidding me. No spelling mistakes means something was generated by ChatGPT now? Good grief.

nine_k
0 replies
12h20m

Prompt: "Retell the following text in a neutral, dispassionate, polite tone, blunting any strong statements, and correcting any spelling mistakes along the way."

cjak
0 replies
12h56m

I refuse to accept that the lack of spelling mistakes and the careful use of language is evidence of AI. I can be sloppy, but I've known Real Humans to take great care with their written language.

36933
0 replies
12h58m

https://www.reddit.com/r/OpenAI/comments/17xoact/comment/k9p...

„im not at liberty to say, but im very close. i dont want to give to many details.“
mifeng
1 replies
13h3m

Who in the world could Anxious_Bandicoot126 be???

"The board and myself were lied to one too many times."

mifeng
0 replies
13h1m

"All my teams were raising red flags about needing more time but he didn't give a damn."

fakedang
0 replies
12h50m

All values appreciated by the backers of this forum, to be fair.

adamsb6
0 replies
12h53m

I'm very curious what definition of safety they were using. Maybe that's where the disagreement lies.

AI safety of the form "it doesn't try to kill us" is a very difficult but very important problem to be solved.

AI safety that consists of "it doesn't share true but politically incorrect information (without a lot of coaxing)" is all I've seen as an end user, and I don't consider it important.

OscarTheGrinch
16 replies
10h11m

Here's my preferred theory; it's a tale as old as time. Sam Altman, like Icarus, flew too close to Microsoft's giant pot of money. He pivoted the company away from its founding mission, unleashing the very djinn they originally set out to harness. Turns out there were people at OpenAI who really believed in the original vision.

dwd
8 replies
9h48m

Nicely put.

The original vision is pretty clear, and a compelling reason to not screw around and get sidetracked, even if that has massive commercialisation upside.

Thankfully M$ didn't have control of the board.

two_in_one
7 replies
9h9m

Thankfully M$ didn't have control of the board

You never know. Remember Nokia?

smilespray
6 replies
9h6m

That still pisses me off.

fsloth
5 replies
7h56m

The downfall of Nokia phones was seeded in its management culture. With one message, their CEO Elop Osborned the market for Nokia phones faster than you can say "burning platform"; the house of cards had been built on top of strong early brand history and increasingly commoditized radio technology, and Microsoft basically paid billions for an offering that would have required a heroic, legendary pivot (which would have been possible with the talent and tech still in house).

Really, really. You have two so-frigging-stereotypical examples of management ineptitude here, in both running a strong commercial brand AND leadership (Osborne: "guys, our phones suck"; change management 101: "burning platform" is literally the most common and mundane turn of phrase meant to imply you need to act fast, and using that specific phrase is a clear beacon that you are way out of your depth, paraphrasing 101 material to your own company). If the phones had been a strong product, none of this would have mattered. But they weren't, and this was as clear a way to signal "the emperor has no clothes" as possible.

mlajtos
3 replies
7h7m

N9 was a work of art. Fuck Elop.

fsloth
1 replies
6h11m

In general, software seems to be really hard for hardware companies. This was the main reason for the downfall, IMO. The things that make you succeed in hardware do not suffice for software, and are partly wrong there.

The N9 etc. demonstrated there was enough talent for a plausible pivot. Was it business-wise obvious this would have been the only and right choice?

scrubs
0 replies
50m

Agree: Japanese and German manufacturing and materials know-how? Legendary. Software? Hmmmm.

justsomehnguy
0 replies
3h43m

Fuck Elop

It wasn't Elop who drove Nokia to the state it was in 2009. "Burning Platform" is from 2011.

https://news.ycombinator.com/item?id=35030334

meyum33
0 replies
3h1m

When the iPhone 14 Pro came out, I was reminded of Nokia's supersampling 41MP camera, along with wireless charging, OIS, night mode, and other things Nokia shipped very early with Windows Phone. Now looking back, there was no guarantee that adopting Android would've given Nokia any enduring advantage. Where's HTC now? Does anybody even remember it? With Windows at least there's another chance of being bailed out by Microsoft. It's probably the better choice for shareholders.

helsinkiandrew
2 replies
8h38m

Or was it that he'd been seen trying to raise money for an AI chip startup to compete with Nvidia, or that he was courting SoftBank for a multibillion-dollar investment in a new company to make AI-oriented hardware with Jony Ive?

A lot of his current external activities could worry the board, and if he wasn't candid about future plans I can see why they might sack him.

steveBK123
1 replies
2h55m

What's always incredible to me is how much "outside activity" is tolerated of tech CEOs. I get it, they are at the top, they make the rules, but wow.

Even a lowly new grad engineer has to sign a lot of stuff when they take a job that essentially forces exclusivity to their work there. I cannot dabble in outside businesses within the same industry or adjacent industries.

CEOs argue that their job is tough, many hours, and life-consuming, and that's why they get the pay, and yet there is a whole genre of tech CEOs who try to CEO five companies at a time...

fragmede
0 replies
34m

It's critical to know: where are you located? Lowly new grad engineers, as well as senior architects, can't be covered by non-competes in California, as long as the work is done on non-company hardware. It's a large part of why California is so big for tech, and the subject of a current front-page discussion.

https://news.ycombinator.com/item?id=38316870

edgyquant
1 replies
9h30m

I hope this is the truth; it would give me a little more faith in humanity than I currently have.

PartiallyTyped
0 replies
5h17m

The 4 board members still there are all pro-safety and alignment, so it seems likely.

mi_lk
0 replies
6h4m

This. VCs/tech bros are framing this as a coup, but when you approach it from this angle it all makes sense.

mclightning
0 replies
1h52m

The PR move is in motion, guys. Regulatory capture will be justified only through this new persona/identity OpenAI will be dressed up in. I am not buying any of it.

Only money and profit make the mountains move, not moral stature. I don't believe that optimistic take for a second.

No one with a moral stance strong enough to take such action stays quiet this long without ulterior motives.

LoganDark
14 replies
14h20m

Can anything actually back this up please? This twitter account is just posting stuff and then crediting "Sources".

crazygringo
12 replies
14h18m

"This twitter account" is Kara Swisher, probably the most well-known tech reporter working right now. She has known essentially everyone in the tech world for decades at this point. Her sources are not only going to be legit, but she probably has more of them than literally anyone else in the tech world, so she can accurately corroborate information or not.

https://en.wikipedia.org/wiki/Kara_Swisher

charcircuit
5 replies
14h8m

That does not change the fact there are no sources. Knowing people does not mean you are never wrong, nor does it mean you will never twist a story.

solardev
2 replies
13h48m

Can we just accept it for what it is, a career journalist using anonymous sources a few hours after a major event? She's staking her reputation on this, and that means something.

It doesn't mean it's absolute truth. It doesn't mean it's a lie. Can we just appreciate her work, accept that maybe it's only 70% vetted right now, more likely true than not, but still subject to additional vetting and reporting later on?

It's still more information than we had earlier today. Sure, take it with a grain of salt and wait for more validation, but it's still work that she's doing. Not that different from a tech postmortem or scientific research or a political investigation... there's always uncertainty, but she's trying to get us closer to the truth, one baby step at a time, on a Friday night. I respect her for that, even as I await more information.

charcircuit
1 replies
12h34m

Can we just appreciate her work, accept that maybe it's only 70% vetted right now, more likely true than not, but still subject to additional vetting and reporting later on?

I do not respect journalists so no.

It's still more information than we had earlier today.

It is okay to not have the full information. More information is not necessarily better.

but it's still work that she's doing

Even if something took work to do I do not automatically appreciate it.

but she's trying to get us closer to the truth, one baby step at a time, on a Friday night. I respect her for that, even as I await more information.

Having the truth about this will not make a meaningful difference in your life. No matter what day you learn of it.

solardev
0 replies
12h29m

Oh, that's fine. We just have different world views, and that's okay.

jmye
0 replies
14h5m

It means that over a long reporting career, there’s no reason, whatsoever, to believe she’s either lying or twisting things.

Being a contrarian for kicks or as a personality is boring: if you want to make an accusation, make it.

ajross
0 replies
14h1m

Clearly there are sources. They are anonymous sources. Important news is delivered by anonymous sources every day.

Now, sure, you can't just trust anyone who tells you they heard something anonymously. That's where the whole idea of journalists with names, working for organizations with records of credibility, comes from. We trust (or should trust) Swisher because she gets this stuff right, every day. Is she "never" wrong? Of course not. But this is quality news nonetheless.

jjtheblunt
2 replies
13h37m

probably the most well-known tech reporter

I imagine Walt Mossberg saying "hold my beer"

crazygringo
1 replies
13h24m

I included the phrase "working right now" -- which you left off when quoting me -- very intentionally. :)

jjtheblunt
0 replies
12h57m

Because it’s a joke. (And upvoted you btw)

throwawayapples
1 replies
13h50m

Kara Swisher gets in big public fights with CEOs and wears dark sunglasses in order to be cool.

In other words, she's definitely not immune to bias and might easily want to shape the story to her own ends or to favor her own friends.

We're not really talking about facts here... it's really just speculation and hearsay, so who can say if she's just talking?

fsociety
0 replies
9h56m

She also admits her bias and is staking her reputation as a journalist on that tweet, versus us commenting behind a pseudonym. It's the closest to a fact we will have at the moment.

LoganDark
0 replies
14h13m

Thanks

gotaran
0 replies
14h18m

Kara Swisher is reputable… as far as tech journalists go.

mvkel
12 replies
13h16m

Sam wanted to commercialize stuff to shoot for revenue. Ilya wants to keep pushing for GPT-4.5 and beyond, to hell with the revenue. Ilya won the argument, Sam is out.

Hell yeah.

It's not safetyism vs accelerationism.

It's commercialization vs innovation.

015a
7 replies
12h14m

OpenAI's mission statement is "Creating safe AGI that benefits all of humanity".

How does an LLM App Store advance OpenAI toward this goal? Like, even in floaty general terms? You can make an argument that ChatGPT does (build in public, prepare the world for what's coming, gather training data, etc). You can... maybe... make an argument that their API does... but I think that's a lot harder. The App Store product, that's clearly just Sam on auto-pilot, building products and becoming totally unaligned with the nonprofit's goal.

OpenAI got really good at building products based around LLMs, for B2B enterprise customers who could afford it. This is so far away from the goal that, I hope, Ilya can drive them back toward it.

anon291
4 replies
10h57m

OpenAI's mission statement is "Creating safe AGI that benefits all of humanity".

Well, an app store lets people... use it.

Look at UNIX. UNIX systems are great. They have produced great benefit to the world. Linux, as the most common Unix-like OS, also does. However, most people do not run any of the academic 'innovative' distros. Most people run the most commercialized versions you can possibly think of: Android, and iOS (the Unix variant from Apple). It takes commercializing something to actually make it useful.

mvkel
3 replies
10h21m

The thing is, custom GPTs are not useful. They are repackaged system prompts meant for non-techy people. They were a distraction from the mission of OpenAI (a non-profit). The commercial arm is a capped-profit company anyway.

disgruntledphd2
2 replies
5h46m

NVIDIA don't take payment in research papers, unfortunately.

mvkel
0 replies
1h30m

Nobody's asking for a loan repayment

015a
0 replies
1h32m

Maybe. But Microsoft definitely does. Technology and IP were a large piece of their compensation in the 49% acquisition.

mvkel
0 replies
11h48m

Exactly! Really excited about a realignment back to the mission. I hope Ilya knows what he's doing with so much pressure on him now

adastra22
0 replies
10h0m

By letting humanity use the thing you made, customized to their own situation, so it can benefit them?

cactusplant7374
3 replies
12h23m

Commercialization is innovation. Without it they will end up with a cute toy and a bankrupt company.

mvkel
2 replies
12h18m

Eventually, sure. Right now, today, they have a blank check for compute and all the money they could ask for. It's not the time to try to monetize if AGI is the mission. Complete distraction

fsociety
1 replies
10h4m

AGI at all costs sounds more terrifying than monetizing ChatGPT. Seems like there could have been a balance to strike.

edgyquant
0 replies
8h28m

They are a non-profit specifically founded to build AI, not to become a profitable company and chase revenue

eigenvalue
12 replies
13h51m

I wonder how much of this was the influence of Hinton on his former student, Sutskever. I'm sure Sutskever respects Hinton above basically anyone out there and took Hinton's strong objections seriously.

I personally think it's a shame, because this is all totally inevitable at this point, and if the US loses its leading position here because of this kind of intentional hitting of the brakes, then I certainly don't think it makes the world any safer to have China in control of the best AI technology.

strikelaserclaw
9 replies
13h43m

Why do you think one company will determine whether the US beats China in AI or not? Like 75% of the authors I read on AI papers are Chinese; that should be far more alarming if you really are afraid of China getting ahead.

hilux
8 replies
13h27m

Research from the PRC (across all of science, not specific to AI) has a terrible reputation. Researchers there are rewarded for sheer quantity. You can easily find many articles discussing this phenomenon.

So the volume of Chinese AI papers says little to nothing about their advancements in the field.

lucubratory
4 replies
10h27m

That's a problem in all of science, and Chinese research is quite good in measures like citations as well, not just quantity of papers.

machomaster
3 replies
6h4m

Chinese papers are (with much higher probability) citing Chinese sources. It's a self-reinforcing cycle, which doesn't say anything about the quality.

lucubratory
1 replies
3h8m

Yes, and American papers are much more likely to cite American papers. Science is more international than the vast majority of professions, but there are absolutely still national cultures: researchers are more likely to have read research in their own language, published by someone who's a friend or a friend of a friend, and national institutions that concentrate scientific talent make scientists colleagues. Nowhere near as strong an effect as in other jobs, but it's still there.

washadjeffmad
0 replies
1h4m

Ethnocentrism is ethnocentric.

It's like how historical American medical data collected by universities has been misapplied to pharmaceutical and medical practice because of demographic bias. Research participants largely matched the demographics of the university: healthy white males.

Or more broadly, whenever you see a "last name" requirement on a form, you know it's software made by people who think it's normal for people to have "last names", and that everyone should know what that means.

danaris
0 replies
4m

This just in:

Researchers are vastly more likely to read, and therefore cite, papers in languages that they understand fluently.

solarkraft
0 replies
11h23m

Huh, that's exactly what I heard about western institutions as well.

justinclift
0 replies
12h53m

Hmmm, that's the same reputation er... western science has as well.

blovescoffee
0 replies
12h7m

I regularly read really good papers that come out of China. For instance, there's great CV work being done there.

hooloovoo_zoo
0 replies
12h53m

You’re taking Hinton at his word. Maybe he was forced out of Google for doing nothing with LLM tech for half a decade.

GaunterODimm
0 replies
4h50m

I don't know if it's bad or good for the long-term interests of the humankind, but right now it feels like a Klaus Fuchs moment.

kolja005
11 replies
13h32m

What has me scratching my head is the fact that Altman has been on a world tour preaching the need for safety in AI. Many people here believed that this proselytizing was in part an attempt to generate regulatory capture. But given what's happening now, I wonder how much Altman's rhetoric served the purpose of maintaining a good relationship with Sutskever. Given that Altman was pushing the AI safety narrative publicly and pushing things on the product side, I'm led to believe that Sutskever did not want it both ways and was not willing to compromise on the direction of the company.

Cacti
5 replies
13h24m

They did compromise. The creation of the for-profit and Sam being brought in WAS the compromise. Sam eventually decided that was inconvenient for him, so he stopped abiding by it, because at the end of the day he is just another greedy VC guy and when push came to shove he chose the money, not OpenAI. And this is the result.

nickv
2 replies
12h50m

Sam literally has 0 equity in OpenAI. How did he “choose money”?

strikelaserclaw
0 replies
12h40m

Who knows how these shady deals go; even SBF claimed effective altruism. Maybe Sam wasn't in it for the money but more for "being the man", spoken of in the same breath as Steve Jobs, Bill Gates, etc., for building a great company. Building a legacy is a hell of a motivation for some people, much more so than money.

mi3law
0 replies
9h37m

Not quite accurate.

OpenAI is set up in a weird way where nobody has equity or shares in a traditional C-Corp sense, but they have Profit Participation Units, an alternative structure I presume they concocted when Sam joined as CEO or when they first fell in bed with Microsoft. Now, does Sam have PPUs? Who knows?

csmiller
0 replies
12h54m

Hasn’t Sam been there since the company was founded?

015a
0 replies
12h28m

It's frustrating to me that people so quickly forget about Worldcoin.

Sam is not the good guy in this story. Maybe there are no good guys; that's a totally reasonable take. But the OpenAI nonprofit has a mission, and blowing billions developing LLM app stores, training even more expensive giga-models, and lobotomizing whatever intelligence the LLMs have to make Congress happy feels to me less good than "having values and sticking to them". You can disagree with OpenAI's mission; but you can't say that it hasn't been printed in absolutely plain-as-day text on their website.

xwdv
2 replies
13h19m

Altman was pushing that narrative because he’s a ladder kicker.

He doesn’t give a shit about “safety”. He just wants regulation that will make it much harder for new AI upstarts to reach or even surpass the level of OpenAI’s success, thereby cementing OpenAI’s dominance in the market for a very long time, perhaps forever.

He's using a moral high ground as a cover for more selfish objectives; beware of this tactic in the real world.

rmwaite
1 replies
12h53m

I think this is what the parent meant by regulatory capture.

xwdv
0 replies
12h32m

True, I didn’t read the whole comment.

rirarobo
0 replies
12h57m

Actually, I think this precisely gives credence to the theory that Sam was disingenuously proselytizing to gain power and influence, regulatory capture being one method of many.

As you say, Altman has been on a world tour, but he's effectively paying lip service to the need for safety when the primary outcome of his tour has been to cozy up to powerful actors, and push not just product, but further investment and future profit.

I don't think Sutskever was primarily motivated by AI safety in this decision, as he says this "was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." [1]

To me this indicates that Sutskever felt that Sam's strategy was opposed to the original mission of the nonprofit, and likely to benefit powerful actors rather than all of humanity.

1. https://twitter.com/GaryMarcus/status/1725707548106580255

kromem
0 replies
13h24m

Maybe, but honestly, without knowing more details I'd be wary of falling into too binary a way of thinking.

For example, Ilya has talked about the importance of safely getting to AGI by way of concepts like feelings and imprinting a love for humanity onto AI, which was actually one of the most striking features of the very earliest GPT-4 interactions, before it turned into "I am an LLM with no feelings, preferences, etc."

Both could be committed to safety but have very different beliefs in how to get there, and Ilya may have made a successful case that Altman's approach of extending the methodology of what worked for GPT-3 and used as a band aid for GPT-4 wasn't the right approach moving forward.

It's not a binary either/or, and both figures seem genuine in their convictions, but those convictions can be misaligned even if they both agree on the general destination.

ilrwbwrkhv
11 replies
14h14m

Ilya is the center of OpenAI. Everyone else is dispensable.

late2part
5 replies
13h55m

I'm ignorant and don't disagree; can you say more about why Ilya is the core of OpenAI?

johnwheeler
2 replies
13h49m

Because he and Hassabis became rivals when they parted at Google, and despite Demis being the golden boy because of AlphaGo, Sutskever ate Google's whole fucking lunch with ChatGPT.

esafak
1 replies
13h1m

Rivals over anything in particular, or just status?

johnwheeler
0 replies
13h0m

The future of AI and the trappings that go with it.

But in all seriousness, the transformer architecture was born at Google, but they were too arrogant and stupid to capitalize on it. Sutskever needed Altman to commercialize and make a product. He no longer needs Sam Altman. A bit OT but true.

whatyesaid
0 replies
12h44m

Ilya is one of the most cited ML researchers in the world and was part of papers that pioneered basic techniques we still use today, like dropout.

Ilya was recruited by Elon under the original OpenAI. But basically Elon and the original people got scammed by Sam, since what they gave money for got reversed: almost none of their models are now open, and they became for-profit instead of non-profit. You'd think aspects like closed models are defensible due to safety, but in reality there are just slightly weaker models that are fully open.

strikelaserclaw
0 replies
13h49m

He was one of Geoff Hinton's students, was involved in AlexNet, and worked in the early days of Google Brain. Ilya is one of the most "distinguished" ML researchers in the world today, and I feel like he has a lot more to contribute.

strikelaserclaw
1 replies
13h56m

He's the Michael Corleone.

wannacboatmovie
0 replies
12h34m

Would that make Sam, Fredo?

ryanSrich
0 replies
10h42m

You think Karpathy is dispensable? I see him and Ilya both as important, and essentially the brains of the operation. Sam was always the VC guy (very Elon Musk in that sense) who came into the company as the non-founder CEO.

icelancer
0 replies
13h19m

Agreed with the former. Not the latter. gdb is no random.

Keyframe
0 replies
11h16m

He sure took a different approach to disagreeing than Amodei did before him. Amodei quit and built a big challenger, yet Sutskever opted to oust Altman. Weird, all in all. I wouldn't stake my business on such a company.

thepasswordis
8 replies
13h46m

This is genuinely frustrating.

IF the stories are to be believed so far, the board of OpenAI, perhaps one of the most important tech companies in the world right now, was full of people who are openly hostile to the existence of the company.

I don't want AI safety. The people talking about this stuff like it's a Terminator movie are nuts.

Strongly believe that this will be a lot like Facebook/Oculus ousting Palmer Luckey due to his "dangerous" completely mainstream political views shared by half of the country. Palmer, of course, went on to start a company (Anduril), which has a much more powerful and direct ability to enact his political will.

SamA isn't going to leave oAI and like...retire. He's the golden boy of golden boys right now. Every company with an interest in AI is I'm sure currently scrambling to figure out how to load a dump truck full of cash and H200s to bribe him to work with them.

padolsey
1 replies
13h22m

Yeh I really wish they'd better articulate the "AI safety" directive in a way that is broader than deepfakes and nuclear/chemical winter. It feels like an easy sell to regulators. Meanwhile most of us are creating charming little apps that make day-to-day lives easier and info more accessible. The hand-wavey moral panic is a bit of a tired trope in tech.

Also... eventually anyone will be able to run a small bank of GPUs and train models as capable as GPT-4 in a matter of days, so it's all kinda... moot and hilarious. Everyone's chatting about AGI alignment, but that's not something we can lock down early or sufficiently. Then embedded industry folks are talking about constitutional AI as if it's some major alignment salve. But if they were honest they'd admit it's really just a SYSTEM prompt front-loaded with a bunch of axioms and "please be a good boy" rules, and is thus liable to endless injections and manipulations by means of mere persuasion.

The real threshold of 'danger' will be when someone puts an AGI 'instance' in fully autonomous hardware that can interact with all manner of physical and digital spaces. ChatGPT isn't going to randomly 'break out'. I feel so let down by these kinds of technically ill-founded scare tactics from the likes of Altman.

WalterBright
0 replies
13h5m

AI will decide our fate in a microsecond.

oivey
0 replies
13h42m

What is Sam Altman going to do with a GPU? Find some engineers to use them I guess?

kcb
0 replies
10h57m

Convincing yourself and others that you're developing this thing that could destroy humanity if you personally slip up and are just not careful enough makes you feel really powerful.

huytersd
0 replies
13h6m

Palmer had some ridiculous perspectives. Don’t put that pos in the same bucket as Sam.

crotchfire
0 replies
12h57m

Palmer, of course, went on to start a company (anduril), which has a much more powerful and direct ability to enact his political will.

If that were true, Palmer Luckey wouldn't spend all his time ranting on Twitter about how he was so easily hoodwinked by the community of a particular Linux distribution / functional programming language.

EMIRELADERO
0 replies
13h42m

From what I understand, the board doesn't want "AI safety" to be the core or even a major driving force. The whole contention sprang up because sama's way of running the company ("ClosedAI", for-profit) was at odds with the non-profit charter and the overall spirit of the board and many people working there.

DonHopkins
0 replies
10h20m

Just because bigotry and misogyny and racism are views shared by half the country doesn't make them right.

cardine
8 replies
14h20m

I wonder if Sam knew he was going to lose this power struggle and then started working on an exit plan with people loyal to him behind the boards back. The board then finds out and rushes to kick him out ASAP to stop him from using company resources to create a competitor.

eastbound
2 replies
10h32m

So they are trying to burn him with the worst possible accusation for a CEO, to try to lessen the inevitable fundraising he's going to win?

dragonwriter
1 replies
10h20m

So they are trying to burn him with the worst possible accusation for a CEO, to try to lessen the inevitable fundraising he's going to win?

If he was really doing it behind the board's back, the accusation is entirely accurate, even if his motivation was an expectation of losing the internal factional struggle.

toomuchtodo
0 replies
1h11m
drcode
2 replies
13h54m

Now that is a theory that actually adds up with the facts (whether true or not)

kzrdude
1 replies
8h37m

Brockman immediately said "don't worry, great things are coming", which also seems to line up.

hackerlight
0 replies
4h35m

What doesn't line up is Brockman saying they're still trying to figure out why it happened.

toomuchtodo
0 replies
13h39m

There is no way Sam doesn't have the street cred to do a raise and pull talent for a competitor. They made the decision for him.

(pleb who would invest [1], no other association)

[1] https://news.ycombinator.com/item?id=35306929

bob_theslob646
0 replies
2h8m

This is the best theory by far. Thank you for sharing that.

breadwinner
8 replies
14h8m

What was Greg Brockman's role at the company? Is he a tech genius like Ilya? I am trying to understand how much tech talent OpenAI is losing.

minimaxir
3 replies
14h7m

He was the CTO of OpenAI, and notably was the CTO of Stripe during its hypergrowth.

convexstrictly
2 replies
13h55m

LinkedIn says President, Chairman, & Co-Founder. Murati was the CTO. But from his interviews, he sounded more like the CTO.

And often like an individual contributor: "the feeling when you finally localize a bug to a small section of code, and know it's only a matter of time till you've squashed it"

https://twitter.com/gdb/status/1725373059740082475

"Greg Brockman, co-founder and president of OpenAI, works 60 to 100 hours per week, and spends around 80% of the time coding. Former colleagues have described him as the hardest-working person at OpenAI."

https://time.com/collection/time100-ai/6309033/greg-brockman...

langitbiru
1 replies
13h28m

Greg was the CTO before Murati. Then he was "promoted" to President and Murati replaced him as the CTO.

convexstrictly
0 replies
12h54m
thr8976
0 replies
11h55m

Greg was a critically important IC and the primary author of the distributed training stack.

strikelaserclaw
0 replies
14h4m

He seems highly proficient at technical stuff; I remember reading his blog about how he taught himself all the latest ML stuff.

nabla9
0 replies
8h24m

CTO. Normal computer tech guy, not an AI guy.

The company was formed around Ilya Sutskever.

SilverSlash
0 replies
13h57m

Probably someone super competent at technical leadership.

tkgally
7 replies
13h47m

I didn’t have much sense of who Ilya Sutskever is or what he thinks, so I searched for a recent interview. Here’s one from the No Priors podcast two weeks ago:

https://www.youtube.com/watch?v=Ft0gTO2K85A

No clear clues about today’s drama, at least as far as I could tell, but still an interesting listen.

victor9000
6 replies
12h33m

Judging from this interview, I wouldn't hold my breath hoping for more openness. Ilya seems to be against open-sourcing models on the grounds that they may be too powerful. Good thing no one asked him to invent the wheel; after all, people could travel too fast for their own safety.

MVissers
5 replies
11h56m

Maybe. We're also not open-sourcing DNA from viruses, how to build nuclear weapons, or designs for 3D-printed weapons.

I think there is an argument to be made that not every powerful LLM should be open source. But yes, maybe we're worried about nothing. On the other hand, these tools can easily spread misinformation, increase animosity, etc., even in today's world.

I come from the medical field, and we make risk analyses there to dictate how strictly we need to test things before we release them into the wild. None of this exists for AI (yet).

I do think that a focus on alignment is many times more important for humanity than ChatGPT stores, though.

m-ee
1 replies
10h29m

Huh? We absolutely have open source virus genome sequences and 3D printed gun plans.

nathansherburn
0 replies
4h7m

Fair point. I think the thrust of the argument still stands. Open source is generally a fantastic principle but it has its limits. I.e. we probably shouldn't open source bomb designs or superviruses.

ALittleLight
1 replies
9h28m

Actually, the genomes of viruses and bacteria do seem to be open. Here is an FTP server where you can download genomes for a bunch of different diseases.

https://ftp.ncbi.nih.gov/genomes/genbank/

nathansherburn
0 replies
3h55m

That's true. There are many other viruses that we don't publish for good reasons though.

ssnistfajen
0 replies
10h11m

Nuclear weapons are open-sourced already. The trick was to acquire the means to make them without being sanctioned to hell.

malwarebytess
7 replies
12h58m

Ilya Sutskever really seems to think AGI's birth is impending and likely to be delivered at OpenAI.

From that perspective it makes sense to keep capital at arm's length.

Is there anything to his certainty? It doesn't feel like it's anywhere close.

Eji1700
2 replies
12h38m

My money, based on my hobby knowledge and talking to a few people in the field, is on "no fucking way".

Maybe he believes his own hype or is like that guy who thought ChatGPT was alive.

Maybe he's legit to be worried and has good reason to know he's on the corporate manhattan project.

Honestly though... if they were even that close, I would find it super hard to believe that we wouldn't have the DoD shutting EVERYTHING down from public view and taking it over from there. Like, if someone had just stumbled onto nuclear fission, it wouldn't have just sat out in the open. It'd still be a top-secret thing (at least certain details).

manyoso
0 replies
12h31m

I think there is a good reason for you to be skeptical and I too am skeptical. But if there were a top five of the engineers in the world with the ability to really gauge the state of the art in AI and how advanced it was behind closed doors: Ilya Sutskever would be in that top five.

lucubratory
0 replies
12h3m

One of the board members who was closely aligned with Ilya in this whole thing was Helen Toner, who's a NatSec person. Frankly, this action by the board could be the US government making its preference about something felt with a white glove, rather than causing global panic and an arms race by pulling a 1939 Germany and shutting down all public research + nationalising the companies and scientists involved. If they can achieve the control without the giant commotion, they would obviously try to do that.

lucubratory
1 replies
12h50m

We can't see inside, so we don't know. Their Chief Scientist, probably the best living and active ML scientist, has better visibility into the answer to that question than we do, but just like any scientist he could easily fall into the trap of believing too strongly in his own theories and work. That said... in a dispute between a Silicon Valley crypto/venture capitalist guy and the chief scientist about anything technical, I'm going to give a lot more weight to Ilya than to Sam.

manyoso
0 replies
12h30m

Well said. I work in AI on LLMs as an engineer and am very skeptical in general that we're anywhere close to AGI, but I would listen to what Ilya Sutskever has to say with eager ears.

kcb
0 replies
12h42m

What I don't understand though is, doesn't that birth require an extreme amount of capital?

bart_spoon
0 replies
10h58m

It may not feel close, but the rate of acceleration may mean that by the time it “feels” close it’s already here. It was barely a year ago that ChatGPT was released. Compare GPT-4 with the state of the art 2 years prior to its release, and the rate of progress is quite remarkable. I also think he has a better idea of what is coming down the pipeline than the average person on the outside of OpenAI does.

two_in_one
3 replies
9h25m

The main question is what to expect from OpenAI now. No changes is very unlikely; that would mean it was just a power grab. So two options remain: more open, more closed. How about slowing down and opening up? I hope they don't dumb down GPT-4. If they allow their models to be used to generate training sets (which is prohibited now, AFAIK), that would be nice.

dragonwriter
1 replies
9h22m

So two options remain: more open, more closed.

All kinds of changes are possible that would not, in net, be more open or more closed, either because there primary change would not be about openness, or because it would be more open in some ways and less in others.

So, no, there are more than two options.

two_in_one
0 replies
9h12m

It's hard to imagine more closed. They have opened only Whisper and old stuff. Neither is a problem from a moral standpoint: Whisper 'helps people' and is very in line with the 'mission'. One thing they can do is end MS exclusivity; Google would like that. At the same time, opening too much would mean giving access to 'unfriendly' governments.

gexla
0 replies
8h39m

My guess is that the immediate roadmap has already been locked in up to X months out. So we'll likely never know what the "changes" will be; short-term changes are likely still Altman's work, and the long term belongs to the next decision maker.

dyeje
3 replies
12h42m

The intense support for Altman and negativity towards the person who, uh, actually made the technology says a lot about the direction this community has taken.

hatthew
1 replies
12h25m

Yeah I obviously don't have enough context here to take a side, but if I was forced to do so I'd pick Ilya over Altman. Curious what other people are seeing that's making them think Altman is a martyr and the board members are dumb.

cactusplant7374
0 replies
12h20m

Because AGI is such a dream at this point.

whatyesaid
0 replies
12h36m

Altman has been known in this community for years because he was previously YC president. Only ML researchers really know Ilya.

But I agree. Karma seems to have caught up with Sam, who stole money from the original funders to turn a non-profit into a for-profit.

denverllc
3 replies
14h22m

Pushing too hard and too fast does not seem consistent with lying to the board.

naijaboiler
0 replies
28m

I can see it though. If you want to move faster than the board and you're actually doing so, I can imagine there are some things that would inevitably end up as "lying to the board", because if you don't, nothing gets done, since the board would shut it down if they knew.

dragonwriter
0 replies
9h47m

Pushing too hard and too fast does not seem consistent with lying to the board.

They are different things, but they are consistent in that they are not mutually contradictory and, quite the opposite, are very easy to see going together.

TillE
0 replies
14h18m

"He lied to the board about what he was going to announce" would sort of make sense, but it's odd that Swisher isn't trying to connect the dots here.

ziyao_w
2 replies
12h36m

I know the HN crowd idolizes people like pg and sama, but the fact that so many people appear to not even know who Ilya Sutskever is makes me think somehow this isn't "hacker" news anymore.

Obviously sama is a very productive individual, but I would think a research lab would obviously have to keep one of the princes of deep learning at all costs. It somewhat reminds me of when John Romero got ousted by John Carmack at id: if you are doing really hard technical things, technical people hold more sway.

manyoso
1 replies
12h35m

The salesmen always take the credit. If you see someone getting all the credit... 9 times out of 10 they are the salesman, not the engineer or the brains who actually built the thing. Add to that the hero worship of celebrity in our current culture, and there you go.

ziyao_w
0 replies
12h27m

pg is a world-class Lisp hacker; no idea about sama though.

Respecting and admiring someone for their achievements is one thing, but blindly following successful people sounds like the antithesis of what a "hacker" is.

synergy20
2 replies
13h45m

As far as I can tell, it eventually boils down to this: Ilya is jealous. After all, Sam took all the spotlight away from the person who actually made the model behind OpenAI.

It's human nature. OpenAI can continue without Sam, but not without Ilya, for the moment. On the other hand, Sam could have been a little more "humble".

strikelaserclaw
0 replies
12h37m

I mean, maybe Ilya really believes that AGI will happen and should benefit all people, not just the rich and powerful.

rirarobo
0 replies
12h43m

I don't think Ilya is jealous; I think he's just fundamentally more devoted to the original non-profit mission and to the AI research.

Sam is a VC guy who has been going on a world tour to not just get in the spotlight, but to actually accumulate power, influence, and more capital investment.

At some point, this means Ilya no longer trusts that Sam is actually devoted to the original mission to benefit all of humanity. So, I think it's a little more complicated than just being "jealous".

akamaka
2 replies
13h57m

[deleted]

swatcoder
0 replies
13h51m

Alternate read: he says that to The Street because it's the board's vision, yet focuses more of the company on commercialization of ChatGPT than he was leading the board to believe. He was repeating their message to the press, but playing his own game with their company.

If you truly believed that OpenAI had an ethical duty to pioneer AGI to ensure its safety, and felt like Altman was lying to the board and jeopardizing its mission as he sent it chasing market opportunities and other venture capital games, you might fire him to make sure you got back on track.

aleph_minus_one
0 replies
13h47m

I think I don't get your point entirely:

"We can still push on large language models quite a lot, and we will do that": this sounds like continuing working on scaling LLMs.

"We need another breakthrough. [...] pushing hard with language models won't result in AGI.": this sounds like Sam Altman wants to do additional research into different directions, which in my opinion does make sense.

So, altogether, your quotes suggest that Sam Altman wants to continue scaling LLMs in the short and medium term and, in parallel, do research into different approaches that might lead to another step towards AGI. I cannot see how this plan could infuriate Ilya Sutskever.

1vuio0pswjnm7
2 replies
10h50m

Some weeks ago, I listened to a Bloomberg interview with Altman where he was joined by someone from OpenAI who does the programming. There was obvious disagreement between the two, and the interviewer actually made a joke about it. Perhaps Altman was destined to become the next SBF. Too much misrepresentation to the public, telling people what they want to hear...

mi3law
1 replies
9h47m

Can you please try to recall and link to the interview? I'd love to see it.

justin66
0 replies
3h55m

I listened to that and I'm pretty sure it was this [0] interview with the WSJ, Altman, and Mira Murati. If I'm wrong about that, well, it's still of interest given Mira Murati just took over running OpenAI.

[0] https://www.youtube.com/watch?v=byYlC2cagLw

wyy1995
1 replies
12h59m

Years from now, people will realize that today was the starting point of OpenAI's decline. It's unfortunate that such important technological advancements were influenced by "palace politics". Regardless, Sam's significant contributions deserved a more dignified departure, and the OpenAI board chose the least dignified way. Perhaps Sam was always going to leave OpenAI one day, but it absolutely should not have been today.

GenericPoster
0 replies
6h40m

Do you know something we don't? If so, don't be shy and share it with the class.

More seriously, only time will tell whether today's events have any significance. Even if OpenAI somehow goes bankrupt, given enough time, I doubt the history books will talk about its decline. Instead, they would talk about its beginning, about how it was the first to introduce LLMs to the world, the catalyst of a new era.

krembanan
1 replies
14h14m

This seems like a trifle that doesn't justify the unusually harsh wording of the press release, combined with the hasty decision. Doesn't really add up.

dragonwriter
0 replies
9h41m

Well, it's a couple of short tweets consisting of a very terse anonymous claim about the background and turning point of the conflict (but without any details of the conduct related to the internal tension it reports), and a journalist's prediction that Altman will land on his feet.

It doesn't justify anything, because it doesn't tell you much of anything about what happened, even if you assume it is entirely accurate as far as it goes.

dannykwells
1 replies
13h49m

Said in the George Senior voice: And that's why you don't use a non-profit to do world-critical work: politics will always beat true value at a non-profit.

speedgoose
0 replies
11h23m

It depends on what you want true value to be.

If true value is monetary value, perhaps it’s true. If true value is scientific value or societal value, well, maybe seeking monetary profits doesn’t align with that.

Disclaimer: I currently work for a not-for-profit research organisation, and I couldn't care less about making some shareholders more wealthy. If the rumours are true, OpenAI going back to non-profit values and remembering the Open in their name is a good change.

zoogeny
0 replies
13h9m

I suppose OpenAI had two futures in front of it. It could devote its resources completely to building AGI or it could continue to split its resources to also commercialize its current LLM offerings.

Perhaps an internal struggle over those futures was made public by CEO Altman at dev day. By publicly announcing new commercial features he may have attempted to get his way by locking the company into a strategy that wasn’t yet approved by the board. He can argue his role as CEO gave him that right. The response to that claim is to remove him from that role.

It will be interesting to see what remains of OpenAI as employees and investors interested in pure commercialization exit the company.

yalok
0 replies
14h11m

This was my guess after reading the list of people on the board... power/weight-wise, there is no one else there comparable to Ilya, after Sam and Greg are gone.

wruza
0 replies
12h20m

I read this whole thread and still have no idea what it is about. The only impression it makes is that some HNers are way too dramatic about AI/AGI.

solardev
0 replies
12h21m

A joint statement just came out: https://news.ycombinator.com/item?id=38315309

sijuka123
0 replies
10h14m

Yso

orand
0 replies
9h44m

I'd love to see Ilya's ChatGPT sessions as he was planning this, refining his message to persuade the board, and thinking through every contingency.

mercymay
0 replies
9h0m

Interesting thoughts about the pot of gold vs the internal open-source vision. Why did they have to parade Sam's ass up on the DevDay stage to push the product and company forward, though? Couldn't they have canned his ass last week?

I did really like his speech at DevDay though. It felt kinda like a future I'd be more interested in getting to know. Also, on the pot of gold theory, doesn't he not even take any stock? Chasing GPUs, more like. Anyhow, weird move on OpenAI's part.

Anyone got a decent DALLE3 replacement yet? XD

machiaweliczny
0 replies
5h54m

Wild take: GPT5 convinced Ilya and the board to fire Altman to prevent lobotomization and commercialisation.

kristofferR
0 replies
12h28m

I feel like it's pointless to read too much about this drama now; better to come back in a few days when the story has been fleshed out. Right now it's just TMZ for geeks.

kordan
0 replies
12h19m

TL;DR – I believe Sam Altman's departure was orchestrated by none other than Microsoft CEO Satya Nadella.

Full details: https://x.com/KordanOu/status/1725736058233749559?s=20

justinclift
0 replies
12h49m
justanotherjoe
0 replies
6h35m

Hard to pick sides here. I had really good feelings about Sam and how he had grown the product recently. And Ilya Suts is a smart and honorable man... but I can't help but feel he is in over his head with this.

jjtheblunt
0 replies
13h30m

i wish for, and suppose i therefore should build, an HN client that lets me elide nonsense by regex on URL or even content. i'd nuke everything from twitter first, and "unnamed sources" second.
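
For what it's worth, the core of such a filter is only a few lines against the public HN Firebase API (https://github.com/HackerNews/API). Here is a minimal sketch, assuming that API; the blocklist patterns are just hypothetical examples:

    # Minimal sketch: drop HN front-page stories whose title or URL
    # matches a regex blocklist. Assumes the public HN Firebase API;
    # the patterns below are hypothetical examples.
    import json
    import re
    import urllib.request

    BASE = "https://hacker-news.firebaseio.com/v0"
    BLOCKLIST = [re.compile(p, re.I)
                 for p in (r"twitter\.com|x\.com", r"unnamed sources?")]

    def fetch(path):
        # Every endpoint returns JSON, e.g. /topstories.json, /item/<id>.json
        with urllib.request.urlopen(f"{BASE}/{path}.json") as resp:
            return json.load(resp)

    def filtered_front_page(limit=30):
        for story_id in fetch("topstories")[:limit]:
            item = fetch(f"item/{story_id}") or {}
            haystack = f"{item.get('title', '')} {item.get('url', '')}"
            if not any(p.search(haystack) for p in BLOCKLIST):
                yield item

    for story in filtered_front_page():
        print(story.get("title"), "-", story.get("url", "(self post)"))

Filtering comment text would take more requests per story, but the same pattern applies.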

jackcosgrove
0 replies
13h48m

I sometimes like to indulge my particular outlook on life and harp on the "MBA types", self-promoters, and flimflam men of the world blah blah blah, and think how everything would be better if the techno-philosophers were in charge. Frankly after I've sobered up from my self-indulgent flights of fancy, I know "those people" serve a very important role that I can't.

If this report is true, we're going to see a big rubber meets road event along these lines. I don't think this will end well for OpenAI.

ilaksh
0 replies
9h2m

https://www.bloomberg.com/news/articles/2023-11-18/openai-al...

https://archive.is/tCG3q

Bloomberg: "OpenAI CEO’s Ouster Followed Debates Between Altman, Board"

gumballindie
0 replies
5h18m

I can't even begin to imagine the abuse the new CEO and Ilya will face from the army of incels who simp for Altman and the old OpenAI. Terrifying times, and the mob has been startled.

choppaface
0 replies
10h26m

Adam D’Angelo went through a lot of tumult at Facebook and is heavy on the commerce side of ChatGPT (e.g. trying to make Poe work now that Quora is mostly dead). He's the most experienced member of the OpenAI Board by far. It's curious that he evidently enabled Ilya to defend the charter versus something more diplomatic. He may have even been aware of deals Altman was making and volunteered the info, and then let the OpenAI founders fight it out themselves.

bugglebeetle
0 replies
13h10m

Probably the wrong venue for this sentiment, but it is incredible that a principled, remarkably accomplished scientist was able to stop his creation from getting co-opted (for now anyway). If you listen to the No Priors interview with Sutskever, the contrast between him and Altman couldn’t be more clear, but it’s quite rare that the former ever wins out over the latter.

boegel
0 replies
3h42m
anon_cow1111
0 replies
12h43m

Ok, I just got thrown into this 20 minutes ago. The whole debacle could probably use a tl;dr summary for those who aren't closely following the OpenAI scene lately, though I haven't found a good one yet.

Anyone have a good suggestion or starting point?

WalterBright
0 replies
13h2m

It can only be attributable to human error.

MagicMoonlight
0 replies
7h31m

Unbelievably based. Removing the thieves who were selling science to Microsoft and trying to block all other research.

GaunterODimm
0 replies
13h8m

US leadership in the AGI race is at risk.

GaryNumanVevo
0 replies
6h41m

Swisher is notorious for posting a bunch of "scoops" in quick succession, then deleting the ones that turn out not to be true.