Emmett Shear becomes interim OpenAI CEO as Altman talks break down

TechnicolorByte
205 replies
11h40m

I still cannot process what’s happened to one of the most prominent and hyped companies of the past year in just one weekend.

If it’s true that Altman won’t return to OpenAI (or alternatively: that the current board won’t step down) then where does that leave OpenAI? Microsoft can’t be happy, as evidenced by reporting that Nadella was acting as mediator to bring him back. Does OpenAI survive this?

Will be super interesting when all the details come out regarding the board’s decision making. I’m especially curious how the (former) CEO of Twitch gets nominated as interim CEO.

Finally, if Altman goes his own way, it’s clear the fervent support he’s getting will lead to massive funding. Combined with the reporting that he’s trying to create his own AI chips with Middle East funding, Altman clearly has big ambitions to be fully self-reliant and own the stack completely.

No idea what the future holds for any of the players here. Reality truly is stranger than fiction.

altdataseller
115 replies
11h34m

OpenAI has hundreds more employees, all of whom are incredibly smart. While they will definitely lose the leadership and talent of those two, it’s not as if a nuclear bomb dropped on their HQ and wiped out all their engineers!

So questioning whether they will survive seems very silly and incredibly premature to me

alsodumb
75 replies
11h29m

Pretty much every researcher I know at OpenAI who are on twitter re-tweeted Sam Altman's heart tweet with their own heart or some other supportive message.

I'm sure that's a sign that they are all team Sam - this includes a ton of researchers you see on most papers that came out of OpenAI. That's a good chunk of their research team and that'd be a very big loss. Also there are tons of engineers (and I know a few of them) who joined OpenAI recently with pure financial incentives. They'll jump to Sam's new company cause of course that's where they'd make real money.

This coupled with investors like Microsoft backing off definitely makes it fair to question the survival of OpenAI in the form we see today.

And this is exactly what makes me question Adam D'Angelo's motives as a board member. Maybe he wanted OpenAI to slow down or stop existing, to keep his Poe by Quora (and their custom assistants) relevant. GPT Agents pretty much did what Poe was doing overnight, and you can have as many of them as you want with your existing $20 ChatGPT Plus subscription. But who knows, I'm just speculating here like everyone else.

haldujai
17 replies
11h23m

Pretty much every researcher I know at OpenAI who are on twitter

Selection bias?

alsodumb
14 replies
11h18m

Not if it's a big sample set. There's a guy on twitter who made a list with every OpenAI researcher he could find on twitter and almost all of them did react to Sam's tweet in a supportive way.

15457345234
5 replies
11h4m

every OpenAI researcher he could find on twitter

Literally the literal definition of 'selection bias' dude, like, the pure unadulterated definition of it.

alsodumb
4 replies
10h59m

Like I said, if the subset of OpenAI researchers who are on twitter is very small, sure.

But people in the AI/ML community are very active on twitter. I don't know every AI researcher on OpenAI's payroll. But the fact is that most active researchers (looking at the list of OpenAI paper authors, and tbh the people I know, as a researcher in this space) are on twitter.

haldujai
2 replies
10h14m

It seems like you're misunderstanding selection bias.

It doesn't matter if it's large, unless the "very active on twitter" group is large enough to be the majority.

The point is that there may be (arguably very likely) a trait AI researchers active on Twitter have in common which differentiates them from the population therefore introducing bias.

It could be that the 30% (made up) of OpenAI researchers who are active on Twitter are startup/business/financially oriented and therefore align with Sam Altman. This doesn't say as much about the other 70% as you think.
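
To make the mechanism concrete, here's a quick sketch with completely made-up numbers (the 30%, 95% and 40% below are invented for illustration, not data about anyone):

    # Completely made-up numbers, just to illustrate the selection-bias mechanism.
    import random

    random.seed(0)
    researchers = []
    for _ in range(1000):
        on_twitter = random.random() < 0.30                  # assume 30% are visible on twitter
        # assume the twitter-visible group supports Sam at a much higher rate than the rest
        supports_sam = random.random() < (0.95 if on_twitter else 0.40)
        researchers.append((on_twitter, supports_sam))

    visible = [s for t, s in researchers if t]
    everyone = [s for _, s in researchers]
    print(f"support among twitter-visible researchers: {sum(visible) / len(visible):.0%}")    # ~95%
    print(f"support in the whole population:           {sum(everyone) / len(everyone):.0%}")  # ~56%

The visible sample badly overstates the true rate even though it contains hundreds of people.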

15457345234
1 replies
9h54m

You reckon 30% (made up) of staff having a personal 'alignment' with (or, put another way, 'having sworn an oath of fealty to') a CEO is something investors would like?

Seems like a bit of a commercial risk there if the CEO can 'make' a third of the company down tools.

haldujai
0 replies
9h21m

I randomly chose 30% to represent a seemingly large, non-majority sample which may not be representative of the underlying population.

I have no idea what the actual proportion is, nor how investors feel about this right now.

The true proportion of researchers who actively voice their political positions on twitter is probably much smaller and almost certainly a biased sample.

15457345234
0 replies
10h51m

But the fact is that most active researchers ... are on twitter

On twitter != 'active on twitter'

There's a biiiiiig difference between being 'on twitter' and what I shall refer to kindly as terminally online behaviour aka 'very active on twitter.'

ethbr1
4 replies
11h0m

How childish are employees to publicly get involved with this on Twitter?

If the CEO of my company got shitcanned and then he/she and the board were feuding?

... I'd talk to my colleagues and friends privately, and not go anywhere near the dumpster fire publicly. If I felt strongly, hell, turn in my resignation. But 100% "no comment" in public.

djvdq
1 replies
10h29m

You should find a better place to work.

Work is work. If you start being emotional about it, that's a bad thing, not a good one.

15457345234
0 replies
10h16m

Nah, it's fine to be passionate about your work and relationships with your colleagues.

You just need to temper that before you start swearing oaths of fealty on twitter; because that's giving real Jim Jones vibes which isn't a good thing.

dylan604
0 replies
10h36m

These are people who are very active on Twitter and work for a company that unashamedly harvested all the data it could, for free, without asking, in order to make money. It's not like shame and self-respect are allowed anywhere near this company.

15457345234
0 replies
10h36m

tl;dr: Any OAI employee tweeting about this is unhinged.

ignoramous
0 replies
10h58m

A majority of the early team that joined the non-profit OpenAI over BigTech did not do so for money but for its mission. Post-2019 hires may be more aligned with Sam, but the early hires embody OpenAI's charter, Sutskever might argue.

Of course, OpenAI as a cloud-platform is DoA if Sam leaves, and that's a catastrophic business hit to take. It is a very bold decision. Whether it was a stupid one, time will tell.

haldujai
0 replies
11h6m

Large sample =/= (inherently) representative. What percentage of OpenAI researchers are on Twitter?

Follow-up: Why is only some fraction on Twitter?

This is almost certainly a confounder, as is often the case when discussing reactions on Twitter vs reactions in the population.

djvdq
0 replies
10h32m

They can support Sam, but still stay in the company.

qwertox
1 replies
11h0m

Which would mean that he specifically selected who to follow due to their closeness to / alignment with Sam, pre-ousting? How would he do that?

15457345234
0 replies
10h32m

Big question!

alex_young
17 replies
11h1m

The heart tweet rebellion is about as meaningful as adding a hashtag supporting one side of your favorite conflict.

Come on. “By 5 pm everyone will quit if you don’t do x”. Response: tens of heart emojis.

hipadev23
9 replies
10h43m

Anyone worth a shit will leave and go work with Sam. OpenAI will be left with a bunch of below average grifters.

hef19898
4 replies
10h23m

What is it with all this personality cult about founders, CEOs and CTOs nowadays? I thought the cult around Steve Jobs was bad, but it pales in comparison to today.

As soon as one person becomes more important than the team, as in the team starts to be structured around said person instead of with the person, that person should be replaced. Because otherwise the team will not function properly without the "star player", nor will the team be more than the sum of its members anymore...

Closi
2 replies
10h15m

While your post sounds like something that would be true, there are loads of examples of where companies have thrived under a clear vision from a specific person.

The example of Steve Jobs used in the above post is probably a prime example - Apple just wouldn’t be the company it is today without that period of his singular vision and drive.

Of course they struggled after losing him, but the current version of Apple that has lived with Jobs and lost him is probably better than the hypothetical version of Apple where he never returned.

Great teams are important, but great teams plus great leadership is better.

hef19898
0 replies
9h5m

Steve Jobs is actually a great example: he was successfully replaced twice, once after he almost ran Apple into the ground and then after his death. In fact, he shows how to build an org that explicitly does not depend on a star player.

_factor
0 replies
9h22m

Newsflash. Altman is no Steve Jobs.

OscarTheGrinch
0 replies
10h9m

People love to pick sides then retroactively rationalise that decision. None of us reading about it have the facts required to make a rational judgement. So it's Johnny vs Amber time.

austhrow743
2 replies
10h34m

In a dispute between people willing to sacrifice profit for values and those chasing profit, why on earth would you put grifters on team values-over-profit?

throwawayhno
0 replies
10h21m

Welcome to hn. Here it's all about money

bertil
0 replies
10h10m

I'm assuming the original comment meant that the grifters would not be extended a new offer after their colleagues at OpenAI learned that they were not as good as their CVs said.

Gigablah
0 replies
10h39m

Only on HN: your worth is tied to your choice of CEO.

happytiger
3 replies
10h40m

I take it you have never made a pledge to someone.

It’s a signal. The only meaning is the circumstances under which the signal is given: Sam made an ask. These were answers.

alex_young
1 replies
10h32m

This is how one answers if they actually intend to quit: https://x.com/gdb/status/1725667410387378559?s=46&t=Q5EXJgwO...

There’s nothing wrong with not following, it’s a brave and radical thing to do. A heart emoji tweet doesn’t mean much by itself.

happytiger
0 replies
10h18m

Did I say there was something wrong with either case? No. I said it was a signal. And it certainly can mean a lot by itself.

You can disagree. You can say only explicit non-emoji messages matter. That’s ok. We can agree to disagree.

15457345234
0 replies
10h27m

So is this a company or something else that starts with a c? (Thinking of a 4 letter word.)

teaearlgraycold
0 replies
10h29m

Talk is easy. But also the good employees will be paid well to get poached.

londons_explore
0 replies
10h34m

Sam hasn't yet lined up the funding, so he can't yet offer decent jobs, so the OpenAI employees haven't left.

But they will.

alsodumb
0 replies
10h56m

It wasn't a question of "will these people quit their jobs at OpenAI and get into the job market because they support Sam".

It was a question of whether they'd leave OpenAI and join a new company that Sam starts with billions in funding at comparable or higher comp. In that case, of course who the employees are siding with matters.

15457345234
12 replies
11h23m

It's always been my observation that the actual heavyweights of any hardcore engineering project are the ones that avoid snarky lightweight platforms like twitter like the plague.

I would imagine that if you based hiring and firing decisions on the metric of 'how often this employee tweets' you could quite effectively cut deadwood.

With that in mind...

OfficialTurkey
3 replies
11h19m

I have never used twitter but this strikes me as a strange take at best. Many of the most brilliant and passionate engineers I've had the pleasure to work with have been massive shitposters.

15457345234
2 replies
11h17m

massive shitposters

Yes, agreed, but on _twitter_?

The massive_disgruntled_engineer_rant does have a lot of precedent but I've never considered twitter to be their domain. Mailing lists, maybe.

xcv123
1 replies
9h38m

Yes, on Twitter. Mailing lists are old boomer shit.

15457345234
0 replies
7h46m

That's funny

dorkwood
2 replies
11h6m

It's always been my observation that the actual heavyweights of any hardcore engineering project are the ones that avoid snarky lightweight platforms like twitter like the plague.

What other places are there to engage with the developer community?

15457345234
1 replies
10h59m

Engagement is not necessarily constructive engagement

dorkwood
0 replies
10h34m

That's a strange thing to say. I find a lot of value in the developer community on Twitter. I wouldn't have my career without it.

I also wasn't being facetious. If there are other places to share work and ideas with developers online, I'd love to hear about them!

karmasimida
1 replies
11h17m

Discrediting people for using twitter is a weird take, and doesn't resemble critical thinking to me.

gardenhedge
0 replies
7h35m

Since Twitter has been so controversial I don't think it's strange to discredit people using it. The people still using it are just addicted to attention.

alsodumb
1 replies
11h20m

That's not the case with the AI community. Twitter is heavily used by almost every professor/researcher/PhD student doing machine learning. Ilya has one. Heck, even Jitendra Malik, who's probably as old as my grandfather, joined twitter.

haldujai
0 replies
10h4m

Mostly for professional purposes such as networking and promoting academic activities. Sometimes for their side startups.

I rarely see a professor or PhD student voicing a political viewpoint (which is what the Sam Altman vs Ilya Sutskever debate is) on their Twitter.

kvathupo
0 replies
11h13m

Completely disagree: Yann LeCun, John Carmack, Rui Ueyama, Andrei Alexandrescu, Matt Godbolt, Horace He, Tarun Chitra, George Hotz, etc.

threeseed
10 replies
11h17m

Team Sam = Team Money.

If you're an employee at OpenAI there is a huge opportunity to leave and get in early with decent equity at potentially the next giant tech company.

Pretty sure everyone at OpenAI's HQ in San Francisco remembers how many overnight millionaires Facebook's IPO created.

majikaja
3 replies
11h4m

Money = building boring enterprise products, not building AI gods I would suspect

threeseed
2 replies
10h59m

OpenAI was building boring enterprise and developer products.

Which likely most of the company was working on.

sangnoir
1 replies
10h44m

OpenAI was building boring enterprise and developer products under Sam Altman's leadership

mirzap
0 replies
9h4m

And that could be a core problem. He wasn't really free to decide the speed of development. He wanted to change that and deliver faster. Obviously, they achieved something in the past weeks, so doomers pulled the plug to stop him.

bnralt
1 replies
11h0m

There's a financial incentive. And there will be more opportunity for funding if you jump ship as well (it seems like OpenAI will have difficulty with investors after this).

But also, if you're a cutting-edge researcher, do you want to stay at a company that just ousted the CEO because they thought the speed of technology was going too fast (it sounded like this might be the reason)? You don't want to be shackled by the organization becoming a new MIRI.

hef19898
0 replies
10h19m

It seems that MS spent 10 billion to become a minority shareholder in a company controlled by a non-profit. They were warned, or maybe Sam even oversold the potential profitability of the investment.

Just as another perspective.

zo1
0 replies
10h41m

All this talk of a new venture and more money makes this smell highly fishy to me. Take this with a grain of salt, it's a random thought.

It's created huge noise and hype and controversy, and shaken things up to make people "think" they can be in on the next AI hype train "if only" they join whatever Sam Altman does now. Riding the next wave kind of thing because you have FOMO and didn't get in on the first wave.

tempsy
0 replies
10h29m

Being a lowly millionaire doesn’t get you much these days. Almost certainly, anyone who was hired into a mid-level or senior role was already at least a millionaire.

j7ake
0 replies
10h44m

Salaries at OpenAI already make them millionaires.

behringer
0 replies
10h16m

If you're looking for money you probably chose wrong going with a non-profit.

moralestapia
7 replies
11h24m

Also, serious investors won't touch OpenAI with a ten foot pole after these events.

There's an idealistic bunch of people that think this was the best thing to happen to OpenAI, time will tell but I personally think this is the end of the company (and Ilya).

Satya must be quite pissed off and rightly so; he gave them big money, believed in them, and got backstabbed as well. Disregarding @sama, MS is their single largest investor and it didn't even warrant a courtesy phone call to let them know of all this fiasco (even though some savants were saying they shouldn't have to, because they "only" owned 49% of the LLC. LMAO).

Next bit of news will be Microsoft pulling out of the deal but, unlike this board, Satya is not a manchild going through a crisis, so it will happen without it being a scandal. MS should probably just grow their own AI in-house at this point; they have all the resources in the world to do so. People who think that MS (a ~50-year-old company, with 200k employees, valued at almost 3 trillion) is now lost without OpenAI and the Ilya gang must have room temperature IQs.

visarga
3 replies
10h59m

200k MS employees can't do what 500 from OAI can; the more people you pile on the problem, the worse the outcome. The problem with Microsoft is that, like Google, Amazon and IBM, they are not a good medium for radical innovation; they are old, ossified companies. Apple used to be nimble when Steve was alive, but went to coasting mode since then. Having large revenue from old business is an obstacle in the new world; maybe Apple was nimble because it had small market share.

hn_throwaway_99
1 replies
10h25m

Apple used to be nimble when Steve was alive, but went to coasting mode since then

Give me a break. Apple Watch and AirPods are far and away leaders in their category, Apple's silicon is a huge leap forward, there is innovation in displays, CarPlay is the standard auto interface for millions of people, while I may question the utility the Vision Pro is a technological marvel, iPhone is still a juggernaut (and the only one of these examples that predates Jobs' passing), etc. etc.

Other companies dream about "coasting" as successfully.

Freedom2
0 replies
10h16m

Apple Watch and AirPods are far and away leaders in their category,

By what metric? I prefer open hardware and modifiable software - these products are in no way leaders for me. Not to mention all the bluetooth issues my family and friends have had when trying to use them.

codebolt
0 replies
10h42m

MS isn't starting from scratch; it already has the weights of the world's most powerful LM, and it's all running on their datacenters. Even without Sam, they just need to keep the current momentum going. Maybe axe ChatGPT and focus solely on Bing/Copilot going forward. It would give me great satisfaction to see the laughing stock search engine of the past decade become the undisputed face of AI over the next.

cloverich
1 replies
10h59m

My first question to this scenario would be: Could MS provide the seed funding for Sam's next gig? As in, they bet on OpenAI, and either OpenAI keeps on keeping on or Sam's gig steals the thunder, and they presumably have the cash to play a role in both.

moralestapia
0 replies
4h56m

didibus
0 replies
10h2m

But OpenAI is a non-profit that was pursuing a goal it saw as misaligned with financial incentives.

That's kind of how it got this far: every other company didn't really see the benefit of going straight for AGI, instead working on incremental additions and small iterations.

I don't know why the board decided to do what it did, but maybe it sees that OpenAI was moving away from R&D and too much into operations and selling a product.

So my point is that OpenAI started as a charity and was literally set up in a way to protect that model, by having the for-profit arm be governed by the non-profit wing.

The funny thing is, Sam Altman himself was part of the people who wanted it that way, along with Elon Musk, Ilya and others.

And I kind of agree. What kind of future is there here? OpenAI becomes another billion-dollar startup that does what? Eventually sells out with a big exit?

It's possible to see the whole venture as taking away from the goal set out by the non-profit.

behringer
4 replies
10h18m

Why a researcher would concern him or herself with management politics is beyond me. Particularly with a glorified salesman. Sounds like they aren't spending enough time actually working.

wyager
0 replies
3h2m

Given that the board coup was orchestrated by AI safetyists, it likely has a pretty direct bearing on life as a researcher. What are you allowed to work on? What procedures and red tape are in place? Etc.

vvrm
0 replies
9h53m

Because a salesman’s skills complement those of a researcher. The salesman sells what the researcher built and brings in money to keep the lights on. The researcher gets to do what they love without having to worry about the real world. That’s a much sweeter deal than a micromanaging PI.

bertil
0 replies
10h8m

My experience of academic research is that there's a lot of energy spent on laboratory politics.

alsodumb
0 replies
9h59m

It's not just management politics - it's about money and what they want to work on.

A lot of researchers like to work on cutting edge stuff, that actually ends up in a product. Part of the reason why so many researchers moved from Google to OpenAI was to be able to work on products that get into production.

Particularly with a glorified salesman

Sounds like they aren't spending enough time actually working.

Lmao, I love how people come down to personal attacks on people.

zq
0 replies
10h5m

The two most important to OpenAI's mission - Alec Radford and Ilya Sutskever - did not respond with a heart.

babyshake
0 replies
10h11m

Presumably there is some IP assignment agreement that would make it tricky for Sam to start an OpenAI competitor without a lot of legal exposure?

karmasimida
21 replies
11h21m

Survive as existing? They will.

But this is a disaster that can't be sugarcoated. Working in an AI company with a doomer as head is ridiculous. It will be like working in a tobacco company advocating for lung cancer awareness.

I don't think the new CEO can do anything to win back trust in a record-short amount of time. The Sam loyalists will leave. The questions remain: how is the new CEO going to hire new people, will he be able to do so fast enough, and will the ones who remain accept a company that is drastically different?

bottlepalm
17 replies
11h17m

Ah yes you're either a doomer or e/acc. Pick an extreme. Everything must be polarized.

astrange
16 replies
10h35m

There's a character in HPMOR named after the new CEO.

(That's the religious text of the anti-AI cult that founded OpenAI. It's in the form of a very long Harry Potter fanfic.)

FeepingCreature
11 replies
10h27m

Sorry, which character are you talking about? (Also lol "religious text", how dare people have didactic opinions.)

astrange
10 replies
10h21m

The one with the same name as the new CEO. Pretty straightforward.

Also lol "religious text", how dare people have didactic opinions.

That's not what a religious text is, that'd just be a blog post. It's the part where reading it causes you to join a cult group house polycule and donate all your money to stopping computers from becoming alive.

FeepingCreature
9 replies
10h13m

Oh hey there he is, cool. I had a typo in my search, I think.

That's not what a religious text is, that'd just be a blog post.

Yes, almost as if "Lesswrong is a community blog dedicated to refining the art of human rationality."

It's the part where reading it causes you to join a cult group house polycule and donate all your money to stopping computers from becoming alive.

I don't think anybody either asked somebody to, or actually did, donate all their money. As to "joining a cult group house polycule", to my knowledge that's just SF. There's certainly nothing in the Sequences about how you have to join a cult group house polycule. To be honest, I consider all the people who joined cult group house polycules, whose existence I don't deny, to have a preexisting cult group house polycule situational condition. (Living in San Francisco, that is.)

avalys
7 replies
9h51m

“The Sequences”? Yes, this doesn’t sound like a quasi-religious cult at all…

astrange
5 replies
9h33m

The message is that if you do math in your head in a specific way involving Bayes' theorem, it will make you always right about everything. So it's not even quasi-religious, the good deity is probability theory and the bad one is evil computer gods.

This then causes young men to decide they should be in open relationships because it's "more logical", and then decide they need to spend their life fighting evil computer gods because the Bayes' theorem thing is weak to an attack called "Pascal's mugging" where you tell them an infinitely bad thing has a finite chance of happening if they don't stop it.

Also they invent effective altruism, which works until the math tells them it's ethical to steal a bunch of investor money as long as you use it on charity.

https://metarationality.com/bayesianism-updating

Bit old but still relevant.

FeepingCreature
4 replies
9h27m

This then causes young men to decide they should be in open relationships because it's "more logical"

Yes, which is 100% because of "LessWrong" and 0% because groups of young nerds do that every time, so much so that there's actually an XKCD about it (https://xkcd.com/592/).

The actual message regarding Bayes' Theorem is that there is a correct way to respond to evidence in the first place. LessWrong does not mandate, nor would that be a good idea, that you manually calculate these updates: humans are very bad at it.
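
For what it's worth, the update being described is nothing more exotic than Bayes' rule; a toy sketch with invented numbers, not tied to any real claim:

    # Toy Bayes update: prior belief, one piece of evidence, posterior belief.
    prior = 0.10            # P(hypothesis) before the evidence (invented)
    p_e_if_true = 0.80      # P(evidence | hypothesis)
    p_e_if_false = 0.20     # P(evidence | not hypothesis)

    p_evidence = p_e_if_true * prior + p_e_if_false * (1 - prior)
    posterior = p_e_if_true * prior / p_evidence
    print(round(posterior, 2))  # 0.31: the evidence shifts the belief, it doesn't settle it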

Also they invent effective altruism, which works until the math tells them it's ethical to steal a bunch of investor money as long as you use it on charity.

Given that this didn't happen with anyone else, and most other EAs will tell you that it's morally correct to uphold the law, and in any case nearly all EAs will act like it's morally correct, I'm inclined to think this was an SBF thing, not an EA thing. Every belief system will have antisocial adherents.

astrange
3 replies
9h18m

The actual message regarding Bayes' Theorem is that there is a correct way to respond to evidence in the first place.

No, there isn't a correct way to do anything in the real world, only in logic problems.

This would be well known if anyone had read philosophy; it's the failed program of logical positivism. (Also the failed 70s-ish AI programs of GOFAI.)

The main reason it doesn't work is that you don't know what all the counterfactuals are, so you'll miss one. Aka what Rumsfeld once called "unknown unknowns".

https://metarationality.com/probabilism

Given that this didn't happen with anyone else

They're instead buying castles, deciding scientific racism is real (though still buying mosquito nets for the people they're racist about), and getting tripped up reinventing Jainism when they realize drinking water causes infinite harm to microscopic shrimp.

And of course, they think evil computer gods are going to kill them.

FeepingCreature
2 replies
9h9m

No, there isn't a correct way to do anything in the real world, only in logic problems.

Agree to disagree? If there's one thing physics teaches us, it's that the real world is just math. I mean, re GOFAI, it's not like Transformers and DL are any less "logic problem" than Eurisko or Eliza were. Re counterfactuals, yes, the problem is uncomputable at the limit. That's not "unknown unknowns", that's just the problem of induction. However, it's not like there's any alternative system of knowledge that can do better. The point isn't to be right all the time, the point is to make optimal use of available evidence.

buying castles

They make the case that the castle was good value for money, and given the insane overhead for renting meeting spaces, I'm inclined to believe them.

scientific racism is real (though still buying mosquito nets for the people they're racist about)

Honestly, give me scientific racists who buy mosquito nets over antiracists who don't any day.

getting tripped up reinventing Jainism when they realize drinking water causes infinite harm to microscopic shrimp.

As far as I can tell, that's one guy.

And of course, they think evil computer gods are going to kill them.

I mean, I do think that, yes. Got any argument against it other than "lol sci-fi"?

astrange
1 replies
7h10m

I mean, re GOFAI, it's not like Transformers and DL are any less "logic problem" than Eurisko or Eliza were.

Hmm, they're not a complete anything but they're pretty different as they're not discrete. That's how we can teach them undefinable things like writing styles. It seems like a good ingredient.

Personally I don't think you can create anything that's humanlike without being embodied in the world, which is mostly there to keep you honest and prevent you from mixing up your models (whatever they're made of) with reality. So that really limits how much "better" you can be.

That's not "unknown unknowns", that's just the problem of induction.

This is the exact argument the page I linked discusses. (Or at least the whole book is.)

However, it's not like there's any alternative system of knowledge that can do better.

So's this. It's true; no system of rationalism can be correct because the real world isn't discrete, and none are better than this one, but also this one isn't correct. So you should not start a religion based on it. (A religion meaning a principle you orient your life around that gives it unrealistically excessive meaning, aka the opposite of nihilism.)

I mean, I do think that, yes. Got any argument against it other than "lol sci-fi"?

That's a great argument. The book I linked calls it "reasonableness". It's not a rational one though, so it's hard to use.

Example: if someone comes to you and tries to make you believe in Russell's teapot, you should ignore them even though they might be right.

Main "logical" issue with it though is that it seems to ignore that things cost money, like where the evil AI is going to get the compute credits/GPUs/power bills to run itself.

But a reasonable real world analog would be industrial equipment, which definitely can kill you but we more or less have under control. Or cars, which we don't really have under control and just ignore it when they kill people because we like them so much, but they don't self-replicate and do run out of gas. Or human babies, which are self-replicating intelligences that can't be aligned but so far don't end the world.

FeepingCreature
0 replies
6h19m

Iunno, quantized networks are pretty discrete. It seems a lot of the continuity only really has value during training. (If that!)

So's this. It's true; no system of rationalism can be correct because the real world isn't discrete, and none are better than this one, but also this one isn't correct. So you should not start a religion based on it.

I mean, nobody's actually done this. Honestly I hear more about Bayes' Theorem from rationality critics than rationalists. Do some people take it too far? Sure.

But also

the real world isn't discrete

That's a strange objection. Our data channels are certainly discrete: a photon either hits your retina or it doesn't. Neurons firing or not is pretty discrete, physics is maybe discrete... I'd say reality being continuous is as much speculation as it being discrete is. At any rate, the problem of induction arises just as much in a discrete system as in a continuous one.

Example: if someone comes to you and tries to make you believe in Russell's teapot, you should ignore them even though they might be right.

Sure, but you should do that because you have no evidence for Russell's Teapot. The history of human evolution and current AI revolution are at least evidence for the possibility of superhuman intelligence.

"A teapot in orbit around Jupiter? Don't be ridiculous!" is maybe the worst possible argument against Russell's Teapot. There are strong reasons why there cannot be a teapot there, and this argument touches upon none of them.

If somebody comes to you with an argument that the British have started a secret space mission to Jupiter, and being British they'd probably taken a teapot along, then you will need to employ different arguments than if somebody asserted that the teapot just arose in orbit spontaneously. The catch-all argument about ridiculousness no longer works the same way. And hey, maybe you discover that the British did have a secret space program and a Jupiter cult in government. Proposing a logical argument creates points at which interacting with reality may change your mind. Scoffing and referring to science fiction gives you no such avenue.

But a reasonable real world analog would be industrial equipment, which definitely can kill you but we more or less have under control. Or cars, which we don't really have under control and just ignore it when they kill people because we like them so much, but they don't self-replicate and do run out of gas. Or human babies, which are self-replicating intelligences that can't be aligned but so far don't end the world.

The thing is that reality really has no obligation to limit itself to what you consider reasonable threats. Was the asteroid that killed the dinosaurs a reasonable threat? It would have had zero precedents in their experience. Our notion of reasonableness is a heuristic built from experience, it's not a law. There's a famous term, "black swan", about failures of heuristics. But black swans are not "unknown unknowns"! No biologist would ever have said that black swans were impossible, even if they'd never seen nor heard of one. The problem of induction is not an excuse to give up on making predictions. If you know how animals work, the idea of a black swan is hardly out of context, and finding a black swan in the wild does not pose a problem for the field of biology. It is only common sense that is embarrassed by exceptions.

FeepingCreature
0 replies
9h31m

As far as I can tell, any single noun that's capitalized sounds religious. I blame the Bible. However, in this case it's just a short-hand for the sequences of topically related blog posts written by Eliezer between 2006 and 2009, which are written to fit together as one interconnected work. (https://www.lesswrong.com/tag/sequences , https://www.readthesequences.com/)

astrange
0 replies
9h32m

Well, Berkeley isn't exactly San Francisco, but joining cults is all those people get up to there. Some are Buddhist, some are Leverage, some are Lesswrong.

The most recent case was notably in the Bahamas though.

tempusalaria
2 replies
9h57m

Imagine how bad a reputation EA would have if the general public knew about HPMOR

xvector
1 replies
8h27m

Even HP fanfiction lovers HATED HPMOR. It had a clowny reputation

It is wild to see how closely connected the web is though. Yudkowsky, Shear, and Sutskever. The EA movement today controls a staggering amount of power.

astrange
0 replies
7h32m

Here's the new CEO expressing the common EA belief that (theoretical world-ending) AI is worse than the Nazis, because once you show them a thought experiment that might possibly be true they're completely incapable of not believing in it.

https://x.com/eshear/status/1664375903223427072?s=46

whatshisface
0 replies
10h31m

Is Chat GPT writing this whole dialogue?

peanuty1
2 replies
11h12m

Surely the employees knew before joining that OpenAI is a non-profit aiming to develop safe AGI?

sgift
0 replies
10h54m

They thought so. Now, they know that instead they work for one aiming to satisfy the ego of a specific group of people - same as everywhere else.

alexgartrell
0 replies
10h54m

OpenAI's recruiting pitch was 5-10+ million/year in the form of equity. The structure of the grants is super weird by traditional big-company standards, but it was plausible enough that you could squint and call it the same. I'd posit that many of the people jumping to OpenAI are doing it for the cash and not the mission.

https://the-decoder.com/openai-lures-googles-top-ai-research....

jxf
5 replies
11h31m

But a number of those other employees have said they'll leave if Altman isn't rehired.

zombiwoof
4 replies
11h25m

Bullshit. They are not quitting

icy_deadposts
0 replies
10h3m

You're right. They're fired.

bartimus
0 replies
10h13m

Maybe not instantly. But there's a version where they don't agree with certain decisions and will now be more open to other opportunities.

TechnicolorByte
0 replies
11h20m

Even if you don’t believe many employees would consider leaving for Altman, I find it probable that many would consider leaving for financial reasons. What will their PPUs be worth if OpenAI is seen as a funding risk?

15457345234
0 replies
11h21m

They're either not quitting or they've outed themselves as being part of a personality cult and they'll just hinder things if they're not ejected promptly.

patapong
2 replies
11h21m

I am guessing they are super reliant on Microsoft to keep running ChatGPT... If Microsoft decides to get out and finds a way they would be in deep trouble.

sangnoir
1 replies
10h43m

I'm sure Google will throw a couple of billions their way, given the chance

exitb
0 replies
10h23m

Why though? Companies invest to see profit or get products they can sell. This is not only about the CEO. The CEO change signals a radical strategic shift.

thekevan
0 replies
11h3m

Funny you should reference a nuclear bomb. This was 14 minutes after your post.

https://twitter.com/karpathy/status/1726478716166123851

spaceman_2020
0 replies
11h4m

If the funding dries up for OpenAI, those engineers have no incentive to keep working there. No point wasting your career on an organization that's destined to die.

simultsop
0 replies
11h26m

With PR damage such as this, if they survive it will be a miracle.

questime
0 replies
11h30m

The perception right now is that the board doesn't care about investors, and that will kill a company that is burning money at an insane rate. Employees will run for the exits unless they are convinced that there is a future exit.

pedrosorio
0 replies
10h14m

The GPT-4 pre-training research lead quit on Friday.

laurels-marts
0 replies
7h53m

and talent of those two

You are aware that more than just 2 people departed?

blast
0 replies
11h4m

it’s not as if a nuclear bomb dropped on their HQ

Oh yes it is.

DantesKite
0 replies
10h27m

Andrej Karpathy literally just tweeted the nuclear radiation emoji lol.

chubot
53 replies
11h30m

What I think is funny is how the whole "we're just doing this to make sure AI is safe" meme breaks down, if you have OpenAI, Anthropic, and Altman AI all competing, which seems likely now.

Do you really need all 3? Is each one going to claim that they're the only ones who can develop AGI safely?

Since Sam left, now OpenAI is unsafe? But I thought they were the safe ones, and he was being reckless.

Or is Sam just going to abandon the pretense, competing Google- and Microsoft-style? e.g. doing placement deals, attracting eyeballs, and crushing the competition.

Surely that's what you need for safety?

ryanSrich
48 replies
11h15m

Can someone explain to me what they mean by "safe" AGI? I've looked in many places and everyone is extremely vague. Certainly no one is suggesting these systems can become "alive", so what exactly are we trying to remain safe from? Job loss?

dragonwriter
20 replies
10h29m

Certainly no one is suggesting these systems can become "alive",

Lots of people have been publicly suggesting that, and that, if not properly aligned, it poses an existential risk to human civilization; that group includes pretty much the entire founding team of OpenAI, including Altman.

The perception of that risk as the downside, as well as the perception that on the other side there is the promise of almost unlimited upside for humanity from properly aligned AI, is pretty much the entire motivation for the OpenAI nonprofit.

idontwantthis
14 replies
10h20m

How does it actually kill a person? When does it stop existing in boxes that require a continuous source of electricity and can’t survive water or fire?

upwardbound
6 replies
9h54m

One route is if AI (not through malice but simply through incompetence) plays a part in a terrorist plan to trick the US and China or US and Russia into fighting an unwanted nuclear war. A working group I’m a part of, DISARM:SIMC4, has a lot of papers about this here: https://simc4.org

hurryer
3 replies
9h14m

Since you work on this, do you think leaders will wait until confirmation of actual nuclear detonations, maybe on TV, before believing that a massive attack was launched?

upwardbound
2 replies
9h2m

According to current nuclear doctrine, no, they won’t wait. The current doctrine is called Launch On Warning which means you retaliate immediately after receiving the first indications of incoming missiles.

This is incredibly dumb, which is why those of us who study the intersection of AI and global strategic stability are advocating a change to a different doctrine called Decide Under Attack.

Decide Under Attack has been shown by game theory to have equally strong deterrence as Launch On Warning, while also having a much much lower chance of accidental or terrorist-triggered war.

Here is the paper that introduced Decide Under Attack:

A Commonsense Policy for Avoiding a Disastrous Nuclear Decision, Admiral James A Winnefeld, Jr.

https://carnegieendowment.org/2019/09/10/commonsense-policy-...

hurryer
1 replies
8h57m

I know about the doctrine.

Yet every time there was a "real" attack, somehow the doctrine was not followed (in the US or USSR).

It seems to me that the doctrine is not actually followed because leaders understand the consequences and wait for very solid confirmation?

Soviets also had the perimeter system, which was also supposed to relieve pressure for an immediate response.

upwardbound
0 replies
8h32m

Agree wholeheartedly. Human skepticism of computer systems has saved our species from nuclear extinction multiple times (Stanislav Petrov incident, 1979 NORAD training tapes incident, etc.)

The specific concern that we in DISARM:SIMC4 have is that as AI systems start to be perceived as being smarter (due to being better and better at natural language rhetoric and at generating infographics), people in command will become more likely to set aside their skepticism and just trust the computer, even if the computer is convincingly hallucinating.

The tendency of decision makers (including soldiers) to have higher trust in smarter-seeming systems is called Automation Bias.

The dangers of automation bias and pre-delegating authority were evident during the early stages of the 2003 Iraq invasion. Two out of 11 successful interceptions involving automated US Patriot missile systems were fratricides (friendly-fire incidents).

https://thebulletin.org/2023/02/keeping-humans-in-the-loop-i...

Perhaps Stanislav Petrov would not have ignored the erroneous Soviet missile warning computer he operated, if it generated paragraphs of convincing text and several infographics as hallucinated “evidence” of the reality of the supposed inbound strike. He himself later recollected that he felt the chances of the strike being real were 50-50, an even gamble, so in this situation of moral quandary he struggled for several minutes, until, finally, he went with his gut and countermanded the system which required disobeying the Soviet military’s procedures and should have gotten him shot for treason. Even a slight increase in the persuasiveness of the computer’s rhetoric and graphics could have tipped this to 51-49 and thus caused our extinction.

justcool393
1 replies
9h19m

so the plot of WarGames?

upwardbound
0 replies
9h0m

Exactly. WarGames is very similar to a true incident that occurred in 1979, four years before the release of the film.

https://blog.ucsusa.org/david-wright/nuclear-false-alarm-950...

    In this case, it turns out that a technician mistakenly inserted into a NORAD computer a training tape that simulated a large Soviet attack on the United States. Because of the design of the warning system, that information was sent out widely through the U.S. nuclear command network.

dragonwriter
5 replies
10h8m

When does it stop existing in boxes that require a continuous source of electricity and can’t survive water or fire?

When someone runs a model in a reasonably durable housing with a battery?

(I'm not big on the AI as destroyer or saviour cult myself, but that particular question doesn't seem like all that big of a refutation of it.)

idontwantthis
4 replies
9h57m

But my point is what is it actually doing to reach out and touch someone in the doomsday scenario?

LordDragonfang
1 replies
9h38m

I mean, the cliched answer is "when it figures out how to override the nuclear launch process". And while that cliche might have a certain degree of unrealism, it would certainly be possible for a system with access to arbitrary compute power that's specifically trained to impersonate human personas to use social engineering to precipitate WW3.

And even that isn't the easiest scenario if an AI just wants us dead; a smart enough AI could just as easily send a request to any of the many labs that will synthesize/print genetic sequences for you and create things that combine into a plague worse than covid. And if it's really smart, it can figure out how to use those same labs to begin producing self-replicating nanomachines (because that's what viruses are) that give it substrate to run on.

Oh, and good luck destroying it when it can copy and shard itself onto every unpatched smarthome device on Earth.

Now, granted, none of these individual scenarios have a high absolute likelihood. That said, even at a 10% (or 0.1%) chance of destroying all life, you should probably at least give it some thought.

idontwantthis
0 replies
9h16m

How can it call one of those labs and place an order for the apocalypse and I can’t right now?

Also about the smart home devices: if a current iPhone can’t run Siri locally then how is a Roomba supposed to run an AGI?

AuryGlenz
1 replies
9h26m

Nukes, power grids, planes, blackmail, etc. Surely you’ve seen plenty of media over the years that’s explored this.

idontwantthis
0 replies
9h11m

What is “nukes” though? Like the missiles in silos that could have been networked decades ago but still require mechanical keys in order to fire? Like is it just making phone calls pretending to be the president and everyone down the line says “ok let’s destroy the world”?

grey-area
0 replies
9h15m

The network is the computer.

If you live in a city right now there are millions of networked computers that humans depend on in their everyday life and do not want to turn off. Many of those computers keep humans alive (grid control, traffic control, comms, hospitals etc). Some are actual robotic killing machines but most have other purposes. Hardly any are air-gapped nowadays and all our security assumes the network nodes have no agency.

A super intelligence residing in that network would be very difficult to kill and could very easily kill lots of people (destroy a dam for example), however that sort of crude threat is unlikely to be a problem. There are lots of potentially bad scenarios though many of them involving the wrong sort of dictator getting control of such an intelligence. There are legitimate concerns here IMO.

mlindner
4 replies
9h47m

What does "properly aligned" even mean? Democracies even within countries don't have alignment, let alone democracies across the world. They're a complete mess of many conflicting and contradictory stances and opinions.

This sounds, to me, like the company leadership want the ability to do some sort of picking of winners and losers, bypassing the electorate.

upwardbound
1 replies
8h55m

Any AGI must at a minimum be aligned with these two values:

(1) humanity should not be subjugated

(2) humanity should not go extinct before it’s our time

Even Kim Jong Un would agree with these principles.

Currently, any AGI or ASI built based on any of the known architectures contemplated in the literature which have been invented thus far would not meet a beyond-a-reasonable-doubt standard of being aligned with these two values.

lwhi
0 replies
7h39m

I think this is a crazy set of values.

'.. before it's our time' is definitely in the eye of the beholder.

krisoft
1 replies
8h42m

What does "properly aligned" even mean?

You know those stories where someone makes a pact with the devil/djinn/other wish-granting entity, and the entity grants one interpretation of what was wished, but since it is not what the wisher intended it all goes terribly wrong? The idea of alignment is to make a djinn which not only can grant wishes, but grants them according to the unstated intention of the wisher.

You might have heard the story of the paperclip maximiser. The leadership of the paperclip factory buys one of those fancy new AI agents and asks it to maximise paperclip production.

What a not-well-aligned AI might do: reach out through the internet to a drug cartel’s communication nodes. Hack the communications and take over the operation. Optimise the drug trafficking operations to gain more profit. Divert the funds to manufacture weapons for multiple competing factions at multiple crisis points on Earth. Use the factions against each other. Divert the funds and the weapons to protect a rapidly expanding paperclip factory. Manipulate and blackmail world leaders into inaction. If the original leaders of the paperclip factory try to stop the AI, eliminate them, since that is the way to maximise paperclip production. And this is just the beginning.

What a well-aligned AI would do: fine-tune the paperclip manufacturing machinery to eliminate rejects. Reorganise the factory layout to optimise logistics. Run a successful advertising campaign which leads to a 130% increase in sales. (Because clearly this is what the factory owner intended it to do. Although they did a poor job of expressing their wishes.)

mlindner
0 replies
6h34m

I like your extremist example, however I fear what "properly aligned" means for more vague situations, where it is not at all clear what the "correct" path is, or worse, that it's very clear what "correct" is for some people, but that "correct" is another man's "evil".

hsrada
18 replies
11h8m

Death.

The default consequence of AGI's arrival is doom. Aligning a super intelligence with our desires is a problem that no one has solved yet.

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

----

Listen to Dwarkesh Podcast with Eliezer or Carl Shulman to know more about this.

bnralt
10 replies
10h57m

Aligning a super intelligence with our desires is a problem that no one has solved yet.

It's a problem that we haven't seen the existence of yet. It's like saying no one has solved the problem of alien invasions.

ethbr1
5 replies
10h52m

No, the problem with AGI is potential exponential growth.

So less like an alien invasion.

And more like a pandemic at the speed of light.

mlyle
3 replies
10h36m

That's assuming a big overshoot of human intelligence and goal-seeking. An average human capability counts as "AGI."

If lots of the smartest human minds make AGI, and it exceeds a mediocre human-- why assume it can make itself more efficient or bigger? Indeed, even if it's smarter than the collective effort of the scientists that made it, there's no real guarantee that there's lots of low hanging fruit for it to self-improve.

I think the near problem with AGI isn't a potential tech singularity, but instead just the tendency for it potentially to be societally destabilizing.

MrScruff
2 replies
10h0m

If AI gets to human levels of intelligence (ie. can do novel research in theoretical physics) then at the very least it’s likely that over time it will be able to do this reasoning faster than humans. I think it’s very hard to imagine a scenario where we create an actual AGI and then within a few years at most of that event the AGIs are far more capable than human brains. That would imply there was some arbitrary physical limit to intelligence but even within humans the variance is quite dramatic.

mlyle
1 replies
9h38m

it’s very hard to imagine a scenario where we create an actual AGI and then within a few years at most of that event the AGIs are far more capable than human brains.

I'm assuming you meant "aren't" here.

That would imply there was some arbitrary physical limit to intelligence

All you need is some kind of sub-linear scaling law for peak possible "intelligence" vs. the amount of raw computation. There's a lot of reason to think that this is true.
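
As a toy illustration of what a sub-linear law would mean (the logarithmic form here is purely made up, not an actual scaling result):

    import math

    # Invented functional form: "capability" grows only with the log of compute.
    def capability(compute):
        return math.log10(compute)

    for compute in (1e6, 1e9, 1e12, 1e15):
        print(f"{compute:.0e} units of compute -> capability {capability(compute):.0f}")
    # Each thousand-fold jump in compute buys the same fixed bump in capability,
    # so self-improvement gets slower and more expensive instead of exploding.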

Also there's no guarantee the amount of raw computation is going to increase quickly.

In any case, the kind of exponential runaway you mention (years) isn't "pandemic at the speed of light" as mentioned in the grandparent.

I'm more worried about scenarios where we end up with a 75-IQ savant (with access to encyclopedic training knowledge and a very quick interface to run native computer code for math and data processing help) that can plug away 24/7 and fit on an A100. You'd have millions of new cheap "superhuman" workers per year even if they're not very smart and not very fast. It would be economically destabilizing very quickly, and many of them will be employed in ways that just completely thrash the signal-to-noise ratio of written text, etc.

MrScruff
0 replies
1h55m

I think it depends what is meant by fast take off. If we created AGIs that are superhuman in ML and architecture design you could see a significantly more rapid rate of progress in hardware and software at the same time. It might not be overnight but it could still be fast enough that we wouldn’t have the global political structures in place to effectively manage it.

I do agree that intelligence and compute scaling will have limits, but it seems overly optimistic to assume we’re close to them already.

astrange
0 replies
10h32m

Exponential growth is not intrinsically a feature of an AGI except that you've decided it is. It's also almost certainly impossible.

Main problems stopping it are:

- no intelligent agent is motivated to improve itself because the new improved thing would be someone else, and not it.

- that costs money and you're just pretending everything is free.

hurryer
0 replies
9h5m

Survivorship bias.

It's like saying don't worry about global thermonuclear war because we haven't seen it yet.

The Neanderthals on the other hand have encountered a super-intelligence.

dminik
0 replies
9h56m

We see alignment problems all the time. Current systems are not particularly smart or dangerous, but they lie on purpose and, funnily enough considering the current situation, Microsoft's attempt was threatening users shortly after launch.

MrScruff
0 replies
10h12m

The argument would be that by the time we see the problem it will be too late. We didn’t really anticipate the unreasonable effectiveness of transformers until people started scaling them, which happened very quickly.

FeepingCreature
0 replies
10h17m

It's a problem that we haven't seen the existence of yet. It's like saying no one has solved the problem of alien invasions.

But if we're seeing the existence of an unaligned superintelligence, surely it's squarely too late to do something about it.

ryanSrich
3 replies
10h46m

I like science fiction too, but all of these potential scenarios seem so far removed from the low level realities of how these systems work.

I'm not suggesting we don't see ASI in some distant future, maybe 100+ years away. But to suggest we're even within a decade of having ASI seems silly to me. Maybe there's research I haven't read, but as a daily user of AI, it's hilarious to think people are existentially concerned with it.

upwardbound
0 replies
9h52m

maybe 100+ years away

I have two toddlers. This is within their lifetimes no matter what. I think about this every day because it affects them directly. Some of the bad outcomes of ASI involve what’s called s-risk (“suffering risk”) which is the class of outcomes like the one depicted in The Matrix where humans do not go extinct but are subjugated and suffer. I will do anything to prevent that from happening to my children.

hsrada
0 replies
9h31m

I like science fiction too, but all of these potential scenarios seem so far removed from the low level realities of how these systems work.

Today, yes. Nobody is saying GPT-3 or 4 or even 5 will cause this. None of the chatbots we have today will evolve to be the AGI that everyone is fearing.

But when you go beyond that, it becomes difficult to ignore trend lines.

Here's a detailed scenario breakdown of how it might come to be: https://www.dwarkeshpatel.com/p/carl-shulman

FeepingCreature
0 replies
10h19m

I like science fiction too, but all of these potential scenarios seem so far removed from the low level realities of how these systems work.

Maybe they don't seem that to others? I mean, you're not really making an argument here. I also use GPT daily and I'm definitely worried. It seems to me that we're pretty close to a point where a system using GPT as a strategy generator can "close the loop" and generate its own training data on a short timeframe. At that point, all bets are off.

jazzyjackson
0 replies
10h45m

I'm not sure that it's a matter of "knowing" as much as it is "believing"

ilrwbwrkhv
0 replies
9h45m

There is absolutely no AGI risk. These are mere marketing ploys to sell a chatbot / feel super important. A fancy chatbot, but a chatbot nonetheless.

cthalupa
2 replies
11h4m

Certainly no one is suggesting these systems can become "alive"

No, that very much is the fear. They believe that by training AI on all of the things that it takes to make AI, at a certain level of sophistication, the AI can rapidly and continually improve itself until it becomes a superintelligence.

ryanSrich
1 replies
10h38m

That's not alive in any meaningful sense.

When I say alive, I mean there's something it's like to be that thing. The lights are on. It has subjective experience.

It seems many are defining ASI as just a really fast self-learning computer. And sure, given the wrong type of access and motive, that could be dangerous. But it isn't any more dangerous than any other faulty software that has access to sensitive systems.

FeepingCreature
0 replies
10h22m

You're thinking about "alive" as "humanlike" as "subjective experience" as "dangerous". Instead, think of agentic behavior as a certain kind of algorithm. You don't need the human cognitive architecture to execute an input/output loop trying to maximize the value of a certain function over states of reality.

But it isn't any more dangerous than any other faulty software that has access to sensitive systems.

Seems to me that can be unboundedly dangerous? Like, I don't see you making an argument here that there's a limit to what kind of dangerous that class entails.

huytersd
1 replies
9h37m

They give it stupid terms like “alignment” to make it opaque to the common person. It’s basically sitting on your hands and pointing to sci-fi as to why progress should be stopped.

FeepingCreature
0 replies
5h19m

This is why the superior term is "AI notkilleveryoneism."

reducesuffering
0 replies
11h5m

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Signed by Sam Altman, Ilya Sutskever, Yoshua Bengio, Geoff Hinton, Demis Hassabis (DeepMind CEO), Dario Amodei (Anthropic CEO), and Bill Gates.

https://twitter.com/robbensinger/status/1726039794197872939

cornel_io
0 replies
10h6m

Smart people like Ilya really are worried about extinction, not piddling near-term stuff like job loss or some chat app saying some stuff that will hurt someone's feelings.

The worry is not necessarily that the systems become "alive", though. We are already bad enough ourselves as a species in terms of motivation, so machines don't need to supply the murderous intent: at any given moment there are at least thousands if not millions of people on the planet who would love nothing more than to be able to push a button and murder millions of other people in some outgroup. That's very obvious if you pay even a little bit of attention to any of the Israel/Palestine hatred going back and forth lately. [There are probably at least hundreds to thousands that are insane enough to want to destroy all of humanity if they could, for that matter...] If AI becomes powerful enough to make it easy for a small group to kill large numbers of people that they hate, we are probably all going to end up dead, because almost all of us belong to a group that someone wants to exterminate.

Killing people isn't a super difficult problem, so I don't think you really even need AGI to get to that sort of an outcome, TBH, which is why I think a lot of the worry is misplaced. I think the sort of control systems that we could pretty easily build with the LLMs of today could very competently execute genocides if they were paired with suitably advanced robotics, it's the latter that is lacking. But in any case, the concern is that having even stronger AI, especially once it reliably surpasses us in every way, makes it even easier to imagine an effectively unstoppable extermination campaign that runs on its own and couldn't be stopped even by the people who started it up.

I personally think that stronger AI is also the solution and we're already too far down the cat-and-mouse rabbithole to pause the game (which some e/acc people believe as the main reason they want to push forward faster and make sure a good AI is the first one to really achieve full domination), but that's a different discussion.

bartimus
0 replies
9h57m

It being "alive" is sort of what AGI implies (depending on your definition of life).

Now consider the training has caused it to have undesirable behavior (misaligned with human values).

zombiwoof
1 replies
11h24m

Two words: Laundry Buddy

Sam doomed himself. Laundry Buddy is the new Clippy

chubot
0 replies
10h56m

If we do not release Laundry Buddy, that increases humanity's extinction risk

asdkl234890
0 replies
10h16m

And it's not as if researchers in other nations like China will sit on their hands and do nothing. They are busy catching up, but without any ethics boards.

0xDEAFBEAD
0 replies
10h51m

Do you really need all 3? Is each one going to claim that they're the only ones who can develop AGI safely?

What we need at this point is a neutral 3rd party who can examine their safety claims in detail and give a relatively objective report to the public.

tempsy
7 replies
11h34m

Yeah Emmett Shear seems like an odd choice if they’re worried about retention because 1) Twitch was never known to be a particularly great place to work and 2) he stepped down for some reason and not because Twitch was in an amazing place or anything at the time

himaraya
5 replies
11h31m

Emmett's just a placeholder after Murati turned. I suspect he won't stay in his position for long.

PepperdineG
4 replies
10h55m

Recursive Interim CEOs. Will there be a Mandelbrot set of Interim CEOs?

dehrmann
1 replies
9h48m

And people say there aren't real-world applications to n-ary tree rebalancing.

vintermann
0 replies
9h2m

I laughed, but actually I think the utility of tree rebalancing is widely appreciated!

sgift
0 replies
10h47m

The new research focus demanded by the board in the name of safety will be a CEO AI, which will be aligned with humanity's interests - the benchmark to show this will be whether it does whatever the board wants. It's the only way to make sure they cannot be stabbed in the back again by a pesky human.

mickdarling
0 replies
10h43m

This particular board won’t even let ChatGPT help the CEO because they’re afraid there’s a Basilisk hiding in every response.

sanxiyn
0 replies
10h44m

Emmett Shear is probably the person most friendly to OpenAI board's AI safety agenda among possible candidates. Source: have a look at his Twitter.

theptip
4 replies
11h0m

The big question in my mind is the reported threat from MSFT to withhold cloud credits (i.e. the actual currency of their $10B investment). Is this true? And are they going to follow through?

I don't buy for a second that enough employees will walk to sink the company (though it could be very disruptive). But for OpenAI, losing a big chunk of their compute could mean they are unable to support their userbase, and that could permanently damage their market position.

borissk
1 replies
10h31m

MS is not going to randomly withhold cloud credits, as OpenAI would sue them for billions in damages.

vaxman
0 replies
10h21m

So they should keep buying H100s (and H200s) and pouring billions into their own chips on the expectation that OpenAI will fulfill its contractual obligations under THESE circumstances? If they stop doing that, how long before all of Azure is busy on a money losing chat program under all new leadership that doesn’t have the same plan that was sold to MSFT?

justcool393
0 replies
10h23m

was it even reported? i heard a bunch of stuff that seemed to be hypothetical guessing like "satya must be furious" that seemed to morph into "it was reported satya is furious"

i've seen similar with the cloud credits thing, people just pontificating whether it's even a viable strategy.

cthalupa
0 replies
10h42m

The report was that investors were talking to microsoft about the threat to withhold credits.

Which does not say whether microsoft was open to the idea or ultimately chose to pursue that path.

dehrmann
4 replies
9h55m

I still cannot process what’s happened to one of the most prominent and hyped companies of the past year in just one weekend.

That's kinda what happened. The latest gist I read was that the non-profit, idealistic(?) board clashed with the for-profit, hypergrowth CEO over the direction to take the company. When you read the board's bios, they weren't ready for this job (few are; these rocket ship stories are rare), the rocket ship got ahead of their non-profit goals, and they found themselves in over their heads, then failed to game out how this would go over (poor communication with MS, not expecting Altman to get so much support).

From here, the remaining board needs to either surface some very damning evidence (the memo ain't it) or step down and let MS and Sequoia find a new board (even if they're not officially entitled to do that). Someone needs to be saying mea culpa.

sanxiyn
3 replies
9h46m

I am not sure why. As far as I can tell, the board doesn't need to answer to anyone.

dehrmann
1 replies
9h41m

I think the diagram I saw showed they don't actually answer to MS, VCs, or employee investors? And not even that they're out-voted, they don't answer to them at all.

sanxiyn
0 replies
9h34m

As far as I can tell, this is correct.

g42gregory
0 replies
9h0m

Unfortunately (or fortunately?), you always have to answer to somebody. In this board's case, they have to answer to investors, Microsoft in particular. Why? Because Microsoft can pull the money (apparently they have only sent a fraction of the $10Bn so far) and can sabotage the partnership deal. OpenAI won't meet payroll and won't be able to run the GPU farm. Microsoft already threatened to do exactly that.

My suspicion is that Microsoft will do exactly that: they will pull the money, sabotage the partnership deal and focus on rebuilding GPT in-house (with some of the key OpenAI people hired away). They will do this gradually, on their own timetable, so that it does not disrupt the GPT Azure access to their own customers.

I doubt that there could be a replacement for the Microsoft deal, because who would want to go through this again? OpenAI might be able to raise a billion or two from the hardcore AI Safety enthusiasts, but they won't be able to raise the tens of billions needed to run the next cycle of scaling.

brotchie
3 replies
11h8m

Don't fully believe this, but the only rational explanation I can see is that Ilya knows they have AGI.

   - Nuke employee morale: massive attrition, not getting upside (tender offer),
   - Nuke the talent magnet: who's going to want to work there now?
   - Nuke Microsoft relationship: all those GPUs gone,
   - Nuke future fundraising: who's going to fund this shit show?
Just doesn't make sense.

vultour
0 replies
8h52m

People really need to stop with this AGI bullshit. They make a glorified Markov chain and suddenly they should have AGI? Self-driving cars are barely able to stay on the road after all this time, but sure, someone's hiding conscious machines in their basement.

upwardbound
0 replies
9h45m

Reminds me of the very ending of the show Silicon Valley. Crazy twist and great last two episodes of the show.

lazystar
0 replies
10h0m

burnout and sleep deprivation can lead to some pretty bad choices; that's why you want to surround yourself with people that will stand up to you when your ideas and plans suffer from too much tunnel vision. sounds like the other 3 board members were yes-men/women; the house of cards was there for a while, it seems.

yeck
2 replies
10h50m

Well, despite what Musk did, X (Twitter?) has still been limping along for quite a while now. While more abrupt and surprising, this doesn't seem nearly as bad as that.

extheat
1 replies
10h40m

This is far worse. OpenAI simply cannot survive without Microsoft and with only a skeleton staff. It's not like a static codebase where you can keep the service up and running indefinitely barring bugs. Why would anyone building with the OpenAI APIs, their customers, have any faith in the company if they openly don't care about business? Working on AI is highly capital intensive, on the scale of many tens of billions of dollars. Where are they going to get that funding? How will they pay their staff? There is no way Microsoft is going to HODL after this embarrassment.

yeck
0 replies
10h24m

Musk fired most of the engineers. I'd be pretty surprised if we see the level of attrition at OpenAI getting within an order of magnitude of that. We are just making predictions, though. I could be way off the mark and many more people are willing to jump ship than I imagine.

As for Microsoft, if they let OpenAI go, then what? Does Google pick them up? Elon? They are still looking to invent AGI, so I'd be surprised if no one wants to take advantage of that opportunity. I'd expect Microsoft to be aware of this and weigh into their calculus.

ignoramous
2 replies
11h32m

No idea what the future holds for any of the players here. Reality truly is stranger than fiction.

Is it though? "No outcome where [OpenAI] is one of the big five technology companies. My hope is that we can do a lot more good for the world than just become another corporation that gets that big." -Adam D'Angelo

Palmik
1 replies
10h15m

I guess he would prefer it if the existing incumbents got even larger, or if his competitor to ChatGPT (Poe) could capture a significant fraction of the market.

_factor
0 replies
9h1m

Can’t beat em so join em? You’re framing this as a capitalist competition. Non-profits don’t care if their “competitors” win market share.

vaxman
1 replies
10h53m

No, OpenAI will not survive as a company with more than one shareholder. At the end of the day, MSFT has a fiduciary duty to its own shareholders. MSFT has set certain expectations for its own financial performance based on its agreements with OpenAI and MSFT shares traded based on those expectations. Now OpenAI has sustained a hemorrhage of its leadership that negotiated those agreements, including a public admission by OpenAI of deception in their boardroom and private talk of a potential competitor involving employees. The only question is if OpenAI will capitulate or the lawyers and supply chain will be leveraged to compel their cooperation with protecting the MSFT shareholders. MSFT has deep enough pockets to retain all of the workers. One way or another, the IP and their ops are now the property of the bank, in this case MSFT shareholders. Let’s hope nobody goes to jail by resisting what is a standard cleanup operation at this point.

vaxman
0 replies
8h18m

“Sorry, we are reporting a write down of $10 billion due to potential misrepresentations of commercial intent that occurred in our OpenAI portfolio.”

Things you will never hear Satya Nadella say. Way more likely he will coordinate to unify as much of their workers as he can to continue on as a subsidiary, with the rest left to go work something out with other players crazy/desperate enough to trust them.

seydor
1 replies
10h5m

Can Microsoft buy IP from OpenAI? Recruit their engineers? Asking for a friend.

BOOSTERHIDROGEN
0 replies
9h41m

Exclusive use up until pre-AGI tech.

mym1990
0 replies
11h24m

Middle East funding and fully self reliant seem to be at odds here.

jumelles
0 replies
10h56m

The dirty secret of the business world is that the C-suite is the most easily replaceable.

GreedClarifies
0 replies
10h52m

Don’t have twits on the board. Lesson learnt.

bmitc
113 replies
9h52m

Through all of this, no one has cogently explained why Altman leaving is such a big deal. Why would workers immediately quit their job when he has no other company, and does he even know who these workers are? Are these people that desperate to make a buck (or the prospect of big bucks)? It seems like half of the people working at the non-profit were not actually concerned about the mission but rather just waiting out their turn for big bucks and fame.

What does Altman bring to the table besides raising money from foreign governments and states, apparently? I just do not understand all of this. Like, how does him leaving and getting replaced by another CEO the next week really change anything at the ground level other than distractions from the mission being gone?

And the outpouring of support for someone who was clearly not operating how he marketed himself publicly is strange and disturbing indeed.

jmerz
28 replies
9h27m

I think he's not as known in the outside world but it's really difficult to understate the amount of social capital sama has in the inner circles of Silicon Valley. It sounds like he did a good job instilling loyalty as a CEO as well, but the SV thing means that the more connected someone at the company is to the SV ecosystem, the more likely they like him/want to be on his good side.

This is kind of like the leadership of the executive branch switching parties. You're not going to say "why would the staff immediately quit?" Especially since this is corporate America, and sama can have another "country" next week.

bmitc
22 replies
9h25m

So it's a big deal because he has a cult of personality?

belugacat
15 replies
9h23m

It’s a big deal because he’s extremely charismatic and well connected and that matters much, much more for a tech company’s success than some programmers like to think.

bmitc
5 replies
9h15m

I have watched him speak, and he doesn't seem charismatic at all. I remember hearing the same things about Sam Bankman-Fried and then going and watching his interviews and feeling the same.

There is just a giant gap here where I simply do not get it, and I see no evidence that what I'm not getting is some key aspect of all this. This just seems like classic cargo cult, cult of personality, and following money and people who think they know best.

joenot443
0 replies
4h5m

Surely you can understand that the persona one presents while giving a speech is often entirely different from the one they assume in private? I figured you knew him personally; this is a pretty funny justification.

If your analysis is based solely off YouTube interviews, I think your perspective on Sam’s capabilities and personality is going to be pretty surface level and uninteresting.

djokkataja
0 replies
9h7m

I have watched him speak, and he doesn't seem charismatic at all.

Consider the relative charisma of the people around him, though.

creshal
0 replies
8h48m

There are different types of charisma; some people appear extremely charismatic in person but not through a camera (there's a bunch of politicians you could name here), and vice versa (a lot of actors).

bakuninsbart
0 replies
5h29m

Charisma is a euphemism for people starting to see dollar signs when they get close to him. The better connected you are, the more people want to connect with you, and Altman seems to have driven this to an extreme in SV, and in the broader policy/tech world thanks to OpenAI. If you look at who is (probably) going to leave with him, it is mostly former Y Combinator people or people clearly drawn to OAI through his connections.

aleph_minus_one
0 replies
7h6m

I have watched him speak, and he doesn't seem charismatic at all. I remember hearing the same things about Sam Bankman-Fried and then going and watching his interviews and feeling the same.

Besides the argument that creshal brought up in a sibling comment, that some people are more charismatic live and some are more charismatic through a camera:

In my observation, quite a few programmers are much more immune to "charisma influence" (or rather: manipulation by charisma) than other people. For example, someone once sent me an old video of Elon Musk in some TV show (I think) where he explained how he wanted to build a rocket to fly to the moon, and that person claimed the video makes you want Musk to succeed because of the confidence Musk shows. Well, that is not the impression the video made on me ...

Solvency
5 replies
8h58m

What am I missing here: Sam Altman has zero charisma or cool factor. Every talk I've seen him in, he comes off as lethargic and sluggish. I get zero sense of passion or rallying drive around the hype of AI from him. He's not an AI visionary. He's not a hype man. He simply "is"; he's been thrust into the spotlight just because he happens to be the CEO, but there's literally nothing interesting about him.

fallingknife
1 replies
7h56m

What you are missing is his record of success and of making the people under him rich. That's the kind of person people want to work for. They want to make money, not to work for someone who looks good on camera.

aleph_minus_one
0 replies
7h2m

Your comment was the first about sama's "charisma" where the puzzle pieces fit together. :-)

sumitkumar
0 replies
7h42m

It is not generic charisma. It is specific to who he can attract to work with him. You and I cannot figure it out just by going through how we perceive him from a distance. The average AI researcher/investor isn't looking for traditional charisma. In the interview with Lex Fridman he comes across as just the right person to lead the current GPT-based products. Anyone else would be too traditional for this nascent product suite.

leobg
0 replies
7h28m

Read what pg has to say about him. He named Altman as one of the top 5 most interesting founders of the last 30 years.

startup investing does not consist of trying to pick winners the way you might in a horse race. But there are a few people with such force of will that they're going to get whatever they want.

http://www.paulgraham.com/5founders.html

feraloink
0 replies
3h46m

Agreed. I like your adjectives of lethargic and sluggish. I have read all the responses to you and a few others who made a similar observation. I remain unconvinced about what is so essential about Sam Altman to OpenAI. I just don't get it.

vkou
0 replies
8h52m

I understand why this would be uniquely valuable for a startup, but why would this be uniquely valuable for MSFT? Are they planning on raising a series B next year?

iwsk
0 replies
8h53m

We live in a society.

ChatGTP
0 replies
8h33m

I don’t find him charismatic at all. I find Donald Trump more charismatic and I think he is the devil in disguise.

natch
2 replies
9h21m

I wouldn’t call the entire YC community a cult of personality. And that’s just a subset of his network.

toomuchtodo
0 replies
9h13m

People see what they want to see.

bmitc
0 replies
9h5m

It's not all, but you see plenty of it here.

joenot443
0 replies
4h13m

Your phrasing here suggests this is some kind of dunk on sama, but it’s really not. JFK and Huey Long both had cults of personality, it doesn’t mean they weren’t incredibly effective and influential.

feraloink
0 replies
3h50m

Yes. Well, it seems like it to me.

Here's more about the new interim CEO, of Justin.tv fame. It isn't paywalled. https://www.cnbc.com/2023/11/20/who-is-emmett-shear-the-new-...

andrepd
0 replies
5h16m

Yes.

steakscience
3 replies
8h31m

Every SV CEO has a "Sam Altman saved my butt during crucial incident X" story

Raptor22
1 replies
7h52m

What are some examples of these crucial incidents?

DonHopkins
0 replies
3h12m

It was a dark and stormy night off the coast of Maine. The winds were howling, the waves were monstrous, and there I was, stranded on my lobster fishing boat in the middle of a hurricane. The sea was a ferocious beast, tossing my vessel around like a toy. Just when all seemed lost, a figure appeared on the horizon. It was Sam Altman, riding a giant, neon-lit drone, battling the tempest with nothing but his bare hands and indomitable will.

As he approached, lightning crackled around him, as if he was commanding the elements themselves. With a deft flick of his wrist, he sent a bolt of lightning to scare away a school of flying sharks that were drawn by the storm. Landing on the deck of my boat with the grace of a superhero, he surveyed the chaos.

"Need a hand with those lobsters?" he quipped, as he single-handedly wrangled the crustaceans with an efficiency that would put any seasoned fisherman to shame. But Sam wasn't done yet. With a mere glance, he reprogrammed my malfunctioning GPS using his mind, charting a course to safety.

As the boat rocked violently, a massive wave loomed over us, threatening to engulf everything. Sam, unfazed, simply turned to the wave and whispered a few unintelligible words. Incredibly, the wave halted in its tracks, parting around us like the Red Sea. He then casually conjured a gourmet meal from the lobsters, serving it with a fine wine that materialized out of thin air.

Just as quickly as he had appeared, Sam mounted his drone once more. "Time to go innovate the weather," he said with a wink, before soaring off into the storm, leaving behind a trail of rainbows.

As the skies cleared and the sea calmed, I realized that in the world of Silicon Valley CEOs, having a "Sam Altman saved my butt" story was more than just a rite of passage; it was a testament to the boundless, almost mythical capabilities of a man who defied the very laws of nature and business. And I, a humble lobster fisherman, had just become part of that legend.

rightbyte
0 replies
7h40m

Like what?

basicoperation
0 replies
9h13m

Do you mean “difficult to overstate”?

“Difficult to understate” would mean he has little to no social capital.

MattGaiser
19 replies
9h39m

Why would workers immediately quit their job when he has no other company

It is Sam Altman. He will have one in a week.

It seems like half of the people working at the non-profit were not actually concerned about the mission but rather just waiting out their turn for big bucks and fame.

I would imagine most employees at any organization are not really there because of corporate values, but their own interests.

What does Altman bring to the table besides raising money from foreign governments and states, apparently?

And one of the world's largest tech corporations. If you are interested in the money side, that isn't something to take lightly.

So I would bet it is just following the money, or at least the expected money.

The new board also wants to slow development. That isn't very exciting either.

bayindirh
15 replies
9h30m

It is Sam Altman. He will have one in a week.

Welcome to Cargo Cult AI.

alsodumb
13 replies
9h27m

What's wrong with that statement though?

It's the AI era - VCs are going crazy funding AI startups. What makes you think Greg and Sam would have a hard time raising millions/billions and starting a new company in a week if they want to?

bmitc
12 replies
9h20m

How will they come up with the idea? One is an investor and the other is an infrastructure software engineer.

alsodumb
11 replies
9h13m

What idea are you talking about? They are not your classic founders coming up with an idea to join Y Combinator. They built OpenAI over many years; they know what to do.

It won't be hard for them to hire researchers and engineers, from OpenAI or other places.

Questions like this make me wonder if you are a troll. I won't continue this thread.

bayindirh
10 replies
9h10m

Being able to hire researchers, even the top talent, doesn't guarantee that they'll be the top company or even succeed at what they're building.

This is what I referred as "Cargo Cult AI". You can get the money, but money is not the only ingredient needed to make things happen.

edit: Looks like they won't have a brand new company next week, but joining an existing one.

xcv123
8 replies
9h4m

Nothing can guarantee that. Investors always accept risk.

He has a better chance than some other random guy who was not the CEO of OpenAI.

bayindirh
4 replies
8h52m

Let's see whether Satya Nadella's bet on that risk will pay off or not. Chance is a "biased random" in the real world. Let's see whether his bias is strong enough to make a difference.

xcv123
3 replies
8h47m

Are you talking about OpenAI or about Sam Altman's hypothetical new company?

OpenAI already had the best technology fully developed and in production when Microsoft invested in them.

I believe "cargo cult" means something quite different to how you're using it.

It's not "cargo cult" to consider someone's CV when you hire them for a new job. Sam Altman ran a successful AI company before and he most likely can do it again if provided enough support and resources.

bayindirh
2 replies
8h20m

Are you talking about OpenAI or about Sam Altman's hypothetical new company?

About him and Greg joining to Microsoft.

I believe "cargo cult" means something quite different to how you're using it.

I don't think so.

Tribes believed that building wooden airstrips or planes would bring back the goods they had seen during wartime.

People believe that bringing in Altman will bring back the same thing (OpenAI as it was), picking up exactly where it left off.

Altman is just the tip of the iceberg. He might be something of a catalyst, but he's not the research itself or the researchers themselves.

xcv123
1 replies
6h46m

OpenAI did not invent the transformer architecture. It was not their original research, but they implemented it well. Sam Altman led the company that implemented and executed it. Deep learning is not a secret. It just needs a lot of resources to be executed properly. OpenAI doesn't have any secret methods unknown to the rest of the AI community. They have strong engineering and execution. It is certainly within the CEO's power to influence that.

bayindirh
0 replies
6h14m

I don't claim that OpenAI will be the same without Sam, but Sam will be powerless without OpenAI.

What I say is, both lost their status quo (OpenAI as the performer, Sam as the leader), and both will have to re-adjust and re-orient.

The magic smoke has been let out. Even if you restore the "configuration" of OpenAI with Sam and all employees before Friday, it's almost impossible to get the same company from these parts.

Again, Sam was part of what made OpenAI what it is, and without it, he won't be able to perform the same. Same is equally valid for OpenAI.

Things are changing, it's better to observe rather than dig for an entity or a person. Life is bigger than both of them, even when combined.

mcv
2 replies
8h43m

He has a better chance than some other random guy who was not the CEO of OpenAI.

Yes, but that doesn't mean it's enough. Not every random guy who wasn't the CEO of OpenAI is about to start an AI company (though some probably are).

It's quite possible an AI company does need a better vision than "hire some engineers and have them make AI".

sage76
1 replies
6h13m

It's quite possible an AI company does need a better vision than "hire some engineers and have them make AI".

Seems like all these "business guys" think that's all it takes.

mcv
0 replies
3h53m

They often do. That doesn't make them right. There's probably going to be a massive AI bubble similar to what we've seen with cryptocurrencies and NFTs, and after that bubble pops, AI will probably end up discredited for a decade before it picks up again. It's happened before.

ChatGTP
0 replies
8h31m

Case in point: Google and Bard.

MattGaiser
0 replies
9h21m

All the more reason he will have one within a week. All sorts of people are raising millions for AI. One of the creators of modern startup venture capital who is buddies with many of the creators of modern startup venture capital as well as the CEOs of the major tech companies is unlikely to struggle here.

bmitc
2 replies
9h20m

> Why would workers immediately quit their job when he has no other company

It is Sam Altman. He will have one in a week.

His previous companies were Loopt and Worldcoin. Won't his next venture require finding someone else to piggyback off of?

If you are interested in the money side, that isn't something to take lightly.

I am interested in how taking billions from foreign companies and states could lead to national security and conflict of interest problems.

The new board also wants to slow development.

It's not a new board as far as I know.

alsodumb
1 replies
9h15m

His previous ventures don't matter. If he seeks funding, whether millions or billions, he will get it. Period. I don't know how people can reasonably argue that he will have a hard time raising money for a new AI startup along with Greg.

It's not a new board, but it's the time when the board decided to assert their power and make their statement/vision clear.

bmitc
0 replies
9h3m

So Sam and Greg are going to invent some new thing out of thin air in a matter of days? Or will they attach themselves to something else, like I implied? Or take on millions of dollars of funding to "figure it out"?

reissbaker
11 replies
9h39m

The board fired Altman for shipping too fast compared to their safety-ist doom preferences. The new interim CEO has said that he wants to slow AI development down 80-90%. Why on earth would you stay, if you joined to build + ship technology?

Of course, some employees may agree with the doom/safety board ideology, and will no doubt stay. But I highly doubt everyone will, especially the researchers who were working on new, powerful models — many of them view this as their life's work. Sam offers them the ability to continue.

If you think this is about "the big bucks" or "fame," I think you don't understand the people on the other side of this argument at all.

BoorishBears
5 replies
9h30m

Not enough people understand what OpenAI was actually built on.

OpenAI would not exist if FAANG had been capable of getting out of its own way and shipping things. The moment OpenAI starts acting like the companies these people left, it's a no-brainer that they'll start looking for the door.

I'm sure Ilya has 10 lifetimes more knowledge than me locked away in his mind on topics I don't even know exist... but the last 72 hours are the most brain dead actions I've ever seen out of the leadership of a company.

This isn't even cutting off your nose to spite your face: this is like slashing your own tires to avoid going in the wrong direction.

The only possible justification would have been some jailable offense by Sam Altman, and ironically their initial release almost seemed to want to hint at that before they were forced to explicitly state that wasn't the case. At the point where you're forced to admit you surprise-fired your CEO for relatively benign reasons, how much must have gone completely sideways to land you in that position?

jonbell
2 replies
9h18m

It’s possible be extremely smart in one narrow way and a complete idiot when it comes to understanding leadership, people, politics, etc.

For example, Elon Musk was smart enough to do some things … then he crashed and burned with Twitter because it’s about people and politics. He could not have done a worse job, despite being “smart.”

mschuster91
1 replies
7h37m

For example, Elon Musk was smart enough to do some things … then he crashed and burned with Twitter because it’s about people and politics. He could not have done a worse job, despite being “smart.”

That is, if you do not subscribe to one of the various theories that him sinking Twitter was intentional. The most popular ones I've come across are "Musk wants revenge for Twitter turning his daughter trans", "Saudi-Arabia wants to get rid of Twitter as a trusted-ish network/platform to prevent another Arab Spring" and "Musk wants to cozy up to a potential next Republican presidency".

Personally, I think all three have merit - because otherwise, why didn't the Saudis and other financiers go and pull an Altman on Musk? It's not Musk's personal money he's burning on Twitter; it's to a large degree other people's money.

dragontamer
0 replies
3h6m

Personally, I think all three have merit - because otherwise, why didn't the Saudis and other financiers go and pull an Altman on Musk? It's not Musk's personal money he's burning on Twitter; it's to a large degree other people's money.

Of the $46 Billion Twitter deal ($44 equity + $2 debt buyout), it was:

* $13 Billion Loans (bank funded)

* $33 Billion Equity -- of this, ~$9 Billion was estimated to be investors (including Musk, Saudis, Larry Ellison, etc. etc.)

So it's about 30% other investors and 70% Elon Musk money.
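A quick sanity check on those numbers (a minimal sketch; the dollar figures are the rough estimates quoted above, and it assumes the ~$9B counts equity from co-investors other than Musk, which is what the 30/70 conclusion implies):

    # Rough split of the Twitter deal, using the approximate figures quoted above.
    total = 46e9          # $13B loans + $33B equity
    loans = 13e9          # bank-funded debt
    equity = 33e9         # total equity portion
    co_investors = 9e9    # estimated equity from co-investors other than Musk (assumption)
    musk_equity = equity - co_investors

    print(f"co-investors: {co_investors / equity:.0%} of the equity")  # ~27%, roughly 30%
    print(f"Musk:         {musk_equity / equity:.0%} of the equity")   # ~73%, roughly 70%
    # If you also count the debt Musk is on the hook for, his share of the whole deal is higher:
    print(f"Musk incl. debt: {(musk_equity + loans) / total:.0%} of the deal")  # ~80%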

xvector
1 replies
9h14m

I really hope this comes back around and bites Ilya and OAI in the ass. What an absurd decision. They will rightfully get absolutely crushed by the free market.

BoorishBears
0 replies
8h28m

Looks like you got your wish earlier than anyone would have expected: https://twitter.com/satyanadella/status/1726509045803336122

mianos
4 replies
9h19m

This is exactly why you would want people on the board who understand the technology. Unless they have some other technology that we don't know about, that maybe brought all this on, a GPT is not a clear path to AGI. That is a technical point that seems to be beyond most people without real experience in the field. It is certainly beyond the understanding of some dude that lucked into a great training set and became an expert, much the same way The Knack became industry leaders.

og_kalu
3 replies
9h10m

Unless they have some other technology that we don't know about, that maybe brought all this on, a GPT is not a clear path to AGI.

So Ilya Sutskever, one of the most distinguished ML researchers of his generation does not understand the technology ?

The same guy who's been on record saying LLMs are enough for AGI ?

mianos
0 replies
8h8m

Sorry, I am not including Ilya when I say some don't understand the technology.

In fact, he is exactly the type who should be on the board.

He is not the one saying 'slow down, we might accidentally invent an AGI that takes over the world'. As you say, he says LLMs are not a path to a world-dominating AGI.

lucubratory
0 replies
8h12m

To be clear, he thinks that LLMs are probably a general architecture, and thus capable of reaching AGI in principle with enormous amounts of compute, data, and work. He thinks for cost and economics reasons it's much more feasible to build or train other parts and have them work together, because that's much cheaper in terms of compute. As an example, with a big enough model, enough work, and the right mix of data you could probably have an LMM interpret speech just as well as Whisper can. But how much work does it take to make that happen without losing other capabilities? How efficient is the resulting huge model? Is the end result better than having the text/intelligence segment separate from the speech and hearing segment? The answer could be yes, depending, but it could also be no. Basically his beliefs are that it's complicated and it's not really a "Can X architecture do this" question but a "How cheap is this architecture to accomplish this task" question.

fallingknife
0 replies
7h45m

AGI doesn't exist. There is no standard for what makes an AGI or test to prove that an AI is or isn't an AGI once built. There is no engineering design for even a hypothetical AGI like there is for other hypothetical tech e.g. a fusion reactor, so we have no idea if it is even similar to existing machine learning designs. So how can you be an expert on it? Being an expert on existing machine learning tech, which Ilya absolutely is, doesn't grant this status.

jstummbillig
7 replies
9h17m

I am so confused by how this question is asked, and the reactions.

It's "such a big deal" because he has been leading the company, and apparently some people really like how and they really don't like how it ended.

Why would it require any other explanation? Are you asking what leaders do and why an employee would care about what they do...?

bmitc
6 replies
9h8m

Do you understand why he was fired? The company had a charter, one the board is to help uphold. Altman and his crew were leading the company, and seemingly its employees, away from that charter. He was not open about how he was doing that. The board fired him.

This is like a bunch of people joining a basketball team where the coach starts turning it into a soccer team, and then the GM fires the coach for doing this and everyone calls the GM crazy and stupid. If you want to play soccer, go play soccer!

If you want to make a ton of money in a startup moving fast, how about not setting up a non-profit company spouting a bunch of humanitarian shit? It's even worse, because Altman very clearly did all this intentionally by playing the "I care about humanity" card just long enough, while riding on the coattails of researchers, so that he could start up side ventures to use his new AI profile to make the big bucks. But now people want to make him a martyr simply because the board called his bluff. It's bewildering.

jstummbillig
1 replies
8h58m

Do you understand why he was fired?

Do you? Because that part is way more irritating, and, honestly, starting to read your original comment I thought that was where you were going with this: Why was he fired, exactly?

The way the statement was framed basically painted him as a liar, in a way so vague that people put forth the most insane theories about why. I can sense some animosity, but do you really think it's okay to fire anyone in a way where, to the outside, the possible explanation ranges from a big data slip to molesting their sister?

Nothing has changed. That is the part that needs transparency and its lack is bewildering.

upwardbound
0 replies
8h43m

One of the comments here had a good possible explanation which is that sharing the details might expose the board to liability since they now would have admitted that they know the details of some illicit thing Sam did, for which a lawsuit is coming.

For example, one scenario someone in a different thread conjectured is that Sam was secretly green-lighting the intentional (rather than incidental) collection of large amounts of copyrighted training data, exposing the firm to a great risk of a lawsuit from the media industry.

If he hid this from the board, “not being candid” would be the reason for his firing, but if the board admits that they know the details of the malfeasance, they could become entangled in the litigation.

ffgjgf1
1 replies
9h2m

But if the board seems to be doing everything they can to make sure that, long term, OpenAI wouldn't be able to execute anything in their charter in a meaningful way (assuming they end up being left behind technologically and not that relevant), does it really make that much sense?

rightbyte
0 replies
7h35m

What does a potential future scenario matter? The board have to follow the charter today.

fevangelou
0 replies
5h16m

From my understanding (not part of the SF tech bubble), S.A. had his shot as the CEO of a company that came to prominence because of a GREAT product (and surely not design, manufacturing or marketing). Just consider WHEN MS invested in OpenAI. He probably went too far for reasons only a few know, but still valid ones to fire him...

His previous endeavor was YC partner, right? So a rich VC turning to a CEO. To make even more money. How original. If any prominent figure was to be credited here beyond Ilya S., well that would probably be Musk. Not S.A., who as a YC partner/whatever played Russian roulette with other rich folks' money all these years... As for MS hiring S.A., they are just doing the smart thing: if S.A. is indeed that awesome and everyone misses the "charisma", he'll pioneer AI and even become the next MS CEO... Or Satya Nadella will have his own "Windows Phone" moment with SamAI ;)

dragontamer
0 replies
3h19m

Do you understand why he was fired?

Wrong question. From the behavior of the board this weekend, it seems like the question is more "Do you understand how he was fired?".

IE: Immediately, on a Friday before Market close, before informing close partners (like Microsoft with 49% stake).

The "why" can be correct, but if the "how" is wrong that's even worse in some regards. It means that the board's thinking process is wrong and they'll likely make poor decisions in the future.

I don't know much about Sam Altman, but the behavior of the board was closer to a huge scandal. I was expecting news of some crazy misdeed of some kind, not just a simple misalignment with values.

Under these misalignment scenarios, you'd expect a stern talking to, and then a forced resignation over a few months. Not an immediate firing / removal. During this time, you'd inform Microsoft (and other partners) of the decision to get everyone on the same page, so it all elegantly resolves.

EDIT: And mind you, I don't even think the "why" has been well explained this weekend. That's part of the reason why "how" is important, to make sure the "why" gets explained clearly to everyone.

fevangelou
6 replies
9h32m

100% spot on.

The world is filled with Sam Altmans, but surely not enough Ilya Sutskevers.

exitb
4 replies
9h28m

Was Sutskever really that instrumental to OpenAI's success if it was at all possible for him to be surprised at the direction the company is taking? It doesn't seem that he is that involved in the day-to-day operations.

modeless
1 replies
9h20m

Is operations responsible for their success? Or is it rather their technology?

exitb
0 replies
9h13m

I understand that he was instrumental in the earlier days, but does it seem like he is involved in the day-to-day work on the technology, today? When the new CEO advocates for a near-pause in AI development, does he mean operations?

fredoliveira
1 replies
9h15m

Anyone asking this question has never gone through Ilya's achievements. He is quite brilliant, and clearly instrumental here. And Sam is amazing in his own way too, for sure.

exitb
0 replies
9h9m

I understand his achievements, but is he involved right now? Does he, nowadays, provide to the company anything other than his oversight?

AbrahamParangi
0 replies
9h18m

This is deeply wrong. Just because you don’t see what’s special about him doesn’t mean he isn’t a rare talent.

varjag
4 replies
9h45m

A CEO typically builds up a network of his people within the org and if he falls hard they are next on the chopping block. Same deal as with dictators.

"Dozens" sounds like about right amount for a large org.

mcv
3 replies
8h47m

So having Altman's loyalists leave is probably exactly what Sutskever wants?

Still, what do they actually want? It seems a bit overly dramatic for such an organisation.

lucubratory
2 replies
8h22m

This is very short and explains exactly what they want: https://openai.com/charter

I think it's pretty obvious after reading it why people who were really committed to that Charter weren't happy with the direction that Sam was taking the company.

ric2b
1 replies
7h3m

It doesn't sound obvious to me. Can you clarify what Sam was doing that went against the charter?

ruszki
0 replies
3h22m

Looking at Windows 11 and Copilot, it's easy to see that the Microsoft deal violates "Broadly distributed benefits" on some level. But of course, who knows without an official statement.

3cats-in-a-coat
4 replies
9h5m

The new CEO (Emmett, not Mira, who was CEO for two days I guess) has publicly stated on multiple occasions "we need to slow down from a 10 to a 1-2". Ilya is also in favor of dramatically "slowing down". That's who's left in this company, running it.

In the field of AI, right now, "slowing down" is like deciding to stop the car and walk the track by foot in the middle of a Formula 1 race. It's like going backwards.

Unless things change from the current status quo, OpenAI will be irrelevant in less than 2 years. And of course many will quit such a company and go work somewhere where the CEO wants to innovate, not slow down.

ChatGTP
1 replies
8h29m

Well, many of the top researchers in the world seem keen for a slowdown, so I'm not sure you're right. You can't force people to work on things at a pace they're uncomfortable with.

3cats-in-a-coat
0 replies
7h46m

You'd find this hard to support with facts.

We have a bunch of people talking about how worried they are and how we should slow down, and among them Sam Altman, and you see he was shipping fast. And Elon Musk, who also was concurrently working on his own AI startup while telling everyone how we should stop.

There's no stopping this and any person of at least average intelligence is fully aware of this. If a "top researcher" is in favor of not researching, then they're not a researcher. If a researcher doesn't want to ship anything they research, they're also not a researcher.

OpenAI has shipped nothing so far that is in any way suggesting the end of humanity or other such apocalyptic scenario. In total, these AI models have great potency in making our media, culture, civilization a mess of autogenerated content, and they can be very disruptive in a negative way. But no SINGLE COMPANY is in control of this. If it's not OpenAI, it'll be one of the other AI companies shipping comparable models right now.

OpenAI simply had the chance to lead, and they just gave up on it. Now some other company will lead. That's all that happened. OpenAI slowing down won't slow down AI in general. It just makes OpenAI irrelevant in 1-2 years time max.

rdedev
0 replies
7h56m

Also keep in mind governments are keeping an eye on this. If they are not careful they may get regulated like hell.

Xorakios
0 replies
7h17m

Not trying to be snarky, but I'm guessing more like two months.

lazystar
3 replies
9h51m

stability.

bmitc
2 replies
9h47m

But Altman, the ousted CEO, appears to have been adding to the instability. His firing seems like a step in getting back to a desired stability.

mdekkers
1 replies
9h19m

Can you, you know, bring facts and data to this discussion, as opposed to vague handwaving of weird accusations? Altman has been doing an amazing job at running the business he co-founded, and “instability” isn’t something _anyone_ at any side of the discussion is accusing him of.

What is this instability, in your view? And how is this “desired stability” going to come back?

bmitc
0 replies
8h58m

What discussion, specifically, as you're just joining in here?

If a CEO of a non-profit is raising billions of dollars from foreign companies and states to create a product that he will then sell to the non-profit he is CEO of, I view that as adding instability to the non-profit given its original mission. Because that mission wasn't to create a market for the CEO to take advantage of for personal gain.

hobofan
3 replies
9h40m

<deleted>

og_kalu
0 replies
9h38m

why else would they bring a hyper-capitalist like Sam Altman on board

They didn't "bring" a hyper capitalist. Sam Co-founded this entire thing lol. He was there from the beginning.

krystianantoni
0 replies
9h15m

He is one of the two original founders :)

MattGaiser
0 replies
9h35m

Who among the founders isn't a hyper-capitalist? Elon Musk? Peter Thiel? Reid Hoffman?

ssnistfajen
2 replies
9h18m

Based on Andrej Karpathy's comment on Twitter today, the board never explained any of this to the staff. So siding with Altman seems like a far better option since his return would mean a much higher likelihood of continuing business as usual.

If Ilya & co. want the staff to side with them, they have to give a reason first. It doesn't necessarily have to be convincing, but not giving a reason at all will never be convincing.

dmix
1 replies
7h51m

And the new CEO wants to slow down AI development and is a Yudkowsky fan which is another incentive to leave https://x.com/drtechlash/status/1726507930026139651?s=46&t=

ruszki
0 replies
3h26m

Making AI models safer is a type of AI development.

alex_young
2 replies
9h43m

Looks like they have about 700 employees. A handful quitting doesn’t seem like a mutiny.

zuppy
0 replies
9h28m

yes, but although we can all be replaced in a company, some people are much harder to replace than others. so, i wouldn't say that the number is high but maybe (and i only speculate) some of them are key people.

redlampdesk
0 replies
9h21m

More senior employees can easily know ~1000x more about the company than new employees. These employees are like lower branches on a tree, their knowledge crucially supporting many others. Key departures can sever entire branches.

tim333
1 replies
7h2m

Jessica Livingston's tweet may give some idea:

The reason I was a founding donor to OpenAI in 2015 was not because I was interested in AI, but because I believed in Sam. So I hope the board can get its act together and bring Sam and Greg back.

I guess other people joined for similar reasons.

As regards the 'strange and disturbing' support, personally I thought OpenAI was doing cool stuff and it was a shame to break it because of internal politics.

ruszki
0 replies
3h18m

This is classic startup PR nonsense. They just fear change for obvious reasons. It doesn’t mean that they will leave if OpenAI can work without Altman.

wyager
0 replies
3h0m

Altman was fired because people who want to slow the progress of AI orchestrated his firing.

Whether or not he works at the company is symbolic and indicative of who is in charge: the people who want to slow AI progress, or the people who want to speed it up.

vaxman
0 replies
8h33m

TBH, my primary concern is this will be the catalyst for another market crash by destroying the public trust in AI, which is currently benefiting from investor FOMO.

Bear in mind that the cause of an equity market crash and its trigger are two different things.

The 2000 crash in Tech was caused by market speculation in enthusiastic dot-com companies with poor management YES, but the trigger was simply the DOJ finally making Bill throw a chair (they had enough of being humiliated by him for decades as they struggled with old mainframe tech and limited staffing).

If the dot-com crash trigger had not arrived for another 12-18 months, I’m sure the whole mess could have been swept under the rug by traders during the Black Swan event and the recovery of the healthy companies would have been 5-6 months, not 5-6 years (or 20 years in MSFT’s case).

spoonjim
0 replies
9h19m

He is the CEO! He sets the entire agenda for the company. Of course he is important - how could he not be?

mdekkers
0 replies
9h27m

It is likely that wherever Altman goes next, @gdb would follow, and _he_ is deeply loved by many at OAI (but so is Altman).

CEOs should be judged by their vision for the company, their ability to execute on that vision, bringing in funding, and building the best executive team for that job. That is what Altman brings to the table.

You make it seem that wanting to make money is a zero-sum game, which is a narrow view to take - you can be heavily emotionally and intellectually invested in what you do for a living and want to be financially independent at the same time. You also appear to find it "disturbing" that people support someone who is doing a good job - there has always been a difference between marketing and operations, and it is rather weird you find that disturbing - or that they appreciate stability, or love working for a team that gets shit done.

To address your initial strawman, why would workers quit when the boss leaves? Besides all the normal reasons listed above, they also might not like the remaining folks, or they may have lost faith in those folks, given the epic clusterfuck they turned this whole thing into. All other issues aside, if I would see my leadership team fuck up this badly, on so many levels, i’d be getting right out of dodge.

These are all common sense, adult considerations for anyone that has an IQ and age above room temperature and that has held down a job that has to pay the bills, and combining that with your general tone of voice, I’m going to take a wild leap here and posit that you may not be asking these questions in good faith.

jncfhnb
0 replies
2h34m

I don’t get it either. Who gives two shits about an SV bigwig whose playbook appears to have been to promote OpenAI and then immediately try to pull up the ladder and lock it with regulatory action?

This guy is a villain.

doomleika
0 replies
9h21m

As much as @sama is not exactly "great" (Worldcoin is... ehem), the firing reeks of political strife, and anyone who has spent enough days at any office knows what happens next: the coming year at OpenAI will be grandstanding as those "revolutionists" stamp out any dissenting voice, and fertile ground for opportunists to use the chaos to make things worse. Most employees' prime objective will be navigating the political shitstorm rather than doing their job. The chance OpenAI stays as it was before ChatGPT is little to none.

Better run for the lifeboat before the ship hits the iceberg.

colechristensen
0 replies
9h1m

A poorly planned, poorly executed firing of a CEO with such a high profile and so important to investors that the CEO of Microsoft is surprised, angry, and negotiating his return… is the kind of absolute chaos that I would like to avoid. I would definitely consider quitting in that circumstance.

I would think to myself, what if management ever had a small disagreement with me?

I quit a line cook job once in a very similar circumstance, scaled down to a small restaurant. The inexperienced owners were making chaotic decisions and fired the chef, and I quit the same day - not out of any kind of particular loyalty or anger, I just declined the chaos of the situation. I quit before the chaos could hurt me or my reputation by getting mixed up in it… and moved on to other things.

TrackerFF
0 replies
7h28m

OpenAI seems to be the product of two types of people:

- The elite ML/AI researchers and engineers.

- The elite SV/tech venture capitalists.

These types come with their own followings - and I'm not saying that these two never intersect, but on one side you get a lot of brilliant researchers that truly are in it for the mission. They want to work there, because that's where ground zero is - both from the theoretical and applied point of view.

It's the ML/AI equivalent of working at CERN - you could pay the researchers nothing, or everything, and many wouldn't care - as long as they get to work on the things they are passionate about, AND they get to work with some of the most talented and innovative colleagues in the world. For these, it is likely more important to have top ML/AI heads in the organization, than a commercially-oriented CEO like Sam.

On the other side, you have the folks that are mostly chasing prestige and money. They see OpenAI as some sort of springboard into the elite world of top ML, where they'll spend a couple of years building cred, before launching startups, becoming VP/MD/etc. at big companies, etc. - all while making good money.

For the latter group, losing commercial momentum could indeed affect their will to work there. Do you sit tight in the boat, or do you go all-in on the next big player - if OpenAI crumbles the next year?

With that said, leadership conflicts and uncertainty are never good - whatever camp you're in.

JanSt
0 replies
9h46m

Seems like the board wants to slow down progress, which pretty much means sitting there waiting for alignment instead of putting out the work you came for. Sam will let them keep making progress, I guess, plus a mountain of cash/equity for them.

AbrahamParangi
0 replies
9h20m

This is worse than firing Jobs, at least when they fired him it was for poor performance not “doing too good a job”.

4734573
0 replies
8h58m

Professionals tend to value their work in the very real sense of assigning a price to it. So I doubt it was desperation so much as having a sense of self-worth and a belief that the structure of OpenAI was largely a matter of word games the lawyers came up with.

As for Altman... I don't understand what's insignificant about raising money and resources from outside groups? Even if he wasn't working directly on the product itself, that role is still valuable in that it means he knows the amount of resources that kind of project will require, while also commanding some familiarity with how to allocate them effectively. And on top of that he seems to understand how to monetize the existing product a lot better than Ilya, who mostly came out of this looking like a giant hazard for anyone who isn't wearing rose-tinted sci-fi goggles.

mfiguiere
45 replies
10h28m

TheInformation: Dozens of Staffers Quit OpenAI After Sutskever Says Altman Won’t Return

Dozens of OpenAI staffers internally announced they were quitting the company Sunday night, said a person with knowledge of the situation, after board director and chief scientist Ilya Sutskever told employees that fired CEO Sam Altman would not return.

https://www.theinformation.com/articles/dozens-of-staffers-q...

intellectronica
29 replies
10h15m

Tip for builders: you can use the GPT APIs on Microsoft Azure. Managed reliably, nobody's quitting, no drama. Same APIs, just with better controls, global availability, and a very stable, reliable, and trustworthy provider. (disclosure: I work at Azure, but this is just my own observation).
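For anyone weighing the switch, here's a minimal sketch of what the swap looks like, assuming the 1.x openai Python SDK; the endpoint, key, API version and deployment name are placeholders, not real values:

    # A minimal sketch, assuming the 1.x openai Python SDK; the endpoint,
    # key, API version, and deployment name below are placeholders.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com",
        api_key="<your-azure-openai-key>",
        api_version="2023-07-01-preview",
    )

    # Same chat.completions call you'd make against api.openai.com; the main
    # difference is that Azure routes by deployment name rather than model name.
    response = client.chat.completions.create(
        model="<your-gpt-4-deployment-name>",
        messages=[{"role": "user", "content": "Say hello"}],
    )
    print(response.choices[0].message.content)

The request and response shapes are otherwise the same, which is why moving between the two is usually a configuration change rather than a rewrite.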

mlindner
11 replies
10h9m

I don't understand what point you're trying to make. Yes Microsoft uses OpenAI APIs. What is the point you're trying to make beyond that? It's still OpenAI software.

reissbaker
4 replies
10h4m

Microsoft doesn't "use" the APIs, they host them on their own servers and have a license to do so and re-license to Azure users. If something goes wrong with OpenAI (given that it sounds like many key employees are leaving), Azure will stay up and you can keep using the APIs from MS.

deeringc
2 replies
9h31m

That may provide short term stability, but medium term (which in this field is a few months) how will Azure's offering move forward if OpenAI is in such crisis? I guess it really comes down to OpenAI's ability to continue without Altman and Co. I don't believe that Microsoft's license allows them to independently develop the models? Wouldn't this become a stale fork pretty quickly while the rest of the industry moves on (llama2 etc ..)?

reissbaker
0 replies
9h20m

I agree that medium term is up in the air and highly dependent on what happens next. If many OAI employees defect to Sam's new company, maybe that becomes the thing everyone migrates to...

intellectronica
0 replies
8h30m

Or ... cut the middleman: Sam Altman and Greg Brockman joining MS to start a new AI unit - https://twitter.com/satyanadella/status/1726516824597258569

rob74
0 replies
9h58m

Or, to say it another way, they are cooperating with OpenAI - OpenAI uses Microsoft's cloud services, and Microsoft incorporates OpenAI's products in its own offerings. But the worries people have are not about OpenAI's products suddenly vanishing, it's about the turmoil at OpenAI affecting the future of those products.

Actually the exodus of talent from OpenAI may turn out to be beneficial for the development of AI by increasing competition - however it will certainly go against the stated goal of the board for firing Altman, which was basically keeping the development under control.

intellectronica
3 replies
10h1m

Yes, the model weights were developed by OpenAI. They are licensed exclusively and irrevocably to Microsoft, and operated by Microsoft, not OpenAI. If you are building with these APIs and are concerned about consuming them from OpenAI (which also runs them on Azure, but managed by OpenAI staff) because of the drama there, you can de-risk by consuming from Azure directly.

nicce
1 replies
9h56m

They are licensed exclusively and irrevocably to Microsoft, and operated by Microsoft, not OpenAI.

No wonder the CEO got fired.

jiggawatts
0 replies
9h17m

Let me guess: Ilya and his team had developed GPT-5, decided it very nearly had consciousness, and then Sam immediately turned around and asked Microsoft what they're willing to pay for a copy to use and abuse.

ignoramous
0 replies
9h56m

If folks care enough to move to Azure, I think they might as well de-risk entirely from OpenAI models, despite their quality?

apstls
1 replies
10h3m

The current models would presumably be accessible for customers regardless of OpenAI’s state. If OpenAI were to hypothetically somehow vanish into thin air, products and features built on their products could still be supported by Azure’s offering.

deeringc
0 replies
9h12m

Sure, but what's the point of building a product on top of a stable API that exposes a technology that won't evolve because its actual creators have imploded? It remains to be seen whether OpenAI will implode, but at this point it seems the dream team isn't getting back together.

zer00eyz
5 replies
9h43m

You want me to trust M$ in all this? Embrace, extend, extinguish.

Fellow nerds, you really need to go into work on Monday and have a hard chat with your C levels and legal (Because IANAL). The question is: Who owns the output of LLM/AI/ML tooling?

I will give you a hint, it's not you.

Do you need to copyright what a CS agent says? No, you want them on script as much as possible. An LLM parroting your training data is a good thing (assuming a human wrote it). Do you want an LLM writing code, or copy for your product, or a song for your next corporate sing-along (where did you go, old IBM)? No you don't, because it's likely going straight to the public domain. Depending on what you're doing with the tool and how you're using it, it might not matter that this is the case (it's an internal thing), but M$, or OpenAI, or whoever your vendor is, having a copy that they are free to use might be very bad...

irrational
1 replies
9h39m

Also, you might be given someone else’s proprietary IP, setting yourself up for a lawsuit.

zer00eyz
0 replies
9h32m

If I grab something off GitHub, and the license there is GPL, but it was actually someone else's IP, I do have some recourse for my infraction.

In the case of an LLM handing it to me can I sue MS or OpenAI for giving out that IP, or is it on me for not checking first? Is any of this covered in the TOS?

zabzonk
0 replies
9h29m

a hint - the "M$" thing is not smart or funny, just old.

lannisterstark
0 replies
9h21m

Embrace, extend, extinguish.

Microsoft hasn't embraced that ideology in more than a decade by now. Might be time to let go of the boomer compulsion.

ascorbic
0 replies
9h18m

Have I just been transported to Slashdot in 2003?

I'm not sure you appreciate how enterprise licence agreements work. Every detail of who owns what will have been spelled out, along with the copyright indemnities for the output.

jiggawatts
3 replies
9h19m

The Azure-hosted versions are consistently behind the OpenAI versions.

For example, the GPT4 128K-token model is unavailable, and the GPT-4V model is also unavailable.

laurels-marts
2 replies
8h2m

This. Very frustrating. Why is Azure behind and when is the gpt-4-turbo version coming?

intellectronica
1 replies
7h18m

It's already available globally on Azure as of last week.

laurels-marts
0 replies
7h2m

Okay you’re correct. Last week when I checked I only saw the Dall-E 3 public preview announcement. Now I checked and the Azure page is updated also with GPT-4 Turbo announcement. Very nice!

_boffin_
3 replies
10h10m

Question: how difficult is it to get that no retention waiver on prompts and responses?

intellectronica
2 replies
10h9m

Not difficult. I've not heard of anyone who asked and _didn't_ get the waiver. It's just a responsible stop-gap in case a user does something questionable or dangerous.

apstls
1 replies
10h6m

The waiver still allows for logging of prompts for the specific purpose of abuse monitoring for some limited retention period, right? How difficult is it to have this waived as well?

haldujai
0 replies
10h2m

I work in academia and with somewhat protected data so YMMV but it wasn't hard for me at all (I just filled out the form and MS approved it).

mrtksn
1 replies
10h10m

How the same? Does it have the new Assistant API too?

intellectronica
0 replies
10h8m

Basically, yes (there are some variations but same functionality, and much more).

Xenoamorphous
0 replies
10h1m

GPT on Azure has become incredibly slow for us in the past few weeks.

chimney
10 replies
10h0m

Isn't this expected? Nearly everyone who joined post ChatGPT was primarily financially motivated. What is more interesting is how many of the core research team stays.

quietthrow
4 replies
9h50m

This. Very accurate. At the end of the day this is a battle between academics and capitalists and what they stand for. We generally know how this typically goes…

simseye
2 replies
9h32m

I don't see many academics indulge in sensationalist doomsaying. That's the real difference here. SETI wouldn't and couldn't seek grants by proposing to contact murderous aliens.

I think academics have a general faith in the goodwill of intelligence. Benevolence may be a convergent phenomenon. Maybe the mechanisms of reason themselves require empathy and goodness.

sudosysgen
1 replies
9h28m

Huh? There's plenty of AI doomerism amongst academics, see Bengio, Hinton, etc...

simseye
0 replies
9h9m

Hinton makes cliched statements as if he's not given much thought to safety but feels obliged for whatever reason

irrational
0 replies
9h41m

The capitalists run it into the ground while the academics stand around confused asking each other what happened?

ah765
3 replies
9h43m

This is actually pretty surprising to me, since a financially motivated person would normally wait until a better deal, and just collect their paycheck in the meantime.

There's also no guarantee that Altman will really start a new company, or be able to collect funding to hire everyone quickly. I wonder if these people are just very loyal to Sam.

MattGaiser
1 replies
9h34m

This is actually pretty surprising to me, since a financially motivated person would normally wait until a better deal, and just collect their paycheck in the meantime.

I imagine you need to signal that you want in on the deal by departing. Get founder equity.

jatins
0 replies
7h53m

Even if he had started a new company, there was no way a dozen employees were getting founder equity for showing loyalty

hurryer
0 replies
9h29m

Or they could be loyal to the e/acc cult.

exizt88
0 replies
9h6m

How do you know that? Maybe they wanted to ship AI products at an unprecedented speed at the most prestigious AI company in the world.

alsodumb
3 replies
10h14m

Does anyone have a non-paywall version of this? Or like excerpts from the article?

karmasimida
2 replies
9h55m

The Information is a $300 annual subscription; I don't think they will allow it.

alsodumb
1 replies
9h52m

Oh wow, that's like the most I've seen for any news subscription.

pg_1234
0 replies
9h31m
valine
36 replies
11h32m

Not a word from Ilya. I can’t wrap my mind around his motivation. Did he really fire Sam over “AI safety” concerns? How is that remotely rational.

ignoramous
12 replies
11h12m

Did he really fire Sam over "AI safety" concerns? How is that remotely rational.

Not rational iff (and unlike Sutskever, Hinton, Bengio) you are not a "doomer" / "decel". Ilya is very vocal and on record that he suspects there may be "something else" going on with these models. He and DeepMind claim AlphaGo is already AGI (correction: ASI) in a very narrow domain (https://www.arxiv-vanity.com/papers/2311.02462/). Ilya in particular predicts it is a given that neural networks will achieve broad AGI (superintelligence) before alignment is figured out, unless researchers start putting more resources into it.

(like LeCun, I am not a doomer; but I am also not Hinton to know any better)

mcpackieh
5 replies
11h2m

"[Artificial General Intelligence] in a very narrow domain."

Which is it?

ignoramous
3 replies
10h49m

Read the paper linked above, and if you don't agree that's okay. There are many who don't.

maxlin
1 replies
10h41m

Check it again, I think you might have misread the thing. It categorizes things in a way that clearly separates AlphaGO from even shooting towards "AGI". The "General" part of AGI can't really be skipped or words don't make any sense anymore.

ignoramous
0 replies
10h38m

Ah, gotcha; I meant "superintelligence" (which is ASI and not AGI).

calf
0 replies
10h30m

Has anyone written a response to this paper? Their main gist is to try to define AGI empirically using only what is measurable.

maxlin
0 replies
10h45m

I think the guy read the paper he linked the wrong way. The paper explicitly separates "narrow" and "general" types, where AlphaGo is in the virtuoso bracket for narrow AI, and ChatGPT is in the "emerging" bracket for general AI. The only thing it puts at AGI is a few levels up from virtuoso, but in the "general" type.

sgregnt
3 replies
10h52m

Can you please share the sources for Ilyas views?

ignoramous
2 replies
10h50m
zxexz
1 replies
10h20m

For what it's worth, the MIT Technology Review these days is considered to be closer to a "tech tabloid" than an actual news source. I personally would find it hard to believe (on gut instinct, nothing empirical) that AGI has been achieved before we have a general playbook for domain-specific SotA models. And I'm of the 'faction' that AGI can't come soon enough.

ignoramous
0 replies
10h0m

hard to believe (on gut instinct, nothing empirical) that AGI has been achieved before we have a general playbook for domain-specific SotA models

Ilya is pretty serious about alignment (precisely?) due to his gut instinct: https://www.youtube.com/watch?v=Ft0gTO2K85A (2 Nov 2023)

esjeon
1 replies
10h49m

AGI in a very narrow domain

The definition of AGI always puzzles me, because the "G" in AGI is general, and that word certainly doesn't play well with "narrow". AGI is a new buzzword, I guess.

og_kalu
0 replies
10h41m

Well, there's nothing narrow about SotA LLMs. The main hinge is just competence.

I think the guy you're replying to misunderstood the article he's alluding to, though. They don't claim anything about a narrow AGI.

bufferoverflow
8 replies
11h27m

Because that's not the actual reason. It looks like a hostile takeover. The "king" of, arguably, the most important company in the world, got kicked out with very little effort. It's pretty extraordinary, and the power shift is extraordinary too.

yreg
4 replies
11h12m

The board firing a CEO is hardly a hostile takeover.

surrealize
2 replies
11h9m

If you fire one board member (Altman) and remove another from the board (Brockman) it's not exactly friendly either

maxbond
1 replies
10h48m

Firing generally isn't friendly, but no one "took over." The people who had the power exercised it. Maybe they shouldn't have, I feel no compulsion to argue on their behalf, but calling it a "takeover" isn't correct.

I think when people say "takeover" or "coup" it's because they want to convey their view of the moral character of events, that they believe it was an improper decision. But it muddies the waters and I wish they'd be more direct. "It's a coup" is a criticism of how things happened, but the substantive disagreements are actually about that it happened and why it happened.

I see lots of polarized debate any time something AI safety related comes up, so I just don't really believe that most people would feel differently if the same thing happened but the corporate structure was more conventional, or if Brockman's board seat happened to be occupied by someone who was sympathetic to ousting Altman.

calf
0 replies
10h27m

"It's a coup" is loaded language and lets the user insinuate their position without actually explaining and justifying it.

juped
0 replies
10h27m

Boards have exactly one job.

(It's firing the CEO, if anyone wasn't aware.)

sdwvit
1 replies
11h17m

Maybe someone from higher up called the board?

borissk
0 replies
10h16m

ChatGPT-5?

voidfunc
0 replies
11h20m

Kicked out is a bit of hyperbole. They don't have their champion anymore, but the deal and their minority ownership stake are inked. They still get tech and profits. They might not have a path to owning OpenAI now, but that was a problem a few years down the road. They can also invest in Altman's new thing and poach OpenAI talent to bolster their internal AI research, which is probably going to get a massive funding boost.

The PR hit will be bad for a few days. Good time to buy MS stock at a discount, but this won't matter in a year or two.

tdubhro1
6 replies
11h0m

If it really was about "safety", then why wouldn't Ilya have made some statement about opening the details of their model at least to some independent researchers under tight controls? This is what makes it look like a simple power grab; the board has said absolutely nothing about what actions they would take to move toward a safer model of development.

snovv_crash
4 replies
10h40m

Because they want to slow down further research which would push AGI closer until the safety/alignment aspect can catch up.

lyu07282
1 replies
10h15m

But if you really cared about that, why would you be so opaque about everything? Usually people with strong conviction try to convince other people of that conviction. For a non-profit that is supposedly acting in the interests of all mankind, they aren't actually telling us shit. Transparency is pretty much the first thing anyone who actually cares about ethics and social responsibility reaches for.

upwardbound
0 replies
9h30m

Ilya might be a believer in what Eliezer Yudkowsky is currently saying, which is that opacity is safer.

https://x.com/esyudkowsky/status/1725630614723084627?s=46

Mr. Yudkowsky is a lot like Richard Stallman. He’s a historically vital but now-controversial figure whom a lot of AI Safety people tend to distance themselves from nowadays, because he has a tendency to exaggerate for rhetorical effect. This means that he ends up “preaching to the choir” while pushing away or offending people in the general public who might be open to learning about AI x-risk scenarios but haven’t made up their mind yet.

But we in this field owe him a huge debt. I’d sincerely like to publicly thank Mr. Yudkowsky and say that even if he has fallen out of favor for being too extreme in his views and statements, Mr. Yudkowsky was one of the 3 or 4 people most central to creating the field of AI safety, and without him, OpenAI and Anthropic would most certainly not exist.

I don’t agree with him that opacity is safer, but he's a brilliant guy and I personally only discovered the field of AI safety through his writings, through which I read about and agreed with the many ways he had thought of by which AGI can cause extinction, and I as well as another of my college friends decided to heed his call for people to start doing something to avert potential extinction.

He’s not always right (a more moderate and accurate figure is someone like Prof. Stuart Russell) but our whole field owes him our gratitude.

ffgjgf1
1 replies
9h31m

wouldn’t that mean that they’ll just be left behind and it won’t matter what their goal is?

snovv_crash
0 replies
5h20m

It depends how far ahead they currently are.

victor9000
0 replies
9h37m

That's because it's not about safety, it's about ego, vanity, and delusion.

seanhunter
3 replies
11h15m

No, he didn't fire Sam over AI safety concerns. That's completely made up by people in the twittersphere. The only thing we know is that the board said the reason was that he lied to the board. The Guardian[1] reported that he was working on a new startup and that staff had been told it was due to a breakdown in communication and not to do with anything regarding safety, security, malfeasance or a bunch of other things.

[1] https://www.theguardian.com/technology/2023/nov/18/earthquak...

frabcus
1 replies
11h1m

The Atlantic Article makes it pretty clear that the fast growth of the commercial business was giving Ilya too few resources and too little time to do the safety work he wanted to do: https://archive.ph/UjqmQ

seanhunter
0 replies
7h12m

I can buy that they fired him over a disagreement about strategy (i.e. are we going too fast, are we concentrating on the wrong things, etc.), because in general of course board members get fired if they can't work together on a common strategy. But the narrative that took over in lots of places over the weekend is more along the lines of: he got fired because they had created a sentient AI and Ilya was worried about it. That just makes no sense to me.

Additionally, no one (not insiders at OpenAI and certainly not a journalist) other than people in those conversations actually knows what happened, and no one other than Ilya actually knows why he did what he did. Everyone else is relying on rumor and hearsay. For sure the closer people are to the matter, the more insight they are likely to have, but no one who wasn't in the room actually knows.

sashank_1509
0 replies
10h23m

Having spoken to a bunch of folks at OpenAI, it really does seem to be about safety. Ilya was extremely worried and did not like the idea of GPTs, as users can train AIs to do arbitrarily harmful stuff.

ah765
1 replies
11h5m

It might be because of AI safety, but I think it's more likely because Sam was executing plans without informing the board, such as making deals with outside companies, allocating funds to profit-oriented products and making announcements about them, and so on. Perhaps he also wanted to reduce investment in the alignment research that Ilya considered important. Hopefully we'll learn the truth soon, though I suspect that it involves confidential deals with other companies and that's why we haven't heard anything.

ipaddr
0 replies
9h54m

It's to do with a tribe in OpenAI that believes AI will take over the world in the next 10 years, so much of the company's effort needs to go toward preparing for that. What that translates to is strong prompt censorship and automated tools to ban those who keep asking things they don't want you to ask.

Sam has been agreeing with this group and using this as the reason to go commercial, to provide funding for that goal. The problem is these new products are coming too fast and eating into the resources they can use for safety training.

This group never wanted to release chatGPT but were forced to because a rival company made up of ex openAI employees were going to release their own version. To the safety group things have been getting worse since that release.

Sam is smart enough to use the safety group's fear against them. They finally clued in.

OpenAI never wanted to give us ChatGPT. Their hands were forced by a rival, and Sam and the board made a decision that brought in the next breakthrough. From that point things snowballed. Sam knew he needed to run before bigger players moved in. It became too obvious after DevDay that the safety team would never be able to catch up, and they pulled the brakes.

OpenAI's vision of a safe AI has turned into a vision of human censorship rather than protecting society from a rogue AI with the power to harm.

singularity2001
0 replies
11h6m

To shine some light on the true nature of the "AI safety tribe" aspects I highly recommend reading the other top HN post / article : https://archive.is/Vqjpr

ansk
25 replies
10h21m

The contrast between the media's depiction of today's events with this outcome demonstrates how much Sam and his aligned interests can influence a narrative. The vast majority of reporting indicated that Sam held all the leverage, the board members were about to be fired, and a new board would be appointed which would be much better aligned with Sam. These accounts took special care to portray the current board as inept, wavering, and eager to backtrack. However, the reality appears to be that the board was steadfast in their decision and focused on moving forward in search of a new CEO. The truth is, we still don't know the real reason why Sam was ousted. His removal may be justified and it may not. But in the absence of information, media speculation was overwhelmingly biased in Sam's interests. If the news that the board did not immediately backtrack and capitulate to Sam's demands seems to be coming out of left field, you've likely fallen victim to this narrative. Chances are the board is not as incompetent as you were led to believe, and any judgement of their competence should be reserved until further details are provided on the reasons for Sam's removal.

hooande
8 replies
10h7m

If this were true they never would have had talks to bring him back. That's the opposite of steadfast commitment to principles. If Sam wronged them or the company in a significant way they never should have let him back in the building.

The board's decisions may or may not turn out to be correct in hindsight. But it's very difficult to say that this was a good example of leadership or decision making.

dundarious
4 replies
9h57m

https://twitter.com/ashleevance/status/1726469283734274338

So, here's what happened at OpenAI tonight. Mira planned to hire Sam and Greg back. She turned Team Sam over past couple of days. Idea was to force board to fire everyone, which they figured the board would not do. Board went into total silence. Found their own CEO Emmett Shear

Written by the person who broke the story at Bloomberg.

So it appears a single person on the board wanted the talks to bring him back, and nobody else. I think that's 1 against 3, but the point is that the board wasn't totally united (which is not surprising).

ignoramous
2 replies
9h50m

Mira isn't on the OpenAI Board.

dundarious
1 replies
9h38m

I genuinely don't remember who's on it, so thanks for the data point. Regardless, my argument is they needn't act in total unanimity.

I presume this Mira person wasn't totally freelancing -- how would this even end up being presented to the board without some direction from someone on the board. So maybe more like 3.5 against 0.5. It could have been a total flip flop, but that's a bigger assumption. I have no problem not assuming grand narratives until the basic reporting shakes out.

dragonwriter
0 replies
9h2m

I presume this Mira person wasn't totally freelancing

She was the interim CEO; it seems that it was her and some of the rest of the executive team, not the board, that wanted Sam to come back. The board apparently was working on finding a new interim CEO to replace Mira that wasn't in Sam's camp more than it was trying to bring Sam back.

ffgjgf1
0 replies
9h37m

She was the interim CEO, not a board member? Considering what supposedly happened, appointing her was the smartest thing to do (from the perspective of the board).

lostmsu
0 replies
10h2m

That depends on what "talks" consisted of. If it was Altman throwing arguments at them, and them just replying "no thank you", I don't see a problem.

lajawfe
0 replies
10h4m

That is just the current narrative; we don't know the details, right? There was immense pressure from investors/Microsoft and the board had to have that meeting. But the board had probably already made up their mind and did not balk under pressure.

jay_kyburz
0 replies
10h0m

If I had to guess, they had the talks to bring him back as a favor to somebody, or as a sign of good faith, but neither party had changed their position so there was no compromise to be had.

systemvoltage
4 replies
10h15m

There is one thing that sticks out:

Why didn't the board explain itself clearly?

There are times when saying anything publicly would be considered defamation and open them up to lawsuits, but it seems that they owe their own staff an explanation in plain words. They didn't explain the situation properly, as per the leaked internal announcements.

ah765
1 replies
9h53m

I think it's possible that Sam had some secret plans involving deals with external companies, that the board learned about. They can't reveal that information without potentially damaging other businesses and becoming liable.

xiphias2
0 replies
9h15m

While Sam tried to give the impression that he doesn't know why he was fired, he didn't even try to deny the allegations that he was in talks with the Saudis to create a for-profit AI hardware company.

I think you are right that Ilya didn't want to give out secret information to not open up himself to lawsuits.

politelemon
0 replies
9h31m

I question whether any board in history has explained itself with clarity and honesty. Through my cynical lens, they would never stoop to engage in virtues other than signalling.

choppaface
0 replies
10h0m

Maybe for a company of 12-50 there would be more candid discussion internally, but over 100 and especially at OpenAI size (and with Microsoft involved) liability control is at the max. Moreover, if the Board thought the decision would help improve retention, then they would comment, but that's clearly not the case.

OpenAI is not a typical LLC or S/C-corp though, so the Board also has to overcome that conceptual hurdle.

didibus
4 replies
9h55m

I'm still surprised how little attention there has been on the fact that Sam Altman's sister is accusing him of sexually assaulting her when she was younger.

His own sister. Even if it's not true, it reflects poorly to have the kind of relationship with your sister where she'd say this, and if it's true, it's very problematic.

And for some reason, very very little mention of this. I just find it suspicious from a media behavior point of view.

emodendroket
1 replies
9h50m

Where has this been reported?

nearbuy
0 replies
9h36m

No, this is the proper way to handle accusations from someone who may not be credible. You investigate privately, and only report after, if the accusations seem credible. You don't blast across the media that so-and-so is accused of sexual assault, because then the damage is done, regardless of the truth. Signal boost it enough, and every lay-person's main association with the person will become, "oh, that guy accused of sexually assaulting his sister".

The accusations are about events 25 years ago, when they were children. No one will ever be able to disprove this, so there's no way to undo the reputational damage.

gizmondo
0 replies
6h48m

even if it's not true, it reflects poorly to have this kind of relationship with your sister that she'd say this

People making false accusations against you reflects poorly on _you_ now? What a world to live in.

b0tch7
1 replies
9h56m

The fact that this outcome occurred proves the board's resolve & determination (stubbornness?) but not much else.

Some important questions:

- If Ilya had 4:2, why not just sit Sam down and work all this out in private?

- Why has the board been completely unable to explain themselves to OAI employees? To the public?

- Why not take a more neutral "parting ways" tone?

- The latest reporting suggests the board is doing this without any outside counsel (legal or professional network). It seems absolutely bonkers to risk funding sources, e.g. MSFT, on a decision like this.

altpaddle
0 replies
8h11m

Human character doesn't change that much. Let's say that in his heart Sam is more of a move-fast-and-commercialize type of leader and Ilya doesn't want that. Do you think them sitting down and talking about it is really going to change things that much? People are who they are. If anything they've probably had these exact conversations many times already. Not every relationship is fixable; sometimes people are just incompatible.

As for the board's silence to the public, this should be obvious. Talking about their thinking/plans/reasons for firing Sam exposes them to all kinds of risk both legally and otherwise. The safe move is to stay quiet in public and continue talks with the relevant stakeholders (Microsoft, Sam + loyalists) in private

adastra22
1 replies
10h6m

The board demonstrated their incompetency in how they fired Sam without communicating this decision to their biggest stakeholder, Microsoft, or with any clear transition plan in place.

swalling
0 replies
10h1m

The board governs a mission-driven nonprofit, not a profit-focused enterprise where Microsoft has board-level influence as investor. This tension (operating as if it is a startup when it is not) is why Altman was fired.

nicce
0 replies
10h9m

How much does this control of the narrative influence OpenAI workers' decisions to resign in protest? I hope not too much…

lajawfe
0 replies
10h7m

Yes, the media and the general conversation seem very one-sided, and I do not see any basis to take a side. Sam, being an overachiever, might actually be acting with maleficence and deceit. Why is no one even considering that?

Everyone is just reiterating that board is inept and trying to undermine them. This does not sit right with me.

zoogeny
24 replies
10h42m

I’m genuinely surprised that they stuck to their guns. The PR push behind Altman’s return was convincing enough that I had my doubts.

Altman will be more than fine: he'll get a bucket of money and the chance to prove he is the golden boy he's been sold to the world as. He will get to recruit a team that believes in his vision of accelerating AI for commercial use. This will lead to a more diverse market.

I hope for the best for those who remain at OpenAI. I hope for the best for Altman and Brockman.

tempusalaria
20 replies
10h3m

The whole situation should make it clear that SV media is beholden to VCs and will print anything they tell them to.

Bloomberg, The Verge and The Information all went to bat for Altman in a big way on this.

vintermann
6 replies
9h25m

I'm also pretty suspicious of people in forums like these who say nothing can compare to GPT4 and they're miles ahead of everyone else etc. How much of that is venture capital speaking?

It's not quite where it is (or was) with Tesla, where it was hopeless to know what was sincere and what was just people talking up their investment/talking down their short, but it's getting there.

xvector
1 replies
9h3m

I mean, try and compare for yourself. It is quite obviously miles ahead of everything else.

I want OpenAI to be absolutely crushed in the free market after this move. But it will take years for anyone to catch up with GPT-4, when even Anthropic is nowhere close.

OccamsMirror
0 replies
8h55m

Why do you want them to be crushed? They decided that Sam didn't represent the charter and acted accordingly. Do I think it was a boneheaded move? Sure. But maybe it was the right move for them even in spite of the optics?

lajawfe
1 replies
9h13m

The crypto bros switched to AI hype and are now hyping OpenAI/GPT4 hoping to pump MSFT/NVDA. In every HN conversation where someone mentions competing products, there are people talking it down and saying GPT4 is miles ahead, and in a tone to undermine the competition. I see a pattern and it is definitely not sincere.

xvector
0 replies
9h2m

You clearly haven't tried GPT-4 if you think people are lying about how much better it is.

tempusalaria
0 replies
9h13m

There are concrete benchmarks like “how good is it at answering multiple choice questions accurately” or “how good is it at producing valid code to solve a particular coding problem”.

There’s also a chatbot Elo ranking which crowd sources model comparisons https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar...

GPT-4 is the king right now

exizt88
0 replies
9h0m

Anyone who works with text generation will tell you that GPT-4 is far, far beyond anything anyone else has put out for general purpose text gen. The benchmarks don’t really tell you the whole picture. It’s impossible to prompt other models for anything as complex as what GPT-4 can do, both semantically and stylistically.

browserman
5 replies
9h57m

Kara Swisher was basically working as Altman’s press secretary

ignoramous
4 replies
9h54m

I don't get the vibe that Swisher particularly likes techbros or billionaires, let alone bat for them.

bmitc
2 replies
9h50m

Have you read her recent Tweets on the matter? She was definitely editorializing quite subjectively in favor of Altman. That's not exactly unbiased journalism happening.

WendyTheWillow
1 replies
9h35m

I read her Threads post and do not see the same advocacy you claim here. Just valuable information.

bmitc
0 replies
9h26m

Well I wasn't referring to her Threads posts. I was referring to her recent Tweets.

fatbird
0 replies
9h47m

She's an access journalist. She'll shill for the biggest voice who'll talk to her.

zaptheimpaler
3 replies
10h0m

You could say the same thing if they were rooting for the board instead..

tempusalaria
2 replies
9h55m

It’s not the job of the media to root for anyone. The media should dispassionately report the truth. In this case they did not do so.

_factor
1 replies
9h39m

In most cases they do not. We haven’t had unfiltered media for a very long time now. Your voice is blocked from a large audience by many many barriers if you mention any forbidden keywords together.

staunton
0 replies
9h20m

We haven’t had unfiltered media for a very long time now.

When did we have it?

ahartmetz
1 replies
9h27m

My favorite was that part from Financial Times:

"Investors were hoping that Altman would return to a company “which has been his life's work”"

As opposed to Sutskever, who they found on the street somehow, yeah?

disgruntledphd2
0 replies
3h43m

I mean, both of them were involved from the beginning of OpenAI.

lajawfe
0 replies
9h58m

Yes, I felt the same. In every piece, there was very little news but a lot of fluff to lead the public with opinions. Probably VCs saw their money burning and wanted Sam back at the helm to protect their asset.

tunesmith
0 replies
9h24m

I read something a while ago: when trying to interpret the truth of what is happening, the value of public statements is only that it's an indication of what that source would like the public to believe. And when looked at that way, that signal does have value. Not as truth, but as motive.

So that helped cut through all the cruft with this. There was a lot of effort behind putting across the perception that the board was going to resign and that Altman was going to come back.

Looked at through that lens, it makes more sense: the existing board had little incentive to quit and rehire Sam/Greg. The only incentive was if mass resignations threatened their priorities of working on safety and alignment, and I get the sense that most of these resignations are more on the product engineering side.

So I don't really think this is a twist that no one saw coming.

pg_1234
0 replies
8h54m

I think that the pro-capitalist faction forgot that the opposing side are people capable of planning the development of an artificial consciousness.

Should they decide to sink to the level of VC scheming briefly, it will be like child's play for them.

karmasimida
0 replies
9h51m

Same

If OpenAI ceases to be Sam's vision, someone will replace it.

It is a good thing for the ecosystem, I guess; we will have more diverse products to choose from.

But making AI safer? Not likely. The tech will spread, and Ilya will probably not end up with a safer AGI, because he will not control it.

wokwokwok
20 replies
10h20m

Honest question:

Other than 1) Microsoft, 2) anyone building a product with the OpenAI API, and 3) OpenAI employees…

…is OpenAI crashing and burning a big deal?

This seems rather over hyped… everyone has an opinion, everyone cares because OpenAI has a high profile.

…but really, alternatives to chatGPT exist now, and most people will be, really… not affected by this in any meaningful degree.

Isn’t breaking the strangle hold on AI what everyone wanted with open source models last week?

Feels a lot like Twitter; people said it would crash and burn, but really, it’s just a bit rubbish now, and a bunch of other competitors have turned up.

…and competitive pressure is good right?

I predict: what happens will look a lot like what happened with Twitter.

Ultimately, most people will not be affected.

The people who care will leave.

New competitors will turn up.

Life goes on…

wg0
6 replies
9h33m

Back in the heyday of Windows, say the mid-1990s, people thought Windows was irreplaceable.

Now it turns out Linux is the workhorse everywhere for running workloads or consuming content. Almost every programming language (other than Microsoft's own SDKs) gets developed on Linux and has first-class support for Linux; Windows is always an afterthought.

It has gone to the extent that, to lure developers, Microsoft has to embed Linux in a virtual machine on Windows, called WSL.

Local inference is going to get cheaper and affordable and that's for sure.

New models would also emerge.

So OpenAI doesn't seem to have an IP that can withstand all that IMHO.

bongodongobob
5 replies
9h21m

Linux isn't the workhorse in any business that isn't tech-based. The dev bubble here is pretty strong. I've done IT for a couple of MSPs now, so I've seen hundreds of different tech stacks. No one uses Linux for anything. ESXi for the hypervisors, various versions of Windows Server, and M365 for everything else. Graphics/marketing uses Macs sometimes, but other than that, it's all Windows/MS. Seeing a Linux VM is exceedingly rare, and it usually runs some bespoke software that no one knows how to service or support. Yes, Linux is much more viable these days, but it's not even close to being mainstream.

xigency
1 replies
9h2m

So I’m guessing you have never heard of AWS then…

bongodongobob
0 replies
7m

I'm not talking about cloud. I'm talking about businesses with 1-300 employees. Most of them I've seen use the cloud for backups or a few services. Most business stuff is on-prem. File storage is probably 50/50 on-prem / SharePoint / Google Drive. In the hundreds of businesses I've worked with, I could count on my two hands the number of Linux servers I've seen. Most of the stuff they're running doesn't even support Linux.

wg0
1 replies
8h14m

True. You'll find Windows XP-based terminals on many industrial machines. It's pervasive but outnumbered where "running the workloads" comes into the picture.

The dev bubble is not that small. This very website is, I'm pretty sure, not served from Windows.

Other than Stack Overflow or a handful of exceptions, very little is actually served from Windows, if I'm not wrong.

bongodongobob
0 replies
4m

I'm consulting for a company with 5000 servers right now, and maybe a dozen run Linux. They've still got a few hundred Server 2008 boxes running with EoL licenses. We looked into migrating to Linux but it's not an option.

ascorbic
0 replies
9h12m

I think GP is referring to servers. Linux may still be tiny on the desktop, but it dominates servers (and mobile)

_fizz_buzz_
5 replies
9h52m

Totally agree. It seems like OpenAI is ahead of the curve, but even some free open source projects have become really good. I am no expert, so take this with a grain of salt. It seems OpenAI has a lead, but only of a few months or so, and others are racing behind. I guess it really sucks if you built something that relies on the OpenAI API, but even then one could replace the API layer.
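To make "replace the API layer" concrete, here's a rough, illustrative sketch; the class and function names are made up for the example, and the OpenAI call assumes the 1.x Python SDK:

    from typing import Protocol

    class ChatBackend(Protocol):
        # Anything that can turn a prompt into a completion.
        def complete(self, prompt: str) -> str: ...

    class OpenAIBackend:
        # Wraps an openai.OpenAI or openai.AzureOpenAI client (1.x SDK).
        def __init__(self, client, model: str):
            self.client = client
            self.model = model

        def complete(self, prompt: str) -> str:
            resp = self.client.chat.completions.create(
                model=self.model,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

    class LocalBackend:
        # Placeholder for a self-hosted model (llama.cpp, vLLM, etc.).
        def complete(self, prompt: str) -> str:
            raise NotImplementedError("wire up your local inference server here")

    def answer(backend: ChatBackend, question: str) -> str:
        # Application code only sees ChatBackend, so swapping providers
        # is a configuration change rather than a rewrite.
        return backend.complete(question)

Since the application only depends on the small interface, pointing it at Azure, another vendor, or a local model becomes a configuration change rather than a rewrite.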

OccamsMirror
1 replies
8h57m

I mean, OpenAI aren't just going to close up shop. I would very much doubt they're just going to turn off their APIs. I would just keep building and if you have to swap LLMs at some point then do so.

laurels-marts
0 replies
7h56m

That's my fear: that they will phase out Plus subscriptions and shut down the APIs, because the folks that will be left want nothing to do with product.

ComplexSystems
1 replies
9h44m

For coding, at least, nothing out there is even close to as good as GPT-4. Not Claude, not Grok, and certainly not llama.

hobofan
0 replies
9h27m

For coding tasks (without API access), especially in a conversational setting, Phind has been by far the best one for me. I sometimes still compare it to ChatGPT with GPT-4, but it almost always comes out on top (not missing the point of the questions + amount of required editing for integration into codebase), and it does produce the answers a lot faster.

huytersd
0 replies
9h41m

None of the open source stuff even comes close to GPT4, I’ve tried them repeatedly.

emodendroket
2 replies
9h53m

In Twitter's case that's the main product getting worse without any of the wannabes getting that much traction.

renegade-otter
0 replies
9h45m

It's different. People spend YEARS building their social media presence, following, and algorithmic advantage.

Jumping to a different platform is a huge sacrifice for power users - those who create content and value.

None of this is a factor here. ChatGPT is just a tool, like an online image resizer.

flarg
0 replies
9h27m

IMHO Twitter created its own demand, and now that that has pretty much gone, no one wants the hassle of serving a new master.

bayindirh
1 replies
9h44m

I'll be probably downvoted to hell, but, I think what is happening is healthy to the ecosystem.

Pine forests are known to grow by fires. Fires scatter the seeds around, the area which is unsustainable is reset, new forests are seeded, life goes on.

This is what we're seeing, too. A very dense forest has burned, seeds are scattered, new, smaller forests will start growing.

Things will slow down a bit, but accelerate again in a more healthy manner. We'll see competition, and different approaches to training and sharing models.

Life will go on...

dmix
0 replies
7h43m

That's a very good analogy.

hobofan
0 replies
9h46m

In the "grand scheme of things", no, it's probably not a big deal. I think in the short term, I think it has the potential to set back the space a few months, as a lot of the ecosystem is still oriented around OpenAI (as they are the best at productivizing). I think that even extends to many community/open source models, which are commonly trained against GPT-4.

If they are able to retain enough people to properly release a GPT-5 with significant performance increases in a few months, I would assume that the effect is less pronounced.

IanCal
0 replies
8h14m

I've not found anything that really competes with GPT4, and that's been released for some time.

Isn’t breaking the strangle hold on AI what everyone wanted with open source models last week?

By other things getting better, not by stalling the leader of the pack.

alsodumb
18 replies
11h42m

I think Adam D'Angelo has a very strong conflict of interest and shouldn't have been on the board of OpenAI.

I'm sure Quora views took a hit after ChatGPT. Not like Quora was any good before ChatGPT, they just managed to get to the top of Google results for a lot of common questions.

Now, Poe by Quora was trying to go big on custom agents. The GPT Agents announcement on DevDay was a fundamental threat to Poe in many ways.

I'm convinced that Adam D'Angelo probably had some influence on the other two board members too. He should've left the board of OpenAI the moment OpenAI and his own company were competing in the same space.

dereg
7 replies
11h28m

Don't forget Tasha McCauley's husband, Joseph Gordon Levitt, has been vocally anti-AI during the SAG-AFTRA strike, an event for which AI was a huge point of contention. It's a poisoned board.

zombiwoof
5 replies
11h22m

Their whole board is a joke

The smartest, white-hot startup on the planet has the smallest and most inexperienced board.

How did that even happen on Sam’s watch?

My take: he always thought Ilya would have his back with Greg, and the 3 could overrule anybody, so they kept it small.

Bad idea

sangnoir
3 replies
10h27m

How did that even happen on Sam’s watch?

What do you imagine he could have done about the board of a non-profit as CEO and fellow board-member?

dragonwriter
1 replies
10h11m

As CEO and a board member, he was better positioned than literally anyone else to move changes to the governance structure of the nonprofit.

sangnoir
0 replies
7h40m

Nevertheless, the board doesn't fall under any CEO's "watch" (even when they are 1/6th of the board). The reverse is true.

ytoawwhra92
0 replies
10h22m

What do you imagine he could have done about the board of a non-profit as CEO and fellow board-member?

As board members, both Altman and Brockman would have presumably had to vote on any changes to the board - including reduction in number of members and appointment of new members.

Do you think the composition of the board before Friday could've been reached without some level of support from Altman and Brockman?

0xDEAFBEAD
0 replies
10h31m

OpenAI was not founded to be a white hot startup: https://archive.is/Vqjpr

Klonoar
0 replies
10h5m

What? Being vocally anti-AI for the purposes of respecting artists is not being anti-AI period.

There is nuance to this point.

slkdjfalzkdfj
6 replies
11h6m

Adam was appointed to the OpenAI board in April 2018, long before ChatGPT and Poe. He's always been somewhat interested/involved in AI/ML so the appointment broadly makes sense to me.

Also keep in mind that a year earlier in Spring 2017 Sam Altman led Quora's Series D, after YC previously joined in on Quora's Series C in 2014. So the two of them clearly had some pre-existing relationship.

I don't think OpenAI and Quora (the product) are a serious conflict of interest. You claim "I'm sure Quora views took a hit after ChatGPT" but I really doubt that's true in any meaningful way. Quora's struggles are a separate issue and predate the GPT craze of the last year.

Nor were Poe and OpenAI competitors until recently; Poe was simply building on top of OpenAI models, the same as hundreds of other ventures in the space right now.

However...I do agree that the GPTs announcement two weeks ago now creates a very clear conflict of interest--OpenAI is now competing directly against Poe. And because of that, I agree that Adam probably should leave the board.

The timing also raises the question of whether booting Sam is in any way related to the GPTs launch and to Poe. Perhaps Sam wasn't candid about the fact that they were about to be competing with Adam's company. The whole thing is messy and not a good look and exactly why you try to avoid these conflicts of interest to begin with.

alsodumb
3 replies
11h3m

I never said Adam should never have been on the board. I was arguing about the part after Poe was competing with OpenAI after DevDay. That's where he has a clear, very strong conflict of interest, and to be honest that's where the board/Adam took the most impactful decision that the OpenAI board ever made.

himaraya
2 replies
10h58m

On the other hand, Sam & Greg had the opportunity to confront Adam about the obvious conflict and likely could have forced him to step down if they wanted him to. They made their choice. Zero mention about Adam & Poe in the leaks from Sam's camp also suggests Sam doesn't fault Adam's character here.

alsodumb
1 replies
10h54m

I didn't read OpenAI's company charter, but forcing Adam out would probably require a board majority. It's not like they could have made Adam step down just because they wanted to.

himaraya
0 replies
10h51m

It would, but a PR campaign like the one waged this weekend would probably leave Adam little choice. Sam clearly underestimated the board either way.

0xDEAFBEAD
1 replies
10h32m

I thought Poe was a partnership with Anthropic?

alsodumb
0 replies
10h16m

Nope, Poe was always building on OpenAI API and their GPTs. In fact Poe was one of the first companies to get access to GPT-4-32k context length a few months ago and they were the first to make it accessible to their users.

cj
1 replies
11h13m

I could be wrong, but I thought I saw something about Quora's use of LLMs improving their SEO (the LLM answer to most questions is embedded on the Quora page) and potentially driving more traffic.

If you look at Poe as a value add for existing Quora users, instead of a feature that is going to grow their userbase, it's still a net win for Quora even if GPT agents exist simultaneously.

alsodumb
0 replies
11h6m

Quora SEO'd their way to the top of most Google searches before the LLMs era.

Poe is not really meant as a value addition for Quora users. Poe was a general AI chat company, like ChatGPT.

Poe's unique selling point was their 'chat agents with customizable instructions/personality', and they were charging people money for this while pretty much building on the OpenAI GPT API. They also had an agents store.

During DevDay, when Sam announced GPT Agents and the store, that was a fundamental threat to Poe's existence.

Axsuul
0 replies
9h10m

The number of board members should've increased alongside OpenAI's growth. Too few board members means the higher potential for corruptibility and too much power being held by each member. It makes no sense for OpenAI in the future to be worth $1T and a leader in AI while still being governed by a small inner circle.

JCM9
16 replies
11h49m

You can hear Microsoft’s billions going up in flames. You literally couldn’t make this stuff up.

threeseed
10 replies
11h41m

Actually this is going to be great for Microsoft.

By cutting its other revenue streams OpenAI has lost all of its leverage.

So now it will shrink to being effectively an external dev team for Bing / Windows.

nextworddev
6 replies
11h36m

Huh? Last time I checked I'm paying OpenAI for my ChatGPT Plus subscription, not MSFT

toomuchtodo
3 replies
11h30m

Microsoft owns the GPUs and has rights to continue operating current models. The only value OpenAI had was cohesion of engineering talent as it relates to model development velocity.

https://news.microsoft.com/source/features/ai/openai-azure-s...

https://www.semafor.com/article/11/18/2023/openai-has-receiv...

Only a fraction of Microsoft’s $10 billion investment in OpenAI has been wired to the startup, while a significant portion of the funding, divided into tranches, is in the form of cloud compute purchases instead of cash, according to people familiar with their agreement.

That gives the software giant significant leverage as it sorts through the fallout from the ouster of OpenAI CEO Sam Altman. The firm’s board said on Friday that it had lost confidence in his ability to lead, without giving additional details.

One person familiar with the matter said Microsoft CEO Satya Nadella believes OpenAI’s directors mishandled Altman’s firing and the action has destabilized a key partner for the company. It’s unclear if OpenAI, which has been racking up expenses as it goes on a hiring spree and pours resources into technological developments, violated its contract with Microsoft by suddenly ousting Altman.

Microsoft has certain rights to OpenAI’s intellectual property so if their relationship were to break down, Microsoft would still be able to run OpenAI’s current models on its servers.

pete762
1 replies
11h6m

I bet if Microsoft leaves, another tech giant would be more than glad to provide some GPUs

toomuchtodo
0 replies
11h2m

To the people running OpenAI? I suppose in the same way you’d invest in a developing country where your investment was always at risk of a dictator saying you were no longer an investor. Just hire OpenAI engineers away and buy stability. Let the board try to make a better offer with no resources of their own. I do not doubt there is a small cohort who drinks the aligned safety koolaid; for everyone else, there is cash and equity.

toomuchtodo
0 replies
1h41m

Citation: https://www.semianalysis.com/p/microsoft-swallows-openais-co... ("Microsoft Swallows OpenAI’s Core Team – GPU Capacity, Incentive Structure, Intellectual Property, OpenAI Rump State")

skygazer
0 replies
11h30m

Your $20/mo isn’t covering their costs. They sought additional financing.

sharma-arjun
0 replies
11h30m

ChatGPT Plus was never intended to be OpenAI's main revenue stream. The real cash is in the GPT API, which they're currently making good money on, but which also directly competes with a GPT API offered by Azure.

OP might have a point: if OpenAI declines, devs might prefer the Azure API over the OpenAI API for factors like stability, quality, response time, better integration with existing Azure stack, etc.

eddiewithzato
1 replies
11h35m

No, it will switch to being something like Microsoft Research Cambridge, i.e. only for R&D

threeseed
0 replies
11h32m

Very true. And to be fair Microsoft does have a great reputation for research.

It really is a massive change in direction and a lost opportunity to set their own course.

zombiwoof
0 replies
11h22m

Google is fucking popping the champagne

peanuty1
1 replies
11h36m

You think Microsoft's investment is going up in flames because OpenAI is moving forward without Sam?

siva7
0 replies
11h2m

Microsoft's CEO doesn’t seem too happy about the board.

swax
0 replies
11h38m

MS has already gotten a lot out of OpenAI... who knows, the next steps could be OpenAI focusing more on R&D and not products like a good non-profit. ChatGPT is handed over to MS and merged into Bing. Anything could happen.

hilux
0 replies
11h12m

The amount of cash that Microsoft has paid to OpenAI is relatively small, much less than the $13B number that gets thrown around.

And I'm sure they're getting their money's worth. E.g. the last time I heard or thought of Bing (outside of the ChatGPT context) was years ago. Now I see it all the time. That's worth $$$$$ to them.

acl777
0 replies
11h49m

Billions... of AzureBucks!!!

:-)

shippintoboston
14 replies
11h54m

Crazy how a 4 person board staved off pressure from a behemoth like Microsoft

LudwigNagasena
3 replies
11h38m

What can Microsoft do? Tell OpenAI they're breaking their long-term agreement just because?..

karmasimida
1 replies
11h34m

Microsoft hasn't released all of its investment into OpenAI, and I think this could trigger some special contingency in their agreement to stop doing so.

I think one thing is for certain: OpenAI won't be worth as much anymore.

LudwigNagasena
0 replies
11h26m

Maybe they can. Maybe they can't. We don't even know if "the talks" to bring Sama back were just investors writing angry emails to Ilya and him replying "nuh-uh".

halfjoking
0 replies
10h4m

karmasimida
2 replies
11h44m

It will be funny if MSFT relicenses GPT-4 to Sam's new venture. I am very much looking forward to this development.

sudosysgen
0 replies
11h33m

I'm pretty sure only OpenAI (the non-profit) holds the IP. I don't see why they'd give Microsoft a transferable license.

pyrophane
0 replies
11h29m

MS doesn't own GPT4

frabcus
1 replies
10h46m

This could all be good for Microsoft.

Microsoft wants OpenAI to do research, and give them the models to run on Azure.

They certainly don't need OpenAI competing in consumer products. ChatGPT could have been a Microsoft product in an only slightly different timeline.

And they'd rather OpenAI didn't compete in API serving; they'd rather everyone currently using OpenAI had to shift to Azure.

og_kalu
0 replies
10h27m

This hinges on OpenAI being as forthcoming with new models as they previously were. I'm not sure that's going to happen. Certainly the new board will be more inherently against it. Not sure what outs they have though, besides the obvious one.

mupuff1234
0 replies
11h43m

The options were probably either resign and be in the crosshairs of SV royalty for the rest of their careers, or stay and play it out.

dougmwne
0 replies
11h45m

They may be king of the ash pile. Trust seems to have been broken with investors, Microsoft, the staff and the developer community at large.

I expect them to continue to be relevant, but just one of the chorus, no longer the leader.

acl777
0 replies
11h48m

they prolly played the: "we've got GPT-5" card :-)

__turbobrew__
0 replies
11h48m

As we speak Satya is on his way to turn off OpenAI servers.

Barrin92
0 replies
11h38m

the entire organisational structure of OpenAI was literally designed for exactly that. Like Mozilla, the for-profit arm of the company is legally subject to the non-profit.

roddylindsay
14 replies
11h29m

Contrarian prediction: OpenAI will be just fine.

The board will bring in an adult CEO who can balance the nonprofit charter with Microsoft and the commercial business, and who doesn't have a million side projects taking his or her focus away. Some employees will leave but the vast majority will stay for the usual reasons (i.e. inertia), the business will keep growing because ChatGPT is already a worldwide brand at this point and the vast majority of users don't give a hoot about any of this palace intrigue as long as the product works.

And the board will ultimately be vindicated for acting as fiduciaries for the nonprofit's mission and bylaws -- and not for the financial interests of Satya Nadella, Vinod Khosla, and the like.

bufferoverflow
8 replies
11h26m

If they don't get Altman back, Altman starts a new company AkshuallyOpenAI and most talent moves there. They quickly get the funding and the contracts with MS. OpenAI is left in the dust.

azurezyq
4 replies
11h21m

You know, we are dealing with people, and people think differently, so they won't all move together. Moving to a new company under Altman is not an easy choice; at the very least, the new company:

- Does not yet have a big model (needs $$$ and months to train, if the code is ready)

- Does not have the proprietary code OpenAI has right now

- Does not have labeled data ($$$ and time) and ChatGPT logs

- Does not have the ChatGPT brand...

bufferoverflow
3 replies
11h11m

I thought GPT-4 was not trained on labeled data, but simply on a large volume of text / code. Most of it is publicly accessible: Wikipedia, archives of scientific articles, books, GitHub, plus probably purchased data from text-heavy sites like Reddit.

lyu07282
0 replies
10h57m

No, it's also trained with reinforcement learning from human feedback (RLHF), which involves lots of labeling.
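
To make "lots of labeling" concrete, here's a minimal sketch of what RLHF preference data roughly looks like (field names and examples are illustrative assumptions, not OpenAI's actual schema or pipeline):

    from dataclasses import dataclass

    @dataclass
    class PreferencePair:
        prompt: str
        chosen: str    # completion the human labeler preferred
        rejected: str  # completion the labeler ranked lower

    # Each record is produced by a human comparing two model outputs for the same prompt.
    labels = [
        PreferencePair(
            prompt="Explain RLHF in one sentence.",
            chosen="RLHF fine-tunes a model against a reward signal learned from human preference labels.",
            rejected="RLHF means the model trains itself with no human input.",
        ),
    ]

    # A reward model is trained so that reward(prompt, chosen) > reward(prompt, rejected),
    # and the language model is then optimized against that learned reward (e.g. with PPO).

Collecting large numbers of comparisons like this is the labor-intensive part, which is why usage feedback from ChatGPT is valuable.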

frabcus
0 replies
10h52m

Whatever they've built this year presumably uses all the positive/negative feedback on ChatGPT; they have a year's worth of that data now...

Another example is the Be My Eyes data - presumably the vision part of GPT-4 was trained on the archive of data the blind-assistance app has, and that could be an exclusive deal with OpenAI.

enigmurl
0 replies
11h5m

Assuming it's a reference to RLHF? Not sure

kumarvvr
1 replies
11h19m

Yes, but the data for training, model development and other things require a lot of time and investment, and OpenAI has a huge head start.

It remains to be seen whether investors will pour another set of billions of dollars just to catch up with OpenAI, which, by that time, will have evolved even further.

There is a ray of hope: as often happens in this field, old things quickly become obsolete and new things become the cutting edge, so Sam Altman may be able to convince investors to back the cutting edge with him. Then investors have a choice on an almost level field, between people, companies and personalities, for a given outcome.

alsodumb
0 replies
10h45m

You are definitely overestimating how much time and effort it takes to build large models.

Sam will get billions of dollars if he starts a new company, so there's no issue of money. In terms of data and training models, look at Anthropic - they did train a reasonable model. Heck, look at Mistral, a bunch of ex-Meta folks and their LLaMA team lead who spun up good models in months.

The only bottleneck I could think of would probably be RLHF data - but given enough money, that's not an issue either.

selcuka
0 replies
11h21m

They quickly get the funding and the contracts with MS. OpenAI is left in the dust.

I haven't checked but I'm pretty sure OpenAI has many patents in the field and they won't be willing to share them with another company, especially with AkshuallyOpenAI.

tempsy
1 replies
11h23m

The issue is they’ve burned a million bridges with what they’ve done. No one in SV is going to be on their side in any of this. So hard to see how they succeed if nobody wants the perception of being on their team.

soumyadeb
0 replies
11h20m

Is it true? A lot of folks have left the OpenAI board recently (Musk, Hoffman, etc.), for various reasons, but it's not as if everyone in SV is happy with the direction OpenAI was going.

Plus, giant competitors like Google and Facebook might step in to fill the void.

rifty
0 replies
10h20m

I lean this way too, but however it ends up, I think this is plainly the better outcome for the potential of the space. OpenAI was on a trajectory to acting like any other company. Now it at least has the chance to act differently from every other player. We are all beneficiaries of this chance at more diversity of motivations in this race.

ah765
0 replies
10h59m

I agree. People are making noise now, but in the end almost everyone just cares about money, and won't keep any persistent opinion against OpenAI.

Microsoft will fall in line and do what makes sense once the dust settles, and that probably means continuing to work with OpenAI for the foreseeable future. Most of the employees, even if they supported Sam, will probably also remain until a better option truly appears, and it remains to be seen whether Sam will really open up a competitor and try to hire everyone.

GreedClarifies
0 replies
10h43m

Where is OpenAI going to get money? They just said, loudly, that they don’t care about money.

Money is exchanged for goods and services, like GPUs, hiring researchers and coders, and acquiring data.

No one is going to trust them. They will be able to use their whole 120M they got as donations to operate the company for a full week or two.

Good luck, twats.

(Maybe they have AGI/ASI in the basement. If so, kudos, they will be fine and it was classless to fire Sam)

loveparade
14 replies
10h57m

For a change, maybe it'd be a good idea to get a CEO who at least has a faint idea of how ML works on a technical level? Would you like a company that builds rockets to be led by some salesman who doesn't understand basic physics? Probably not. You don't need to be Hinton or Schmidhuber, but come on, who thinks it's a good idea to have these typical "Silicon Valley VC CEOs" lead an AI company?

astrange
7 replies
10h29m

The trick is that execs do have unusually good insight into how their tech works, because they can ask their reports to explain it.

Btw, I think it's funny how much credit Hinton gets for AI. His contribution is pretty much just keeping some grad students on the problem.

ninjin
1 replies
9h24m

I think it's funny how much credit Hinton gets for AI. His contribution is pretty much just keeping some grad students on the problem.

Yes, clearly overrated in terms of credit. Doing foundational work in the field going back to the 70s which laid the groundwork and inspired the resurgence of neural networks in the late 80s. Being a solid community organiser throughout his career and keeping neural network research alive through the more formal statistical methods dominating for over a decade. Supervising and thus raising many others who themselves contributed greatly to the explosion of neural network utility we have seen since around 2010 until now. Should I carry on?

I think it is absolutely clear that Hinton has contributed plenty enough to get a massive amount of credit for where we are today. The kind of mentality at display here is akin to ahistoricity on the level of saying that Gordon Moore "just started a company" after Apple released the M1 under the delusion that there is not a direct lineage between what we have today and breakthroughs and efforts in the past. Believe it or not, but we stand on the shoulders of giants and cutting them some slack is not the same as downplaying the impact of people more active in the present day; that are gradually becoming future giants.

calf
0 replies
7h56m

I think what the comment gets wrong is that even in the professor (PI) / grad student relationship, it's not defined as a purely managerial one. The one-on-one meetings between professor and graduate student are often about working out a new theory, even if the implementation and experimental work is left to the student. The amount of mind-share that goes on between them is nontrivial; it is necessary for work that pushes the boundaries of human knowledge, and it isn't at all like an executive asking for a report from an underling.

magicalhippo
1 replies
10h22m

His contribution is pretty much just keeping some grad students on the problem.

I think the second best thing after having technical knowledge is to recognize smart employees and then not get in their way...

astrange
0 replies
6h36m

It's important but in tech world you'd get it as explicit "leadership" credit rather than a researcher, I think. Or maybe people would just call you a middle manager.

rreichman
0 replies
9h4m

Hinton is overrated? That's quite a take.

loveparade
0 replies
10h24m

I think there's an important difference between not understanding and not doing the work yourself because you are in a managerial/advisor position. If you give Hinton a recent paper on LLM RLHF he will understand the nuances in it, he just delegates the actual work. If you give Emmett Shear or whoever such a thing, they almost certainly don't. For a deep tech company focused on research (not some consumer SaaS thing) I don't think you can be a good CEO if you don't even have an understanding of what you are building.

Upvoter33
0 replies
9h18m

Wow that really undersells Hinton.

justrealist
1 replies
10h43m

What special credentials do you think Sam Altman had in AI?

bmitc
0 replies
9h49m

I think the commenter you replied to was saying that Altman doesn't have special or any credentials for AI.

Kwpolska
1 replies
10h21m

The CEO is so far removed from day-to-day operations that there's no benefit in them having an idea about the technical nitty gritty.

mcv
0 replies
9h16m

That varies wildly per company. And whether or not that's a good idea also varies. But if a company exists specifically to develop a single specific technology, I think the CEO should know a thing or two about that technology. Especially when it's something as easily misunderstood as AI.

ulfw
0 replies
4h23m

So true. On the other end of the spectrum... I just got denied an interview with a company for an SVP Product (!) role because "they are looking for someone with hands on ML experience". Why you'd need that in a leadership role where you don't code anything (as a Product SVP) is beyond me, but hey. Anything AI related is just odd nowadays anyway.

huytersd
0 replies
9h39m

I think so. People don’t add value just by being technical. That’s a very low man on the totem pole perspective.

jart
14 replies
11h42m

So long Mira Murati. That's the second CEO OpenAI has ousted in one week. https://openai.com/blog/openai-announces-leadership-transiti...

jxi
11 replies
11h30m

Her move makes even less sense than what the board did. The board told her the day before that they're firing Sam, so she obviously accepted it. But, one day later she caves and sides with Sam (that's what the heart tweet means)? That's weak leadership and resolve.

majikaja
3 replies
11h16m

Who sends heart emojis to their boss? Is this level of sycophancy normal nowadays?

peanuty1
1 replies
11h10m

Apparently, OpenAI President Greg Brockman doesn't use uppercase letters in his official communications with employees. So I'm not surprised.

lazystar
0 replies
9h52m

yeah... as more folks join the no-uppercase wave, it makes me consider using capitals again. but, it's just a lot more of an expressive way to talk to another human in a text-based format.

jazzyjackson
0 replies
10h9m

There was a weird coordinated show of support for Altman by a lot of his employees, quote tweeting "i love the openai team so much"

I also found it weird and cultish. I guess it's not so different than signing a birthday card going around the office tho. "Sorry you got fired, see you at the next burning man"

CSMastermind
2 replies
11h13m

Seems pretty reasonable she would accept it at first while she was still figuring out what's going on.

Like the board presumably called her and said, "hey Sam is out, you're the CEO for now, more details to come"

And then in the next 48 hours she had a chance to talk to Sam and others and realize that she was on his side.

paulddraper
0 replies
10h56m

And even if she did have the full context, she was going to be able to do more as CEO than as not-CEO.

awb
0 replies
10h50m

the board presumably called her and said, "hey Sam is out, you're the CEO for now, more details to come"

I wouldn’t take over as CEO or interim-CEO unless I knew why the previous CEO was fired and was OK with the process and reasoning.

siva7
0 replies
11h13m

Quite the contrary, that’s strong leadership having the courage to do so under those extreme circumstances.

qaq
0 replies
10h30m

Well if she talked to Sam and he showed her hypothetical 5 Billion in commitment and offered 1% in "SamAI" ...

ShamelessC
0 replies
10h34m

But, one day later she caves and sides with Sam (that's what the heart tweet means)

The amount of weight people here give to an _emoji_ on this site... Rampant, unnecessary, baseless speculation in every comment thread about Altman.

Just wait til Monday next time. These ultra-wealthy, over-privileged Worldcoin fucks are not worth this much attention.

Maxion
0 replies
11h18m

They might've just told her that she's now CEO without her actual approval.

peanuty1
1 replies
11h34m

Thank god they went with Emmett over Murati.

matt_daemon
0 replies
11h25m

Why?

acheong08
13 replies
11h43m

I’m glad people in this thread are supporting the OpenAI board in this decision. There seems to be too much celebrity worship around Altman from all the tech/crypto bros. Also, any new competition started by Altman would be good for the ecosystem in general.

alsodumb
5 replies
11h39m

I'd have agreed with you if it were an extensive board with many experts, both technical and policy, without any strong conflicts of interest.

But at this point, we have Ilya, who clearly has strong differences of opinion with Sam; Adam D'Angelo, whose company Poe has a conflict of interest with the existence of the for-profit OpenAI and their GPT store/customizable agents; and two other board members.

beeyaw
2 replies
11h37m

Are you for real? "Difference of opinions" isn't a problem for a board, it's exactly why multiple people are on one. And Adam will not have any conflict of interest with Poe very shortly.

siva7
0 replies
11h30m

Adam solved the issue very “elegantly” and Ilya can finally continue allocating more compute for his research interests. A dream of a board as you can see by the reactions of the investors ;)

alsodumb
0 replies
11h33m

It is a problem when you pretty much fired two board members who would've been against Ilya's opinion.

Maybe Adam will not have any conflict in a few months, but he does now. He did when he and the others got rid of Greg and Sam from the board. That's my point.

vagabund
0 replies
11h37m

As long as we're speculating from the outside, in a strong difference of opinions between Altman and Sutskever, I'm inclined to support Sutskever.

eddiewithzato
0 replies
11h37m

Was Ilya not the brains of OpenAI? Seems like the rest were investors.

hliyan
1 replies
11h29m

I think in the long run, the board doing (even though somewhat clumsily) what it's supposed to do, which is to uphold the Charter, will emerge as the prevailing view. Apart from that, OpenAI was becoming too dominant, and a new entrant led by Altman would be a good thing in the long run.

tatrajim
0 replies
11h0m

And, for joy: Baidu, Alibaba, Samsung et al. have a golden chance to close the current gap. Loud cheers and high-fives from the ea/decel camp at helping hobble the U.S. leader.

wolverine876
0 replies
11h32m

On one hand, I worry that it doesn't represent anything more than following the herd. For a long time, and not long ago, the herd followed people like Musk, Bankman-Fried, etc., and other cults of personality. I think a lot had to do with Trump - even people who say they hated him have followed his model (he was the President, after all) of behavior, business, and personal character, even non-famous people I know personally.

As best I can tell regarding such a broad, poorly defined topic: after the brazen, distasteful, dangerous megalomania and the real consequences we've seen at Twitter, FTX, and others (Theranos, etc.), that brazen megalomania seems to have re-acquired the bad reputation that common sense has eternally given it, in every place and time.

Obviously, it's a ridiculous, self-destructive, dangerous idea. Whatever the cause, I'm glad people seem to be regaining their senses - but will they do it quickly and completely enough, will they be manipulated into blaming someone else (a massive risk in a post-truth world that accepts dis/misinformation), and will the megalomaniacs retain their attitudes and power, and then use it to suppress others?

rvz
0 replies
11h30m

Well, the employees have their stock in OpenAI and needed their saviour back to get it realized, and it looks like they now have a choice.

Follow their dear saviour into a new venture, or stay on a ship whose shares will begin to sink when markets open.

Microsoft may be holding a very big bag. They'll probably sue.

peanuty1
0 replies
11h38m

Yes, I'm very interested to see what Sam and Greg achieve without Ilya.

og_kalu
0 replies
11h37m

I support... neither?

I would have been extremely uneasy if Altman had the power to get back as CEO after this kind of ousting but I didn't really support ousting him like that in the first place, especially if this didn't reach a fever pitch over some much more capable model.

anupamchugh
0 replies
10h36m

FWIW, the board structure is more in the spotlight, especially Adam's, since Quora's Poe chatbot uses OpenAI's APIs. And on top of that, Sam was an investor in Quora (2018).

extheat
12 replies
11h55m

This is not a serious BOD. This is going to the courts, and it can't happen soon enough.

mupuff1234
4 replies
11h51m

Who's gonna sue if there are no shareholders, given that it's a non-profit?

seanhunter
2 replies
11h39m

There are investors including sequoia, Tiger Global and Microsoft[1]. They may well have standing to sue.

[1] Sorry - the full list is behind a paywall https://www.crunchbase.com/organization/openai/company_finan...

mupuff1234
1 replies
11h29m

I believe those are investors in the for-profit organization, while the board is responsible for the non-profit org, and I believe the for-profit is bound to the mission of the non-profit but not the other way around.

But I'm sure someone here knows the legal structure better than I do; I just quickly skimmed over

https://openai.com/our-structure

seanhunter
0 replies
11h9m

That's why I said they may well have standing to sue. It's going to get messy if it goes there for sure.

dorkwood
0 replies
10h37m

I thought they switched to a "capped profit" model, where investors are capped at a 100x return on investment.

jatins
2 replies
11h51m

How do these decisions even happen? Does the board just say "Hey I know a guy, let me check if they are interested"

Surely Emmett must be really capable, but it still seems a bit of a wild-card entry

captainkrtek
0 replies
11h48m

If the board was trying to further erode trust in their decision making, it doesn't inspire confidence that all of this happened in the span of a weekend:

- fired, CTO announced as interim CEO

- wait, come back

- never mind, oh and a new interim CEO

15457345234
0 replies
11h44m

One might assume that the board has known 'whatever they know' for quite some time, possibly discussed quietly behind the scenes, which includes them scouting out potential interim CEOs.

Then when it becomes clear that negotiations won't solve the problem the board drops the hammer and lets the other side make their move. If they act professionally and like adults, well, maybe there's room for negotiation after all.

If, on the other hand, they do weird childish shit, well... guess not. In with the new guy!

dragonwriter
2 replies
11h50m

Who do you imagine is going to sue, and on what basis?

stravant
1 replies
10h59m

You really don't think someone involved can come up with something?

dllthomas
0 replies
10h26m

People can always come up with something, so you need more than that for an interesting prediction.

panarky
0 replies
11h40m

Courts take years.

This industry is changing far too fast for the legal calendar.

By the time the lawsuits are resolved, it will all be irrelevant.

zombiwoof
10 replies
11h25m

Can’t wait to see how many staffers who said they would quit sheep their way into the office tomorrow to collect their million dollar salaries

Thanks Sam, we bluffed but no way we're quitting

ergocoder
2 replies
11h15m

Most employees are on the profit-sharing-esque compensation.

And guess what OpenAI will not be focusing on? Profit.

They are not gonna quit tomorrow. They will leave over time due to the low compensation.

698969
1 replies
11h7m

*lower compensation

ergocoder
0 replies
10h35m

*much lower...

tempsy
0 replies
11h7m

Well the value of the company has almost certainly taken a large hit, especially now that Sam is definitely not returning

And no way Thrive, which was just about to buy up employee shares at $80-$90b, is going to go through with the tender offer now

siva7
0 replies
11h20m

Those few with million dollar salaries can start tomorrow for even much higher salaries in the current market.

sgu999
0 replies
11h14m

Capital seems to be on Altman's side, they may make a good deal by jumping ship...

ryanSrich
0 replies
11h11m

Yeah it does seem Ilya was smart enough to realize the overwhelming majority of people get caught up in the emotion of these things, but rarely act on their promises when money is involved. I guess we'll see in the coming weeks who quits, but I doubt it'll be significant. Perhaps once Altman gets another company stood up and has billions in the bank to pay people it'll be a different story.

jchonphoenix
0 replies
10h58m

Million dollar salaries paid in PPUs that are no longer worth millions.

I'd jump ship too if I were them.

jatins
0 replies
11h8m

Their PPUs go to zero if there is no profit to be made and the 85B tender offer is likely also off. I suspect this makes it easier for many of them to quit

TillE
0 replies
11h5m

It'd be pretty silly to quit in this situation, unless you just absolutely loved and worshiped Sam. You wouldn't be acting on some moral principle.

The only thing the board has really done wrong is not communicating their reasons clearly.

voidfunc
9 replies
11h26m

I'm expecting lawsuits in the next day or two from Altman for libel and defamation. I'd be surprised if OpenAI investors don't go to the courts also ASAP.

rrrrrrrrrrrryan
2 replies
10h41m

If they didn't reinstate Sam, Microsoft had already threatened to cut off all future funding from OpenAI (they've only paid 1 out of 10 tranches so far), then sue them, then help Sam poach OpenAI talent for his next venture.

ytoawwhra92
1 replies
10h20m

What's your source for that?

rrrrrrrrrrrryan
0 replies
9h11m

The Forbes article [1] from yesterday:

The plan some investors are considering is to make the board consider the situation "untenable through a combination of mass revolt by senior researchers, withheld cloud computing credits from Microsoft, and a potential lawsuit from investors."

[1] https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...

Edit: Looks like Microsoft just hired Sam outright: https://twitter.com/satyanadella/status/1726509045803336122

rvba
0 replies
10h23m

Can Microsoft sue for the drop in valuation of its shares?

poink
0 replies
11h16m

The board said he was fired for being insufficiently candid with them. Good luck getting a US court to call that defamation.

nicce
0 replies
11h18m

Altman has no case. The board is protecting its legal responsibilities to the non-profit. The same goes for the investors.

mcpackieh
0 replies
10h57m

You mean Sam will be suing all the HN and twitter commenters who called him a sister rapist? Remember, the board never said anything like that.

jumelles
0 replies
10h54m

That's not at all how the law works. No one has been defamed.

dmitrygr
0 replies
11h20m

Surely, not being on the board, you cannot just assume that you know all there is to know, or that there is only one side to the story? Just because they aren't tweeting up a storm does not mean they don't have reasons...

icyfox
9 replies
10h14m

This will be an interesting test to see how fast you can bootstrap GPT-4 level performance with unlimited funds and talent that already has deep knowledge of the internals. With the initial adoption of ChatGPT alongside Copilot, OpenAI's data moat of crawled data & RLHF is pretty vast. And that's not leaving the walled garden of OpenAI. You can simulate a lot of this using other off-the-shelf LLMs (see Alpaca) but nothing is a substitute for real world observed usage.
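
For reference, the Alpaca-style "simulation" mentioned here boils down to using an existing model's outputs as synthetic supervised fine-tuning data. A minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the seed prompts and file name are made up for illustration and this is not anyone's actual pipeline:

    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    seed_instructions = [
        "Summarize the plot of Hamlet in two sentences.",
        "Write a Python function that reverses a string.",
    ]

    pairs = []
    for instruction in seed_instructions:
        # Ask the existing "teacher" model for a response to each seed instruction.
        completion = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": instruction}],
        )
        pairs.append({
            "instruction": instruction,
            "output": completion.choices[0].message.content,
        })

    # The (instruction, output) pairs become fine-tuning data for a smaller open model.
    with open("synthetic_sft_data.jsonl", "w") as f:
        for pair in pairs:
            f.write(json.dumps(pair) + "\n")

The caveat in the comment still applies: distilled data like this only approximates the teacher model, and it isn't a substitute for feedback from real-world usage.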

In a related note, has this meaningfully broken through to the mainstream yet? If a ChatGPT competitor comes out tomorrow that is just as good - but under a different brand - how many people will switch because it's Altman backed? I'll be curious to find out.

tempusalaria
6 replies
9h41m

Many of OpenAI's most talented people left to start Anthropic. They have billions in funding and have not yet got particularly close to GPT-4.

I think that illustrates it will be a big uphill battle for any new entrant, no matter how well funded or resourced.

vintermann
0 replies
9h29m

And the new CEO was a consultant to Anthropic, apparently. I'm only grateful I don't have to make sense of this drama.

sumitkumar
0 replies
7h22m

Anthropic was formed for nearly the same reason Sam was fired: to slow things down. OpenAI takes MS funding and Anthropic is formed. OpenAI's pace goes a little above Ilya's comfort level and Sam is fired. MS picks up Sam and will try to outpace OpenAI, while OpenAI puts the brakes on itself.

nmfisher
0 replies
7h41m

Speak for yourself, I cancelled my GPT4 subscription because I prefer using Claude 2.

leoh
0 replies
9h31m

The parent post is literally true yet keeps getting downvoted — what a mess HN has become, too

killerstorm
0 replies
4h12m

They have billions in funding and have not yet got particularly close to GPT-4.

Wrong. Claude 2 beats GPT-4 in some benchmarks (e.g. HumanEval Python coding, math, analytical writing). It's close enough. It doesn't matter who holds the crown this week; Anthropic definitely has the ingredients to make a GPT-4-class model.

This is like comparing similar cars from BMW and Toyota, finding a few specific parameters where BMW has a higher score and saying "You see? Toyota engineering is nowhere close".

This actually shows Sam Altman's true contribution: the free version of ChatGPT is undeniably worse than Bing Chat, and yet ChatGPT is a bigger brand.

(And it might be a deliberate choice to save money for Claude 3 instead instead of making Claude 2 absolutely SotA.)

adonese
0 replies
9h29m

AI noob here, but is it gonna be challenging because of something intrinsic to GPT-4, or because of collecting an equivalent amount of data to train a comparable model? Because I see Facebook releasing their models down to the weights.

mdekkers
0 replies
9h42m

I would switch, but not because of Altman backing or not. I would switch if their strategy were to be to progress at pace. I’m not big on AI safety as it is parroted these days, I just want more AI, faster.

Havoc
0 replies
7h2m

Realistically, if they start now they'd need to hit GPT-5-like levels, not 4.

Still, given the exodus and the resources now available, I'd imagine pretty fast.

tempsy
8 replies
11h50m

The Twitch guy who now spends his day tweeting about mundane YIMBY stuff? Bizarre…

Racing0461
4 replies
11h46m

Feels like the board were "true believers" in the nonprofit/human-first/guardrails approach and Sam just wanted to Zuckerberg OpenAI by growing as fast as possible, and in the end the true believers won.

dougmwne
3 replies
11h43m

What did they win exactly? This is far from the only effort to develop AI models. They were a leader for a time, but seem unlikely to continue in that position.

0xDEAFBEAD
1 replies
10h45m

Even so it buys some time to do safety work

apstls
0 replies
10h16m

I don’t really understand what safety work is or entails here, given OpenAI will surely not be the only group to achieve AGI (assuming any group does.) What stops other companies from offering similar models with no (or just less) regard for safety/alignment, which may even be seen as a sort of competitive edge against other providers? Would the “safety work” being done or thought about somehow affect other eventual players in the market? Even regulation has the same challenges, but with nations instead of companies, and AFAIK that was more Sam’s domain than Ilya’s. It almost seems like acceleration for the sake of establishing a monopolistic presence in the market to prevent other players from viability, and then working in safety afterwards, would give a better chance of safety long-term… but that of course also seems very unrealistic. I think more broadly, if we’re concerned with the safety of humanity as a species we can’t think about the safety problem on the timescale of individual companies or people, or even governments. I do wonder how Ilya and team are thinking about this.

Racing0461
0 replies
11h23m

They won the purity contest. I should start calling openai's board of directors The Squad from now on.

operatingthetan
2 replies
11h49m

The whole affair has been bizarre. Professionals discuss all this stuff out of the public eye and execute when they have a plan. Every single move from all parties here has been run and gun and looks like amateur hour. These people do business like the characters on Succession.

ilrwbwrkhv
1 replies
9h47m

I think trump and elon sort of made it ok to showcase the shit online. Like how YC got involved with fighting other VCs on twitter. I have huge respect for YC and never thought they would do that, but here we are.

Again a stark reminder that all of these guys from Ray Dalio to the run of the mill SF VC are all just normal, twisted people who don't know much better about anything and merely had a run of good luck. Stop paying attention to them.

AutoDunkGPT
0 replies
8h19m

Or in other words, there’s no downside to narcissism when you can broadcast it and accumulate loyal followers who will ply you with money and fame

gzer0
8 replies
9h54m

@karpathy on Twitter:

I just don’t have anything too remarkable to add right now. I like and respect Sam and I think so does the majority of OpenAI. The board had a chance to explain their drastic actions and they did not take it, so there is nothing to go on except exactly what it looks like.

https://twitter.com/karpathy/status/1726289070345855126

lajawfe
7 replies
9h34m

I for one thought Karpathy would side with the core researchers and not the corpos. To me, this whole ordeal is a clash between Sam's profit motives and the non-profit and safety motives of OpenAI's original charter. I mean, didn't HN hate it when OpenAI abandoned their open nature and became completely closed and profit-oriented? This could be the healing of the cancer that OpenAI brought to this field by making it closed as a whole.

tsimionescu
1 replies
9h5m

There are at least three competing perspectives.

One is Sutskever, who believes AI is very dangerous and must be slowed down and closed source (edit: clarified so that it doesn't sound like closed down). He believes this is in line with OpenAI's original charter.

Another is the HN open source crowd who believes AI should be developed quickly and be open to everyone. They believe this is in line with OpenAI's original charter.

Then there is Altman, who agrees that AI should be developed rapidly, but wants it to stay closed so he can directly profit by selling it. He probably believes this is in line with OpenAI's original charter, or at least the most realistic way to achieve it, effective altruism "earn to give" style.

Karpathy may be more amenable to the second perspective, which he may think Altman is closer to achieving.

dmix
0 replies
7h37m

Regardless, the new CEO Shear is also very much of the view that the current development of AI is dangerous (not just hypothetically in the future as AGI becomes more plausible), comparable to a nuclear weapon, and he wants to slow it down. This will definitely split researchers into camps and have plenty looking at the door.

https://x.com/amir/status/1726503822925930759?s=46&t=

wilg
0 replies
9h22m

But isn't Ilya's thing that open sourcing it is too dangerous?

vintermann
0 replies
9h19m

Karpathy is a very agreeable guy and a fantastic educator, and he's very respected by everyone including leader-owners like Altman and Musk, but he doesn't seem like he has very strong opinions one way or another about the hot button issues.

tempusalaria
0 replies
9h12m

He spent 5 years at Tesla backing up their self driving lies for money.

TerrifiedMouse
0 replies
8h54m

This could be the healing of the cancer that OpenAI brought to this field to make it closed as a whole.

I don’t know. The damage might be permanent. Everyone is probably going to be way more careful with what information they release and how they release it. Altman corrupted the entire community with his aggressive corporate push. The happy-go-lucky “look what we created” attitude of the community is probably gone for good. Now every suit is going to be asking “can we make a massive amount of money with this” or “can I spin up a hype train with this”.

ShamelessC
0 replies
9h18m

Karpathy is a hybrid. He’s smart, but he clearly enjoys both the money and the attention. This is the guy who defended Elon’s heavily exaggerated self driving claims when the impact was actual human lives.

kuter
7 replies
11h42m

I wonder if the recent departure of Kyle Vogt from Cruise is related. Emmett Shear and Kyle Vogt founded Twitch together.

threeseed
2 replies
11h39m

It would make a lot of sense. Kyle knows the space, is already wealthy and has always been pretty altruistic.

And the fact that Emmett is only interim should give you a hint something is up.

dragonwriter
0 replies
10h25m

And the fact that Emmett is only interim should give you a hint something is up.

That they didn't complete the process of a permanent CEO over the weekend after firing and then negotiating with Altman?

0xDEAFBEAD
0 replies
10h28m

On the other hand, if Kyle was ousted for pushing too hard too fast at Cruise, that seems like out of the frying pan into the fire. See https://archive.is/Vqjpr

alsodumb
1 replies
11h36m

Wait, why would you think this would be related to Kyle?

Kyle's story was brewing from the moment GM appointed their attorney to manage Cruise - everyone knew there was going to be a restructuring of the executive team after the incident.

If anything, it was a convenient time for Kyle to step down, as it wouldn't get a lot of prime time thanks to the OpenAI drama.

himaraya
0 replies
11h17m

If Emmett has any say at all in the matter, Kyle's almost definitely under consideration.

pyb
0 replies
8h42m

It's probably because this is a very good time for tech companies to publish bad news that GM chose to fire Vogt now.

keyle
0 replies
11h10m

And water is blue and so is the sky. Coincidence I think not! /s

huxflux
7 replies
10h6m

HN! If I had a recorded phone call, emails, and chats that I thought it would be better for the public to know about, how would I go about sharing them in the best and most anonymous way?

adastra22
2 replies
10h2m

First of all know that California is a two party consent state. You’d be putting yourself in grave legal danger if your identity could be inferred from a leaked recorded phone call.

quickthrower2
1 replies
9h52m

Would that apply to an international call?

Not that I am encouraging the GP. I upvoted the “laaaaaaaawyer” comment.

adastra22
0 replies
9h33m

Not a lawyer so: “laaaaaaaawyer!”

smitty1110
0 replies
9h57m

Stop thinking about posting things and talk to a lawyer. I'm 100% serious here. Depending on what you post you could end up either a) sued for libel/slander/illegally recording (CA is a two-party consent state), or b) you could end up called as a witness for any legal action in the fallout of this situation. Seriously, look out for yourself, my dude, there's billions of dollars at stake here, and in a fight between whales the shrimp's back gets broken.

kolinko
0 replies
10h2m

The Verge and The Information have tip lines/e-mails; you can message them, or write directly to one of the reporters covering this. I would trust them to keep it confidential.

FridgeSeal
0 replies
10h0m

I believe most major news orgs have secure anonymous mechanisms for communicating with them. E.g. the guardian etc.

8bitchemistry
0 replies
9h55m

NYTimes has the SecureDrop tip submission which uses Tor (see details at https://www.nytimes.com/tips)

frabcus
7 replies
10h24m

"I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down. If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead." - Emmett Shear, Sep 16, 2023

https://twitter.com/eshear/status/1703178063306203397?t=8nHS...

_fizz_buzz_
1 replies
9h56m

Wouldn't others simply pick up the slack then?

unsupp0rted
0 replies
9h48m

We're about to find out

vasco
0 replies
9h57m

Rest in peace OpenAI. Microsoft's investment might have just gone from one of the best of the decade to one of the worst in the course of a weekend.

siva7
0 replies
9h52m

Slowing down what exactly? These cryptic messages are like a slap in the face for their dev community.

mdekkers
0 replies
9h45m

Sounds like good news to all the others in the field.

adastra22
0 replies
9h56m

RIP OpenAI. You’ll be left in the dust.

CamperBob2
0 replies
9h46m

"That's great, Emmett. What do the Chinese think?"

siva7
6 replies
11h46m

This is more bizarre than an episode of Silicon Valley. Must be the happiest day for Google in the last year.

francisduvivier
4 replies
11h43m

Indeed, on the Lex Fridman podcast, Musk said it was a very tough recruiting battle for Ilya. Now it looks like they have a shot at getting him back.

tempsy
1 replies
11h36m

How so? Ilya won. OpenAI is now more his than it was before Sam was fired.

erichocean
0 replies
11h27m

A pyrrhic victory.

skygazer
0 replies
11h32m

I think Ilya is getting what he wanted. Although, that may yet have disastrous consequences for the company if employees follow through on mass quitting, thus freeing him up. But if they can quickly prevent people leaving in the short term, maybe urgent discontent will pass.

kfrzcode
0 replies
11h27m

If by "they" you mean Google and Larry, sure. But in that clip Elon was recruiting for OpenAI.

throwaway128128
0 replies
11h43m

I was thinking Game of Thrones.

flylib
6 replies
11h42m

Seems like he liked anti sama stuff as well

https://x.com/moridinamael/status/1725893666663768321?s=46

polygamous_bat
1 replies
11h31m

It would be pretty weird to step in as Sam's replacement if he were philosophically aligned with him, no?

tempsy
0 replies
10h40m

Yes, but you rarely see two people who run in the same circles/part of the same OG YC class publicly criticize one or the other.

babyshake
1 replies
11h30m

That's not Emmett?

flylib
0 replies
11h26m

he liked the tweet

schleck8
0 replies
9h38m

Unless the tweet I saw was fake, he's also decided to slow down the pace to 1 or 2 on a scale of 1 to 10. Going to run that company into the ground.

ec109685
0 replies
11h6m

Part of the interview process.

charlie0
6 replies
11h51m

Real question: I've never heard of or seen this website before. Is it reputable?

mattzito
1 replies
11h49m

The information? Yes, they are highly focused on the tech industry and very diligent. They make most (all?) of their revenue on subscriptions, so that limits their incentives to make clickbait content.

astrange
0 replies
10h24m

Well, they make different kinds of bad content, like in-depth investigations of how cool the parties their subscribers hold are.

xeriuno
0 replies
11h49m

The Information is very reputable

pigscantfly
0 replies
11h49m

Yes, this is one of the top outlets focused on the valley tech scene. It's subscription based, and they are more expensive than most others, which is probably why you haven't seen it posted much.

jdminhbg
0 replies
11h50m

Yes, it’s real. This story is also confirmed by Bloomberg.

d3nt
0 replies
11h50m

Yes, they get the scoop on many Silicon Valley news stories

Palmik
6 replies
10h51m

Emmett is a self-proclaimed AI doomer. Given some of his Tweets from the past, it's likely the board indeed wants someone who will takes things slower: https://archive.is/tuf3s https://archive.is/sNvvs

This is probably not something you want to hear as a researcher who is motivated by pushing the frontiers of our capabilities, nor as a researcher who is motivated by pushing their compensation.

yeck
2 replies
10h44m

I'd say it sounds like they want to go faster with research and slower with commercializing the outputs of the research.

Palmik
1 replies
10h37m

Accelerating research on increasing reasoning / intelligence capabilities of AI? That is opposite of what Emmett seemed to advocate recently, so it would be strange to pick him as the CEO if they really wanted to accelerate that.

yeck
0 replies
10h30m

Those are not the only things to research, but I wouldn't assume that capability-based research would stop. I mean, they want to create AI to perform AI alignment, so their roadmap includes capability research.

The for-profit bender that OpenAI was on appears to have been more of the issue. It's one thing to create greater capabilities to enhance the rate of research, and another to rush those capabilities to market.

Klonoar
1 replies
10h4m

Those researchers should be met with restraint.

One can ultimately still push the frontiers while also showing restraint - stop acting like it's either-or.

jazzyjackson
0 replies
10h0m

?? I guess you can walk into the frontier a little slower if you don't mind someone else settling it before you.

rf15
0 replies
10h47m

But what are you going to do if your ambitions cannot be met with the money and resources they require, especially in the long term? They look like they're tripping over hard limits.

russellbeattie
5 replies
10h47m

Reality check for those who need it:

Altman won't be starting a competing company. First, he may have contractual restrictions, and second, OpenAI owns their IP. And even if Altman is somehow free to do what he wants because he was fired (doubtful), anyone who quits to go with him surely has an airtight non-compete. Besides, it's not like Sam and his few loyalists are just going to spin up a few servers and duplicate what OpenAI has done. He'll do something AI-adjacent like the chip startup he was rumored to be pursuing.

As for Microsoft, they have a contract with OpenAI and are deeply reliant on them at the moment. OpenAI isn't disappearing just because Sam and Greg aren't there. Nadella may not be happy with the change, but he'll just have to live with it. Nothing will change for the foreseeable future there either.

When it comes to lawsuits? Who knows, but I highly doubt Altman will fight, or if he does, it will be discreetly settled as it's in no one's interest to wage some protracted battle. Microsoft may want to renegotiate their deal, but that again most likely isn't going to be anything nasty, as Microsoft needs OpenAI right now.

As for developers and consumers of OpenAI's service? They won't care or notice for many months until whatever changes the new CEO and board have in mind are enacted.

robbomacrae
2 replies
10h28m

Want to bet? Altman has been building himself up as the public face of AI and won't let that go so easily. It's not like his Worldcoin is taking off...

And non-competes can be as airtight as you like... they are completely unenforceable in California, which is where OpenAI's HQ is based and where Sam Altman lives.

First the loyal will jump ship. Followed quickly by the mercenaries who see the chance to join a new rocket ship from the ground up. Then as OpenAI's shares tank on the secondary market the rest will decide they've seen enough of their paper money burn and cash out.

OpenAI will survive, but it's going to be a much smaller company with a much smaller valuation.

As for Microsoft, I'm guessing one of the strings Nadella was pulling was threatening to revoke credits, resources, and even use by his teams, and I'm sure he would be interested in investing in whatever Altman starts next and dedicating those now-spare machines to the new enterprise.

russellbeattie
1 replies
10h18m

they are completely unenforceable in California

California's non-compete laws don't cover "trade secrets", only your ability to use your expertise to pursue employment in your chosen field. In other words, if you're just a regular programmer, you can go work wherever you like or start a competing company. If you're a principal scientist or architect, you would be in danger of violating your contract. Anyone who went with Altman would presumably have deep inside knowledge of OpenAI's secret sauce and therefore be restricted.

robbomacrae
0 replies
9h54m

I doubt Sam was personally that involved in the mechanics of the models.

But there is an alternative scenario... in order for Microsoft to avoid losing any momentum, they might offer Altman an insane amount of money to become the John Giannandrea of Microsoft and bring as many of his recent colleagues with him as possible. And for Altman this might be the easiest way to not lose ground as well, with Microsoft's patents and license agreements.

notatoad
0 replies
10h17m

anyone who quits to go with him surely do have airtight non-competes.

noncompetes are illegal in california

casebash
0 replies
9h41m

Perhaps. Anthropic was started by a bunch of people who left OpenAI b/c it wasn't focused enough on safety.

kromem
5 replies
11h46m

Foundational AI companies fracturing is going to be good for the overall market.

Just like Anthropic spinning off from OpenAI brought some very cool research (particularly liked the functional introspection vs node based stuff), Altman going into his own venture to compete will help diversify the overall space.

And while many seem to be thinking this is going to bode poorly for OpenAI, I think perhaps long term it might be a good thing. Ilya is quite impressive and his having greater influence with Altman gone may result in them exploring more interesting avenues that have long term value which might not have been as prioritized with a focus on maximizing short term rollouts.

While the way the board handled it wasn't great optics, this is probably going to be an overall positive shakeup for AI and LLMs in 2024-2025.

peanuty1
1 replies
11h7m

I wonder how successful Sam's next AI venture will be without Ilya.

threatripper
0 replies
10h40m

The problems that have been solved remain solved; no need for a genius for those. They will also have solutions that the public doesn't know about yet.

Palmik
1 replies
10h39m

The default winners in this race, in the medium-term horizon (~18 months), are the existing incumbents: Alphabet, Microsoft, Meta. OpenAI was positioned to be able to compete. I am not sure a diluted OpenAI will be.

altpaddle
0 replies
7h48m

But I think this episode shows how dependent OpenAI was on Microsoft. They seem to be closer to an extension than a truly independent actor

dougmwne
0 replies
11h39m

It probably is a good thing, not because Ilya is going to safeguard humanity's future, but because it's better to have a very competitive field rather than one breakaway winner. No lone genius will invent AGI. It will be a collective effort of dozens of labs, trading talent and methods. If OpenAI is a few years ahead now, they will quickly lose that edge due to this infighting, and others will catch up.

karmasimida
5 replies
9h12m

Update:

Sam and Greg, along with other staffers who left OpenAI, are now joining Microsoft

https://twitter.com/satyanadella/status/1726509045803336122

mark_mart
1 replies
9h6m

This was unexpected.

dagmx
0 replies
8h55m

Idk it’s been one of the top speculations since the beginning of this drama.

exizt88
0 replies
8h54m

I bet "new advanced AI research team" at Microsoft is going to be underwhelming for many, but really, it should be eye-opening. This is what startups, especially VC-backed capital-intensive AI startups, usually are.

dagmx
0 replies
8h53m

Seems like a logical choice. Microsoft’s next big play is generative AI, and they’ve put a lot of money into that.

They need to show they’re taking steps to stabilize things now that their hype factory has come unraveled.

I don’t think they particularly need these people, because they likely already have in-house talent that is competitive. But having these people on board now will allow them to paint a much more stable picture for their shareholders.

baal80spam
0 replies
8h31m

Oh wow.

icy_deadposts
5 replies
11h18m

I am literally crying and hyperventilating right now. I feel like a stake has just been driven through the heart of human civilization's future.

I don't know what to do but I'm really praying someone comes to their senses and by Monday we have a future where Sam is back at the helm, joined by Ilya and Elon Musk.

blockwriter
1 replies
11h4m

I can’t understand this perspective. Why? What’s one guy being fired going to mean in the grand scheme of ai’s development?

icy_deadposts
0 replies
10h34m

There are tens or hundreds of thousands of programmers working on AI, but only one team, under the guidance of Sam, has been able to make the progress we've seen with ChatGPT, and that is massively jeopardized now by this drama.

skinpop
0 replies
11h9m

so are you in some kind of doom cult?

jazzyjackson
0 replies
9h45m

pro tip: if you don't identify as belonging to human civilization's future, it won't hurt when someone drives a stake through the heart of human civilization's future.

EMIRELADERO
0 replies
11h15m

I don't think a VC techbro is the best bet for humanity's future. A strong non-profit with clear goals and full transparency is 100% better.

fsckboy
5 replies
11h7m

I'm actually getting tired of this whole story, jeez.

they had an interim CEO: why exactly do they need a new interim CEO? it's been a couple of weekend days, zero business days. Not taking sides, none of this makes any sense, so much drama.

hypothesis: there's too much money on the table, and inasmuch as some people care more about the public welfare than money, too much of that too (meaning, a lot of money in something usually means it's something important, something of value, and therefore not all bad; so this is all just people trying to steer things the way they want them to go rather than the way other people want them to go)

Anuiran
1 replies
11h2m

The other interim CEO turned on the board and supported Sam, I think.

Maxion
0 replies
9h19m

This just makes the board seem very incompetent TBH, why select a CEO that is not aligned with the board?

stylepoints
0 replies
10h11m

It's the nerd version of keeping up with the kardashians.

paulddraper
0 replies
10h58m

The former interim CEO was the CTO, and she (along with investors) was trying to get Altman back.

By selecting Shear as the new interim CEO, the board signaled they weren't interested in Altman coming back.

mock-possum
0 replies
10h58m

Right? Can I just check back in a month to hear how it all shakes out?

webwielder2
4 replies
11h42m

The real winner in all this is once again Tim Cook, the only tech CEO to not constantly flail around dripping in flop sweat chasing the buzzword du jour.

hirsin
2 replies
11h40m

I know it's small potatoes but those other CEOs have added hundreds of millions of dollars in ARR to their bottom lines on AI, in particular OpenAI. Microsoft doesn't think you're a real product line til a billion plus ARR but still, it's not nothing.

mbreese
0 replies
11h32m

I think it's still pretty early to know how lasting these ARRs are. We are still early on the hype cycle here. Chat based language models have definitely found good uses, so I don't think we'll ever see them go away, but I'm just cautious about how much net revenue these will really generate (particularly when the expense of the cards needed to train/run them is so high).

bushbaba
0 replies
11h34m

Hundreds of millions of UNPROFITABLE ARR and only through spending many billions.

Apple has rarely been first to market with technology. However they often are first to have a cohesive user experience that integrates it in.

wolverine876
0 replies
11h28m

It amazes me that more don't follow the example of the most successful company in the world, maybe in the history of the world (hard to make comparisons across eras).

rf15
4 replies
10h49m

This entire show will damage Altman and what remains of OpenAI equally - OpenAI because of its unprofessional way of dealing with internal problems, and Altman, well, there's apparently a reason they were unhappy with him.

astrange
2 replies
10h27m

They were apparently unhappy with him because he's too good at business, so I think his reputation is intact.

nicce
1 replies
10h17m

Or he is not doing what the board wants, which is damaging.

If I'm good at hacking computers, it doesn't mean I should hack all of them and end up in jail.

astrange
0 replies
7h34m

The board (Ilya) wanted OpenAI to be a research nonprofit, which is fair, but they accidentally grew a giant business out of it via Microsoft's investment, and Sam was doing a pretty solid job of continuing that.

It's fair to say that's a distraction from the mission, but abruptly firing him, and now having all their staff leave and probably losing all the compute funding, is a strange way of dealing with it.

dragonwriter
0 replies
10h19m

This entire show will damage Altman and what remains of OpenAI equally

The only thing it damages about Altman is the credibility of him using his purported concern for AI safety as a PR lever for regulatory capture.

og_kalu
4 replies
11h47m

Murati being Interim CEO for a day to be replaced by another Interim CEO is exceedingly funny.

Also if some of the wilder motives behind this shitshow are true then Microsoft will never see GPT-5.

wannacboatmovie
3 replies
11h28m

Even funnier is that someone (in a now flagged and dead'd post) accused me of being a misogynist for daring to even think of questioning her experience or capability as CEO.

If you are CEO for a day do you get to wear a paper crown like at Burger King?

og_kalu
2 replies
11h21m

I don't know if you're misogynist but her qualifications are fine. She's not CEO anymore because she wanted Sam and Greg back not because she somehow failed her 1 day duties.

sudosysgen
1 replies
11h8m

I'm not disputing her qualifications but going to war against the board is failing your duties. In a for profit it can work if you have your shareholders on your side but it's clear here that this maneuver can't work.

og_kalu
0 replies
11h1m

Needless pedantry. The point is that incompetence isn't the reason she's out, any more than incompetence is the reason Sam is out.

blobbers
4 replies
11h44m

Sam must have sizable equity in the company. He sells to Microsoft and they dissolve the board, problem solved.

stephenitis
1 replies
11h38m

have you read any of the articles at all before firing off that sentence?

blobbers
0 replies
9h55m

None of them seem to have mentioned it, but indeed it is confirmed he had very little equity in the company.

This seems like a fairly unconventional idea in a capitalist world where Altman has made all his money on exactly that - equity.

Seems like it has ultimately bitten him, the investors, and the board in the ass. A poor incentive structure in the corporate governance is going to really mess with people's heads.

og_kalu
0 replies
11h43m

Sam doesn't have any equity in Open AI

localhost
0 replies
11h43m

He has nearly zero equity in OpenAI by design.

Tohhou
4 replies
11h17m

Embarrassing for OpenAI.

frabcus
2 replies
10h51m

On the contrary, it shows it is a non-profit. People keep denying that and pretending it is a normal Silicon Valley startup; this demonstrates clearly that it isn't.

cheeselip420
0 replies
9h18m

Non profit != No profit

OpenAI will be even less open now. Ilya must protect all of us from his powerful creations.

Only he and his alignment team can be trusted.

anupamchugh
0 replies
10h39m

But the Board members of the non-profit company have been leveraging OpenAI's APIs in their other business endeavors -- like, Adam's Poe chatbot for Quora. More details on board's timeline:

https://loeber.substack.com/p/a-timeline-of-the-openai-board

Related read: https://www.techtris.co.uk/p/openai-set-out-to-show-a-differ...

yeck
0 replies
10h35m

I'm actually pretty impressed that they are sticking to their (non-profit) charter against so much pressure. While this was not a "clean" maneuver, I'm not really in a position to judge, given my lack of context. Perhaps there could have been a better execution, but the willingness to act when it would not be easy is what I think matters most.

thatsadude
3 replies
11h45m

This is ridiculous. AI doomer as the new CEO.

reducesuffering
2 replies
11h40m

Bad news for you, their last CEO was too. Sam Altman signed a statement saying "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Looks like you're not on any of their team, but you do have crypto king Marc Andreessen.

https://twitter.com/robbensinger/status/1726039794197872939

polygamous_bat
1 replies
11h24m

Bad news for you, their last CEO was too.

Did anyone believe that he seriously believed in that? I thought the consensus was he was angling for a government backed monopoly with OpenAI as the “steward” of large AI models.

comp_throw7
0 replies
10h50m

This view was totally divorced from reality. Sam literally laid out his own case for AI posing an extinction risk on his blog in 2015, before OpenAI was even founded: https://blog.samaltman.com/machine-intelligence-part-1

"Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity."

seydor
3 replies
9h59m

My takeaway from this is that the future of AI shouldn't be left to the American kindergarten.

sschueller
0 replies
9h56m

No worries, Sam managed to lobby congress enough that they will make research quite R&D.

esafak
0 replies
9h55m

Or is it an example of American democracy and its culture of dissent?

Garrrrrr
0 replies
9h53m

american kindergarten

what point are you trying to make here exactly?

seanhunter
3 replies
11h19m

This saga highlights something that I found challenging when I was on the board of a startup, and which the board has completely misplayed. The difference between authority and autonomy. If you're on the board of OpenAI which is doing fantastically well etc you're a big deal. You have a great deal of authority.

You may well think that the structure of the company and how the votes on the board work mean that any motion you can carry at the board is within the sphere of things you can achieve, i.e. that you have the autonomy to do anything you can secure the votes for, but that's rarely the case. In actual practice, people like the investors and other stakeholders can change the rules if they need to, exercise soft pressure, bring legal suits and do other things. Your autonomy to act is effectively pretty narrow.

However this finally plays out, whether Sam comes back or not, whether OpenAI's board changes, the people who orchestrated this have seriously damaged themselves and will most likely have less of both authority and autonomy in the future.

cjbprime
2 replies
10h43m

This would normally be true, but this is the board of a non-profit that has no investors. If the board actually lacked autonomy, then the article we're commenting on would have been about Altman being re-hired today.

seanhunter
1 replies
9h46m

There have been a few stories which sound like he may have had the opportunity to come back but that negotiations over board control etc (which is pretty unsurprising) broke down[1].

Even setting that aside for a second, that doesn't change my essential point that the board doesn't necessarily have all the autonomy it thinks it has. There are for sure repercussions to this - they may have to make concessions. Some of the seemingly committed funding may be unpaid and the donors may have the ability to invoke MAC clauses and similar to pull it. Even if that turns out not to be the case, the way this has played out will certainly affect decisions about future donations etc.

[1] https://www.theguardian.com/technology/2023/nov/20/sam-altma...

cjbprime
0 replies
4h30m

There have been a few stories which sound like he may have had the opportunity to come back but that negotiations over board control etc (which is pretty unsurprising) broke down[1].

Thus disproving your point, in my opinion. There may now be consequences to the board's decision that make their company less powerful in the future, but it won't be because they lacked the autonomy to make their own decisions. Getting to discover the consequences of your preferences is what autonomy is.

rawgabbit
3 replies
10h56m

Someone help me understand OpenAI’s situation. If Sutskever remains at the company, can OpenAI continue to improve its ChatGPT models and produce model 5.0?

What leverage does Microsoft have over OpenAI? Can Microsoft shut off access to their hardware to support Altman? Why would Microsoft want this?

rain_iwakura
1 replies
10h31m

The biggest leverage is the billions in compute that Microsoft holds over OpenAI. Without this compute they can't run most of the experiments needed to get to number five before others, who are very close, especially Anthropic.

quickthrower2
0 replies
9h57m

Which is weird because Nvidia/TSMC must have some leverage over MS there and could ship out containers of their big cluster units.

Also AWS could be the sugar daddy.

sgift
0 replies
10h26m

If Sutskever remains at the company, can OpenAI continue to improve its ChatGPT models and produce model 5.0?

I think the more important question is: Is Sutskever interested in a model 5.0 anytime soon? If he really ousted Sam A. because he thought they moved "too fast" wouldn't he rather work on making 4.0 "more secure" (whatever that means) instead of producing a 5.0?

mupuff1234
3 replies
11h19m

Which character is Joseph Gordon-Levitt gonna play when the movie comes out?

mschuetz
1 replies
11h17m

A board member's husband?

jxi
0 replies
11h6m

It's certainly telling when you're better known for being an actor's wife than for anything to do with AI or the company you're on the board of.

gregcoombe
0 replies
11h8m

He'll probably be relegated to some minor role like "Husband of Board Member #6"

ilaksh
3 replies
11h37m

Assuming that's the end of the story, what hardcore ML people do Altman and Brockman have now for their new AI startup?

Or are they just going to focus on the hardware thing?

qaq
1 replies
10h26m

Jakub Pachocki - Director of Research, Aleksander Madry, Szymon Sidor.

schleck8
0 replies
9h25m

Bunch more people have left since, I think including Karpathy

peanuty1
0 replies
11h35m

They're going to raise half a billion dollars, maybe more, and poach a lot of ML people from OpenAI, that's my guess.

hn_throwaway_99
3 replies
10h14m

Kara Swisher has some additional details: https://twitter.com/karaswisher/status/1726477828072382480

skygazer
2 replies
9h38m

Oh wow, she’s definitely taken sides. I wonder if Elon will be paying their legal bills. She alludes to shadowy machinations. This reaches an end, then gets wackier, and repeats.

reducesuffering
1 replies
8h50m

Kara Swisher is so unhinged, she thinks Emmett Shear, founder and CEO of Twitch, is "being generous here, less than impressive in comparison to" Mira Murati...

skygazer
0 replies
8h14m

Noticed that. She did soften that stance later. She seemed oddly invested.

AmericanOP
3 replies
10h54m

</removed>

robbomacrae
1 replies
10h35m

I think you're mistakenly assuming Mira Murati was on the board. That is not the case. The board is Adam D'Angelo, Helen Toner, Ilya Sutskever, and Tasha McCauley after Sam and Greg were removed. Mira Murati was not involved except for the fact she was made the initial replacement. For all we know she had no information on what happened when she was asked to step in.

AmericanOP
0 replies
10h29m

Noted! Thanks

cthalupa
0 replies
10h46m

Murati was not on the board and had no vote.

torginus
2 replies
10h20m

This is wild speculation, but I just noticed whenever I load the ChatGPT page, it will briefly say 'ChatGPT Alpha' before switching to ChatGPT 4.

Maybe they created the new model and there's something interesting about it?

sidcool
0 replies
10h12m

Same here. I thought it was a React state management issue. But it could be GPT-5 Alpha.

kolinko
0 replies
9h59m

I think they were experimenting with a more advanced 3.5 version for non-paying users - it was showing up for some people as an option between GPT-3.5 and GPT-4

rococode
2 replies
10h45m

"We fired the CEO for being too profit-driven" is a terrible message to send to your employees when you've lured them in with $1m comp packages that mostly consist of "Profit Participation Units" that are only worth anything when you actually make money.

ytoawwhra92
0 replies
10h3m

Maybe the executive management shouldn't have used "Profit Participation Units" to attract staff to join a non-profit organisation.

Seems like a way to put the incentives and motivations of the staff at odds with the charter of the organisation.

plugin-baby
0 replies
10h43m

...unless those engineers are happy with fat salaries and understand that startup equity should be valued at zero.

pcbro141
2 replies
11h7m

Andrej Karpathy: (radioactive emoji)

https://twitter.com/karpathy/status/1726478716166123851

Going nuclear?

sigmar
0 replies
11h0m

I'm assuming Karpathy is describing OpenAI as melting down/toxic. Tomorrow will be very interesting.

elfbargpt
0 replies
9h51m

His likes give you a pretty good idea of where he stands:

https://twitter.com/karpathy/likes

mrcwinn
2 replies
11h2m

The best talent will leave OpenAI the moment Sam announces his new venture. Deep down builders want to build, and they want to do it as quickly as possible. OpenAI will represent caution. NewCo will represent opportunity. Opportunity wins. This is human nature.

0xDEAFBEAD
1 replies
10h43m

That's not obvious to me. Why did so many join OpenAI in the first place, given that its charter prioritizes humanitarian benefit over building as quickly as possible?

https://openai.com/charter

telotortium
0 replies
10h23m

Because they were the best. Now that they're actually committing to hobbling themselves, why would these employees stay unless they're AI doomers?

mcv
2 replies
8h49m

Is there a good article, or does anyone have the slightest inkling, about what the real conflict here is? There's a lot of articles about the symptoms, but what's the core issue here?

The board claims Altman lied. Is that it? About what? Did he consistently misinform the board about a ton of different things? Or about one really important issue? Or is this just an excuse disguising the actual issues?

I notice a lot of people in the comments talking about Altman being more about profit than about OpenAI's original mission of developing safe, beneficial AGI. Is Altman threatening that mission or disagreeing with it? It would be really interesting if this was the real issue, but if it was, I can't believe it came out of nowhere like that, and I would expect the board to have a new CEO lined up already and not be fumbling for a new CEO and go for one with no particular AI or ethics background.

Sutskever gets mentioned as the primary force behind firing Altman. Is this a blatant power grab? Or is Sutskever known to have strong opinions about that mission of beneficial AGI?

I feel a bit like I'm expected to divine the nature of an elephant by only feeling a trunk and an ear.

sainez
1 replies
8h3m

I'm not sure what more information people need. The original announcement was pretty clear: https://openai.com/blog/openai-announces-leadership-transiti....

Specifically:

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.

it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.

So the board did not have confidence that Sam was acting in good faith. Watch any of Ilya's many interviews, he speaks openly and candidly about his position. It is clear to me that Ilya is completely committed to the principles of the charter and sees a very real risk of sufficiently advanced AI causing disproportionate harm.

People keep trying to understand OpenAI as a hypergrowth SV startup, which it is explicitly not.

mcv
0 replies
6h58m

That original announcement doesn't make it nearly as explicit as you're making it. It doesn't say what he lied about, and it doesn't say he's not on board with the mission.

Sounds like firing was done to better serve the original mission, and is therefore probably a good thing. Though the way it's happening does come across as sloppy and panicky to me. Especially since they already replaced their first replacement CEO.

Edit: turns out Wikipedia already has a pretty good write up about the situation:

"Sutskever is one of the six board members of the non-profit entity which controls OpenAI.[7] According to Sam Altman and Greg Brockman, Sutskever was the primary driver behind the November 2023 board meeting that led to Altman's firing and Brockman's resignation from OpenAI.[30][31] The Information reported that the firing in part resulted from a conflict over the extent to which the company should commit to AI safety.[32] In a company all-hands shortly after the board meeting, Sutskever stated that firing Altman was "the board doing its duty."[33] The firing of Altman and resignation of Brockman led to resignation of 3 senior researchers from OpenAI."

(from https://en.wikipedia.org/wiki/Ilya_Sutskever)

losthubble
2 replies
11h25m

Hopefully this leads to a lot of internal dissent and someone leaks the models

maxlin
0 replies
11h8m

Would be fun to see the training material, source code (engine & training data scraping systems), and models all leak. With Sam not returning that would be the double tap the now-zombie OpenAI'd need.

jessenaser
0 replies
11h1m

Betting on Snowden part 2…

Trasmatta
2 replies
11h45m

I feel like the board probably refused the demand that they issue a statement retracting the claims of wrongdoing by Altman.

jxi
0 replies
11h20m

That would have opened them up to a lawsuit (while employment is "at will", you can still get sued for negligence or baseless slander). A statement retracting the claims behind such a high-profile move would be a pretty solid piece of evidence in such a case.

brandall10
0 replies
11h36m

Doubtful that was an issue. Altman asked for the board's resignation. So basically Ilya would have to leave the company.

LrnByTeach
2 replies
9h8m

This may be a fair workable solution to all the parties involved.

Context:

---------

1.1/ Ilya Sutskever and the board do not agree with Sam Altman's vision of a) too-fast commercialization of OpenAI AND/OR b) too-fast progression to the GPT-5 level

1.2/ Sam Altman thinks fast iteration and commercialization are needed in order to make OpenAI financially viable, as it is burning too much cash, and to stay ahead of the competition.

1.3/ Microsoft, after investing $10+ billion, does not want this fight to slow the progress of AI commercialization and leave it falling behind Google AI etc.

a workable solution:

--------------------

2.1/ @sama @gdb form a new AI company, let us call it e/acc Inc.

2.2/ e/acc Inc. raises $3 Billions as SAFE instrument from VCs who believed in Sam Altman's vision.

2.3/ Open AI and e/acc Inc. reach an agreement such that:

a) GPT-4 IP is transferred to e/acc Inc.; this IP transfer is valued as an $8 billion SAFE instrument investment from OpenAI into e/acc Inc.

b) Microsoft's existing 49% share in OpenAI is transferred to e/acc Inc., such that Microsoft owns 49% of e/acc Inc.

c) the resulting "lean and pure non-profit OpenAI", with Ilya Sutskever and the board, can steer AI progress as they wish; their stake in e/acc Inc. will act as a funding source to cover their future research costs.

d) employees can move from OpenAI to e/acc Inc. as they wish, with no anti-poaching lawsuits from OpenAI

eightnoteight
0 replies
9h4m

In hindsight, you got the context pretty accurate, i.e. the importance of Microsoft in all of this.

eightnoteight
0 replies
9h5m

@sama @gdb joined microsoft

pug_mode
1 replies
9h22m

Doesn't Microsoft own access to everything OpenAI is working on prior to AGI? Couldn't Sam and his team get hired by MSFT and continue their work?

pug_mode
0 replies
9h4m

4 minutes after that comment, Satya said Sam and some of his people are joining Microsoft!

https://twitter.com/satyanadella/status/1726509045803336122

nilkn
1 replies
11h31m

At this point, is there any reason to think this arrangement will last longer than 24-48 hours?

skygazer
0 replies
11h26m

I would think that the board would have tried to (or is still continuing to try to) pick someone to try to placate Microsoft, other than Altman.

jaimex2
1 replies
11h41m

How is this company structured?! Founders with no stake or control, rogue board members and Microsoft somehow in the mix.

frabcus
0 replies
10h50m

It's a non-profit. It isn't a Silicon Valley company.

https://openai.com/our-structure

djha-skin
1 replies
9h36m

Alright, cool, they stuck to their guns about being not-for-profit, and they want to be more mission focused than profit/product focused.

Why, then, did they bring in THE CO-FOUNDER OF TWITCH to help them out? Seems like an off-brand thing to do for a group of people focused on "the mission".

altpaddle
0 replies
7h46m

I mean it's not like there's going to be a huge pool of suitable candidates for something like this on short notice. They probably picked the most capable candidate they knew of who aligned with their values

Sverigevader
1 replies
11h11m

This is surreal! Are we watching Silicon Valley season 7? Laurie Bream, is this your doing? Are you going through every CEO you can find in an attempt to get Sam Altman to realize he was the right person all along?

wahnfrieden
0 replies
11h9m

Murati walked, so this new ceo is the interim replacement. That’s independent of rejecting sama.

MenhirMike
1 replies
11h17m

This breakup is so messy, did Taylor Swift already write a song about it?

quickthrower2
0 replies
10h13m

Blank Space

zombiwoof
0 replies
11h29m

I can’t wait to see how these backstabbing stewards of AGI align our future overlords.

zabzonk
0 replies
9h23m

is it just me, or is Altman not a dead-ringer for Burke (horrible corporate asshole) in "Aliens"?

i mean physically - i'm not suggesting anything to do with chest-bursters and the like.

yieldcrv
0 replies
9h32m

I wonder if employees can sue and successfully get compensation or conditions imposed on the board due to private market volatility in the price of their PPU’s

That would be fun case law to keep this soap opera going

xcodevn
0 replies
10h38m

Even if OpenAI falls apart, this is still a good move.

wnc3141
0 replies
11h31m

What will be the dilution in value of Microsoft's stake if Altman has funding for a competitor?

timetraveller26
0 replies
11h13m

It seems the problem with the monopoly of AI solved itself!

tibbydudeza
0 replies
10h51m

RIP OpenAI - gone like MySpace, not through fickle customers but through self-inflicted hubris.

Microsoft might as well hire the entire team that's leaving and give them a Dave Cutler-style deal - do what you folks want in San Francisco, full autonomy, reporting directly to the CEO, but make us some goddamned money already.

All that compute credits from Microsoft - hasta la vista baby.

tentacleuno
0 replies
11h51m

This appears to be paywalled; is there an alternative link / source?

Edit: This was in reply to the prior, non-paywalled URL. Comment moved by dang.

sofaygo
0 replies
11h40m

Sometimes fantasy feels more real than reality itself…

sidcool
0 replies
10h26m

Things just got pretty interesting. On one hand, the board has doubled down on their decision, which lends them some credibility, on the other hand Altman seems out for good from OpenAI.

sidcool
0 replies
10h17m

Elon Musk has been publicly unhappy with the direction Sam was taking, and seemed more aligned with Ilya. Would he have had a hand in this?

senectus1
0 replies
10h26m

To me this choice feels like a push for monetizing... The guy has very little AI expertise.

rvba
0 replies
9h55m

Is there anything (law, bylaws, valuation) that is used to decide how many people should be on the board of OpenAI?

It looks like the current board (4 people?) can do absolutely whatever they want and they report to nobody? Nobody can replace them?

Second question: in a hypothetical situation where they all died, who would pick the new board, and how? The CEO? What if the CEO died too? I think a case like this happened after a helicopter crash in Australia, where everyone got into the same helicopter and died.

nicetryguy
0 replies
10h20m

Nope'n AI

naiv
0 replies
10h51m

Do non-profits have non-competes? or can the employees now just leave and go to a competitor?

mlindner
0 replies
10h7m

I'm really happy about this. Hopefully this sets OpenAI back on a course toward positive and open uses for AI. Maybe even properly open sourcing things again.

minimaxir
0 replies
11h46m

Maybe OpenAI can now realize the true potential of AI Seinfeld.

metamate419
0 replies
9h44m

I can't believe Meta is now the leader in generative AI because of an own goal.

meroes
0 replies
11h37m

Guess they didn’t miss, or he wasn’t a king

maxlin
0 replies
11h16m

Replaced by CEO of Twitch, one of the most censorship-happy places on the internet where one can build their career and cancel themselves with one word they can't edit out even if they didn't say it themselves.

Damn. I hope Sam guts the place and ClosedAI ends up as a "Lostech" fossil and M$ ends up holding the most expensive sack of poo ever conceived.

karmasimida
0 replies
11h45m

Guess I was right then. On to the next, Sam. Nobody deserved to be treated like this.

kareaa
0 replies
8h49m

Could Microsoft possibly sue the board for market manipulation? If someone on the board shorted MSFT stock, they could have made millions.

jessenaser
0 replies
10h57m

Looks like GPT-5 is ngmi. A watered-down GPT-4.5 coming up instead. GPT-5 coming in 2026 at this rate, or not at all…

hilux
0 replies
11h17m

I'm sure it's not Emmett's fault, but ChatGPT is having problems right now.

It's failing to generate Python code - first time I've seen this in months of heavy usage.

fidotron
0 replies
9h23m

It is quite clear the true purpose of OpenAI is the exact opposite of the name, meaning it is to run ahead of the curve, understand the impact of developments, and ensure they are locked away by any means necessary at least until they can be exploited by groups represented by the board.

Altman was trying to play this game in parallel with commercialization, which brings a whole pile of conflicting groups into the picture. People have utterly underestimated the depth of interest represented by the board.

It is highly amusing how many of the EA cult are on each side, and how both will portray whatever they are pursuing entirely for personal goals as for the greater good when in reality no one has a clue.

dr_dshiv
0 replies
10h59m

Well, it’s not an “AI Pause” but it definitely seems like a speed bump. Of course, now all the money-making energy of Sama will be put into a new entity that, we assume, will be much faster than OpenAI.

Curated datasets will rise in value… that’s the main cost of replicating OpenAI models, in my understanding

downWidOutaFite
0 replies
11h31m

I find it kind of funny that Altman got into this kind of trouble when presumably he is an expert on startup structure after having advised startups for so many years at YC.

cryptoz
0 replies
11h18m

New AI startup announcement in….10 hours? Maybe they’ll take their time announcing, I dunno.

cryptoz
0 replies
9h24m

A take oft-said but rarely discussed properly: where is Google? A whole year after ChatGPT and all we have is a limping-along Bard? Still shocks me. "May you live in interesting times" indeed. I really truly always assumed it was a given that Google would lead the AI future - but here we are: this is the talk of the town, and Google is nowhere to be seen.

chirau
0 replies
9h12m

Good companies are supposed to survive the exit of a CEO. No matter how good that CEO was. If those companies cannot, then perhaps their only value was the CEO and not the product they are offering.

carlosbaraza
0 replies
8h5m

Emmett announced a plan for the next 30 days https://x.com/eshear/status/1726526112019382275?s=46&t=VuyFi...

benkarst
0 replies
11h17m

OpenAI never found a true identity under Sam. Sam pursued hypergrowth (treating it like a YC startup), while many in the company, including Ilya, wanted it to be a research company emphasizing AI safety.

Whether you're Team Sutskever or Team Altman, you can't deny it's been interesting to see extremely talented people fundamentally disagree about what to do with godlike technology.

alex_young
0 replies
11h16m

If you’re a CEO for part of a weekend does it count? Doesn’t sound particularly meaningful.

Zenul_Abidin
0 replies
9h34m

What a clown show. OpenAI writes that they have confidence in the CTO as interim CEO and then immediately dumps her for another person?

TheCaptain4815
0 replies
9h34m

Does Ilya think he can slow down Zuckerberg, Elon, Anthropic, etc

Don’t understand his thought process, especially after all the resignations. Does he really expect OpenAI to maintain its position especially with the threat of Microsoft and pretty much all other investors backing off?

Ironically, he might have been better off keeping Sam since he’d have some say in things. But if llama 4 beats gpt5? Zuck won’t even answer his calls.

SeanAnderson
0 replies
11h29m

:s Am I getting up early to sell my MSFT at the bell?

What a ridiculous end to this chapter of the story.

PeterStuer
0 replies
10h33m

May I hope that after all those that were seduced by the money leave with Sam, Ilya will see the light and realize the only "safe" way to deal with AI is to have it 100% open, free and public?