The whole thing starts to look like a coup orchestrated by Microsoft
Reasoning based on cui bono is a hallmark of conspiracy theories.
The alternative is "these guys don't know what they're doing, even if tens of billions of dollars are at stake".
Which is to say, what's your alternative for a better explanation? (other than the "cui bono?" one, that is).
> these guys don't know what they're doing, even if tens of billions of dollars are at stake
also known as "never attribute to malice that which can be explained by incompetence", which to my gut sounds at least as likely as a cui bono explanation tbh (which is not to be seen as an endorsement of the view that cui bono = conspiracy...)
Everyone always forgets there's two parts to Hanlon's razor:
Never attribute to malice that which is adequately explained by stupidity (1), but don't rule out malice. (2)
Your alternative explanation along with giant egos is pretty plausible.
Haha yeah, the world is just run by silly fools who make silly mistakes (oops, just drafted a law limiting your right to protest - oopsie!) and just random/lucky investments.
Haha yes, we should never look at the incentives behind actions. We all know human decision making is stochastic right?
Possibility is also a hallmark of conspiracy theories, yet we don't reject theories for being possible.
This is an argumentum ad odium fallacy
A few weeks ago my 4yr old Minecraft gamer was playing pretend and said "I'm fighting the biggest boss. THE MICROSOFT BOSS!"
Yeah, M$ hasn't had a good reputation. I finally left Windows this year because I'm afraid of them after Win11.
2023/4 will be the year of the Linux Desktop in retrospect. (or at least my family's religion deemed it)
I was wondering how many lines I'd have to scroll down in the comments to see a "M$" reference here on HackerNews.
They're a $2+ trillion company. They're doing something right.
If you shove a bunch of $100 bills on a thorn tree, it doesn't make it any less dangerous or change its fundamental nature.
Now do oil companies and big pharma.
I also finally left Windows behind. Tired of their shenanigans, tired of them trying to force me into their Microsoft account system (both for Windows and Minecraft).
The idea that Microsoft is going to control OpenAI does not exactly fill me with confidence.
You'd do yourself a favor by not referring to them as "M$". It taints your entire message, true or not.
Yeah, this was a fight between the non-profit and the for-profit branches of OpenAI, and the for-profit won. So now the non-profit OpenAI is essentially dead, the takeover is complete.
The nonprofit side of the venture actually was in worse shape before, because it was completely overwhelmed by for-profit operations. A better way to view this is that the nonprofit side rebelled, has a much smaller footprint than the for-profit venture, and we're about to see if, during the ascendancy of the for-profit activities, the nonprofit side retained enough rights to continue to be relevant in the AI conversation.
As for employees en masse acting publicly disloyal to their employer, usually not a good career move.
Except to many it looks like the board went insane and started firing on themselves. Anyone fleeing that isn't going to be looked on poorly.
As for employees en masse acting publicly disloyal to their employer, usually not a good career move.
Wut?
This is software, not law. The industry is notorious for people jumping ship every couple of years.
Is it? Who are the non-profit and for-profit sides? Sutskever initially got blamed for ousting Altman, but now seems to want him back. Is he changing sides only because he realises how many employees support Altman? Or were he and Altman always on the same side? And in that case, who is on the other side?
Who are the non-profit and for-profit sides?
The only part left of the non-profit was the board; all the employees and operations are in the for-profit entity. Since employees now demand that the board resign, there will be nothing left of the non-profit after this. Puppets aligned with for-profit interests will be installed instead, and the for-profit can act like a regular for-profit without being tied to the old ideals.
Somehow reminds me of Nokia...
https://news.ycombinator.com/item?id=7645482
frik on April 25, 2014:
The Nokia fate will be remembered as hostile takeover. Everything worked out in the favor of Microsoft in the end. Though Windows Phone/Tablet have low market share, a lot lower than expected.
* Stephen Elop the former Microsoft employee (head of the Business Division) and later Nokia CEO with his infamous "Burning Platform" memo: http://en.wikipedia.org/wiki/Stephen_Elop#CEO_of_Nokia
* Some former Nokia employees called it "Elop = hostile takeover of a company for a minimum price through CEO infiltration": https://gizmodo.com/how-nokia-employees-are-reacting-to-the-...
For the record: I don't actually believe that there is an evil Microsoft master plan. I just find it sad that Microsoft takes over cool stuff and inevitably turns it into Microsoft™ stuff or abandons it.
In many ways the analysis by Elop was right, Nokia was in trouble. However his solution wasn't the right one, and Nokia paid for it.
A delusion can persist within parts of a company as long as it goes unchallenged. An agreement, for instance, can seem airtight because it's never tested, then fall apart in court. The OpenAI fallacy was that non-profit principles were guiding the success of the firm, and when the board decided to test that theory, it broke the whole delusion. Had it not fully challenged Altman, the board could have kept the delusion intact long enough to potentially pressure Altman to limit his side-projects or be less profit-minded, since Altman would have had an interest in keeping the delusion intact as well. Now the cat is out of the bag, and people no longer believe that a non-profit that can act at will is a trusted vehicle for the future.
Yes, indeed and that's the real loss here: any chance of governing this properly got blown up by incompetence.
If we ignore the risks and threats of AI for a second, this whole story is actually incredibly funny. So much childish stupidity on display on all sides is just hilarious.
Makes you wonder what the world would look like if, say, the Manhattan Project had been managed the same way.
Well, a younger me working at OpenAI would have resigned at the latest after my colleagues staged a coup against the board out of, in my view, a personality cult. Probably would have resigned after the third CEO was announced. Older me would wait for a new gig to be lined up before resigning, starting the search after CEO number 2 at the latest.
The cycles get faster, though. It took FTX a little longer to go from hottest start-up to crash-and-burn trajectory; OpenAI did it faster. I just hope this helps cool down the ML-sold-as-AI hype a notch.
If we ignore the risks and threats of AI for a second [..] just hope this helps cool down the ML-sold-as-AI hype
If it is just ML sold as AI hype, are you really worried about the threat of AI?
It can be both a hype and a danger. I don't worry much about AGI for now (I stopped insulting Alexa though, just to be sure).
The danger of generative AI is that it disrupts all kinds of things: arts, writers, journalism, propaganda... That threat already exists; the tech no longer being hyped might allow us to properly address that problem.
I stopped insulting Alexa though, just to be sure
Priceless. The modern version of Pascal's wager.
The scary thing is that these incompetents are supposedly the ones to look out for the interests of humanity. It would be funny if it weren't so tragic.
Not that I had any illusions about this being a fig leaf in the first place.
Ignoring "Don't be Ted Faro" to pursue a profit motive is indeed a form of incompetence.
Now the cat is out of the bag, and people no longer believe that a non-profit that can act at will is a trusted vehicle for the future.
And maybe it’s not. The big mistake people make is hearing non-profit and thinking it means a greater degree of morality. It’s the same mistake as assuming everyone who is religious is therefore more moral (worth pointing out that religions are nonprofits as well).
Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers. People are still people, and still have motives; they don't suddenly become more moral when they join a non-profit board. In many ways, removing the motive that has the most direct connection to quantifiable results (profit) can actually make things worse. Anyone who has seen how nonprofits work knows how dysfunctional they can be.
Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers.
They don't make large profits, otherwise they wouldn't be nonprofits. They do have massive revenues and will find ways to spend the money they receive or hoard it internally as much as they can. There are lots of games they can play with the money, but booking profits is one thing they can't do.
They don't make large profits otherwise they wouldn't be nonprofits.
This is a common misunderstanding. Non-profits/501(c)(3) can and often do make profits. 7 of the 10 most profitable hospitals in the U.S. are non-profits[1]. Non-profits can't funnel profits directly back to owners, the way other corporations can (such as when dividends are distributed). But they still make profits.
But that's beside the point. Even in places that don't make profits, there are still plenty of personal interests at play.
[1] https://www.nytimes.com/2020/02/20/opinion/nonprofit-hospita...
I've worked with a lot of non-profits, especially with the upper management. Based on this experience I am mostly convinced that people being motivated by a desire for making money results in far better outcomes/working environment/decision-making than people being motivated by ego, power, and social status, which is basically always what you eventually end up with in any non-profit.
pressure Altman to limit his side-projects
People keep talking about this. That was never going to happen. Look at Sam Altman's career: he's all about startups and building companies. Moreover, I can't imagine he would have agreed to sign any kind of contract with OpenAI that required exclusivity. Know who you're hiring; know why you're hiring them. His "side-projects" could have been hugely beneficial to them over the long term.
His "side-projects" could have been hugely beneficial to them over the long term.
How can you make a claim like this when, right or wrong, Sam's independence is literally, currently, tanking the company? How could allowing Sam to do what he wants benefit OpenAI, the non-profit entity?
How could allowing Sam to do what he wants benefit OpenAI, the non-profit entity?
Let's take personalities out of it and see if it makes more sense:
How could a new supply of highly optimized, lower-cost AI hardware benefit OpenAI?
Sam's independence is literally, currently, tanking the company?
Honestly, I think they did that to themselves.
Why does Microsoft have full rights to ChatGPT IP? Where did you get that from? Source?
The source for that (https://archive.ph/OONbb - WSJ), as far as I can understand, made no claim that MS owns IP to GPT, only that they have access to its weights and code.
Exactly. The generalities, much less the details, of what MS actually got in the deal are not public.
Exactly. The generalities, much less the details, of the deal are not public.
That was a seriously dumb move on the part of OpenAI
Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits
To be clear, these don't go away. They remain an asset of OpenAI's, and could help them continue their research for a few years.
So you're saying Microsoft doesn't have any type of change in control language with these credits? That's... hard to believe
you're saying Microsoft doesn't have any type of change in control language with these credits? That's... hard to believe
Almost certainly not. Remember, Microsoft wasn’t the sole investor. Reneging on those credits would be akin to a bank investing in a start-up, requiring they deposit the proceeds with them, and then freezing them out.
"Cluster is at capacity. Workload will be scheduled as capacity permits." If the credits are considered an asset, totally possible to devalue them while staying within the bounds of the contractual agreement. Failing that, wait until OpenAI exhausts their cash reserves for them to challenge in court.
Assuming OpenAI still exists next week, right? If nearly all employees — including Ilya apparently — quit to join Microsoft then they may not be using much of the Azure credits.
The board will be ousted, the new board will instruct the interim CEO to hire back Sam et al., Nadella will let them go for a small favor, happy ending.
Who has the power to oust the non-profit's board? They may well manage to pressure them into leaving, but I don't think they have any direct power over it.
That's definitely still within the realm of the possible.
Board will be ousted, but the ship has sailed on Sam and Greg coming back.
I got the impression that the most valuable models were not published. Would Microsoft have access to those too according to their contract?
Don't they need access to the models to use them for Bing?
I would consider those models "published." The models I had in mind are the first attempts at training GPT5, possibly the model trained without mention of consciousness and the rest of the safety work.
There are also all the questions around RLHF, and the pipelines built around that.
Not necessarily; it would just be RAG: use the standard Bing search engine to retrieve the top-k candidates, and pass those to the OpenAI API in a prompt.
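A minimal sketch of that retrieve-then-prompt flow, with the search backend stubbed out. All names here are illustrative; a real setup would call the Bing Search and OpenAI APIs instead of the toy functions below.

```python
def _toks(s):
    # Naive tokenizer: lowercase and strip basic punctuation.
    return set(s.lower().replace("?", "").replace(".", "").split())

def search_top_k(query, corpus, k=3):
    """Toy stand-in for a search engine: rank docs by term overlap with the query."""
    q = _toks(query)
    ranked = sorted(corpus, key=lambda doc: -len(q & _toks(doc)))
    return ranked[:k]

def build_rag_prompt(query, snippets):
    """Fold the retrieved snippets into one grounded prompt for the model."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "Bing is a search engine operated by Microsoft.",
    "RAG combines retrieval with a generative model.",
    "The prompt carries the retrieved context to the model.",
]
prompt = build_rag_prompt("What is RAG?", search_top_k("What is RAG?", corpus, k=2))
# `prompt` would then be sent as the user message in a chat-completion call.
```

The point is that the generator never needs private index access: retrieval and generation only meet inside the prompt string.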
Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits
To be clear, these are still an asset OpenAI holds. It should at least let them continue doing research for a few years.
They're GPUs, right? Time to mine some niche cryptos to cash out the Azure credits..
I would be shocked if the Azure credits didn't come with conditions on what they can be used for. At a bare minimum, there's likely the requirement that they be used for supporting AI research.
But how much of that research will be for the non-profit mission? The entire non-profit leadership got cleared out and will get replaced by for-profit puppets, there is nobody left to defend the non-profit ideals they ought to have.
Deservedly or not, Satya Nadella will look like a genius in the aftermath. He has and will continue to leverage this situation to strengthen MSFT's position. Is there word of any other competitors attempting to capitalize here? Trying to poach talent? Anything...
After Balmer I couldn’t have imagined such competency from Microsoft.
After Ballmer, competency can only be higher at Microsoft.
OpenAI's upper ceiling in for-profit hands is basically Microsoft-tier dominance of tech in the 1990s, creating the next uber billionaire like Gates. If they get this because of an OpenAI fumble it could be one of the most fortunate situations in business history. Vegas type odds.
A good example of how just having your foot in the door creates serendipitous opportunity in life.
A good example of how just having your foot in the door creates serendipitous opportunity in life.
Sounds like Altman's biography.
Just a thought.... Wouldn't one of the board members be like "If you screw with us any further, we're releasing GPT to the public"?
I'm wondering why that option hasn't been used yet.
Which of the remaining board members could credibly make that threat?
Can the OpenAI board renege on the deal with msft?
Watch Satya also save the research arm by making Karpathy or Ilya the head of Microsoft Research
This is wrong. Microsoft has no such rights and its license comes with restrictions, per the cited primary source, meaning a fork would require a very careful approach.
https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...
More importantly to me, I think generating synthetic data is OpenAI's secret sauce (no evidence I am aware of), and they need access to GPT-4 weights to train GPT-5.
Don't they have a more limited license to use the IP rather than full rights? (The stratechery post links to a paywalled wsj article for the claim so I couldn't confirm)
Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.
What? That's even better played by Microsoft so than I'd originally anticipated. Take the IP, starve the current incarnation of OpenAI of compute credits and roll out their own thing
Remarkably, the letter’s signees include Ilya Sutskever, the company’s CTO who has been blamed for coordinating the boardroom coup against Altman in the first place.
What in the world is happening at OpenAI?
Sounds like a classic case of FAFO to me.
Who fucked around and who found out, exactly??
We the unsuspecting public?
Ilya FA
Ilya FO (in process)
He didn't strike me as the type to brainlessly FA.
Yea, check out his presentations on YT. Incredible talent.
What strikes me is that he wrote the regretful-participation tweet only after witnessing the blowback. He should have written it alongside the initial news, and clearly explained it to employees. This is not a smart way to conduct board oversight.
500 employees are not happy. I’m siding with the employees (esp. early hires); they deserve to be part of a once-in-a-lifetime company like OpenAI after working there for years.
"if you value intelligence above all other human qualities, you’re gonna have a bad time"
The thing about being really smart is that you can find incredible gambles.
He could be an expert in some areas but in others… not so much.
GPT-4 Turbo took control of the startup and fcks around ...
Adam D'Angelo?
If it weren’t so unbelievable, I’d almost accuse them of orchestrating all this to sell to Microsoft without the regulatory scrutiny.
It’s like they distressed the company to make an acquisition one of mercy instead of aggression, knowing they already had their buyer lined up.
sell to Microsoft without the regulatory scrutiny
I keep hearing this, principally from Silicon Valley. It’s based on nothing. Of course this will receive both Congressional and regulatory scrutiny. (Microsoft is also likely to be sued by OpenAI’s corporate entity, on behalf of its outside investors, as are Altman and anyone who jumps ship.)
From what I heard non-compete clauses are unenforceable in California, so what exactly are they suing for?
I'm pretty sure Satya consulted with an army of lawyers over the weekend regarding the potential issue.
non-compete clauses are unenforceable in California, so what exactly are they suing for?
Part of suing is to ensure compliance with agreements. There is a lot of IP that Microsoft may not have a license to that these employees have. There are also legitimate questions about conflicts of interests, particularly with a former executive, et cetera.
pretty sure Satya consulted with an army of lawyers over the weekend regarding the potential issue
Sure. I'm not suggesting anyone did anything illegal. Just that it will be litigated over from every direction.
Yeah, just like the suit Microsoft is in with windows 11 anticompetitive practices, right?
Microsoft can buy the company in parts, as it “fails” in a long drawn out process. By the end, whatever they are buying will have little value, as it will already be outdated.
I haven't seen brand suicide like this since EM dumped Twitter for X!!! (4 months ago)
It's nothing like it. What common people use is ChatGPT; many of them have never even heard of OpenAI, let alone who sits on the board. And their core offering is more popular than ever. With Twitter, Musk started to damage the product itself, step by step. As far as I can tell, ChatGPT continues to work just fine, as opposed to X.
Yeah, I also started out believing this must be a matter of principle between Ilya and Sam. But no, this smells more and more like a corporate clusterfuck, and Ilya was just an easy-to-manipulate puppet. This alleged statement from the board that destroying the company is an acceptable outcome is completely insane, but somewhat understandable when combined with the fact that half the board has some serious conflicts of interest going on.
(Rips off mask) Wow, it was the Quora CEO all along!
So this was never about safety or any such bullshit. It’s because the GPTs store was in direct competition with Poe!?
Imagine letting the CEO of a simple question and answer site that blurs all of its content onto your board
Alongside luminaries like "the wife of the guy who played Robin in the Batman movie".
lol is that a real thing?
And that he might be the least incompetent of them all.
Absolutely mindboggling that Adam is on the board.
Poe is in direct competition with the GPTs and the "revenue sharing" plan that Sam released on Dev Day.
The Poe platform has its "Creators" build their own bots and monetize them, on top of OpenAI and other models.
Even more interesting considering that Elon left OpenAI’s board when Tesla started developing Autopilot as it was seen as a conflict of interest.
Ilya is much less active on Twitter than the others. The rumors that blamed him emerged and spread like wildfire and he did nothing to stop it because he probably only checks Twitter once a week.
One would think that he would be on Twitter this week.
Why? To entertain bystanders like us?
Looks like he found his Twitter password: https://x.com/ilyasut/status/1726590052392956028?s=20
One would think that he would be on Twitter this week.
Or maybe _this_ week he would need to spend his time doing something productive.
He says he regrets his action, so he's not blameless, and it wouldn't have been possible for three of the six board members to oust Brockman and Altman without his vote. My bet (entirely conjecture) is that Ilya now realizes the other three will refuse to leave their board seats even if it means the company melts to the ground.
not this week, trust me
There must be something going on which is not in the public domain.
What an utterly bizarre turn of events, and to have it all played out in public.
A $90 billion valuation at stake too!
I wonder how many people are on a path for a $250K/year salary instead of $30M in the bank now.
Microsoft can easily afford to offer them $30M of options each if they continue to ship such important products. That's only $15B for 500 staff.
Microsoft has a $2.75T market value and over $140B of cash.
Microsoft can easily afford to offer them $30M of options each
But it doesn’t have to. And the politics suggest it very likely won’t.
It looks like about 505.
The signatories want Bret Taylor and Will Hurd running the new Board, apparently.
We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.
Googling Will Hurd only turns up a Republican politician with a history at the CIA. Is that the right guy? Can't be.
Please not another Eric Schmidt NSA shill running the show. On the other hand, it was inevitable: either the government controls the most important companies secretly, as in China, or openly, as in the US.
What options are left other than Adam D'Angelo orchestrated the downfall of a competitor to Poe?
"I am NOT a BELLBOY!"
Watching all this drama unfold in the public is unprecedented.
I guess it makes sense. There has never been a company like OpenAI, in terms of governance and product, so naturally their drama leads us into uncharted territory.
I guess this is the Open in OpenAI, eh?
Absolutely bonkers.
The screenwriters are overdoing it at this point.
Understandable; they were on strike for a long time. Now that they're back, they're itching to release all the good stuff.
Probably trying to shift the blame to the other three board members. It could be true to some degree. No matter what, it's clear to the public that they don't have the competency to sit on any board.
There's definitely more to this than just Ilya vs Sam.
Did it originally say CTO? Ilya is not CTO and it's been corrected now.
That settles it it has to be the AGI orchestrating it all.
Sexual misconduct. Ilya protects Sam by not letting this spiral out in media.
Maybe they found AGI and it is now controlling the board #andsoitbegins.
None of it makes sense to me now. Who is really behind this? How did they pull it off? Why did they do it? And why do it so suddenly, in such a terribly disorganized way?
If I may paraphrase Churchill: This has become a bit of a riddle wrapped in a mystery inside an enigma.
Ok... so this is not the scenario any of us were imagining? Ilya S vs Altman isn't what went down?
JFC.
What in the world is happening at OpenAI?
Well, we don't know.
What we do know is that "coordinating the boardroom coup against Altman" is rumor and speculation about something we don't actually know anything about.
At this point either pretty much all the speculation here and on Twitter was wrong, or they've threatened to kneecap him.
It's extraordinary to watch, I'll say that much.
I still think 'Altman's Basilisk' is a thing: I think somewhere in this mess there's actions taken to wrest control of an AI from somebody, probably Altman.
Altman's Basilisk also represents the idea that if a charismatic and flawed person (and everything I've seen, including the adulation, suggests Altman is that type of person from that type of background) trains an AI in their image, they can induce their own characteristics in the AI. Therefore, if you're a paranoid with a persecution complex and a zero-sum perspective on things, you can through training induce an AI to also have those characteristics, which may well persist as the AI 'takes off' and reaches superhuman intelligence.
This is not unlike humans (perhaps including Altman) experiencing and perpetuating trauma as children, and then growing to adulthood and gaining greatly expanded intelligence that is heavily, even overwhelmingly, conditioned by those formative axioms that were unquestioned in childhood.
This was handled so very, very poorly. Frankly it's looking like Microsoft is going to come out of this better than anyone, especially if they end up getting almost 500 new AI staff out of it (staff that already function well as a team).
In their letter, the OpenAI staff threaten to join Altman at Microsoft. “Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join," they write.
In hindsight, firing Sam was a self-destructive gamble by the OpenAI board. Initially it seemed Sam may have committed some inexcusable financial crime, but it doesn't look that way anymore.
Irony is that if a significant portion of OpenAI staff opt to join Microsoft, then Microsoft essentially killed their own $13B investment in OpenAI earlier this year. Better than acquiring for $80B+ I suppose.
There's acquihires and then I guess there's acquifishing where you just gut the company you're after like a fish and hire away everyone without bothering to buy the company. There's probably a better portmanteau. I seriously doubt Microsoft is going to make people whole by granting equivalent RSUs, so you have to wonder what else is going on that so many seem ready to just up and leave some very large potential paydays.
How about: acquimire
one thing for sure this is one hell of a quagmire /s
I feel like that's giving them too much credit; this is more of a flukuisition. Being in the right place at the right time when your acquisition target implodes.
>, then Microsoft essentially killed their own $13B investment in OpenAI earlier this year.
For investment deals of that magnitude, Microsoft probably did not literally wire all $13 billion to OpenAI's bank account the day the deal was announced.
More likely, the $10b-to-$13b headline-grabbing number is a total estimated figure that represents a sum of future incremental investments (Azure usage credits, etc.) based on agreed performance milestones for OpenAI.
So, if OpenAI doesn't achieve certain milestones (which can be more difficult if a bunch of their employees defect and follow Sam & Greg out the door) ... then Microsoft doesn't really "lose $10b".
If the change in $MSFT pre-open market cap (which has given up its gains at the time of writing, but still) of hundreds of billions of dollars is anything to go by, shareholders probably see this as spending a dime to get a dollar.
Msft/Amazon/Google would light 13 billion on fire to acquire OpenAI in a heartbeat.
(but also a good chunk of the 13bn was pre-committed Azure compute credits, which kind of flow back to the company anyway).
They acquired Activision for 69B recently.
While Activision makes much more money I imagine, acquiring a whole division of productive, _loyal_ staffers that work well together on something as important as AI is cheap for 13B.
Some background: https://sl.bing.net/dEMu3xBWZDE
In hindsight firing Sam was a self-destructing gamble by the OpenAI board
Surely the really self-destructive gamble was hiring him? He's a venture capitalist with weird beliefs about AI and privacy; why would it be a good idea to put him in charge of a notional non-profit that was trying to safely advance the state of the art in artificial intelligence?
Microsoft is going to come out of this better than anyone
Exactly. I'm curious about how much of this was planned vs emergent. I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists.
Equally, it's not entirely unpredictable. MS is the easiest to read: their moves to date have been really clear in wanting to be the primary commercial beneficiary of OAI's work.
OAI itself is less transparent from the outside. There's a tension between the "humanity first" mantra that drove its inception and the increasingly "commercial exploitation first" line that Altman was evidently driving.
As things stand, the outcome is pretty clear: if the choice was between humanity and commercial gain, the latter appears to have won.
"it would take an extraordinary mind to foresee all the possible twists."
How far along were they on GPT-5?
"I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists."
From our outsider, uninformed perspective, yes. But if you know more sometimes these things become completely plannable.
I'm not saying this is the actual explanation because it probably isn't. But suppose OpenAI was facing bankruptcy, but they weren't telling anyone and nobody external knew. This allows more complicated planning for various contingencies by the people that know because they know they can exclude a lot of possibilities from their planning, meaning it's a simpler situation for them than meets the (external) eye.
Perhaps ironically, the more complicated these gyrations become, the more convinced I become there's probably a simple explanation. But it's one that is being hidden, and people don't generally hide things for no reason. I don't know what it is. I don't even know what category of thing it is. I haven't even been closely following the HN coverage, honestly. But it's probably unflattering to somebody.
(Included in that relatively simple explanation would be some sort of coup attempt that has subsequently failed. Those things happen. I'm not saying whatever plan is being enacted is going off without a hitch. I'm just saying there may well be an internal explanation that is still much simpler than the external gyrations would suggest.)
I think the board needs to come clean on why they fired Sam Altman if they are going to weather this storm.
They might not be able to if the legal department is involved. Both in the case of maybe-pending legal issues, and because even rich people get employment protections that make companies wary about giving reasons.
"Even rich people?" - especially rich people, as they are the ones who can afford to use laws to protect themselves.
“Employees” probably means “engineers” in this case. Which is a wide majority of OpenAI staff, I’m sure.
I'm assuming it's a combination of researchers, data scientists, mlops engineers, and developers. There are a lot of different areas of expertise that come into building these models.
That's because they're the only adult in the room and mature company with mature management. Boring, I know. But sometimes experience actually pays off.
Frankly it's looking like Microsoft is going to come out of this better than anyone
Sounds like that's what someone wants and is trying to obfuscate what's going on behind the scenes.
If Windows 11 shows us anything about Microsoft's monopolistic behavior, having them hold the ring of power for LLMs makes the future of humanity look very bleak.
it's looking like Microsoft is going to come out of this better than anyone
Didn't follow this closely, but isn't that implicitly what an ex-CEO could have possibly been accused of, i.e. not acting in the company's best interest but someone else's? Not unprecedented either, e.g. the case of Nokia/Elop.
But is the door open to everyone of the 500 staff? That is a lot, and Microsoft may not need them all.
At this point, I think it’s absolutely clear no one has any idea what happened. Every speculation, no matter how sophisticated, has been wrong.
It’s time to take a breath, step back, and wait until someone from OpenAI says something substantial.
Absolutely agreed
This is the point where I've realized I just have to wait until history is written, rather than trying to follow this in real time.
The situation is too convoluted, and too many people are playing the media to try to advance their version of the narrative.
When there is enough distance from the situation for a proper historical retrospective to be written, I look forward to getting a better view of what actually happened.
Hah. I think you may be duped by history - the neat logical accounts are often fictions - they explain what was inexplicable with fabrications.
Studying revolutions is revealing - they are rarely the inevitable product of historical forces, executed to the plans of strategically minded players... instead they are often accidental and inexplicable. Those credited as their masterminds were trying to stop them. Rather than inevitable, there was often progress in the opposite direction, making people feel the likelihood was decreasing. The confusing, paradoxical mess of great events doesn't make for a good story to tell others though.
It's a pretty interesting point to think about. Post-hoc explanations are clean, neat, and may or may not have been prepared by someone with a particular interpretation of events. While real-time, there's too much happening, too quickly, for any one person to really have a firm grasp on the entire situation.
On our present stage there is no director, no stage manager; the set is on fire. There are multiple actors - with more showing up by the minute - some of whom were working off a script that not everyone has seen, and that is now being rewritten on the fly, while others don't have any kind of script at all. They were sent for; they have appeared to take their place in the proceedings with no real understanding of what those are, like Rosencrantz and Guildenstern.
This is kind of what the end thesis of War and Peace was like - there's no possible way that Napoleon could actually have known what was happening everywhere on the battlefield - by the time he learned something had happened, events on the scene had already advanced well past it; and the local commanders had no good understanding of the overall situation, they could only play their bit parts. And in time, these threads of ignorance wove a tale of a Great Victory, won by the Great Man Himself.
That's not how history works. What you read are the tellings of the people involved, and those aren't all facts but how they perceived the situation in retrospect. Read the biographies of different people telling the same event and you will notice that they are never quite the same, usually leaving the unfavourable bits out.
Written history is usually a simplification that has lost a lot of the context and nuance from it.
I don't need to follow in real time, but a lot of the context and nuance can be clearly understood in the moment, so it still helps to follow along even if that means lagging on the input.
3 board members (joined with Ilya Sutskever, who is publicly defecting now) found themselves in a position to take over what used to be a 9-member board, and took full control of OpenAI and the subsidiary previously worth $90 billion.
Speculation is just on motivation, the facts are easy to establish.
3 board members (joined with Ilya Sutskever, who is publicly defecting now) found themselves in a position to take over what used to be a 9-member board, and took full control of OpenAI and the subsidiary previously worth $90 billion.
er...what does that even mean? how can a board "take full control" of the thing they are the board for? they already have full control.
the actual facts are that the board, by majority vote, sacked the CEO and kicked someone else off the board.
then a lot of other stuff happened that's still becoming clear.
I think the post is very clear.
The subject in that sentence that takes full control is “3 members" not "board".
The board has control, but who controls the board changes based on time and circumstances.
I wonder if AGI took over the humans and guided their actions.
It may well be that this is artificial and general, but I rather doubt it is intelligent.
Like the new tom cruise movie?
Makes sense in a conspiracy theory mindset. AGI takes over, crashed $MSFT, buys calls on $MSFT, then this morning the markets go up when Sam & co join MSFT and the AGI has tons of money to spend.
I agree. Although the story is fascinating in the way that a car crash is fascinating, it's clear that it's going to be very difficult to get any kind of objective understanding in real-time.
This breathless real-time speculation may be fun, but now that social media amplifies the tiniest fart such that it has global reach, I feel like it just reinforces the general zeitgeist of "Oh, what the hell NOW? Everything is on fire." It's not like there's anything that we peasants can do to either influence the outcome, or adjust our own lives to accommodate the eventual reality.
I will say, though, that there is going to be an absolute banger of a book for Kara Swisher to write, once the dust has settled.
Why are you gaslighting me? I never did anything but click a link.
Just made it 100% certain that the majority of AI staff is deluded and lacks judgment. Not a good look for AI safety.
We can certainly believe Ilya wasn't behind it if he joins them at Microsoft. How about that? By his own admission he was involved, and he's one of 4 people on the board. While he has called on the board to resign, he has seemingly not resigned himself, which is the one thing he could certainly control.
This suggestion was already made on Saturday and again on Sunday. However, this approach does not enhance popcorn consumption... Show must go on ...
Likely Ilya and Adam swayed Helen and Tasha. Booted Sam out. Greg voluntarily resigned.
Ilya (at the urging of Satya and his colleagues, including Mira) wanted to reinstate Sam, but the deal fell through, with the board outvoting Sutskever 3 to 1. With Mira defecting, Adam got his mate Emmett to steady the ship, but things went nuclear.
I agree. I'm already sick of reading through political hit pieces, exaggeration, biased speculations and unfounded bold claims. This all just turned into a kind of TV sports, where you pick a side and fight.
Everything on social media (and general news media) pointed to Ilya instigating the coup. Maybe Ilya was never the instigator, maybe it was Adam + Helen + Tasha, Greg backed Sam and was shown the door, and Ilya was on the fence, and perhaps against better judgment, due to his own ideological beliefs, or just from pure fear of losing something beautiful he helped create, under immense pressure, decided to back the board?
At this point, after almost 3 days of non-stop drama, we still have no clue what has happened at a 700-employee company with millions of people watching. Regardless of the outcome, the art of keeping secrets at OpenAI is truly far beyond human capability!
Ilya posted this on Twitter:
"I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."
Trying to put the toothpaste back in the tube. I seriously doubt this will work out for him. He has to be the smartest stupid person that the world has seen.
Ilya is hard to replace, and no one thinks of him as a political animal. He's a researcher first and foremost. I don't think he needs anything more than being contrite for a single decision made during a heated meeting. Sam Altman and the rest of the leadership team haven't got where they are by holding petty grudges.
He doesn't owe us, the public, anything, but I would love to understand his point of view during the whole thing. I really appreciate how he is careful with words and thorough when exposing his reasoning.
Just because he's not a political animal doesn't mean he's immune to politics. I've seen 'irreplaceable' apolitical technical leaders become the reason for schisms in organizations, thinking they can lever their technical knowledge over the rest of the company, only to watch them get pushed aside and out.
Oh that's definitely common. I've seen it many times and it's ugly.
I don't think this is what Ilya is trying to do. His tweet is clearly about preserving the organization because he sees the structure itself as helpful, beyond his role in it.
For someone who isn't a political animal he made some pretty powerful political moves.
researchers and academics are political within their organization regardless of whether or not they claim to be or are aware of it.
ignorance of the political impact/influence is not a strength but a weakness, just like a baby holding a laser/gun.
He seriously underestimated how much rank-and-file employees want $$$ over an idealistic vision (and Sam Altman is $$$), but if he backs down now, he will pretty much lose all credibility as a decision maker for the company.
That seems rather harsh. We know he’s not stupid, and you’re clearly being emotional. I’d venture he probably made the dumbest possible move a smart person could make while also in a very emotional state. The lesson on the table for everyone is that big decisions made in an emotional state do not often work out well.
At least he consistently works towards whatever he currently believes in. Though he could work on consistency in beliefs.
Does that include the person who stole self-driving IP from Waymo, set up a company with stolen IP, and tried to sell the company to Uber?
So this was a completely unnecessary cock-up -- still ongoing. Without Ilya's vote this would not even be a thing. This is really comical, a Naked Gun type mess.
Ilya Sutskever is one of the best in the AI research, but everything he and others do related to AI alignment turns into shit without substance.
It makes me wonder if AI alignment is possible even in theory, and if it is, maybe it's a bad idea.
We can’t even get people aligned. Thinking we can control a super intelligence seems kind of silly.
"I deeply regret my participation in the board's actions."
Wasn't he supposed to be the instigator? That makes it sound like he was playing a less active role than claimed.
To be fair, lots of people called this pretty early on; it's just that very few people were paying attention, and instead chose to accommodate the spin and immediately went "following the money", a.k.a. blaming Microsoft, et al. The most surprising aspect of it all is the complete lack of criticism towards US authorities! We were shown this exciting play, as old as the world itself: a genius scientist being exploited politically by means of pride and envy. The brave board of "totally independent" NGO patriots (one of whom is referred to, by insiders, as wielding influence comparable to a USAF colonel [1]) brand themselves as a new regime that will return OpenAI to its former moral and ethical glory, so the first thing they were forced to do was get rid of the main greedy capitalist, Altman; he's obviously the great seducer who brought their blameless organisation down by turning it into this horrible money-making machine. So they were going to put in his place their nominal ideological leader Sutskever, commonly referred to in various public communications as a "true believer". What does he believe in? In the coming of a literal superpower, and a quite particular one at that; in this case we are talking about AGI. The belief structure here is remarkably interlinked, and this can be seen by evaluating side-channel discourse from adjacent "believers", see [2]. Roughly speaking, and based on my experience in this kind of analysis (please give me some leeway, as English is not my native language), what I see are all the telltale markers of operational work; we see security officers, we see their methods of work. If you are a hammer, everything around you looks like a nail.
If you are an officer in the Clandestine Service, or in any of the dozens of sections across the counterintelligence function overseeing the IT sector, then you clearly understand that all these AI startups are, in fact, developing weapons and pose a direct threat to the strategic interests, slash, national security of the United States. The American security apparatus has a word for such elements: "terrorist". I was taught to look up when assessing actions of the Americans, i.e. more often than not we expect nothing but the highest level of professionalism, leadership, and analytical prowess. I personally struggle to see how running parasitic virtual organisations in the middle of downtown San Francisco, and re-shuffling agent networks in key AI enterprises as blatantly as we saw over the weekend, is supposed to inspire confidence. Thus, in a tech startup in the middle of San Francisco, where it would seem there shouldn't be any terrorists, or otherwise ideologues in orange rags, they sit on boards and stage palace coups. Horrible! I believe that US state-side counterintelligence shouldn't meddle in natural business processes in the US, and should instead make its policy on this stuff crystal clear using normal, legal means. Let's put a stop to this soldier mindset where you fear anything you can't understand. AI is not a weapon, and AI startups are not terrorist cells for them to run.
[1]: https://news.ycombinator.com/item?id=38330819 [2]: https://nitter.net/jeremyphoward/status/1725712220955586899
It takes a lot of courage to do so after all this.
From outside, it looks like a Microsoft coup to take over the company all together.
Never assume someone is winning a game of 5D chess when someone else could just be losing a game of checkers.
what does that even mean?
I think it means don't attribute to intelligence what could be easily explained as stupidity?
OpenAI may just be a couple having an angry fight, and M$ is just the neighbor with cash happy to buy all the stuff the angry wife is throwing out for pennies on the dollar.
In other words - it doesn’t have to be someone’s genius plan, it could have just been an unintelligent mistake
Hanlon's razor, basically.
The most plausible scenario here is that the board is comprised of people lacking in foresight who did something stupid. A lot of people are generating a 5D chess plot orchestrated by Microsoft in their heads.
In this case, it means that what happened is: “OpenAI board is incompetent”, instead of “Microsoft planned this to take over the company.”
A conspiracy like the one proposed would be basically impossible to coordinate yet keep secret, especially considering the board members might lose their seats and their own market value.
"Never attribute to malice that which is adequately explained by stupidity"
He is saying that what might seem like a sophisticated, well-planned strategy could actually be just the outcome of basic errors or poor decisions made by someone else.
I highly doubt this was a coordinated plan from the start by Microsoft. I think what we're seeing here is a seasoned team of executives (Microsoft) eating a naive and inexperienced board alive after the latter fumbled.
Nah, It's just good to be the entity with billions of dollars to deploy when things are chaotic.
I think it was Mark Zuckerberg that described (pre-Elon) Twitter as a clown car that fell into a gold mine.
Reminds me a bit of the Open AI board. Most of them I'd never heard of either.
You know, this makes early Google's moves around its IPO look like genius in retrospect. In that case, brilliant but inexperienced founders majorly lucked out with the thing created... but were also smart enough to bring in Eric Schmidt and others with deeper tech industry business experience for "adult supervision" exactly in order to deal with this kind of thing. And they gave tutelage to L&S to help them establish sane corporate practices while still sticking to the original (at the time unorthodox) values that L&S had in mind.
For OpenAI... Altman (and formerly Musk) were not that adult supervision. Nor is the board they ended up with. They needed some people on that board and in the company to keep things sane while cherishing the (supposed) original vision.
(Now, of course that original Google vision is just laughable as Sundar and Ruth have completely eviscerated what was left of it, but whatever)
> but were also smart enough to bring in Eric Schmidt and others with deeper tech industry business experience for "adult supervision"
> (Now, of course that original Google vision is just laughable as Sundar and Ruth have completely eviscerated what was left of it, but whatever)
Those two things happening one after another is not coincidence.
I'm not sure I agree. Having worked there through this transition, I'd say this: L&S just seem to have lost interest in running a mature company, so their "vision" meant nothing; Eric Schmidt basically moved on; and then, after flailing about for a bit (the G+ stuff being the worst of it), they just handed the reins to Ruth & Sundar to basically turn it into a giant stock-price-pumping machine.
G+ was handled so poorly, and the worst of it was that they already had both Google Wave (in the US) and Orkut (mostly outside US) which both had significant traction and could’ve easily been massaged into something to rival Facebook.
Easily…anywhere except at a megacorp where a privacy review takes months and you can expect to make about a quarter worth of progress a year.
All successful companies succeed despite themselves.
Working in consultancies/agencies for the last 15 years, I see this time and time again. Fucking dart-throwing monkeys making money hand over fist despite their best intentions to lose it all.
This makes the old twitter look like the Wehrmacht in comparison.
The old twitter did not decide to randomly detonate themselves when they were worth $80 billion. In fact they found a sucker to sell to, right before the market crashed on perpetually loss-making companies like twitter.
The benefit of having incentive-aligned board, founders, and execs.
Even the clown car isn't this bad.
I often hear that about the OpenAI board, but in general, do people here know most board members of the big/darling tech companies? Outside of some of the co-founders, I don't know anyone.
That's a confused heuristic. It could just as easily mean they keep their heads down and do good work for the kind of people whose attention actually matters for their future employment prospects.
Do whatever you want but don't break the API or I will go homeless
Just create an Azure OpenAI endpoint. Pretty sure that's not run by OpenAI itself.
Azure OpenAI is always a bit behind, e.g. they don't have GPT-4 turbo yet
They do actually, https://learn.microsoft.com/en-us/azure/ai-services/openai/w...
Hmmm, just what are you willing to do for API access?
At this point nothing would surprise me anymore. Just waiting for Netflix adaption.
You and 5000 other recent founders in tech.
I feel seen
brew install llm
How likely is it that the API will change (from specs, to pricing, to being broken)? I am about to finish some freelance work that uses the GPT API, and it will be a pain in the ass if we have to switch or find an alternative (even creating a custom endpoint on Azure...).
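For what it's worth, one way to soften that risk is to keep all provider-specific details behind a single config factory, so switching from the OpenAI API to an Azure deployment (or any other backend) is a change in one place rather than a rewrite. A minimal sketch in plain Python; the endpoint and deployment names (`my-resource`, `my-gpt4`) are hypothetical placeholders, not real values:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class LLMConfig:
    """Provider-neutral settings for a chat-completion backend."""
    base_url: str
    api_key: str
    model: str


def make_config(provider: str, api_key: str) -> LLMConfig:
    """Return connection settings for the chosen provider.

    The URLs and names below are illustrative only; substitute your
    own resource/deployment names when wiring this up for real.
    """
    if provider == "openai":
        return LLMConfig(
            base_url="https://api.openai.com/v1",
            api_key=api_key,
            model="gpt-4",
        )
    if provider == "azure":
        # Azure routes requests by *deployment name*, which you choose
        # when deploying the model to your Azure OpenAI resource.
        return LLMConfig(
            base_url="https://my-resource.openai.azure.com"
                     "/openai/deployments/my-gpt4",
            api_key=api_key,
            model="my-gpt4",
        )
    raise ValueError(f"unknown provider: {provider}")
```

The rest of the code only ever sees an `LLMConfig`, so a pricing or endpoint change means editing one function instead of hunting through every call site.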
the letter’s signees include Ilya Sutskever
_Big sigh_.
For people who appreciate some vintage British comedy:
https://www.youtube.com/watch?v=Gpc5_3B5xdk
The whole thing is just ridiculous. How can you be senior leadership and not have a clear idea of what you want? And what the staff want?
Knew it had to be Benny Hill before I clicked. Yackty-sax indeed.
Indeed. I wonder how it came to be the anthem of incompetence.
I was thinking more the Curb Your Enthusiasm theme song.
Sounds like a CYA move after being under pressure from the team at large.
Well, that escalated very quickly, and this is perhaps the most dysfunctional startup I have ever seen.
All due to one word: Greed.
And the ironic part of the greed is that there seem to be far more (at least potential) earnings than needed to spread around and make everyone there wealthy enough never to have to think about money again.
Yet they start this kind of nonsense.
Not exactly focusing on building a great system or product.
I assumed that, due to how the whole company/non-profit was structured, employees didn't really get any actual equity?
What? Greed is the backbone of our startup landscape. As soon as you get VC backing all anyone cares about is a big payday. This is interesting because there is something going on beyond the typical pure greed shitshow.
Perhaps it was just that original intention for openai to be a nonprofit, but at some point somewhere it wasn't pure $ and that's what makes it interesting. Also more tragic because now it looks like it's heading straight to a for profit company one way or another.
All due to one word: Greed.
I would say it's due to unconventional not-battle-tested governance.
I don’t know about OpenAI, but Ive been in a few similar business situations where everyone is in a good situation and greed leads to an almighty blowup. It’s really remarkable to see.
Well, great to see that the potentially dangerous future of AGI is in good hands.
They will never discover AGI with this approach because 1) they are brute forcing the results and 2) none of this is actually science.
Can you explain for us not up to date with AI developments?
Search YouTube for videos where Chomsky talks about AI. Current approaches to AI do not even attempt to understand cognition.
1) It may be possible to brute-force a model into something that sufficiently resembles AGI for most use-cases (at least well enough to merit concern about who controls it) 2) Deep learning has never been terribly scientific, but here we are.
Poor little geepeet is witnessing their first custody battle :(
Daddies, mommy, don't you love me? Don't you love each other? Why are you all leaving?
It seems odd to have it described as “may resign.” Seems like the worst of all worlds.
That’s like trying to create MAD with the position you “may” launch nukes in retaliation.
It's easier to get the support of 500 educated people at a moment's notice by using sane words like 'may'. This is rational given the lack of public information, as well as a board that seems to be having seizures. Using the word 'may' may seem empty-handed, but it ensures a longer list of names attached to the message -- allowing the board a better glimpse of how many dominoes are lined up to fall.
The board is being given a sanity-check; I would expect the signers intentionally left themselves a bit of room for escalation/negotiation.
How often do you win arguments by leading off with an immutable ultimatum?
Right, but the absolute last thought you want in the board's head is: "they're bluffing."
200 people or even 50 of the right people who are definitely going to resign will be much stronger than 500+ who "may" resign.
Disclaimer that this is a ludicrously difficult situation for all these folks, and my critique here is made from far outside the arena. I am in no way claiming that I would be executing this better in actual reality and I'm extremely fortunate not to be in their shoes.
Presumably some will resign and some won't. They aren't going to get 550 people to make a hard commitment to resign, especially when presumably few concrete contracts have been offered by MSFT.
WSJ said "500 threaten to resign". "Threaten" lol! WSJ says there are 770 employees total. This is all so bizarre.
HN desperately needs a mega thread, it's only Monday early hours, there is so much drama to come out of this.
It's early West Coast time; dang has to wake up first.
I bet he's up making sure the servers aren't crashing! Thanks dang! As the west coast wakes up .. HN is going to be busy...
Tangentially, I noticed that coverage of this has been conspicuously absent from Reddit's front page, and I feel a twinge of pity. Maybe it's on some subreddits, but I haven't bothered to look.
Or a new category, like "Ask HN" and "Show HN". Maybe call it "Hot HN" or "Hot <topic>" or something like that. It could be used for future hot topics too, and if you make the link bold whenever a hot topic is trending, it could even be used to flag important stuff.
I don't really understand why the workforce is swinging unambiguously behind Altman. The core of the narrative thus far is that the board fired Altman on the grounds that he was prioritising commercialisation over the not-for-profit mission of OpenAI written into the organisation's charter.[1] Given that Sam has since joined Microsoft, that seems plausible on its face.
The board may have been incompetent and shortsighted. Perhaps they should even try and bring Altman back, and reform themselves out of existence. But why would the vast majority of the workforce back an open letter failing to signal where they stand on the crucial issue - on the purpose of OpenAI and their collective work? Given the stakes which the AI community likes to claim are at issue in the development of AGI, that strikes me as strange and concerning.
I don't really understand why the workforce is swinging unambiguously behind Altman.
I have no inside information. I don't know anyone at Open AI. This is all purely speculation.
Now that that's out of the way, here is my guess: money.
These people never joined OpenAI to "advance sciences and arts" or to "change the world". They joined OpenAI to earn money. They think they can make more money with Sam Altman in charge.
Once again, this is completely all speculation. I have not spoken to anyone at Open AI or anyone at Microsoft or anyone at all really.
Really? If they work at OpenAI they are already among the highest lifetime earners on the planet. Favouring moving oneself from the top 0.5% of global lifetime earners to the top 0.1% (or whatever the percentile shift is) over the safe development of a potentially humanity-changing technology would be depraved.
I don't really understand why the workforce is swinging unambiguously behind Altman.
Maybe it has to do with them wanting to get rich by selling their shares - my understanding is there was an ongoing process to get that happening [1].
If Altman is out of the picture, it looks like Microsoft will assimilate a lot of OpenAI into a separate organisation and OpenAI's shares become possibly worthless.
[1] https://www.financemagnates.com/fintech/openai-in-talks-to-s...
maybe the workforce is not really behind the non-profit foundation and want shares to skyrocket, sell, and be well off for life.
at the end of the day, the people working there are not rich like the founders and money talks when you have to pay rent, eat and send your kids to a private college.
Microsoft is nothing without its people?
Maybe the employees of OpenAI should stop for a second and think about their privileges as rock stars in a super-hyped startup before they bail for a job in a corporation where everything and everyone is set up to be replaceable.
These boys will not be your rank and file employees. They will operate exactly as they have done in OpenAI. Only difference will be that they no longer have this weird "non-profit, but actually some profit" thing going on.
Just remember, the guys who run your company are probably more incompetent than this.
*competent
No, almost certainly not lol
Can anyone explain this?
“Remarkably, the letter’s signees include Ilya Sutskever, the company’s chief scientist and a member of its board, who has been blamed for coordinating the boardroom coup against Altman in the first place.”
It's the well known 'let me call for my own resignation' strategy.
Maybe he did because he regrets it, maybe the open letter is a google doc someone typed names into.
So, all this happens over Meet, on Twitter, and by email. What is the possibility of an AGI having taken over control of the board members' accounts? It would be consistent with the feeling of hallucination here.
This is just stupid enough to be the product of a human.
Honestly, I feel like pretty low. That said, I kind of love the dystopian sci-fi that paints... So I'm going to go ahead and hope you're right haha
I don’t trust any of this. Every one of these wired articles has been totally wrong. Altman clearly has major media connections and also seems to have no problem telling total lies.
Don't know what's happening, but MS looks to be a winner in the long run, and probably most others too. Whoever stays gets a promotion; whoever leaves gets a fat check. The losers are the customers: no GPT-5 or any significant improvements any time soon, and an MS-made GPT will be much more closed and pricey. Oh yes, competitors are happy too.
Competitors including Quora: https://quorablog.quora.com/Poe-1
Wait. Has Ilya resigned from the board yet, or did he sign a letter calling for his own resignation?
He did indeed. (I don't think it is necessarily inconsistent to regret an action you participated in and want the authority that took it to resign in response, though "participated" feels like it's doing a lot of work in that sentence.)
I just downloaded all of my data / chats. Who knows if it'll be up and running the next days.
That's not a terrible idea on principle.
What do you mean, "nearly 500"? According to Wikipedia, OpenAI has 500 employees.
505 of 700 (some sources say 550).
This feels like a sneaky way for Microsoft to absorb the for-profit subsidiary and kneecap (or destroy) the nonprofit without any money changing hands or involvement from those pesky regulators.
It's not sneaky.
There are thousands of extremely talented ML researchers and software devs who would jump at the chance to work at Open AI.
Everyone is replaceable.
Everyone is replaceable.
Nope. That only holds true for mediocre employees, not for those above. The world class in their field aren't replaceable; otherwise there would be no OpenAI.
My ChatGPT wrapper is in danger, please stop
lmfao
A lot of people here seem to be forgetting [Hanlon's Razor](https://en.wikipedia.org/wiki/Hanlon%27s_razor)
Never attribute to malice that which is adequately explained by stupidity.
Except for when it's actual malice vOv
Can we have a quick moment of silence for Matt Levine? Between Friday afternoon and right now, he has probably had to rewrite today's Money Stuff column at least 5 or 6 times.
Didn't he say that he was taking Friday off, last week? The day before his bête noire Elon Musk got into another brouhaha and OpenAI blew up?
I think he said once that there's an ETF that trades on when he takes vacations, because they keep coinciding with Events Of Note.
Have seen a lot of criticism of Sam and of other CEOs.
But I don't think I have seen/heard of a CEO this loved by the employees. Whatever he is, he must be pleasant to work with.
I don't know, is it about being loved by the employees, or the employees being desperate about the alternative?
I said this on Friday: the board should be fired in its entirety. Not because the firing was unjustified--we have no real knowledge of that--but because of how it was handled.
If you fire your founder CEO you need to be on top of messaging. Your major customers can't be surprised. There should've been an immediate all hands at the company. The interim or new CEO should be prepared. The company's communications team should put out statements that make it clear why this was happening.
Obviously they can be limited in what they can publicly say depending on the cause but you need a good narrative regardless. Even something like "The board and Sam had fundamental disagreement on the future direction of the company." followed by what the new strategy is, probably from the new CEO.
The interim CEO was the CEO and is going back to that role. There's a third (interim) CEO in 3 days. There were rumors the board was in talks to re-hire Sam, which is disastrous PR because it makes them look absolutely incompetent, true or not.
This is just such a massive communications and execution failure. That's why they should be fired.
There's no one to fire the board. They're not accountable to anyone but themselves. They can burn down the whole company if they like.
Ok, time to create an OpenAI drinking game. I'll start:
Every time a CEO is replaced, drink.
Every time an open letter is released, drink.
Every time OpenAI is on top of HN, drink.
Every time dang shows up and begs us to log out, drink.
There will be a lot of alcohol poisoning cases based on those four alone.
Quick question for the folks here who have a handle on how VCs may see this: is Microsoft effectively hiring all these staff members away from OpenAI (a company they've invested heavily in) going to affect their ability to invest in other startups in the future?
Not at all. This is an extremely unusual, one-of-a-kind situation and I think everybody realizes that.
And there's no evidence Microsoft was an instigator of the drama.
Me: "ChatGPT write me an ultimatum letter forcing the board to resign and reinstate the CEO, and have it signed by 500 of the employees."
ChatGPT: Done!
Clearly this started with the board asking ChatGPT what to do about Sam Altman.
Season 2
Better hope this isn't a Netflix show.
From The Verge [1]:
Swisher reports that there are currently 700 employees at OpenAI and that more signatures are still being added to the letter. The letter appears to have been written before the events of last night, suggesting it has been circulating since closer to Altman’s firing. It also means that it may be too late for OpenAI’s board to act on the memo’s demands, if they even wished to do so.
So, 3/4 of the current board (excluding Ilya) held on despite this letter?
[1]: https://www.theverge.com/2023/11/20/23968988/openai-employee...
If so, they're delusional. Every hour they cling to their seats will make things worse for them.
The speed at which this is happening could be a masterful execution of getting out from under the non-profit status.
The corporate structure is so convoluted; OpenAI is only part non-profit.
so what happens if @eshear calls this probably-not-a-bluff, but lets everyone walk? The people that remain get new options and 500 other people still definitely want to work at OAI?
If it comes to that, I reckon Emmett will have his former boss Andy Jassy merge whatever's left of OpenAI into AWS. Unlikely though, as reconciliation seems very much a possibility.
And the most drastic thing is that Ilya says he regrets what he has done and has signed the public statement.
"The man who killed OpenAI": that will be hard to wash off.
I'm cancelling my Netflix subscription, I don't need it.
But boy will I renew it when this gets dramatized as a limited series.
This is some Succession-level shenanigans going on here.
Jesse Eisenberg to play Altman this time around?
We’re seeing our generation’s “traitorous eight” story play out [1]. If this creates a sea of AI start-ups, competing and exploring different approaches, it could be invigorating on many levels.
[1] https://www.pbs.org/transistor/background1/corgs/fairchild.h...
Doesn't it look like the complete opposite is going to happen though?
Microsoft gobbles up all talent from OpenAI as they just gave everyone a position.
So we went from "Faux NGO" to, "For profit", to "100% Closed".
I guess Microsoft now has a new division. (https://www.microsoft.com/investor/reports/ar13/financial-re...)
Supposedly, Microsoft's divisions compete with each other to the point where they actually have a negative impact.
Hurray for employees seeing the real issue!
Hurray also for the reality check on corporate governance.
- Any Board can do whatever it has the votes for.
- It can dilute anyone's stock, or everyone's.
- It can fire anyone for any reason, and give no reasons.
Boards are largely disciplined not by actual responsibility to stakeholders or shareholders, but by reputational concerns relative to their continuing and future positions - status. In the case of for-profit boards, that does translate directly to upholding shareholder interest, as board members are reliable delegates of a significant investing coalition.
For non-profits, status typically also translates to funding. But when any non-profit has healthy reserves, they are at extreme risk, because the Board is less concerned about its reputation and can become trapped in ideological fashion. That's particularly true for so-called independent board members brought in for their perspectives, and when the potential value of the nonprofit is, well, huge.
This potential for escape from status duty is stronger in our tribalized world, where Board members who welch on larger social concerns or even their own patrons can nonetheless retreat to their (often wealthy) sub-tribe with their dignity intact.
It's ironic that we have so many examples of leadership breakdown as AI comes to the fore. Checks and balances designed to integrate perspectives have fallen prey to game-theoretic strategies in politics and business.
Wouldn't it be nice if we could just build an AI to do the work of boards and Congress, integrating various concerns in a roughly fair and mostly predictable fashion, so we could stop wasting time on endless leadership contests and their social costs?
Boards suck. Especially if they are VCs or placed there by VCs.
Altman can’t really go back to OpenAI ever because it would create an appearance of impropriety on the part of MS (that perhaps MS had intentionally interfered in OpenAI, rather than being a victim of it) and therefore expose MS to liability from the other investors in OpenAI.
Likewise, these workers that threatened to quit OpenAI out of loyalty to Altman now need to follow thru sooner rather than later, so their actions are clearly viewed in the context of Altman’s firing.
In the meantime, how can the public resume work on API integrations without knowing when the MS versions will come online, or whether they will be compatible with the OpenAI servers that could seemingly go down at any moment?
Wow, they made it into Guardian live ticker land: https://www.theguardian.com/business/live/2023/nov/20/openai...
Ilya single-handedly ruined the fortunes of 700 OpenAI employees overnight. This is not going to end well. My prediction: OpenAI is done, and in 1-2 years nobody will even care about its existence.
Microsoft just won the jackpot; time to get some stock there.
This whole debacle is a complete embarrassment and is shredding the organisation's credibility.
If the board had any balls they'd call their bluff. I'd love to see it honestly, a mass resignation like that.
I like this a lot. Shows how valuable employees are. It almost feels like a union. Love it.
If they join Sam Altman and Greg Brockman at Microsoft, they wouldn't have to start from ground zero, since Microsoft possesses complete rights to the ChatGPT IP. They could simply create a variant of ChatGPT.
It's worth noting that Microsoft's supposed contribution of $13 billion to OpenAI doesn't fully materialize in cash; a large portion of it comes in the form of Azure credits.
This scenario might turn into the most cost-effective takeover ever for Microsoft: acquiring a corporation valued at $90 billion for a relatively trifling sum.
Might be just me as a programmer out in the sticks, but SV programmers seem to flex a lot compared to your average subordinates.
We should strive to be leaders who inspire such loyalty and devotion
Employees are for-profit entities, huge conflict of interest.
Didn’t that train already depart with the announcements from MS and Sam? Is there a way back?
This is starting to look like an elaborate, premeditated ruse to kill any vestige of the non-profit face of OpenAI once and for all.
I've never seen a staff walkout / threat to walk out ever succeed.
Am I wrong?
Drama queens
I wonder how the FTC and Lina Khan will view all of this if most of the team moves over to Microsoft
Is it too late? Satya already announced Sam and Brockman are joining.
Altman must be pissed af. He helped build so much stuff and now got fked in the arse by these doomers. He realizes the fastest way to get back to parity is to join MS, because they already own the source code and model weights, and it's Microsoft. Starting a new thing from scratch wouldn't guarantee any type of success and would take many years. This is his best path.
They can leave for sure, but they likely have some kind of non-compete clause in their contract, right?
I read the news, make a picture of what is likely happening in my head, and every few hours new news comes up that makes me go: "Wait, WTF?".
What an astonishing embarrassment.
So how would Ilya play this going forward? Any similar incidents historically, like a failed coup where the participant got to stay?
Chaos is a ladder
I wonder what their employment contracts state? Are they allowed to work for vendors or clients?
If Microsoft emerges as the "winner" from all of this, then I think we are all the "losers". Not that I think OpenAI was perfect or "good"; just that MS taking the cake is not good for the rest of us. It already feels crazy that people are just fine with them owning what they do and how important that is to our development ecosystem (talking about things like GitHub/VSCode). I don't like the idea of them also owning the biggest AI initiative.
Celebrity gossip dressed in big tech. And the people love it. I'm kinda sick of it :P
Huh, so collective bargaining and unionization is supported in tech under some circumstances...
So, how is Poe doing during all this?
To keep the spotlight on the most glaring detail here: one of the board members stands to gain from letting OpenAI implode and that board member is instrumental in this weeks' drama.
For the past few days, whenever I see the word "OpenAI," the theme to "Curb Your Enthusiasm" starts playing in my head.
Play Stupid Games, Win Stupid Prizes
1. Board decides to can Sam and Greg. 2. Hides the real reasons. 3. Thinks it can keep the OpenAI staff in the dark about it. 4. Crashes a future $90B stock sale to zero.
What have we learned: 1. If you hide the reasons for a decision, it may become the worst decision, whether in the decision itself or in its implementation, through your own lack of ownership of the actual decision. 2. Titles, shares, etc. are not control points. The real control point is the relationships of the company's problem solvers with the existential-threat stakeholders of the firm.
The board, absent Sam and Greg, never had a good poker hand; they needed to fold some time ago, before this last weekend. Look at it this way: for $13B in cloud credits, MS is getting a team to add $1T to their future worth...
The pace to which OpenAI is speedrunning their demise is remarkable.
Literally just last week there were articles about OpenAI paying “10 million” dollar salaries to poach top talent.
Oops.
Who needs to buy out an $80 billion AI startup when talent is jumping ship in their direction already? OpenAI is dead.
It always seemed like Microsoft was behind this. The biggest tell was how comfortable MS was having their entire AI future depend on a company they don't really have full rights to.
All: this madness makes our server strain too. Sorry! Nobody will be happier than I when this bottleneck is a thing of the past.
I've turned down the page size so everyone can see the threads, but you'll have to click through the More links at the bottom of the page to read all the comments, or like this:
https://news.ycombinator.com/item?id=38347868&p=2
https://news.ycombinator.com/item?id=38347868&p=3
https://news.ycombinator.com/item?id=38347868&p=4
etc...
OpenAI is more or less done at this point, even if a lot of good people stay. Speed bumps will likely turn into car crashes, then cashflow problems, and lawsuits all around.
Probably the best outcome is a bunch of talented devs going out and seeding the beginning of another AI boom across many more companies. Microsoft looks like the primary beneficiary here, but there's no reason new startups can't emerge.
The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company
Unless their mission was making MS the biggest AI company, working for MS will make the problem worse and kill their mission completely.
Or they are pretty naive.
I suspect they’ll quit, and the “top” N percent will be picked up by Microsoft with healthy comp packages. Microsoft will have effectively purchased the company for $10 billion. The net upside of this coup business may just flow to Microsoft shareholders.
What a mess this has become. Regardless of the outcome, this situation reflects badly (to say the least) on OpenAI.
Any journalist covering the OpenAI story must be swearing and cursing at the board at this moment..
The great Closing of “Open”AI.
Years from now we will look back on today as the watershed moment when AI went from a technology capable of empowering humanity to being another chain forged by big investors to enslave us for the profits of very few people.
The investors (Microsoft and the Saudis) stepped in and gave a clear message: this technology is to be developed and used only in ways that will be profitable for them.
"Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”"
wow, this is a crazy detail
rats, sinking ship, …
So...Ilya signed the letter too?
Wow, this new season has even more drama than the one about blockchain tech! Just when you think the writers are running out of ideas, they blow you away with more twists. I will be renewing my Netflix subscription, that's for sure! I can't wait to see what this Sam character does next. Perhaps it will involve robots or something? The sky's the limit at this point.
You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
First class board they have.
What does this mean?
You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
Is the board taking a doomer perspective and seeking to prevent the company developing unsafe AI? But Emmett Shear said it wasn’t about safety? What on earth is going on?
The whole drama feels like a Shepard tone. You anticipate the climax, but it just keeps escalating.
What a bunch of immature people.
If anything this proves that everybody is replaceable and fireable. They should be happy, because usually that treatment is reserved for workers only.
Whatever made OpenAI successful will still be there within the company. Next man up philosophy has built so many amazing organizations and ruined none.
At this stage the entire board needs to go anyway. This level of instigating and presiding over chaos is not how a governing body should act
inb4: this is why we need unions!
This affair has Musk's fingerprints all over it but he lost, again.
Easiest layoff round ever in the US.
Oh my goodness, this just gets more entertaining everyday.
Money talks...
What a mess.
I genuinely feel like this is going to set back AI progress by a decent amount, while everyone is racing to catch OpenAI I was still expecting them to keep a reasonable lead. If OpenAI falls apart, this could delay progress by a couple of years.
Unbelievable incompetence of the board. Like a kindergarten.
If Microsoft plays its cards right, Satya Nadella will look like a genius and Microsoft will get ChatGPT-like functionality for cheap.
If they join Sam Altman and Greg Brockman at Microsoft they will not need to start from scratch because Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.
Also keep in mind that Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits.
So this could end up being the cheapest acquisition for Microsoft: They get a $90 Billion company for peanuts.
[1] https://stratechery.com/2023/openais-misalignment-and-micros...