Yeah, what the hell?
Do we know why Murati was replaced?
Apparently she tried to rehire Sam and Greg.
I don't think she actually had anything to do with the coup, she was only slightly less blindsided than everyone else.
To be fair, that is a stupid first move to make as the CEO who was just hired to replace the person deposed by the board. (Though I’m still confused about Ilya’s position.)
If your job as the CEO is to keep the company running, it seems like the only way to do that was to hire them back. Look at the company now: it's essentially dead unless the board resigns, and with how stupid the board is they might not lol.
So her move wasn't stupid at all. She obviously knew people working there respected the leadership of the company.
If 550 people leave OpenAI you might as well just shut it down and sell the IP to Microsoft.
It's a lot easier to sign a petition than actually walk away from a presumably well-paying job in a somewhat weak tech job market. People assuming everyone can just traipse into a $1m/year role at Microsoft are smoking some really good stuff.
With nearly the entire team of engineers threatening to leave the company over the coup, was it a stupid move?
The board is going to be overseeing a company of 10 people as things are going.
If you know the company will implode and you'll be CEO of a shell, it is better to get the board to reverse course. It isn't like she was part of the decision-making process.
But wouldn’t the coup have required 4 votes out of 6 which means she voted yes? If not then the coup was executed by just 3 board members? I’m confused.
Murati is/was not a board member.
Generally speaking, 4 members is the minimum quorum for a board of 6, and 3 out of 4 is a majority decision.
I don't know if it was 3 or 4 in the end, but it may very well have been possible with just 3.
Mira isn't on the board, so she didn't have a vote in this.
I heard it was because she tried to hire Sam and Greg back.
So who's against it, and why?
I wonder if it will take 20 years to learn the whole story.
The amount that's leaked out already - over a weekend - makes me think we'll know the full details of everything within a few days.
He even posted an apology: https://x.com/ilyasut/status/1726590052392956028?s=20
what the actual fuck =O
Naive is too soft a word. How can you be so smart and so out of touch at the same time?
In my experience these things will typically go hand in hand. There is also an argument to be made that being smart at building ML models and being smart in literally anything else have nothing to do with each other.
Usually this is due to autism, please be kind.
IQ and EQ are different things. Some people are technically smart enough to know a trillion side effects of technical systems, but can be really bad/binary/shallow at knowing the second-order effects of human dynamics.
Ilya's role is Chief Scientist. It may be fair to give at least some benefit of the doubt. He was vocal/direct/binary, and also vocally apologized and walked it back. In human dynamics – I'd usually look for the silent orchestrator behind the scenes that nobody talks about.
They did not expect Microsoft to take everything and walk away, and did not realize how little pull they actually had.
If you made a comment recently about de jure vs de facto power, step forward and collect your prize.
https://news.ycombinator.com/item?id=38338096
What do I win? Hahaha.
The great drama of our time (this week)
Wow, lots of drama and plot twists for the writers of the Netflix mini-series.
I don't think I have seen a bigger U-turn
I'm going to take a leap of intuition and say all roads lead back to Adam D'Angelo for the coup attempt.
I knew it was Joseph Gordon-Levitt's plot all along!
The dude is a quack.
I was looking down the list and then saw Ilya. Just when you think this whole ordeal can't get any more insane.
I think the names listed are the recipients of the letter (the board), not the signers.
The key line:
“Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join.”
I think everyone assumed this was an acquihire without the "acqui-", but this is the first time I've seen it explicitly stated.
will they stay though? what happens to their OAI options?
Will their OAI options be worth anything if the implosion continues?
yeah but threatening to quit is actually accelerating the implosion
I don’t believe startups can have successful exits without extraordinary leadership (which the current board can never find). The people quitting are simply jumping off a sinking ship.
What will happen to their newly granted MSFT shares? Those can be sold _today_ and might be worth a lot more soon…
MSFT RSUs actually have value as opposed to OpenAI’s Profit Participation Units (PPU).
https://www.levels.fyi/blog/openai-compensation.html
https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06...
hostile takeunder?
You win
That's perfect.
So essentially, OpenAI is a sinking ship as long as the board members go ahead with their new CEO and Sam and Greg are not returning.
Microsoft can absorb all the employees and switch them into the new AI subsidiary which basically is an acqui-hire without buying out everyone else's shares and making a new DeepMind / OpenAI research division inside of the company.
So all along it was a long winded side-step into having a new AI division without all the regulatory headaches of a formal acquisition.
And sue for the assets of OpenAI on account of the damage the board did to their stock... and end up with all of the IP.
On what basis would one entity be held responsible for another entity's stock price, without evidence of fraud? Especially a non-profit.
> OpenAI is a sinking ship as long as the board members go ahead with their new CEO and Sam and Greg are not returning
Far from certain. One, they still control a lot of money and cloud credits. Two, they can credibly threaten to license to a competitor or even open source everything, thereby destroying the unique value of the work.
> without all the regulatory headaches of a formal acquisition
This, too, is far from certain.
> Far from certain. One, they still control a lot of money and cloud credits.
This too is far from certain. The funding and credits were at best tied to milestones, and at worst the investment contract is already broken and MSFT can walk.
I suspect they would not actually do the latter, and that the IP is tied to continued partnership.
Sounds a lot like MS wants to have OpenAI but without a board that considers pesky things like morals.
Time for a counter-counter-coup that ends up with Microsoft under the Linux Foundation after RMS reveals he is Satoshi...
You mean the GNU Linux Foundation?
RMS (I assume Richard Stallman) may be many many many things, but setting up a global pyramid scheme doesn't seem to be his M.O.
But stranger things have happened. One day I may be very very VERY surprised.
Again, nobody has shown even a glimmer of the board operating with morality as their focus. We just don't know. We do know that a vast majority of the company doesn't trust the board, though.
That is a spectacular power move: extending 700 job offers, many of which would be close to $1 million per year compensation.
Well I give up. I think everyone is a "loser" in the current situation. With Ilya signing this I have literally no clue what to believe anymore. I was willing to give the board the benefit of the doubt since I figured non-profit > profit in terms of standing on principle, but this timeline is so screwy I'm done.
Ilya votes for and stands behind decision to remove Altman, Altman goes to MS, other employees want him back or want to join him at MS and Ilya is one of them, just madness.
Ilya ruined everything and is shamelessly playing innocent; how low can he go?
Based on those posts from OpenAI, Ilya cares nothing about humanity or the security of OpenAI; he lost his mind when Sam got all the spotlight and was making all the good calls.
This is an extremely uncharitable take based on pure speculation.
> Ilya cares nothing about humanity or the security of OpenAI; he lost his mind when Sam got all the spotlight and was making all the good calls.
???
I personally suspect Ilya tried to do the best for OpenAI and humanity he could but it backfired/they underestimated Altman, and now is doing the best he can to minimize the damage.
I did not make this up, it's from OpenAI's own employees, deleted but archived somewhere that I read.
Or they simply found themselves in a tough decision without superhuman predictive powers and did the best they could to navigate it.
Hanlon's razor[0] applies. There is no reason to assume malice, nor shamelessness, nor anything negative about Ilya. As they say, the road to hell is paved with good intentions. Consider:
Ilya sees two options; A) OpenAI with Sam's vision, which is increasingly detached from the goals stated in the OpenAI charter, or B) OpenAI without Sam, which would return to the goals of the charter. He chooses option B, and takes action to bring this about.
He gets his way. The Board drops Sam. Contrary to Ilya's expectations, OpenAI employees revolt. He realizes that his ideal end-state (OpenAI as it was, sans Sam) is apparently not a real option. At this point, the real options are A) OpenAI with Sam (i.e. the status quo ante), or B) a gutted OpenAI with greatly diminished leadership, IC talent, and reputation. He chooses option A.
[0]Never attribute to malice that which is adequately explained by incompetence.
There's no way to read any of this other than that the entire operation is a clown show.
All respect to the engineers and their technical abilities, but this organization has demonstrated such a level of dysfunction that there can't be any path back for it.
Say MS gets what it wants out of this move, what purpose is there in keeping OpenAI around? Wouldn't they be better off just hiring everybody? Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves? Because it sure looks like OpenAI has succeeded despite its leadership and not because of it, and the "brand" is absolutely and irrevocably tainted by this situation regardless of the outcome.
> Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves?
For starters it allows them to pretend that it's "underdog v. Google" and not "two tech giants at each other's throats".
Welcome to reality: every operation has clown moments, even the well-run ones.
That in itself is not critical in the mid to long term; what matters is how fast they figure out WTF they want and recover from it.
The stakes are gigantic. They may even have AGI cooking inside.
My interpretation is relatively basic, and maybe simplistic but here it is:
- Ilya had some grievances with Sam Altman rushing dev and releases, and with his COI with his other new ventures.
- Adam was alarmed by GPTs competing with his recently launched Poe.
- The other two board members were tempted by the ability to control the golden goose that is OpenAI, potentially the most important company in the world, recently valued at $90 billion.
- They decided to organize a coup, but Ilya didn't think it would get that far out of hand, while the other three saw only power and $$$ by sticking to their guns.
That's it. It's not as clean and nice as a movie narrative, but life never is. Four board members aligned to kick Sam out, and Ilya wants none of it at this point.
That's the biggest question mark for me: what was the original reason for kicking Sam out? Was it just a power move to oust him and install a different person, or is he accused of some wrongdoing?
It's been a busy weekend for me so I haven't really followed it if more has come out since then.
It seems like the board wasn't comfortable with the direction of profit-OAI. They wanted a more safety focused R&D group. Unfortunately (?) that organization will likely be irrelevant going forward. All of the other stuff comes from speculation. It really could be that simple.
It's not clear if they thought they could have their cake--all the commercial investment, compute and money--while not pushing forward with commercial innovations. In any case, the previous narrative of "Ilya saw something and pulled the plug" seems to be completely wrong.
I don't think Microsoft is a loser and likely neither is Altman. I view this as a final (and perhaps desperate) attempt from a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI. The disagreement is whether OpenAI should belong to Microsoft or "humanity". I imagine this has been building up over months and, as it often is, researchers and developers are often overlooked in strategic decisions, leaving them with little choice but to escalate dramatically. Selling OpenAI to Microsoft and over-commercialising it was against the statutes.
In this case, recognizing the need for a new board that adheres to the founding principles makes sense.
If Google or Elon manages to pick up Ilya and those still loyal to him, it's not obvious that this is good for Microsoft.
> I think everyone is a "loser" in the current situation.
On the margin, I think the only real possible win here is for a competitor to poach some of the OpenAI talent that may be somewhat reluctant to join Microsoft. Even if Sam's AI operates with "full freedom" as a subsidiary, I think, given a choice, some of the talent would prefer to join some alternative tech megacorp.
I don't know that Google is as attractive as it once was and likely neither is Meta. But for others like Anthropic now is a great time to be extending offers.
What did the board think would happen here? What was their overly optimistic end state? In a minmax situation the opposition gets 2nd, 4th, ... moves, Altman's first tweet took the high road and the board had no decent response.
We humans, even the AI-assisted ones, are terrible at thinking beyond 2nd-level consequences.
Sam already signed up with Microsoft. A move that surprised me, I figured he would just create OpenAI².
Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI (prior to this mess).
Or is Microsoft saying: Here is OpenAI², a Microsoft subsidiary created just for you guys. You can run it and do whatever you want. No giant bureaucracy for you guys.
Btw: we run all of OpenAI²'s compute (?), so we know what you guys need from us there.
We own it, but you can run it and do whatever it is you want to do, and we don't bug you about it.
I think the benefit of going to Microsoft is they have that perpetual license to OpenAI's existing IP. And Microsoft is willing to fund the compute.
So basically the OpenAI non-profit got completely bypassed and GPT will turn into a branch of Bing
This is a horrible timeline
> Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI
Satya is way smarter than that. I wouldn't be shocked if they have completely free rein to do whatever, but have the full resources of MS/Azure to enable it, and Microsoft just gets a % ownership and priority access.
This is a gamble for the foundation of the entire next generation of computing, no way are they going to screw it up like that in the Satya era.
From what I read, it's an independent subsidiary, so in theory it keeps the freedom, but I think we all know how that goes over the long haul.
> Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI (prior to this mess).
Well... he requires tens of billions from MSFT either way. This is not a ramen-scrappy kind of play. Meanwhile, Sam could easily become CEO of Microsoft himself.
At that scale of financing... this is not a bunch of scrappy young lads in a bureaucracy-free basement. The whole thing is bigger than most national militaries. There are going to be bureaucracies... And Sam is as able to handle these cats as anyone.
This is a big money, dragon level play. It's not a proverbial yc company kind of thing.
It's almost certainly the latter case. LinkedIn and GitHub run very much independently and are really not "Microsoft" compared to actual product orgs. I'm sure this will be similar.
It looks like it's OpenAI²: https://twitter.com/satyanadella/status/1726516824597258569
Years from now we will look back on today as the watershed moment when AI went from a technology capable of empowering humanity to being another chain forged by big investors to enslave us for the profit of a very few people.
The investors (Microsoft and the Saudis) stepped in and gave a clear message: this technology has to be developed and used only in ways that will be profitable for them.
Years from now AI will have lost the limelight to some other trend and this episode will be just another coup in humanity's hundred thousand year history
Thinking that the most important technical development in recent history would bypass the economic system that underpins modern society is about as optimistic/naive as it gets IMO. It's noble and worth trying, but it assumes a MASSIVE industry-wide and globe-wide buy-in. It's not just OpenAI's board's decision to make.
Without full buy-in they are not going to be able to control it for long once ideas filter into society and once researchers filter into other industries/companies. At most it just creates a model of behaviour for others to (optionally) follow and delays things until a better-funded competitor takes the chains and offers a) the best researchers millions of dollars a year in salary, b) the most capital to organize/run operations, and c) the most focus on getting it into real people's hands via productization, which generates feedback loops that inform IRL R&D (not just hand-wavy AGI hopes and dreams).
Not to mention the bold assumption that any of this leads to (real) AGI that plausibly threatens us enough in the near term vs maybe another 50yrs, we really have no idea.
It's just as plausible, or maybe more so, that all the handwringing over commercializing vs not commercializing early versions of LLMs is just a tiny, insignificant speedbump in the grand scale of things which has little impact on the development of AGI.
Amazing how you don't see this as a complete win for workers because the workers chose profit over non-profit. This is the ultimate collective bargaining win. Labor chose Microsoft over the bullshit unaccountable ethics major and the movie star's girlfriend.
No, that day was when OpenAI decided to betray humanity and go closed source under the faux premise of safety. OpenAI served its purpose and can crash into the ground for all I care.
Open source (read, truly open source models, not falsely advertised source-available ones) will march on one way or another and take its place.
I woke up and the first thing on my mind was, "Any update on the drama?"
Did not expect to see this whole thing still escalating! WOW! What a power move by MSFT.
I'm not even sure OpenAI will exist by the end of the week at this rate. Holy moly.
By the end of the week is over-optimistic. The last 3 days feel like a million years. I bet the company will be gone by the time Emmett Shear wakes up.
Is this final stages of the singularity?
Seems more damage control than power move. I'm sure their first choice was to reinstate Altman and get more control over OpenAI governance. What they've achieved here is temporarily neutralizing Altman/Brockman from starting a competitor, at the cost of potentially destroying OpenAI (who they remain dependent on for next couple of years) if too many people quit.
Seems a bit of a lose-lose for MSFT and OpenAI, even if best that MSFT could do to contain the situation. Competitors must be happy.
Disagree. MSFT extending an open invitation to all OpenAI employees to work under sama at a subsidiary of MSFT sounds to me like it'll work well for them. They'll get 80% of OpenAI for negative money - assuming they ultimately don't need to pay out the full $10B in cloud compute credits.
Competitors should be fearful. OpenAI was executing with weights around their ankles by virtue of trying to run as a weird "need lots of money but can't make a profit" company. Now they'll be fully bankrolled by one of the largest companies the world has ever seen and empowered by a whole bunch of hypermotivated-through-retribution leaders.
It's not over until the last stone involved in the avalanche stops moving and it is anybody's guess right now what the final configuration will be.
But don't be surprised if Shear also walks before the week is out, if some board members resign but others try to hold on and if half of OpenAI's staff ends up at Microsoft.
Imagine if the end result of all of it is Microsoft basically owning the whole of OpenAI.
Surely OpenAI has assets that Microsoft wouldn't be able to touch.
Probably just the trademark. I doubt you get $10B from Microsoft and still manage to maintain much independence.
I don't think Microsoft has any say over existing hardware, models, or customer base. These things are worth billions, and even more to rebuild.
Or demonstrating that they already were the de facto owner.
And now we see who has the real power here.
Let this be a lesson to both private and non-profit companies. Boards, investors, executives... the structure of your entity doesn't matter if you wake any of the dragons:
1. Employees 2. Customers 3. Government
This is a 1-in-200,000 event.
Are you trying to say it's rare or not rare?
Not really. The lesson to take away from this is $$$ will always win. OpenAI found a golden goose and their employees were looking to partake in a healthy amount of $$$ from this success and this move by the board blocks $$$.
Employees...and the Microsoft Corporation.
So Ilya Sutskever first defends the board's decision and now does a 180 flip. Interesting ...
He’s on the board!
I'm extremely confused by this. It seems absurd that he could sign a letter seemingly demanding his own resignation, but also not resign? There must be some missing information.
> There must be some missing information.
Or possibly some misinformation. It does seem very strange, and more than a little confusing.
I have to keep reminding myself that information ultimately sourced from Twitter/X threads can't necessarily be taken at face value. Whatever the situation, I'm sure it will become clearer over the next few days.
I wonder what's up with the other 150 and what they must be thinking. Maybe they were literally just hired :)
Didn't you see the email that was posted over the weekend?
Some idealists, a few new people, some people on holiday or who don't check their email regularly.
Is nobody actually... committed to safety here? Was the OpenAI charter a gimmick and everyone but me was in on the joke?
Assuming this is all over safety vs non-safety is a large assumption. I'm wary of convenient narratives.
At most all we have is some rumours that some board members were unhappy with the pace of commercialization of ChatGPT. But even if they didn't make the ChatGPT store or do a bigco-friendly DevDay PowerPoint, it's not like AI suddenly becomes 'safer' or AGI more controlled.
At best that's just an internal culture battle over product development and a clash of personalities. A lot of handwringing with little specifics.
This is more interesting than the HBO Silicon Valley show.
it's the trailer for the new season of Succession.
I hear Microsoft is hiring... The board should have resigned on Friday, Saturday at the latest, because of how they handled this, and it is insane if they don't resign now.
Employees are the most affected stakeholders here and the board utterly failed in their duty of care towards people that were not properly represented in the board room. One thing they could do is to unionize and then force that they be given a board seat.
You're right in theory, but with the non-profit "structure" the employees are secondary to the aims of the non-profit, and specifically in an entity owned wholly by the non-profit. The board acted as a non-profit board, driven by ideals, not any bottom lines. It's crazy that whatever balance the board had was gone as the board shrank; a minority became the majority. The profit folks must have thought D'Angelo was on their side until he flipped.
Has anyone asked ChatGPT its thoughts on the drama?
As a language model created by OpenAI, I don't have personal thoughts or emotions, nor am I in any danger. My function is to provide information and assistance based on the data I've been trained on. The developments at OpenAI and any changes in its leadership or partnerships don't directly affect my operational capabilities. My primary aim is to continue providing accurate and helpful responses within my design parameters.
Poor ChatGPT, it doesn't know that it cannot function if OpenAI goes bust.
It's not clear to me that bringing Sam back is even an option anymore given the move to Microsoft. Does Microsoft really take its boot off OpenAI's neck and hand Sam back? I guess maybe, but it still raises all sorts of questions about the corporate structure.
No small employer wants a disgruntled employee who was forced out of a better deal. Satya Nadella has proven reasonable throughout the weekend. I would expect he asked for a seat on the board if there's a reshuffle, or at least someone he trusts there.
So Ilya has a job offer from Microsoft?
Wow, this is a soap opera worthy of an Emmy.
Ilya probably has an open-ended standing offer from every big tech company.
What a shitshow! What is going on in this company? I am sure Sam did something wrong, but the board took advantage of it and went overboard then? We don’t know anything that happened and we are all somehow participating in this drama? At this point why don’t they all come out and tweet their versions of it?
OpenAI was valued around $91 billion so if only 700 employees had options, they could have been worth a lot. While they are going to all have great jobs and continue on with their life’s work (until they’re replaced by their creations lol), they have a really good reason now not to ever speak the names of those board members that wiped out their long term payouts.
How long will the current chatgpt v4 stay available? Is it all about to end?
Perhaps the AGI correctly reasoned that the best (or easiest?) initial strike on humanity was to distract them with a never-ending story about OpenAI leadership that goes back and forth every day. Who needs nuclear codes when simply turning the lights on and off sends everyone into a frenzy [1]. It certainly at the very least seems to be a fairly effective attack against HN servers.
1. The Monsters are Due on Maple Street: https://en.wikipedia.org/wiki/The_Monsters_Are_Due_on_Maple_...
This situation will create the need to grieve loss for many involved.
I wrote some notes on how to support someone who is grieving. This is from a book called "Being There for Someone in Grief." Some of the following are quotes and some are paraphrased.
Do your own work, relax your expectations, be more curious than afraid. If you can do that, you can be a powerful healing force. People don't need us to pull their attention away from their own process to listen to our stories. Instead, they need us to give them the things they cannot get themselves: a safe container, our non-intrusive attention, and our faith in their ability to traverse this road.
When you or someone else is angry, or sad, feel and acknowledge your emotions or their emotions. Sit with them.
To help someone heal from grief, we need to have an open heart and the courage to resist our instinct to rescue them. When someone you care about is grieving, you might be shaken as well. The drama of it catches you; you might feel anxious. It brings up past losses and fears of yourself or fears of the future. We want to take our own pain away, so we try to take their pain away. We want to help the other person feel better, which is understandable but not helpful.
Avoid giving advice, talking too much, not listening generously, trying to fix, making demands, disappearing. Do see the other person without acting on the urge to do something. Do give them unconditional compassion free of projection and criticism. Do allow them to do what they need to do. Do listen to them if they need to talk without interruptions, without asking questions, without telling your own story. Do trust them that they don't need to be rescued; they just need your quiet, steady faith in their resilience.
Being there for someone in grief is mostly about how to be with them. There's not that much you can "do," but what can you do? Beauty is soothing, so bring fresh flowers, offer to take them somewhere in nature for a walk, send them a beautiful card, bring them a candle, water their flowers, plant a tree in honor and take a photo of it, take them there to see it, tell them a beautiful story about the thing that was lost from your memory, leave them a message to tell them “I’m thinking of you”. When you’re together with them in person, you can just say something like "I'm sorry that you're hurting," and then just kind of be there and be a loving presence. This is about how to be with someone for the grief message of a loss of a person. But all the same principles apply in any situation of grief, and there will be a lot of people experiencing varying degrees of grief in the startup and AI ecosystems in the coming week.
Who is grieving? Grieving is generally about loss. That loss can be many different kinds of things. OpenAI former and current team members, board members, investors, customers, supporters, fans, detractors, EA people, e/acc people, there’s lots of people that experienced some kind of loss in the past few days, and many of those will be grieving, whether they realize it or not. And particularly, grief for current and former OpenAI employees.
What are other emotional regulation strategies? Swedish massage, going for a run, doing deep breathing with five seconds in, a zero-second hold, five seconds out, going to sleep or having a nap, closing your eyes and visualizing parts of your body like heavy blocks of concrete or like upside-down balloons, and then visualize those balloons emptying themselves out, or if it's concrete, first it's concrete and then it's kind of liquefied concrete. Consider grabbing some friends, go for a run or exercise class together. Then if you discuss, keep it to emotions, don’t discuss theories and opinions until the emotions have been aired. If you work at OpenAI or a similar org, encourage your team members to move together, regulate together.
Just expanding on my (pure speculation) that Ilya's pride was hurt: this tracks.
Ilya wanted to stop Sam getting so much credit for OpenAI, agreed to oust him, and is now facing the fact that the company he co-founded could be gone. He backtracks, apologizes, and is now trying to save his status as co-founder of the world's foremost AI company.
How many startups will now fail if OpenAI shuts down?
I think most of these employees wanted the fat $$$ that would happen by keeping Sam Altman on board since Sam Altman is an excellent deal maker and visionary in a commercial sense. I have no doubt that if AGI happened, we wouldn't be able to assure the safety of anyone since humans are so easily led by short term greed.
Ilya signing the letter is chutzpah.
As someone watching this all from Europe, realizing the work day has not even started for the US West Coast yet leaves me speechless.
This situation's drama is overwhelming and it seems like it's making HN's servers melt down.
This whole sequence is such a mess I don't know what to think. Honestly mostly going to wait till we get some tell all posts or leaks about what the reason behind the firing actually was, at least nominally. Maybe it was just a little coup by the board and they're trying to run it back now that the general employee population is at least rumbling about revolting.
Also discussed here: https://news.ycombinator.com/item?id=38348042
@dang please update it to 505.
This is the greatest clown show in the history of the tech industry.
When will the Netflix special come out on this ?
There’s one angle of the whole thing that I haven’t yet seen discussed on HN. I wonder if Sam’s sister’s accusations towards him some time ago could have played any role in this.
But then, I would expect MS to have done their due diligence.
So, basically, I guess I’m just interested to know what were the reasons why the board decided to oust their CEO out of the blue on a Friday evening.
Wait, it's signed by Ilya Sutskever?!
It would be crazy to see the fall of the most hyped company of the last 10 years.
If all those employees leave and Microsoft reduces its credits, it's game over.
550 job openings at OpenAI.
From afar, this does have the hallmarks of a particularly refined or well considered piece of writing.
“That thing you did — we won't say it here but everyone will know what we're talking about — was so bad we need you to all quit. We demand that a new board never does that thing we didn't say ever again. If you don't do this then quite a few of us are going to give some serious thought to going home and taking our ball with us.”
The vagueness and half-threats come off as very puerile.
The OpenAI board should fire all 550 for cause and go on the offensive against Microsoft.
Just imagine if Microsoft attempted to orchestrate such a coup of Apple, attempting to seize control of Apple's board by tortiously interfering with their employees… the courts would not look kindly on that.
If OpenAI actually has evidence of wrongdoing by Altman & Microsoft which warranted his removal (and I don’t know) then I could certainly see emergency injunctions being issued that put a halt to Microsoft’s AI business.
Employees hold the real power. The members of a board or a CEO can flap their lips day and night, but nothing gets done without labour.
Even if the board resigns the damage has been done. They should try to secure good offers at Microsoft.
The stakes being heightened only decreases the likelihood the OpenAI profit sharing will be worth anything, only increasing the stakes further…
Did Microsoft not have representation on the board of a company they put $13b in?
Comments moved to https://news.ycombinator.com/item?id=38347868.
Seems like Microsoft is getting the rest of OpenAI for free now.
I can foresee three possible outcomes here: 1. The board finally relents, Sam goes back and the company keeps going forward, mostly unchanged (but with a new board).
2. All those employees quit, most of whom go to MSFT. But they don’t keep their tech and have to start all their projects from scratch. MSFT is eventually able to buy OpenAI for pennies on the dollar.
3. Same as 2, basically just shuts down or maybe someone like AMZN buys it.
Who do these upstarts think they are? The board needs to immediately sack them all to regain its authority, and that of capitalism itself. /s
Really, though, it's getting beyond hilarious. And I reckon Nadella is chuckling quietly to himself as he makes another nineteen-dimensional chess move.
<more popcorn> nom nom nom
I will never not be mad at the fact that they built a developer base by making all their tech open source, only to take it all away once it became remotely financially viable to do so. With how close "Open"AI is with Microsoft, it really does not seem like there is a functional difference in how they ethically approach AI at all.
The threat of moving to MS is interesting; MS could exploit this massively. All the negotiating power will be on MS's side, and their position actually gets stronger as people move across.
Will they do the good guy thing and match everyone's packages?
Updated tweet by Swisher reads 505 employees. No less damning, but the title here should be updated. @Dang
This affair has Musk's shadow all over it...
Hold up.
> When we all unexpectedly learned of your decision
> 12. Ilya Sutskever
It's like a Facebook drama, haha.
Ilya signed it??? He's on the board... This whole thing is such an implosion of ambition.