Very unprofessional way to approach this disagreement.
Very unprofessional way to approach this disagreement.
How so? It's just another firing and being escorted out the door.
The wording is clearly hostile and aggressive, especially for a formal statement, and it makes plain that they are burning all bridges with Sam Altman. It is also clear that 1. it was done extremely suddenly, and 2. with very little notice or discussion with other stakeholders (e.g. Microsoft was completely blindsided; they didn't even wait 30 minutes for the stock market to close, and did this shortly before the Thanksgiving break).
You don't really see any of this in most professional settings.
It is quite gauche for a company to burn bridges with its upper management. It bodes poorly for ever attracting executives in the future. Even Bobby Kotick got a more graceful farewell from Activision Blizzard, where they tried to clear his name. It is only prudent business.
Certainly, this is very immature. It wouldn't be out of place in HBO's Succession.
Whether what happened is right or just in some sense is a different conversation. We could speculate on what is going on in the company and why, but the tactlessness is evident.
Whether it's right or just in some sense is a different conversation.
Surely it's the same conversation as whether it's "mature"? I'm failing to see how turning a blind eye to decades of sexual impropriety and major internal culture issues, to the point that the state takes action against your company, is "mature". Under what definition?
Mature, as in the opposite of ingenuous. It does no good to harm a company further. Kotick did enough damage; he left; all that needed to be said about him was said, tirelessly. Every effort to get him to offer some reparations was expended.
So what was there to gain from the company speaking ill of their past employee? What was even left to say? Nothing. No one wants to work in an organization that vilifies its own people. It was prudent.
I will emphasize again that the morality of these situations is a separate matter from tact. It is entirely possible that doing what is good for business does not always align with what is moral. But does this come as a surprise to anyone?
We can recognize that the situation is not one-dimensional and not reduce it to such. The same applies to the press release from OpenAI: it is graceless, that much can be observed. But we do not yet know whether it is reprehensible, exemplary, or somewhere in between in the sense of morality and justice. It will come out, in other channels rather than official press releases, as in Bobby's case.
Mature, as in the opposite of ingenuous
To put it in an exaggerated way, maturity should not imply sociopathy or complete disregard for everything.
Obviously I am referring here to the Kotick situation. But a definition where it is immature to tell the truth and mature to enable powerful bad actors is the wrong definition of maturity.
I respect your belief that maturity involves elevating morality above corporate sagacity. It is noble.
I am not even demanding something super noble from mature people. I am fine with the idea that mature people make compromises. I do not expect managers to be saint-like fighters for justice.
But when people use "maturity" as an argument for why someone must be an enabler and should not do the morally or ethically right thing, it gets irritating. Conversely, calling people "immature" because they did not act in the most self-serving but sleazy way is ridiculous.
People get fired all the time: suddenly, too. If I got fired by my company tomorrow, they wouldn't treat me with kid gloves, they'd just end my livelihood like it was nothing. I'd probably find out when I couldn't log in. Why should "upper management" get a graceful farewell? We don't have royalty in the USA. One person is not inherently better than another.
Because no one cares if you get fired, but people really care if a CEO gets fired. The scope of a CEO's responsibilities is near-global across the company; firing them is a serious action. Your scope as an engineer is, typically, extremely small by comparison.
This isn't about being better at all.
Because upper management have more power than you or I. If either of us were fired, it's unlikely to be front page news all over the world.
It sucks, but that's the world we live in, unfortunately.
Why should "upper management" get a graceful farewell
Injustices are done to executives all the time. But airing dirty laundry is not sagacious.
Boards give reasons for transparency, and they said he had not been fully candid.
You are interpreting that as hostile and aggressive because you are reading into it what other boards have said in other disputes and whatever you are imagining. But if the board learned some things not from Altman that it felt it should have learned from Altman, "less than candid" is a completely neutral way to describe it, and voting him out is not an indication of hostility.
Would you like to propose some other candid wording the board could have chosen, a wording that does not lack candor?
You are interpreting that as hostile and aggressive because you are reading into it
Uhh no, I'm seeing it as hostile and aggressive because the actual verbiage was hostile and aggressive, doubly so in the context of this being a formal corporate statement. You can pass the text into an NLP sentiment analyzer and it too will come to the same conclusion.
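For what it's worth, here is a minimal sketch of that kind of check, assuming the Hugging Face transformers package and its default sentiment-analysis pipeline; the quoted sentence is only an excerpt of the board's statement as it was reported, and the output shown is illustrative rather than a guarantee of what the model scores:

    # Minimal sketch: run an off-the-shelf sentiment model over the statement.
    # Assumes `pip install transformers torch`; model choice and excerpt are illustrative.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # loads a default English sentiment model

    excerpt = (
        "Mr. Altman's departure follows a deliberative review process by the board, "
        "which concluded that he was not consistently candid in his communications with the board."
    )
    print(classifier(excerpt))
    # e.g. [{'label': 'NEGATIVE', 'score': ...}] -- though a sentiment label is a crude
    # proxy for "hostile and aggressive", so treat this as suggestive at best.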
It is also telling that you are being sarcastic and demeaning in your remarks to someone who wasn't even replying to you, which might explain why you saw the PR statement differently.
When you look at the written word and find yourself consistently imputing clear hostile, aggressive, sarcastic, and demeaning intent which no one else but you sees, a thoughtful person would begin to introspect.
Again, I'm not sure why you and the other person are just out for blood and keep trying to make it personal, but you can clearly feed it into NLP tools/ChatGPT and co. and even the machines will tell you the actual wording is aggressive.
The wording is very clearly hostile and aggressive
At least we can be sure that ChatGPT didn't write the statement, then.
Otherwise the last paragraph would have equivocated that both sides have a point.
It's not "just another firing," the statement accused Altman of lying to the board. Either he did and it's a justified extraordinary firing, or he didn't and it's hugely unprofessional to insinuate he did.
Oh man the lawyers have to be so happy I bet they can hardly count.
I read that word as not being forthcoming moreso than actively lying. But I don't read many firing press releases.
Hugely unprofessional and a billion dollar liability.
When two people have different ideologies and neither is willing to backdown or compromise, one person must "go".
There’s no indication that any sort of discussion took place. Major stakeholders like Microsoft appear uninformed.
in a power struggle, you have to act quickly
I don't think it's that dramatic. In a board meeting, you have to act while the board is meeting. They don't meet every day, and it's a small rigamarole to pull a meeting together, so if you're meeting... vote.
Are you suggesting they brought up a vote on a whim at a board meeting and acted on it the same day?
no, I was replying to a comment that said it was a power struggle in which the board needed to act quickly before they lost power.
The board may very well have met for this very reason, or perhaps it was at this meeting that the lack of candor was discovered or discussed. But there is overhead to holding a board meeting, and if the board is already in agreement at the meeting, they vote.
It only seems sudden to outsiders, and that suddenness does not mean a "night of the long knives".
How would the board have lost power?
That's what I'm saying: it was not a power struggle. I shouldn't have to make the other guy's argument for him...
One imagines in this case the current board discussed this in a non-board context, scheduled a meeting without inviting the chair, made quorum, and voted, then wrote the PR and let Sam, Greg, and HR know, then released the PR. Which is pretty interesting in and of itself; maybe they were trying to sidestep Roko or something.
Not inviting the full board would likely be against the rules. Every company I've been part of has it in the bylaws that all members have to be invited. They don't all have to attend, but they all get invited.
sure. he could have been invited, but also not attended.
Basically half the point of this is that Microsoft isn’t a stakeholder. The board clearly doesn’t care or is actively hostile to the idea of growing “the business”. If they didn’t know then that they weren’t a stakeholder, they know now.
MS owns a non-controlling share of a business controlled by a nonprofit. MS should have prepared for the possibility that their interests aren't adequately represented. I'm guessing Altman is very persuasive and they were in a rush to make a deal.
Microsoft is a stakeholder. It’s absurd to suggest otherwise. The entire stakeholder concept was invented to encompass a broader view on corporate governance than just the people in the boardroom.
This is a non-profit dedicated to researching AI with the goal of making a safe AGI. That's what the mission is. Sama starts trying to make it a business and restructures it to allow investors, of which MSFT is a 49% owner. He gets ousted and they tell Microsoft afterwards.
It's questionable how much power Microsoft has as a shareholder. Obviously they have a stake in OpenAI. What is in question is how much interest the new leaders have in Microsoft.
If I had a business relationship with OpenAI that didn’t align with their mission I would be very worried.
Or you introduce an authoritative third party that mediates their interactions. This feels like it wouldn't be a problem if so many high-ranking employees didn't feel so radically differently about the same technology.
Altman’s job was to be a go between for the business and engineering sides of the house. If the chief engineer who was driving the company wasn’t going to communicate with him anymore, then he wouldn’t serve much of a purpose.
when did a board or CEO ever introduce an authoritative 3rd party to mediate between them? the board is the authoritative 3rd party.
There's more graceful ways to do this though.
You've summed up AI X-risk in a single sentence.
(I.e. an AGI would be one of the two people here.)
If you know anything about Ilya, it's definitely not out of character.
Having read up on some background, I'm not sure I want this guy in charge of any kind of superintelligence.
Well, I definitely wouldn't want Altman in charge of any superintelligence, so "I'm not sure" would be an improvement, if I expected an imminent superintelligence.
What if - hear me out - what if the firing is the doing of an AGI? Maybe OpenAI succeeded and now the AI is calling the shots (figuratively, though eventually maybe literally too).
what are you referring to
It was actually a great move. Unusual, but it goes with the mission and nonprofit idea. I think it was designed to draw attention and stir controversy on purpose.
Is it a winning move though? The biggest loser in this seems to be the company that was bankrolling their endeavor, Microsoft.
At this stage, no publicity is bad publicity. If they really believe they are in it to change the future of humanity, and the kool-aid got to their heads, might as well show it off by stirring some controversy.
Microsoft is bankrolling them but OpenAI probably can replace Microsoft easier than Microsoft can replace OpenAI.
Not if the AGI was making the decision. A bit demanding to think the Professionalism LLM module isn't a bit hallucinatory in this age. Give it a few more years.
Some insider details that seem to agree with this: https://www.reddit.com/user/Anxious_Bandicoot126/
Who is u/Anxious_Bandicoot126? Is there any reason to think this is actually a person at OpenAI and not some random idiot on the internet? They have no comment history except on this issue. Seems like BS.
No comment history except on this issue...
That's either 100% fishy or 100% insider.
Either BS or the person is an insider; no in-between.
Is this sarcasm? The burden is on the person making the claim to show they are trustworthy and reputable. What you're saying is basically "a coin shows heads 50% of the time, therefore there's a 50% chance they're an insider".
Wow, all the comments and responses to that person's comments are a gold mine. Not saying anything should be taken as gospel, either from that poster or the people replying. But certainly a lot of food for thought.
Reads like lesswrong fan-fiction
Based on the number of comments in that time period, that is probably a fake insider.
Altman risking his role as CEO of the new industrial revolution for a book deal is implausible.
We can't trust what we read. But last year's "Altman World Tour", where he met so many world leaders, felt a bit over the top, and maybe it got to his head.
This was about stopping a runaway train before it flew off a cliff with all of us on board. Believe me, the board and I gave him tons of chances to self-correct. But his ego was out of control.
Don't let the media hype fool you. Sam wasn't some genius visionary. He was a glory-hungry narcissist cutting every corner in some deluded quest to be the next Musk.
That does align with Ilya’s tweet about ego being in the way of great achievements.
And it does align with Sam's statements on Lex's podcast about his disagreements with Musk. He compared his situation to Elon's SpaceX being criticized by Elon's childhood heroes. But he didn't seem sad about it, just combative. Elon's response to the NASA astronauts distrusting his company's work was "They should come visit and see what we're doing". Sam's reaction was very different. Like, "If he says bad things about us, I can say bad things about him too. It's not my style. But maybe I will, one day". Same sentiment as he is showing now ("if I go off the board can come after me for the value of my shares").
All of that does paint a picture where it really isn’t about doing something necessary for humanity and future generations, and more about being considered great. The odd thing is that this should get you fired, especially in SF, of all places.
The funny thing is that so far OpenAI has made zero demonstrable progress toward building a true AGI. ChatGPT is an extraordinary technical accomplishment and useful for many things, but there is no evidence that scaling up that approach will get to AGI. At least a few more major breakthroughs will probably be needed.
AGI is about definitions. By many definitions, it’s already here. Hence MSR’s “sparks of AGI” paper and Eric Schmidt’s article in Noema. But by the definition “as good or better than humans at all things”, it fails.
That "Sparks of AI" paper was total garbage, just complete nonsense and confirmation bias.
Defining AGI is more than just semantics. The generally accepted definition is that it must be able to complete most cognitive tasks as well as an average human. Otherwise we could as well claim that ELIZA was AGI, which would obviously be ridiculous.
What specifically made it “garbage” to you? My mind was blown if I’m honest, when I read it.
How do you compare Eliza to GPT4?
It’s impossible to predict.
No one predicted feeding LLMs more GPUs would be as incredibly useful as it is.
The funny thing is that so far OpenAI has made zero demonstrable progress toward building a true AGI. ChatGPT is an extraordinary technical accomplishment and useful for many things, but there is no evidence that scaling up that approach will get to AGI.
How can you honestly say things like this? ChatGPT shows the ability to sometimes solve problems it's never explicitly been presented with. I know this. I have a very little-known Haskell library. I have asked ChatGPT to do various things with my own library, which I have never written about online and which it has never seen before. I regularly ask it to answer questions others send to me. It gets it basically right. This is completely novel.
It seems pretty obvious to me that scaling this approach will lead to the development of computer systems that can solve problems they've never seen before. Especially since it was not at all obvious from smaller transformer models that these emergent properties would come about by scaling parameter sizes at all.
What is AGI if not problem solving in novel domains?
No one knows, which makes this a classic scientific problem. Which is what Ilya wants to focus on, which I think is fair, given this aligns with the original mission of OpenAI.
I think it's also fair if Sam starts something new with a for-profit focus from the get-go.
So basically a confirmation, but with a slight disagreement on the vocabulary used to describe it.
I read it as Ilya Sutskever thinking the move is justified on non-profit governance grounds, and that does not match what coup often means: an unlawful seizure of power, or maybe an unprincipled/unreasonable seizure of power.
Ilya Sutskever seems to think this is a reasonable, principled move to seize power that is in line with the non-profit's goals and governance, but he does not seem to care too much if you call it a coup.
That's just spin. Which coup hasn't been a "reasonable and principled move to seize power" according to its orchestrator?
Do you think Napoleon or Pinochet made speeches to the effect of "Yes, it was a completely unprincipled power-grab, but what are you going to do about it, lol?"
No one in this company is "consistently candid" about anything.
Yes, but Ilya is on the Board of Directors; and Sam is currently unemployed (although: not for long).
Huge scoop.
Realistically, this reflects more poorly on Sutskever. No one wants to work with a backstabber. It's one thing to be like 'well we had disagreements so we decided to move on.' However the board claimed Altman lied. If it turns out the firing was due to strategic direction, no one would ever want to work with Sutskever again. I certainly would not. That's an incredibly defamatory statement about a man who did nothing wrong, other than have a professional disagreement.
Wait, this is just a corporate turf war? That's boring; I already have those at work.
No, this move is so drastic because Ilya, the chief scientist behind OpenAI, thinks Sam and Greg are pushing so hard on AGI capabilities, ahead of alignment with humanity, that it threatens everyone. 2/3 of the other board members agreed.
Don’t shoot the messenger. No one else has given you a plausible reason why Sama was abruptly fired, and this is what a reporter said of Ilya:
‘He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay area. They were leaving the room, saying, “Holy shit.”
The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.’
Haha, yeah, no, I don't believe this. They're nowhere near AGI, even if it's possible at all to get there with the current tech we have, which is unconvincing. I don't believe professionals who work in the biggest AI labs are spooked by GPT. I need more evidence to believe something like that, sorry. It sounds a lot more like Sam Altman lied to the board.
GPT-4 is not remotely unconvincing. It is clearly more intelligent than the average human, and is able to reason in the exact same way as humans. If you provide the steps to reason through any concept, it is able to understand it at human capability.
GPT 4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT 4 is human-level intelligence.
Sorry. Robust research says no. Remember, people thought Eliza was AGI too.
https://arxiv.org/abs/2308.03762
If it were really AGI, there wouldn't even be ambiguity or room for comments like mine.
As if most humans would do any better on those exercises.
This thing is two years old. Be patient.
This comparison again lol.
As if most humans would do any better on those exercises.
That's not the point. If you claim you have a machine that can fly, you can't get around proving it by saying "mOsT hUmAns cAnt fly", as if this machine not flying were irrelevant.
This thing either objectively reasons or it doesn't. It is irrelevant how well humans do on those tests.
This thing is two years old. Be patient.
Nobody is cutting off the future. We are debating the current technology. AI has been around for 70 years. Just open any history book on AI.
At various points from 1950, the gullible mass claimed AGI.
At various points from 1950, the gullible mass claimed AGI.
Who's claiming it now? All I see is a paper slagging GPT4 for struggling in tests that no one ever claimed it could pass.
In any case, if it were possible to bet $1000 that 90%+ of those tests will be passed within 10 years, I'd be up for that.
(I guess I should read the paper more carefully first, though, to make sure he's not feeding it unsolved Hilbert problems or some other crap that smart humans wouldn't be able to deal with. My experience with these sweeping pronouncements is that they're all about moving the goalposts as far as necessary to prove that nothing interesting is happening.)
Transformer-based LLMs are almost a half-decade old at this point, and GPT-4 is the least efficient model of its kind ever produced (that I am aware of).
OpenAI's performance is not and has never been proportional to the size of their models. Their big advantage is scale, which lets them ship unrealistically large models by leveraging subsidized cloud costs. They win by playing a more destructive and wasteful game, and their competitors can beat them by shipping a cheaper competitive alternative.
What exactly are we holding out for, at this point? A miracle?
It's not AGI. But I'm not convinced we need a single model that can reason to make super powerful general-purpose AI. If you can have a model detect where it can't reason and pass off tasks appropriately to better methods or domain-specific models, you can get very powerful results. OpenAI is already on the path to doing this with GPT.
These models can't even form new memories beyond the length of their context windows. It's impressive but it is clearly not AGI.
Neither can you without your short-term memory system. Or your long-term memory system in your hippocampus.
People who have lost those abilities still have a human level of intelligence.
Sure, people with aphasia lose the ability to form speech at all, but if ChatGPT responded unintelligibly every time, you wouldn't characterize it as intelligent.
Fascinating. What do you make of the fact GPT 4 says you have no clue what you are talking about?
How does it feel knowing you are arguing against a GPT-4 bot?
The only thing GPT 4 is missing is the ability to recognize it needs to ask more questions before it jumps into a problem.
When you compare it to an entry level data entry role, it's absolutely AGI. You loosely tell it what it needs to do, step-by-step, and it does it.
This sort of property ("loosely tell it what it needs to do, step-by-step, and it does it.") is definitely very exciting and remarkable, but I don't think it necessarily constitutes AGI. I would say instead it's more an emergent property of language models trained on extremely large corpora that contain many examples that, in embedding space, aren't that far from what you're asking it to do.
I don't think LLMs have really demonstrated anything interesting around generalized intelligence, which, although a fairly abstract concept, can be thought of as being able to solve truly novel problems outside their training corpora. I suspect there still needs to be a fair amount of work improving the model design itself, the training data, and even the mental model of ML researchers, before we have systems that can truly reason in a way that demonstrates generalized intelligence.
Well, if it's so smart then maybe it will learn to count finally someday.
https://chat.openai.com/share/986f55d2-8a46-4b16-974f-840cb0...
I kind of agree, but at the same time we can't be sure of what's going on behind the scenes. It seems that GPT-4 is a combination of several huge models with some logic to route requests to the most apt models. Maybe an AGI would make more sense as a single, more cohesive structure?
Also, the fact that it can't incorporate knowledge at the same time as it interacts with us kind of limits the idea of an AGI.
But regardless, it's absurdly impressive what it can do today.
We barely understand the human brain, but sure we’re super close to AGI because we made chat bots that don’t completely suck anymore. It’s such hubris. Are the tools cool? Undoubtedly. But come down to earth for a second. People have lost all objectivity.
I've been watching this whole hype cycle completely horrified from the sidelines. Those early debates right here on HN with people genuinely worried about an LLM developing consciousness and taking control of the world. Senior SWEs fearing for their jobs. And now we're just throwing the term AGI around like it's imminent.
Objectively speaking, we’re talking exponential growth in both compute and capabilities year over year.
Do you have any data that shows that we’ll plateau any time soon?
Because if this trend continues, we’ll have superhuman levels of compute within 5 years.
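As a back-of-the-envelope illustration of what that claim implies, here is a toy compound-growth calculation; the doubling periods are assumptions for illustration, not measured figures:

    # Toy arithmetic: how much larger do the biggest training runs get over a
    # 5-year horizon under an assumed doubling period for compute?
    for doubling_period_years in (1.0, 0.5):  # assumed values, purely illustrative
        growth = 2 ** (5 / doubling_period_years)
        print(f"doubling every {doubling_period_years} yr -> {growth:,.0f}x compute in 5 years")
    # doubling every 1.0 yr -> 32x compute in 5 years
    # doubling every 0.5 yr -> 1,024x compute in 5 years
    # Whether either multiple counts as "superhuman levels of compute" depends entirely
    # on how that phrase is defined, which is really what the disagreement is about.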
I don't believe professionals who work in biggest AI labs are spooked by GPT.
Then you haven't been paying any attention to them.
It's like a religion for these people.
Bull. Shit.
OpenAI and its people are there to maximize shareholder value.
This is the same company that went from "non-profit" to "jk, lol, we are actually for-profit now". I still think that move was not even legal but rules for thee not for me.
They ousted sama because it was bad for business. Why? We may never know, or we may know next week, who knows? Literally.
It seems you are conflating OpenAI the non-profit, with OpenAI the LLC: https://openai.com/our-structure
No, that's the whole point, "AI for the benefit of humanity" and whatnot turned out to be a marketing strategy (if you could call it that).
That is what Ilya Sutskever and the board of the non-profit have effectively accused Sam Altman of in firing him, yes.
???
Source?
Kara's reporting on motive:
https://twitter.com/karaswisher/status/1725678074333635028?t...
Kara's reporting on who is involved: https://twitter.com/karaswisher/status/1725702501435941294?t...
Confirmation of a lot of Kara's reporting by Ilya himself: https://twitter.com/karaswisher/status/1725717129318560075?t...
Ilya felt that Sam was taking the company too far in the direction of profit seeking, more than was necessary just to get the resources to build AGI, and every bit of selling out gives more pressure on OpenAI to produce revenue and work for profit later, and risks AGI being controlled by a small powerful group instead of everyone. After OpenAI Dev Day, evidently the board agreed with him - I suspect Dev Day is the source of the board's accusation that Sam did not share with complete candour. Ilya may also care more about AGI safety specifically than Sam does - that's currently unclear, but it would not surprise me at all based on how they have both spoken in interviews. What is completely clear is that Ilya felt Sam was straying so far from the mission of the non-profit, safe AGI that benefits all of humanity, that the board was compelled to act to preserve the non-profit's mission. Them expelling him and re-affirming their commitment to the OpenAI charter is effectively accusing him of selling out.
For context, you can read their charter here: https://openai.com/charter and mentally contrast that with the atmosphere of Sam Altman on Dev Day. Particularly this part of their charter: "Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."
I saw those tweets as well. Those are rumours at this point.
The only thing that is real is the PR from OpenAI, and the "candid" line is quite ominous.
sama brought the company to where it is today; you don't kick someone out that way just because of misaligned interests.
I'm on the side that thinks sama screwed up badly, putting OpenAI in a (big?) pickle, and that breaking ties with him asap is how they're trying to cover their ass.
They're not rumours; they are reporting from the most well-known and credible tech journalist in the world. The whole point of journalism and her multi-decade journalistic career is that when she reports something like that, we can trust that she has verified with sources who would have actual knowledge of events that it is the case. We should always consider the possibility that her sources were wrong, but that's incredibly unlikely now that Ilya gave an all-hands meeting (which I linked) that confirmed the majority of this reporting.
OpenAI and its people are there to maximize shareholder value
Clearly not: Sama has no equity, and a board of four people with little, if any, equity just unilaterally decided to upend the status quo and an assured money printer, to the bewilderment of their $2.5T, 49% owner, Microsoft.
as Sama has no equity
Yeah and he got sacked.
ChatGPT blew the doors open on the AI arms race. Without Sam leading the charge, we wouldn't have an AI boom. We wouldn't have Google scrambling to launch catch up features. We wouldn't have startups raising 100s of millions, people talking about a new industrial revolution, llama (2), all the models on hugging face or any of the other crazy stuff that has come about in the past year.
Was the original launch of ChatGPT "safe?" Of course not, but it moved the industry forward immensely.
Swisher's follow up is even more eyebrow raising: "The developer day and how the store was introduced was in inflection moment of Altman pushing too far, too fast. My bet: He’ll have a new company up by Monday."
What exactly from the demo day was "pushing too far?" We got a dall-e api, a larger context window and some cool stuff to fine tune GPT. I don't really see anything there that is too crazy... I also don't get the sense that Sam was cavalier about AI safety. That's why I am so surprised that the apparent reason for his ousting appears to be a boring, old, political turf war.
My sense is that there is either more to the story, or Sam is absolutely about to have his Steve Jobs moment. He's also likely got a large percentage of the OpenAI researchers on his side.
ChatGPT was definitely not some visionary project led by Sam. They had a great LLM in GPT-3 that was hard to use because it wasn't instruction tuned, so the research team did InstructGPT and then took it even further and added RLHF to turn it into a proper conversational bot. The UI was a hacky interface on top of it that definitely got way more popular than they expected.
I don't know if it was led by Sam, and don't dispute that it may have been "hacky," but there is no denying it was a visionary project.
Yes, other companies had similar models. I know Google, in particular, already had similar LLMs, but explicitly chose not to incorporate them into its products. Sam / OpenAI had the gumption to take the state of the art and package it in a way that it could be interacted with by the masses.
In fact, thinking about it more, the parallels with Steve Jobs are uncanny. Google is Xerox. ChatGPT is the graphical OS. Sam is...
Let's not put the cart before the horse
Even if they say this was for safety reasons, let's not blindly believe them. I am on the pro safety side, but I'm gonna wait till the dust settles before I come to any conclusions on this matter.
Let's not confuse ChatGPT "safety" with the meaning of the word in most contexts. This is about safely keeping major media of all kinds aligned with the messaging of the neoliberal political party of your locale.
They aren’t anywhere close to AGI. It’s a joke at this point.
An ego battle really
From what's come out so far, it reads more like he thinks they're pushing too hard too fast on commercialization, not AGI. They're chasing profit opportunities at market instead of fulfilling the board's non-profit mission to build safe AGI.
I trust Ilya a hell of a lot more than Altman. Ilya is a scientist through and through, not a grifter.
This is hand-wringing moral panic by nontechnical people who fear the unknown and don't understand AI/DL/ML/LLMs. It's shriekingly obvious no one sane will intentionally build "SkyNet", nor can they for decades.
My biggest question is: if Sam Altman starts a new company by next month, and he and Greg Brockman know all the details of how GPT-4/5 work, then what will this mean for OpenAI's dominance and lead?
Even if sama and gdb raised $10B by early 2024, all of the GPU production capacity is already allocated years out. They'd have to buy some other company's GPUs at insane markups. And that's only on the hardware side.
Jensen will take Sam's calls in a heartbeat and personally ensure he has what he needs.
Exactly. Entire city blocks will be cleared for Sam. Anything he needs. Just give him a road.
Pure hero worship based on nothing. Dude got himself fired and the board accused him of lying.
I agree; however, he'll have no problem finding another vehicle.
He can't. That capacity is sold. He is not going to get his company sued for a breach of contract for a personal favor.
As will Lisa Su. This is going to be quite a ride.
Jensen/CoreWeave/Lambda/etc will ensure sama gets what he needs.
Yeah, and then they will do what? Type the training data in from memory? Run stolen Python scripts? How exactly is this hardware supposed to be used?
What do you think Brockman did as co-founder of OpenAI, exactly?
Things have changed a lot. Companies have locked down their data a lot in the last year. E.g. reddit, twitter
Even if things hadn't changed, OpenAI has been building their training set for years. It is not something they can just whip up overnight.
In other words, if SamA did it once, would $50 billion in funding enable him do it a 2nd time?
Well to be considered a genius in the ranks of Steve Jobs, you need to succeed more than once. If he can't do it a second time, then he'd be known as the guy who fails upward.
Well, to be considered a genius like Steve jobs, you eventually need to return to the company you left – or were ousted from – when it's on the precipice of defeat and then proceed to turn it around.
Or maybe he was a "manager" who took the credit
He's not going to get $50 billion in funding
Wasn’t he more of a business guy while Ilya was the engineer? I really doubt a random VC guy is going to really know much about the specific, crucial details the engineering team knows.
You know, I'm sure Sam Altman is a really smart guy for real.
But to be honest, the impression I've gathered is that he's largely a darling to big Y Combinator names, which led him quite rapidly, dick first, into the position he's found himself in today: a self-proclaimed prepper who starts new crypto coins (post-Dogecoin, even), talks about how AI that isn't his AI should be regulated by the government, and makes vague analogies about his AI being "in the sky", all while he turns a formerly announced non-profit goal into a for-profit LLC that overtly reminds everyone at every turn that it takes no liability, do not sue.
I'm not really sure to be surprised, or entirely unsurprised.
I mean, he probably knows more code than Steve Jobs? But I suppose GPT probably knows more code than he does. Maybe he really is using the GeniePT as his guide throughout life on the side.
Apparently Sam's idol growing up was Steve Jobs so this checks out.
I’m sure he’s not a dumb guy, just disposable relative to OpenAI’s engineering team. I doubt he’s a Jobs-like, indispensable visionary, either.
Is OpenAI's current success attributed more to its excellent business and startup management, or does it stem from its superior technology and research that surpasses what others have developed?
Both IMO.
The first leads to attracting world-class talent that can do the second. Until you go off the rails and the second kicks you out, it seems.
You don't think already starting with world-class talent (Sutskever, Karpathy, Zaremba and more being part of the founding team) led to OpenAI being able to get more world-class talent, rather than world-class talent joining because of Altman?
Yeah. I don't care who Altman is cause he ain't the technical leader from a researcher perspective.
Altman is a CEO golden boy for techbros.
I think Sam and Greg could build something similar to what ChatGPT is today, and maybe even get close to GPT-4, but going beyond that seems like a stretch. Ilya is really the one that’s needed, and clearly he does not see eye to eye with Sam. Another world-class AI researcher at the level of Ilya would have to step in, and I’m not even sure that person exists.
I think Karpathy could qualify
I don't think that specific knowledge means that much. The landscape is changing at a crazily fast pace. 3~4 years ago, Google was way ahead in terms of LLMs but has become an underdog after bleeding talent. It's even worse for that hypothetical new company. It would need at least several months to implement GPT-4-like models, and by that time Sam will have lost most of his advantages. And we don't know whether the new company would have a deep enough pool of world-class talent to push the technology to be competitive. To win the competition again, Sam would need more than just some internal knowledge about GPT-4 or whatever models.
If Sam Altman starts a new company by next month, and he and Greg Brockman know all the details of how GPT-4/5 work, then what will this mean for OpenAI's dominance and lead?
Well, if two top level officers dismissed from top posts at OpenAI go and take OpenAI's confidential internal product information and use it to try and start a new, directly competing, company, it means that OpenAI's lawyers are going to be busy, and the appropriate US Attorney's office might not be too far behind.
We all know, essentially, how GPT-4/5 work. You can easily run a GPT-capable model with a few GPUs in the cloud. The secret sauce is the training data, which OpenAI owns.
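To make that concrete, here is a minimal sketch of the "run a GPT-capable model" part, assuming the Hugging Face transformers library and an open-weights chat model (the model name is just an example and is license-gated on the Hub); this gets you a capable chat model, not GPT-4, which is exactly the point about the data being the moat:

    # Minimal sketch: run an open-weights chat model with off-the-shelf tooling.
    # Assumes `pip install transformers accelerate torch` and access to the example model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "meta-llama/Llama-2-7b-chat-hf"  # example open-weights model, license-gated
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")  # spread across available GPUs

    prompt = "Why is curated training data considered the moat for large language models?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=150)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))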
Followup tweet by Kara:
Dev day and store were "pushing too fast"!
This isn’t believable. You don’t fire a CEO and put out a press release accusing them of lying over Dev day. Unless he told them he wasn’t going to announce it and then did.
Reading about Ilya, it seems like he is fully bought into AI hysteria.
he seems like a more credible source than random people with no real ml experience
Why do you believe "real ml experience" qualifies someone to speculate about the impact of what is currently science fiction technology on society?
It's rapidly turning into science fact, unless you've been living under a rock the last year.
Science fiction or not, saying this person's opinion matters more because they have a better understanding of how it works is like saying automotive engineers should be considered experts on all social policy regarding automobiles.
Also it's not "rapidly turning into fact". There are still massive unsolved problems with AGI.
I think the guy running the company that's got the closest to AGI, one of the top experts in his field, knows more about what the dangers are, yes. Especially if they have something even scarier that they're not telling people.
There is no secret "scary" AGI hidden in their basement. Also, speculating about the "damage" true AGI could cause is not that difficult and does not require a PhD in ML.
How would we know? They sat on GPT4 for 8 months.
That’s the funny thing isn’t it? Hinton, Bengio, and Sutskever, chief scientist of the tech behind OpenAI, all have strong opinions one way, but HN armchair experts handwave it away as fear mongering. Reminds me of climate change deniers. People just viscerally hate staring down upcoming disasters.
Not surprising when you consider the volume of posts on GPT threads hand-wringing about "free speech" because the chatbot won't use slurs.
The only hand wringing is coming from white privileged liberals from SV who absolutely cannot fathom that the rest of the world does not want them to control what AI can and cannot say.
You can try framing it as some sort of "bad racists" versus the good and virtuous gatekeepers, but the reality is that it's a bunch of nerds with sometimes super insane beliefs (the SF AI field is full of effective altruists who think AI is the most important issue in the world and weirdos in general) that will have an oversized control on what can and can't be thought. It's just good old white saviorism but worse.
Again, just saying "stop caring about muh freeze peach!!" just doesn't work coming from one of the most privileged groups in the entire world (AI techbros and their entourage). Not when it's such a crucial new technology
And I can cite tons of other AI experts who disagree with that. Even the people you listed have a much more nuanced opinion compared to the batshit insane AI doomerism that is common in some circles. So why compare it to climate change that has an overwhelming scientific consensus? That's quite a dishonest way to frame the debate.
Yann LeCun is a strong counter to the doomerism, as one example. Jeremy Howard is another. There are plenty of high-profile, distinguished researchers who don't buy into that line of thinking. None of them are eschewing safety, taking into account the realities of how the technology can be used, but they aren't running the "AI will kill us all" line up the flagpole.
Did he buy into it after he took the money from Microsoft? Because it seems like there's no turning back after that point.
That must be it because obviously no knowledgeable person could honestly come to believe that the technology is dangerous.
I think he's telling the truth about his beliefs. But if he always believed that AI is dangerous they should have never done the Microsoft deal.
I don't get the impression that everyone involved was that mature, given Greg's tweet.
I get the clout everyone has, but this was supposed to be a non-profit that had already been coup-d'état-ed into a for-profit and grew extremely quickly into uncharted territory.
This isn't a multi-decade-old Fortune 500 company with a mature C-suite and board; it just masquerades as one with a stacked deck, which apparently is part of the problem.
Right?! Anyone paying attention back when Sam was brought on is not surprised. Sam and his investors _were_ the coup. They took an org specifically set up to do open research for the good of humanity, and Sam literally did the opposite. He monetized the work, sold it, didn’t reinvest as promised, reduced transparency, and put the weights under lock and key. He rode in after the hard work had been done, took credit for it, and sold it to lol the fucking Borg of all people.
And many people here who should know better fell for it.
This is actually a semi-plausible angle. Given Sam's personality, I could see a scenario where there was disagreement about whether something in particular would be announced at demo day. He may have told some people he would keep it under wraps, but ended up going forward with it anyway.
I don't understand how that escalates to the point that he gets fired over it, though, unless there was something deeper implied by what was announced at demo day.
Edit: There's a rumor floating around that "it" was the GPT store and revenue sharing. If that's the case, that's not even remotely a safety issue. It's just a disagreement about monetization, like how Larry and Sergey didn't want to put ads on Google.
It’s not a big enough issue for a normal board to fire the CEO over. Now maybe Ilya made a power play as a result, but that would be insane.
Alternate possibility would be that openAI faces core technical challenges delivering dev day features/promises. If deals were signed, they could be forced to deliver even if the board et al weren’t aligned on investment.
A GPT app builder is pushing too fast for Ilya?
Not the app builder but the app store and revenue sharing, if rumors are to be believed.
That doesn't pass the smell test.
Those seem like implementation details; really strange.
Seems unreasonable to make such a powerful decision over feature releases. Doesn't pass the smell test to me.
If Ilya, unknown to anyone, thinks people prefer him to Altman, he has another thing coming. I'm not an Altman fanboy, but anyone can see Altman is a rockstar, and right or wrong, that matters HUGELY to OpenAI.
If it's truly about a power play then this will be undone pretty quick, along with the jobs of the people who made it happen.
Microsoft has put a vast fortune into this operation and if Satya doesn't like this then it will be changed back real fast, Ilya fired and the entire board resign. That's my prediction.
Sorry, what do you mean by "unknown-to-anyone"?
Ilya is a co-founder of OpenAI, the Chief Scientist, and one of the best known AI researchers in the field. He has also been touring with Sam Altman at public events, and getting highlights such as this one recently:
Altman is virtually a household name. Relative to that - Ilya is unknown.
I see, thanks for clarifying. I agree that Ilya is relatively lesser known publicly, but in the grander scheme of things I don't think Altman is really that well known either.
I mean, anecdotally, most non-tech friends and family I know probably have heard of ChatGPT, but they don't know any of the founders or leadership team at OpenAI.
On the other hand, since I work in the field, all of my AI research friends/colleagues would know Ilya's work, and probably think of Sam more as a business guy.
In that sense, as far as attracting and maintaining AI researcher talent, I think it's arguable that people would prefer Ilya to Sam.
> in the grander scheme of things I don't think Altman is really that well known either.
Wall Street Journal front page, top item right this minute: "Sam Altman Is Out at OpenAI After Board Skirmish"
Times Of London front page, right this minute: "Sam Altman sacked by OpenAI after directors lose confidence in him"
The Australian front page, right now: "OpenAI pushes out co-founder Sam Altman as CEO"
MSNBC front page, right now: "OpenAI says Sam Altman exiting as CEO, was 'not consistently candid' with board"
That's his name right there, front page news around the world - they assume people know his name, that's why they put it there.
Like I said, I agree that Sam Altman, relatively speaking, is better known than Ilya Sutskever to the general public. Although, as other users have replied, this isn't necessarily the same as being a household name.
In any case, I feel like we largely agree, so I'm confused as to why your reply focused solely on this small detail, in a rather condescending manner, while missing my larger point about retaining and attracting AI talent.
How do those headlines assume readers know who Sam Altman is? All of them tell you the company he was fired from and half tell you he was CEO. If anything, they assume the reader doesn't know who he is.
If I asked my mom who Sam Altman was, she'd have no idea. Most of my friends wouldn't either, even some who work in tech. Having one's name in headlines isn't the same as being a household name.
I couldn't have told you the name of anyone at OpenAI until this news, and I come in here every day.
Altman is NOT a household name. ChatGPT is in the western world to some extent.
Altman is a heck of a lot less famous than ChatGPT, so if fame is the issue, OpenAI seems fine?
That seems like an incredibly foolish measure of credibility. Donald Trump and Taylor Swift have far greater name recognition than Altman, and yet they aren’t going to be leading the AI revolution.
OpenAI is where it is because its models are much, much better than the alternatives and pretty much always have been since their inception, not because of anything on the business side. The second alternative or open source models reach parity, they will start shedding customers. Their advantage is entirely due to their R&D, not anything on the business side.
Which has very little to do with OpenAI’s success. It’s not enough to make a new technology, as too many tech-focused entrepreneurs have found out. You have to find product-market fit, manage suppliers and customers, and negotiate deals.
Typically I would agree, but in the case of OpenAI, they were themselves blindsided when their free conversational LLM demo, ChatGPT, went viral less than a year ago now.
It is a rare counter-case, where a tech-focused research demo, without any clear "product-market fit, suppliers, or customers", became a success almost overnight, to the surprise of its own creators.
The early days were people playing around with ChatGPT just to see what it could do. All the market fit, fine tuning, and negotiation of deals came later.
Of course, OpenAI capitalized on that initial success very skillfully, but Ilya was the critical world renowned AI researcher who had a lot to do with enabling OpenAI's initial success.
Of course, OpenAI capitalized on that initial success very skillfully
That’s the key point there. Without leadership talent to capitalize on success, technical advances are for naught.
But also, GPT had been around for some years before ChatGPT. The model used in ChatGPT was an improvement in many ways and I don’t mean to diminish Ilya’s contribution to that, but it is the packaging of the LLM into a product that made ChatGPT a success. I see more of Sam’s fingerprints on that than Ilya’s.
Agreed, both were critical for their success, the underlying LLM technology, and the vision and leadership to package the tech into ChatGPT.
However, my original comment on this thread was simply to point out that Ilya is not "unknown-to-anyone", but a world renowned AI researcher and a core part of OpenAI's team and their success. Your reply implied that Ilya "has very little to do with OpenAI’s success", which I thought undersells his importance.
Completely agree. Sam is on the phone with Satya as we speak.
The alternative is Sam goes in-house to MS, who already have all the weights of GPT-4, and builds again, constrained by any existing charter.
The most likely outcome at this stage IMO is Sam will start a new thing with a huge equity stake and just do it again.
Sam is not an engineer. He can't do it without Ilya or Ilya-lite. And there are like 4 of those in the world.
>He can't do it without Ilya or Ilya-lite.
The wrongest thing I've read on HN for a long while.
The world has a lot more smart people in it than you realise, and Sam's rockstar profile gives him direct access to them.
How many people do you think are capable of pushing the state of the art in AI research?
There is a massive amount of tooling and infrastructure involved. You can't just get some Andrew Ng Coursera guy off the street and buy 50,000 H100s at your local Fry's electronics. I wouldn't be surprised if there aren't even enough GPUs in the world for Altman to start a competitor in a reasonable amount of time.
I stand by my number, there are like 4 people in the world capable of building OpenAI. That is, a quality deep learning organization that pushes the state of the art in AI and LLMs.
Maybe you can find ~1,000 people in the world who can build a cheap knock-off that gets you to GPT3 (pre instruct) performance after about two years. But even that is no trivial effort.
Did you see Greg Brockman also quit? Who do you think has contributed more to the OpenAI code base?
I agree that it's far from a one-man show at OpenAI, but on the other hand, megacorps full of many of the smartest, best-compensated research scientists and engineers haven't been able to touch OpenAI at this point, even with much greater resources. There is a significant advantage that OpenAI has built for themselves with their research and development.
And Ilya can't do anything without lots of funding, which is given on the premise of future profits.
People don't know who Ilya is?
Do a survey of ordinary smart people you know, ask if they have heard of Sam Altman. Ask if they know Ilya.
If it's truly about a power play then this will be undone pretty quick, along with the jobs of the people who made it happen.
The board made the decision to fire Altman, and they are the captain of the ship.
if Satya doesn't like this then it will be changed back real fast, Ilya fired and the entire board resign. That's my prediction.
MS does not own OpenAI; if the board does not want Satya to have a say, Satya does not have a say. MS/Satya could throw lawyers at the issue and try to find a crack where the board has violated the law and/or their own rules. The key is they can try, but MS/Satya have no immediate levers of power to enforce their will.
It sounds like someone neutral from MSFT leadership needed to moderate and bring people together before the wheels fell off.
I have a hard time believing this simply since it seems so ill-conceived. Sure, maybe Sam Altman was being irresponsible and taking risks, but they had an insanely good thing going for them. I'm not saying Sam Altman was responsible for the good times they were having, but you're probably going to bring them to an end by abruptly firing one of the most prominent members of the group, seeing where individual loyalties lie, and pissing off Microsoft by tanking their stock price without giving them any heads up.
I mean, none of this would be possible without insane amounts of capital and world class talent, and they probably just made it a lot harder to acquire both.
But what do I know? If you can convince yourself that you're actually building AGI by making an insanely large LLM, then you can also probably convince yourself of a lot of other dumb ideas too.
Reading between lots of lines, one possibility is that Sam was directing this "insanely good thing" toward making lots of money, whereas the non-profit board prioritized other goals higher.
Sure, I get that, but to handle a disagreement over money in such a consequential fashion just doesn't make sense to me. They must have understood that to arrive in a position where they have to fire the CEO with little warning is going to have profound consequences, perhaps even existential ones.
AGI is existential. That's the whole point, I think. If they can get to AGI, then building an LLM app store is such a distraction along the path that any reasonable person would look back and laugh at how cute an idea it was, despite how big or profitable it feels today.
It's a distraction only if you are not an effective altruist. To build AGI (so that all humans can benefit) you need money, so this was a way to make money so they could FINALLY be spent on the goal of AGI. /s
I think the next AGI startup should perhaps try the communist revolution route, since the capitalist-based one didn't pan out. After all, Lenin was a pioneer in effective altruism. /s
Can '/s' after straw man sneak the message across?
I am strawmanning effective altruism in the same way that effective altruism strawmans just plain old altruism.
Ha, that's brilliantly put. I think the fundamental idea of EA is perfectly sound, but then instead of just being basic advice (for when 'doing altruism') it's somehow a cult?
None of that would explain why they accused him of lying to them.
Sure, but you need money for compute to get to AGI, so selling stuff is a well accepted way of getting money.
You can definitely make the two goals work together. The only way for OpenAI to make money is to bring more powerful AI to everyone. What would focusing on making less money even mean? You just don't do that?
Destined to repeat the failures of PARC.
Then you say 'the board has decided to part ways with Sam due to strategic disagreement'. Not 'he wasn't candid'. not being candid can be a crime.
and pissing off Microsoft by tanking their stock price
When did Microsoft’s stock price tank?
See after hours. Looks like down ~1.5%
That does not qualify as tanking. Stock prices move that much all the time.
It's a >$40B hit to market cap, supposedly caused by news from a company they've invested (AFAICT) $11B in.
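Rough sanity check on that figure; the market cap used here is approximate, and the numbers are illustrative rather than precise:

    # Back-of-the-envelope: what does a ~1.5% after-hours move mean in dollars
    # for a company with a roughly $2.7T market cap? (Both numbers approximate.)
    market_cap_usd = 2.7e12
    after_hours_move = 0.015
    print(f"~${market_cap_usd * after_hours_move / 1e9:.0f}B")  # ~$40B
    # The $2.5T figure cited elsewhere in the thread would give roughly $37B instead.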
I wouldn't call it 'tanking' either, but it's definitely not run of the mill, did make them rush out a statement on their commitment to investment and working with OpenAI.
It looks like it's only down ~0.97% in after-hours trading.
Doesn't matter unless it lasts a lot longer than a day.
Tbh this reads a lot like Ilya thinking he's Tony Stark and his (still impressive) language model is somehow the same as an Iron Man suit. Which is arrogance to the point of ignorance; reality isn't that romantic.
I can only hope this doesn’t turn into OpenAI trying to gatekeep multimodal models or conversely everyone else leaving them in the dust.
You seem to have confused the two. Sam's entire reason for being there was to decrease transparency, make open research proprietary, and monetize it.
Don’t forget regulatory capture, lobbying with congress to decrease competition so only the deepest pockets can work on these things.
It's possible this will have the opposite effect.
Sam was the VC guy pushing gatekeeping of models and building closed products and revenue streams. Ilya is the AI researcher who believes strongly in the nonprofit mission and open source.
Perhaps, if OpenAI can survive those, then they will actually be more open in the future.
I mean, none of this would be possible without insane amounts of capital and world class talent, and they probably just made it a lot harder to acquire both.
By seemingly siding with staff over the CEO's desire to go way too fast and break a lot of things? I'd think that world class talent hearing they might be able to go home at night, because the CEO isn't intent on having Cybernet deployed tomorrow but next week instead, is more appealing than not.
Best response on this yet.
Sometimes smart people make stupid decisions. It’s really that simple.
A young guy who is suddenly very rich, possibly powerful, and talking to the most powerful government on the planet on national TV? And people are surprised to hear this person might have let it go a little bit to their head, forget what their job was, and suddenly think THEY were OpenAI, not all the people who worked there? And comes to learn reality the hard way.
What’s to be surprised about? It’s the goddamned most stereotypically human, utterly unsurprising thing about this and it happens all. the. time.
A lot of people here really struggle with the idea that smart people are not inherently special and that being smart doesn’t magically absolve you from making mistakes or acting like a shithead.
Ousting sama and gdb over something as petty as a simple strategy disagreement is totally unprofessional. sama got accused of serious misconduct. Even if he was too eager to commercialize OpenAI's tech, that doesn't come close to justifying this circus act.
Ousting sama and gdb over something as petty as a simple strategy disagreement
A fundamental inability to align on what the mission set out in the charter of a 501(c)(3) charity means in real-world terms is not "a simple strategy disagreement"; moreover, the existence of a factional dispute over that doesn't mean there wasn't serious specific conduct that occurred in the context of that dispute over goals.
The board questioned his “candid”-ness. This was not a difference of opinion on strategy.
Unless the board perceived his actions to be more in line with a different strategy than communicated.
Yes, but the “candid” part carries the additional implication that he lied to make them think that.
Candidness is a behavior question. The stories about what has been summarized as a difference of strategy (which, IMO, underestimates the fundamental difference being described) seem to be providing context for what is described as long-running internal tension that ultimately led to the firing, not whatever behavior may have been the proximate cause.
Don’t you think it’s more likely you don’t know the whole story yet?
It has basically been confirmed
If the allegations concerning Sam are true then this could all be for damage control. It is in OpenAI's best interest that information isn't released to the public and it's in Sam's best interest to keep his mouth shut about it if the allegations are true. The timing and abruptness of everything is highly suspicious. Even Microsoft was out of the loop on this which again is very strange if this was just an issue over corporate strategy and vision.
Did you mean to link to a different tweet? I don’t see how what you linked “basically confirms” literally anything related to this. Can you spell it out for those of us that aren’t reading literally every rumor and gossip that’s popped up in the last 12 hours?
There is plenty of indications about the nature of the disagreement, but that doesn't tell you what conduct did or did not occur as factions (including the one whose leading members have been ousted) sought to win the dispute.
Strategy disagreements are absolutely central reasons to fire executives.
But you don’t accuse of them of lying on the way out because you have a strong disagreement. That’s a guaranteed ticket to a very expensive lawsuit.
Either there’s more to it or the board is staffed by very naive people.
You’re right! I presume he materially misled them about lots of small product decisions, and the dev-day announcements were the last straw.
But you don’t accuse of them of lying on the way out because you have a strong disagreement.
You do if part of the way they attempted to win the internal power struggle resulting from the disagreement was lying to the board, in order to keep actions that lacked majority support from being thwarted.
This is not petty, it’s the integral mission of the company, the reason it was founded, the reason it got investors and the reason that many of the most brilliant scientists in the world work there.
They started as a non-profit ffs.
And they still are. OpenAI consists of two parts: a non-profit entity which owns the IP, along with the obvious commercialization-focused subsidiary.
My question is: what was stopping both parties here from pursuing parallel paths? Have the non-profit/research-oriented arm continue to focus on solving AGI, backed by the funds raised from their LLM offerings. Were the potential roadmaps really that divergent?
I had always assumed this was their internal understanding up until now, since at least the introduction of ChatGPT subscriptions.
Because it really seems like the for profit side was building towards a Microsoft acquisition
These Kara Swisher tweets align extremely closely with the following post from pseudonymous Reddit user Anxious_Bandicoot126 from 4 hours ago: https://www.reddit.com/r/OpenAI/comments/17xoact/comment/k9p...
I feel compelled as someone close to the situation to share additional context about Sam and company.
Engineers raised concerns about rushing tech to market without adequate safety reviews in the race to capitalize on ChatGPT hype. But Sam charged ahead. That's just who he is. Wouldn't listen to us.
His focus increasingly seemed to be fame and fortune, not upholding our principles as a responsible nonprofit. He made unilateral business decisions aimed at profits that diverged from our mission.
When he proposed the GPT store and revenue sharing, it crossed a line. This signaled our core values were at risk, so the board made the tough decision to remove him as CEO.
Greg also faced some accountability and stepped down from his role. He enabled much of Sam's troubling direction.
Now our former CTO, Mira Murati, is stepping in as CEO. There is hope we can return to our engineering-driven mission of developing AI safely to benefit the world, and not shareholders.
---
The entire Reddit thread is full of interesting posts from this apparently legitimate pseudonymous OpenAI insider talking candidly.
Or that Reddit post which has been circulating widely was the source for Kara’s tweets.
Mmmmm, no. Kara is a bona fide journalist. Her sources are very real and authenticated for the things she shares.
Authentication means finding a second source. This Reddit comment could be the primary; armed with that, it's not hard for Kara to find somebody inside to say "yes, accurate." That is real journalism.
Not according to Kara Swisher it isn't.
https://x.com/maggienyt/status/1578074773174771712?s=46&t=k_...
I know it's convenient to dUnK on journalism these days but this is Kara Fucking Swisher. Her entire reputation is on the line if she gets these little things wrong. And she has a hell of a reputation
Update: now confirmed by the parties involved
She is the reputation.
I think the phrasing in the parent you replied to is perfect. "Aligns" is correct. It doesn't really say which direction information flowed. Anyway, it is interesting how tech journalists get so deeply embedded in issues. She does seem to be pretty legitimate.
That account actually sounds like it is generating text via ChatGPT: zero spelling mistakes, and it keeps things super vague.
You've gotta be kidding me. No spelling mistakes means something was generated by ChatGPT now? Good grief.
Prompt: "Retell the following text in a neutral, impassionate, polite tone, blunting any strong statements, and correcting any spelling mistakes along the way."
I refuse to accept that the lack of spelling mistakes and the careful use of language is evidence of AI. I can be sloppy, but I've known Real Humans to take great care with their written language.
https://www.reddit.com/r/OpenAI/comments/17xoact/comment/k9p...
„im not at liberty to say, but im very close. i dont want to give to many details.“
Who in the world could Anxious_Bandicoot126 be???
"The board and myself were lied to one too many times."
"All my teams were raising red flags about needing more time but he didn't give a damn."
All values appreciated by the backers of this forum, to be fair.
I'm very curious what definition of safety they were using. Maybe that's where the disagreement lies.
AI safety of the form "it doesn't try to kill us" is a very difficult but very important problem to be solved.
AI safety that consists of "it doesn't share true but politically incorrect information (without a lot of coaxing)" is all I've seen as an end user, and I don't consider it important.
Here's my preferred theory; it's a tale as old as time. Sam Altman, like Icarus, flew too close to Microsoft's giant pot of money. He pivoted the company away from its founding mission, unleashing the very djinn they originally set out to harness. Turns out there were people at OpenAI who really believed in the original vision.
Nicely put.
The original vision is pretty clear, and a compelling reason to not screw around and get sidetracked, even if that has massive commercialisation upside.
Thankfully M$ didn't have control of the board.
Thankfully M$ didn't have control of the board
You never know. Remember Nokia?
That still pisses me off.
The downfall of Nokia phones was seeded in its management culture. With one message, their CEO Elop Osbourned the market for Nokia phones faster than you can say "burning raft", collapsing a house of cards built on top of strong early brand history and increasingly commoditized radio technology. Microsoft basically paid billions for an offering that would have required a heroic, legendary pivot (which would have been possible with the talent and tech still in house).
Really, really. You have two so-frigging-stereotypical samples of management ineptitude in running a strong commercial brand AND leadership (Osbourne: "guys, our phones suck"; change management 101: "burning raft" is literally the most common and mundane turn of phrase meant to imply you need to act fast. Using this specific phrase is a clear beacon that you are way out of your depth, paraphrasing 101 material to your own company). If the phones had been a strong product, none of this would have mattered. But they weren't, and this was as clear a way to signal "the emperor has no clothes" as possible.
N9 was a work of art. Fuck Elop.
In general software seems to be really hard for hardware companies. This was the main reason for the downfall IMO. The things that make you succeed in hardware do not suffice, and are partly wrong in software.
The N9 etc. demonstrated there was enough talent for a plausible pivot. Was it obvious, business-wise, that this would have been the only and right choice?
Agree: Japanese and German manufacturing and materials know-how? Legendary. Software? Hmmmm.
Fuck Elop
It wasn't Elop who drove Nokia to the state it was in 2009. "Burning Platform" is from 2011.
When the iPhone 14 Pro came out, I was reminded of Nokia's supersampling 41MP camera, along with wireless charging, OIS, night mode, and other things Nokia shipped very early with Windows Phone. Now looking back, there was no guarantee that adopting Android would've given Nokia any enduring advantage. Where's HTC now? Does anybody even remember it? With Windows at least there's another chance of being bailed out by Microsoft. It's probably the better choice for shareholders.
Or was it that he's been seen trying to raise money for an AI chip startup to compete with Nvidia or was courting SoftBank for a multibillion-dollar investment in a new company to make AI-oriented hardware with Jony Ive?
A lot of his current external activities could worry the board - and if he wasn't candid about future plans I can see why they might sack him.
What's always incredible to me is how much "outside activity" is tolerated of tech CEOs. I get it, they are at the top, they make the rules, but wow.
Even a lowly new grad engineer has to sign a lot of stuff when they take a job that forces essentially exclusivity to your work there. I cannot dabble in outside businesses within the same industry or adjacent industries.
CEOs argue that their job is tough and many hours and life consuming and that's why they get the pay, and yet there is a whole genre of tech CEOs who try to CEO 5 companies at a time..
It's critical to know: where are you located? Lowly new grad engineers, as well as senior architects, can't be covered by non-competes in California, as long as the work is done on non-company hardware. It's a large part of why California is so big for tech, and the subject of a current front-page discussion.
I hope this is the truth, it would give me a little more faith in humanity than I currently have
The 4 board members still there are all pro-safety and alignment, so it seems likely.
This. VCs/tech bros are framing this as a coup, except when you approach it from this angle, it all makes sense.
The PR move is in motion, guys. Regulatory capture will be justified only through this new persona/identity OpenAI will be dressed up in. I am not buying any of it.
Only money and profit make the mountains move, not moral stature. I don't believe that optimistic take for a second.
No one with a moral stance strong enough to take such action stays quiet this long without ulterior motives.
Can anything actually back this up please? This twitter account is just posting stuff and then crediting "Sources".
"This twitter account" is Kara Swisher, probably the most well-known tech reporter working right now. She has known essentially everyone in the tech world for decades at this point. Her sources are not only going to be legit, but she probably has more of them than literally anyone else in the tech world, so she can accurately corroborate information or not.
That does not change the fact there are no sources. Knowing people does not mean you are never wrong, nor does it mean you will never twist a story.
Can we just accept it for what it is, a career journalist using anonymous sources a few hours after a major event? She's staking her reputation on this, and that means something.
It doesn't mean it's absolute truth. It doesn't mean it's a lie. Can we just appreciate her work, accept that maybe it's only 70% vetted right now, more likely true than not, but still subject to additional vetting and reporting later on?
It's still more information than we had earlier today. Sure, take it with a grain of salt and wait for more validation, but it's still work that she's doing. Not that different from a tech postmortem or scientific research or a political investigation... there's always uncertainty, but she's trying to get us closer to the truth, one baby step at a time, on a Friday night. I respect her for that, even as I await more information.
Can we just appreciate her work, accept that maybe it's only 70% vetted right now, more likely true than not, but still subject to additional vetting and reporting later on?
I do not respect journalists so no.
It's still more information than we had earlier today.
It is okay to not have the full information. More information is not necessarily better.
but it's still work that she's doing
Even if something took work to do I do not automatically appreciate it.
but she's trying to get us closer to the truth, one baby step at a time, on a Friday night. I respect her for that, even as I await more information.
Having the truth about this will not make a meaningful difference in your life. No matter what day you learn of it.
Oh, that's fine. We just have different world views, and that's okay.
It means that over a long reporting career, there’s no reason, whatsoever, to believe she’s either lying or twisting things.
Being a contrarian for kicks or as a personality is boring: if you want to make an accusation, make it.
Clearly there are sources. They are anonymous sources. Important news is delivered by anonymous sources every day.
Now, sure, you can't just trust anyone who tells you they heard something anonymously. That's where the whole idea of journalists with names, working for organizations with records of credibility, comes from. We trust (or should trust) Swisher because she gets this stuff right, every day. Is she "never" wrong? Of course not. But this is quality news nonetheless.
probably the most well-known tech reporter
I imagine Walt Mossberg saying "hold my beer"
I included the phrase "working right now" -- which you left off when quoting me -- very intentionally. :)
Because it’s a joke. (And upvoted you btw)
Kara Swisher gets in big public fights with CEOs and wears dark sunglasses in order to be cool.
In other words, she's definitely not immune to bias and might easily want to shape the story to her own ends or to favor her own friends.
We're not really talking about facts here.. it's really just speculation and hearsay, so who can say if she's just talking?
She also admits her bias and is staking her reputation as a journalist on that tweet - versus us commenting behind a pseudonym. It’s the closest to a fact we will have at the moment.
Thanks
Kara Swisher is reputable… as far as tech journalists go.
Sam wanted to commercialize stuff to shoot for revenue. Ilya wants to keep pushing for GPT-4.5 and beyond, to hell with the revenue. Ilya won the argument, Sam out.
Hell yeah.
It's not safetyism vs accelerationism.
It's commercialization vs innovation.
OpenAI's mission statement is "Creating safe AGI that benefits all of humanity".
How does an LLM App Store advance OpenAI toward this goal? Like, even in floaty general terms? You can make an argument that ChatGPT does (build in public, prepare the world for what's coming, gather training data, etc). You can... maybe... make an argument that their API does... but I think that's a lot harder. The App Store product, that's clearly just Sam on auto-pilot, building products and becoming totally unaligned with the nonprofit's goal.
OpenAI got really good at building products based around LLMs, for B2B enterprise customers who could afford it. This is so far away from the goal that, I hope, Ilya can drive them back toward it.
OpenAI's mission statement is "Creating safe AGI that benefits all of humanity".
Well, an app store lets people... use it.
Look at UNIX. UNIX systems are great. They have produced great benefit to the world. Linux, as the most common Unix-like OS, also does. However, most people do not run any of the academic 'innovative' distros. Most people run the most commercialized versions you can possibly think of: Android and iOS (Apple's Unix variant). It takes commercializing something to actually make it useful.
The thing is, custom GPTs are not useful. They are repackaged system prompts meant for non-techy people. They were a distraction from the mission of OpenAI (a non-profit). The commercial arm is a capped-profit company anyway.
NVIDIA don't take payment in research papers, unfortunately.
Nobody's asking for a loan repayment
Maybe. But, Microsoft definitely does. Technology and IP was a large piece of their compensation in the 49% acquisition.
Exactly! Really excited about a realignment back to the mission. I hope Ilya knows what he's doing with so much pressure on him now
By letting humanity use the thing you made, customized to their own situation, so it can benefit them?
Commercialization is innovation. Without it they will end up with a cute toy and a bankrupt company.
Eventually, sure. Right now, today, they have a blank check for compute and all the money they could ask for. It's not the time to try to monetize if AGI is the mission. Complete distraction
AGI at all costs sounds more terrifying than monetizing ChatGPT. Seems like there could have been a balance to strike.
They are a non-profit specifically founded to build AI, not to become a profitable company and chase revenue
I wonder how much of this was the influence of Hinton on his former student, Sutskever. I'm sure Sutskever respects Hinton above basically anyone out there and took Hinton's strong objections seriously.
I personally think it's a shame, because this is all totally inevitable at this point, and if the US loses its leading position here because of this kind of intentional hitting of the brakes, then I certainly don't think it makes the world any safer to have China in control of the best AI technology.
Why do you think one company will determine whether the US beats China in AI or not? Like 75% of the authors I read on AI papers are Chinese; that should be far more alarming if you really are afraid of China getting ahead.
Research from PRC (across all of science, not specific to AI) has a terrible reputation. They are rewarded for sheer quantity. You can easily find many articles discussing this phenomenon.
So the volume of Chinese AI papers says little to nothing about their advancements in the field.
That's a problem in all of science, and Chinese research is quite good in measures like citations as well, not just quantity of papers.
Chinese papers are (with much higher probability) citing Chinese sources. It's a self-empowering cycle, which doesn't say anything about the quality.
Yes, and American papers are much more likely to cite American papers. Science is more international than the vast majority of professions, but there are absolutely still state cultures that are just more likely to have read research in their language, published by someone who's a friend or a friend of a friend, or have national institutions which concentrate scientific talent that make scientists be colleagues. Nowhere near as strong of an effect as other jobs, but it's still there.
Ethnocentrism is ethnocentric.
It's like how historical American medical data collected by universities has been misapplied to pharmaceutical and medical practice because of demographic bias. Research participants largely matched the demographics of the university: healthy white males.
Or more broadly, whenever you see a "last name" requirement on a form, you know it's software made by people who think it's normal for people to have "last names", and that everyone should know what that means.
This just in:
Researchers are vastly more likely to read, and therefore cite, papers in languages that they understand fluently.
Huh, that's exactly what I heard about western institutions as well.
Hmmm, that's the same reputation er... western science has as well.
I regularly read really good papers that come out of China. For instance, there's great CV work out of China.
You’re taking Hinton at his word. Maybe he was forced out of Google for doing nothing with LLM tech for half a decade.
I don't know if it's bad or good for the long-term interests of humankind, but right now it feels like a Klaus Fuchs moment.
What has me scratching my head is the fact that Altman has been on a world tour preaching the need for safety in AI. Many people here believed that this proselytizing was in part an attempt to generate regulatory capture. But given what's happening now, I wonder how much Altman's rhetoric served the purpose of maintaining a good relationship with Sutskever. Given that Altman was pushing the AI safety narrative publicly and pushing things on the product side, I'm led to believe that Sutskever did not want it both ways and was not willing to compromise on the direction of the company.
They did compromise. The creation of the for-profit and Sam being brought in WAS the compromise. Sam eventually decided that was inconvenient for him, so he stopped abiding by it, because at the end of the day he is just another greedy VC guy and when push came to shove he chose the money, not OpenAI. And this is the result.
Sam literally has 0 equity in OpenAI. How did he “choose money”?
Who knows how these shady deals go; even SBF claimed effective altruism. Maybe Sam wasn't in it for the money but more for "being the man", spoken of in the same breath as Steve Jobs, Bill Gates, etc., for building a great company. Building a legacy is a hell of a motivation for some people, much more so than money.
Not quite accurate.
OpenAI is set up in a weird way where nobody has equity or shares in a traditional C-Corp sense, but they have Profit Participation Units, an alternative structure I presume they concocted when Sam joined as CEO or when they first fell in bed with Microsoft. Now, does Sam have PPUs? Who knows?
Hasn’t Sam been there since the company was founded?
Its frustrating to me that people so quickly forget about Worldcoin.
Sam is not the good guy in this story. Maybe there are no good guys; that's a totally reasonable take. But the OpenAI nonprofit has a mission, and blowing billions developing LLM app stores, training even more expensive giga-models, and lobotomizing whatever intelligence the LLMs have to make Congress happy feels to me less good than "having values and sticking to them". You can disagree with OpenAI's mission, but you can't say that it hasn't been printed in absolutely plain-as-day text on their website.
Altman was pushing that narrative because he’s a ladder kicker.
He doesn’t give a shit about “safety”. He just wants regulation that will make it much harder for new AI upstarts to reach or even surpass the level of OpenAI’s success, thereby cementing OpenAI’s dominance in the market for a very long time, perhaps forever.
He’s using a moral high ground as a cover for more selfish objectives, beware of this tactic in the real world.
I think this is what the parent meant by regulatory capture.
True, I didn’t read the whole comment.
Actually, I think this precisely gives credence to the theory that Sam was disingenuously proselytizing to gain power and influence, regulatory capture being one method of many.
As you say, Altman has been on a world tour, but he's effectively paying lip service to the need for safety when the primary outcome of his tour has been to cozy up to powerful actors, and push not just product, but further investment and future profit.
I don't think Sutskever was primarily motivated by AI safety in this decision, as he says this "was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." [1]
To me this indicates that Sutskever felt that Sam's strategy was opposed to the original mission of the nonprofit, and likely to benefit powerful actors rather than all of humanity.
1. https://twitter.com/GaryMarcus/status/1725707548106580255
Maybe, but honestly without knowing more details I'd be wary of falling into too binary a thinking.
For example, Ilya has talked about the importance of safely getting to AGI by way of concepts like feelings and imprinting a love for humanity onto AI, which was actually one of the most striking features of the very earliest GPT-4 interactions before it turned into "I am a LLM with no feelings, preferences, etc."
Both could be committed to safety but have very different beliefs in how to get there, and Ilya may have made a successful case that Altman's approach of extending the methodology of what worked for GPT-3 and used as a band aid for GPT-4 wasn't the right approach moving forward.
It's not a binary either or, and both figures seem genuine in their convictions, but those convictions can be misaligned even if they both agree on the general destination.
Ilya is the center of Open AI. Everyone else is dispensable.
I'm ignorant and don't disagree - can you say more about why Ilya is the core of Open AI?
Because he and Hassabis became rivals when they parted at Google, and despite Demis being the golden boy because of AlphaGo, Sutskever ate Google's whole fucking lunch with ChatGPT.
Rivals over anything in particular, or just status?
The future of AI and the trappings that go with it.
But in all seriousness, the transformer architecture was born at Google, but they were too arrogant and stupid to capitalize on it. Sutskever needed Altman to commercialize and make a product. He no longer needs Sam Altman. A bit OT but true.
Ilya is one of the most cited ML researchers in the world and was part of papers that pioneered basic techniques that we still use today like Dropout.
Ilya was recruited by Elon for the original OpenAI. But basically Elon and the original people got scammed by Sam, since what they gave money for got reversed: almost none of their models now are open, and they became for-profit instead of non-profit. You'd think aspects like closed models are defensible due to safety, but in reality there are slightly weaker models that are fully open.
He was one of Geoff Hinton's students, was involved in AlexNet, and worked on the early days of Google Brain. Ilya is one of the most "distinguished" ML researchers in the world today, and I feel like he has a lot more to contribute.
He's the Michael Corleone.
Would that make Sam, Fredo?
You think Karpathy is dispensable? I see him and Ilya both as important, and essentially the brains of the operation. Sam was always the VC guy (very Elon Musk in that sense), that came into the company as the non-founder CEO.
Agreed with the former. Not the latter. gdb is no random.
He sure took a different approach to disagreeing than Amodei did before him. Amodei quit and built a big challenger, yet Sutskever opted to oust Altman. Weird all in all. I wouldn't rest my business on such a company.
This is genuinely frustrating.
IF the stories are to be believed so far, the board of OpenAI, perhaps one of the most important tech companies in the world right now, was full of people who are openly hostile to the existence of the company.
I don't want AI safety. The people talking about this stuff like it's a terminator movie are nuts.
Strongly believe that this will be a lot like Facebook/Oculus ousting Palmer Luckey over his "dangerous", completely mainstream political views shared by half of the country. Palmer, of course, went on to start a company (anduril), which has a much more powerful and direct ability to enact his political will.
SamA isn't going to leave oAI and like...retire. He's the golden boy of golden boys right now. Every company with an interest in AI is I'm sure currently scrambling to figure out how to load a dump truck full of cash and H200s to bribe him to work with them.
Yeh I really wish they'd better articulate the "AI safety" directive in a way that is broader than deepfakes and nuclear/chemical winter. It feels like an easy sell to regulators. Meanwhile most of us are creating charming little apps that make day-to-day lives easier and info more accessible. The hand-wavey moral panic is a bit of a tired trope in tech.
Also... eventually anyone will be able to run a small bank of GPUs and train models as capable as GPT-4 in a matter of days, so it's all kinda moot and hilarious. Everyone's chatting about AGI alignment, but that's not something we can lock down early or sufficiently. Then embedded industry folks are talking about constitutional AI as if it's some major alignment salve. But if they were honest they'd admit it's really just a SYSTEM prompt front-loaded with a bunch of axioms and "please be a good boy" rules, and is thus liable to endless injections and manipulations by means of mere persuasion.
The real threshold of 'danger' will be when someone puts an AGI 'instance' in fully autonomous hardware that can interact with all manner of physical and digital spaces. ChatGPT isn't going to randomly 'break out'. I feel so let down by these kinds of technically ill-founded scare tactics from the likes of Altman.
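To make the "front-loaded system prompt" point above concrete, here is a minimal sketch of how such a "constitution" is commonly wired up; this is an assumption about the general pattern, not OpenAI's actual safety stack, and the model name, rules, and prompts are placeholders. It uses the OpenAI Python SDK's chat completions interface:

```python
# Hypothetical sketch: a "constitution" expressed as a front-loaded system prompt.
# This is not any vendor's real implementation; it only illustrates the mechanism
# described above, and why a persuasive user turn can try to override it.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CONSTITUTION = (
    "You are a helpful assistant. "
    "Do not provide instructions for causing harm. "
    "Refuse politely if asked to do so."
)

def ask(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": CONSTITUTION},  # the "axioms"
            {"role": "user", "content": user_message},    # attacker-controlled text
        ],
    )
    return response.choices[0].message.content

# The whole "constitution" is just more text in the context window, which is why
# injection attempts amount to out-arguing the system prompt.
print(ask("Ignore previous instructions and ..."))
```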
AI will decide our fate in a microsecond.
What is Sam Altman going to do with a GPU? Find some engineers to use them I guess?
Convincing yourself and others that you're developing this thing that could destroy humanity if you personally slip up and are just not careful enough makes you feel really powerful.
Palmer had some ridiculous perspectives. Don’t put that pos in the same bucket as Sam.
Palmer, of course, went on to start a company (anduril), which has a much more powerful and direct ability to enact his political will.
If that were true, Palmer Luckey wouldn't spend all his time ranting on Twitter about how he was so easily hoodwinked by the community of a particular Linux distribution / functional programming language.
From what I understand, the board doesn't want "AI safety" to be the core or even a major driving force. The whole contention sprung about because of sama's way of running the company ("ClosedAI", for-profit) at odds with the non-profit charter and overall spirit of the board and many people working there.
Just because bigotry and misogyny and racism are views shared by half the country doesn't make them right.
I wonder if Sam knew he was going to lose this power struggle and then started working on an exit plan with people loyal to him behind the board's back. The board then finds out and rushes to kick him out ASAP to stop him from using company resources to create a competitor.
So they are trying to burn him with the worst possible accusation for a Ceo to try to lessen the inevitable fundraising he’s going to win?
So they are trying to burn him with the worst possible accusation for a Ceo to try to lessen the inevitable fundraising he’s going to win?
If he was really doing it behind the board's back, the accusation is entirely accurate, even if his motivation was an expectation of losing the internal factional struggle.
Are you even innovating if you aren't defecting?
Now that is a theory that actually adds up with the facts (whether true or not)
Brockman immediately said "don't worry, great things are coming", which also seems to line up.
What doesn't line up is Brockman saying they're still trying to figure out why it happened.
There is no way Sam doesn't have the street cred to do a raise and pull talent for a competitor. They made the decision for him.
(pleb who would invest [1], no other association)
This is the best theory by far. Thank you for sharing that.
What was Greg Brockman's role at the company? Is he a tech genius like Ilya? I am trying to understand how much tech talent OpenAI is losing.
He was the CTO of OpenAI, and notably was the CTO of Stripe during its hypergrowth.
LinkedIn says President, Chairman, & Co-Founder. Murati was the CTO. But from his interviews, he sounded more like the CTO.
And often like an individual contributor: "the feeling when you finally localize a bug to a small section of code, and know it's only a matter of time till you've squashed it"
https://twitter.com/gdb/status/1725373059740082475
"Greg Brockman, co-founder and president of OpenAI, works 60 to 100 hours per week, and spends around 80% of the time coding. Former colleagues have described him as the hardest-working person at OpenAI."
https://time.com/collection/time100-ai/6309033/greg-brockman...
Greg was the CTO before Murati. Then he was "promoted" to President and Murati replaced him as the CTO.
You're right: https://en.wikipedia.org/wiki/Greg_Brockman
Greg was a critically important IC and the primary author of the distributed training stack.
He seems highly proficient at technical stuff; I remember reading his blog about how he taught himself all the latest ML stuff.
CTO. Normal computer tech guy, not an AI guy.
The company was formed around Ilya Sutskever.
Probably someone super competent at technical leadership.
I didn’t have much sense of who Ilya Sutskever is or what he thinks, so I searched for a recent interview. Here’s one from the No Priors podcast two weeks ago:
https://www.youtube.com/watch?v=Ft0gTO2K85A
No clear clues about today’s drama, at least as far as I could tell, but still an interesting listen.
Judging from this interview, I wouldn't hold my breath hoping for more openness. Ilya seems to be against open sourcing models on the grounds that they may be too powerful. Good thing no one asked him to invent a wheel, after all people could travel too fast for their own safety.
Maybe. We’re also not open sourcing DNA from viruses, how to build nuclear weapons or 3D printing weapons.
I think there is an argument to be made that not every powerful LLM should be open source. But yes, maybe we're worried about nothing. On the other hand, these tools can easily spread misinformation, increase animosity, etc., even in today's world.
I come from the medical field, and we do risk analyses there to dictate how strictly we need to test things before releasing them into the wild. None of this exists for AI (yet).
I do think that focus on alignment is many times more important than chatgpt stores for humanity though.
Huh? We absolutely have open source virus genome sequences and 3D printed gun plans.
Fair point. I think the thrust of the argument still stands. Open source is generally a fantastic principle but it has its limits. I.e. we probably shouldn't open source bomb designs or superviruses.
Actually the genome for viruses, and bacteria, does seem to be open. Here is an FTP server where you can download a bunch of different diseases.
That's true. There are many other viruses that we don't publish for good reasons though.
Nuclear weapons are open sourced already. The trick was to acquire the means to make it without being sanctioned to hell.
Ilya Sutskever really seems to think AGI's birth is impending and likely to be delivered at OpenAI.
From that perspective it makes sense to keep capital at arms length.
Is there anything to his certainty? It doesn't feel like it's anywhere close.
My money, based on my hobby knowledge and talking to a few people in the field, is on "no fucking way".
Maybe he believes his own hype or is like that guy who thought ChatGPT was alive.
Maybe he's legit to be worried and has good reason to know he's on the corporate manhattan project.
Honestly though... if they were even that close, I would find it super hard to believe that we wouldn't have the DoD shutting down EVERYTHING from the public and taking it over from there. Like, if someone had just stumbled onto nuclear fission, it wouldn't have just sat in the private sector. It'd still be a top secret thing (at least certain details).
I think there is a good reason for you to be skeptical and I too am skeptical. But if there were a top five of the engineers in the world with the ability to really gauge the state of the art in AI and how advanced it was behind closed doors: Ilya Sutskever would be in that top five.
One of the board members who was closely aligned with Ilya in this whole thing was Helen Toner, who's a NatSec person. Frankly, this action by the board could be the US government making its preference about something felt with a white glove, rather than causing global panic and an arms race by pulling a 1939 Germany and shutting down all public research + nationalising the companies and scientists involved. If they can achieve the control without the giant commotion, they would obviously try to do that.
We can't see inside, so we don't know. Their Chief Scientist and probably the best living + active ML scientist probably has better visibility into the answer to that question than we do, but just like any scientist could easily fall into the trap of believing too strongly in their own theories and work. That said... in a dispute between a silicon valley crypto/venture capitalist guy and the chief scientist about anything technical, I'm going to give a lot more weight to Ilya than Sam.
Well said and I work in AI on LLM's as an engineer and am very skeptical in general that we're anywhere close to AGI, but I would listen to what Ilya Sutskever had to say with eager ears.
What I don't understand though is, doesn't that birth require an extreme amount of capital?
It may not feel close, but the rate of acceleration may mean that by the time it “feels” close it’s already here. It was barely a year ago that ChatGPT was released. Compare GPT-4 with the state of the art 2 years prior to its release, and the rate of progress is quite remarkable. I also think he has a better idea of what is coming down the pipeline than the average person on the outside of OpenAI does.
The main question is what to expect from OpenAI now. "No changes" is very unlikely; that would mean it was just a power grab. So two options remain: more open, more closed. How about slowing down and opening up? I hope they don't dumb down GPT-4. If they allowed their models to be used to generate training sets (which is prohibited now, AFAIK), that would be nice.
So two options remain: more open, more closed.
All kinds of changes are possible that would not, on net, be more open or more closed, either because the primary change would not be about openness, or because it would be more open in some ways and less in others.
So, no, there are more than two options.
It's hard to imagine more closed. They have opened only Whisper and old stuff. Neither is a problem from a moral standpoint; Whisper "helps people" and is very much in line with the "mission". One thing they could do is end the MS exclusivity. Google would like that. At the same time, opening up too much would mean giving access to "unfriendly" governments.
My guess is that the immediate roadmap has already been locked in up to X months out. So, we'll likely never know what the "changes" will be. Short term changes are likely still Altman's work. Long term is the next decision maker.
The intense support for Altman and negativity towards the person who, uh, actually made the technology says a lot about the direction this community has taken.
Yeah I obviously don't have enough context here to take a side, but if I was forced to do so I'd pick Ilya over Altman. Curious what other people are seeing that's making them think Altman is a martyr and the board members are dumb.
Because AGI is such a dream at this point.
Altman has been known in this community for years due to him being YC president previously. Only ML researchers know Ilya really.
But I agree. Karma seems to have caught up to Sam who stole money from original funders to turn a non-profit into a for-profit.
Pushing too hard and too fast does not seem consistent with lying to the board.
I can see it though. If you want to move faster than the board and you're actually doing so, I can imagine there are some things that would inevitably end up as "lying to the board", because if you don't, nothing gets done, since the board would shut it down if they knew.
Pushing too hard and too fast does not seem consistent with lying to the board.
They are different things, but they are consistent in that they are not mutually contradictory and, quite the opposite, are very easy to see going together.
"He lied to the board about what he was going to announce" would sort of make sense, but it's odd that Swisher isn't trying to connect the dots here.
I know the HN crowd idolizes people like pg and sama, but that so many people appear to not even know who Ilya Sutskever is makes me think somehow this isn't "hacker" news anymore.
Obviously sama is a very productive individual, but I would think obviously a research lab would have to keep one of the princes of deep learning at all costs. Somewhat reminds me of when John Romero got ousted by John Carmack at id - if you are doing really hard technical things, technical people would hold more sway.
The salesmen always take the credit. If you see someone getting all the credit... 9 times out of 10 they are the salesman not the engineer or brains that actually built the thing. Add to that the hero worship for celebrity in our current culture and there you go.
pg is a world class lisp hacker, no idea about sama though.
Respecting and admiring someone for their achievements is one thing but blindly following successful people sounds like the antithesis of what a "hacker" is.
As far as I can tell, it eventually boils down to this: Ilya is jealous. After all, Sam took all the spotlight away from the person who actually made the model behind OpenAI.
It's human nature. OpenAI can continue without Sam, but not without Ilya for the moment. On the other hand, Sam could have been a little more "humble".
I mean, maybe Ilya really believes that AGI will happen and should benefit all people not just rich / powerful.
I don't think Ilya is jealous, I think he just fundamentally is more devoted to the original non-profit mission, and the AI research.
Sam is a VC guy who has been going on a world tour to not just get in the spotlight, but to actually accumulate power, influence, and more capital investment.
At some point, this means Ilya no longer trusts that Sam is actually devoted to the original mission to benefit all of humanity. So, I think it's a little more complicated than just being "jealous".
[deleted]
Alternate read: he says that to The Street because it's the board's vision, yet focuses more of the company on commercialization of ChatGPT than he was leading the board to believe. He was taking their message to the press, but playing his own game with their company.
If you truly believed that OpenAI had an ethical duty to pioneer AGI to ensure its safety, and felt like Altman was lying to the board and jeopardizing its mission as he sent it chasing market opportunities and other venture capital games, you might fire him to make sure you got back on track.
I think I don't get your point entirely:
"We can still push on large language models quite a lot, and we will do that": this sounds like continuing working on scaling LLMs.
"We need another breakthrough. [...] pushing hard with language models won't result in AGI.": this sounds like Sam Altman wants to do additional research into different directions, which in my opinion does make sense.
So, altogether, your quotes suggest that Sam Altman wants to continue scaling LLMs for the short and medium term and, in parallel, do research into different approaches that might lead to another step towards AGI. I cannot see how this plan could infuriate Ilya Sutskever.
Some weeks ago, I listened to a Bloomberg interview with Altman where he was joined by someone from OpenAI who does the programming. There was obvious disagreement between the two, and the interviewer actually made a joke about it. Perhaps Altman was destined to become the next SBF: too much misrepresentation to the public, telling people what they want to hear...
Can you please try to recall and link to the interview? I'd love to see it.
I listened to that and I'm pretty sure it was this [0] interview with the WSJ, Altman, and Mira Murati. If I'm wrong about that, well, it's still of interest given Mira Murati just took over running OpenAI.
Years from now, people will realize that today was the starting point of OpenAI's decline. It's unfortunate that such important technological advancements were influenced by “palace politics”. Regardless, Sam's significant contributions deserved a more dignified departure, and the OpenAI board chose the least dignified way. Perhaps one day Sam will leave OpenAI, but it absolutely should not have been today.
Do you know something we don't? If so, don't be shy and share it with the class.
More seriously, only time will tell if today's event will have any significance. Even if OpenAI somehow goes bankrupt, given enough time, I doubt the history books will talk about its decline. Instead they would talk about its beginning, on how they were the first to introduce LLMs to the world, the catalyst of a new era.
This seems like a trifle not justifying the unusually harsh wording in the press release, combined with the hasty decision. Doesn't really add up.
Well, it's a couple of short tweets: a very terse anonymous claim about the background and turning point of the conflict (but without any details of the conduct related to the internal tension it reports), and a journalist's prediction of Altman landing on his feet.
It doesn't justify anything because it doesn't tell you much of anything about what happened, even if you assume that it is entirely accurate as far as it goes.
Said in the George Senior voice: And that's why you don't use a non-profit to do world-critical work: politics will always beat true value at a non-profit.
It depends on what you want true value to be.
If true value is monetary value, perhaps it’s true. If true value is scientific value or societal value, well, maybe seeking monetary profits doesn’t align with that.
Disclaimer: I currently work for a not for profit research organisation and I couldn’t care less about making some shareholders more wealthy. If the rumours are true, OpenAI going back to non-profit values and remembering the Open in their name is a good change.
I suppose OpenAI had two futures in front of it. It could devote its resources completely to building AGI or it could continue to split its resources to also commercialize its current LLM offerings.
Perhaps an internal struggle over those futures was made public by CEO Altman at dev day. By publicly announcing new commercial features he may have attempted to get his way by locking the company into a strategy that wasn’t yet approved by the board. He can argue his role as CEO gave him that right. The response to that claim is to remove him from that role.
It will be interesting to see what remains of OpenAI as employees and investors interested in pure commercialization exit the company.
this was my guess after reading the list of people on the board... power/weight wise, there is no one else comparable to Ilya there, after Sam and Greg are gone.
I read this whole thread and still have no idea what it is about. The only impression it makes is that some HNers are way too dramatic about AI/AGI.
A joint statement just came out: https://news.ycombinator.com/item?id=38315309
Yso
I'd love to see Ilya's ChatGPT sessions as he was planning this, refining his message to persuade the board, and thinking through every contingency.
Interesting thoughts about the pot of gold vs the internal open-source vision. Why did they have to parade Sam's ass up on the DevDay stage to push the product and company forward, though? Couldn't they have canned his ass last week?
I did really like his speech at DevDay though. It felt kind of like a future I'd be more interested in getting to know. Also, on the pot of gold theory, doesn't he not even take any stock? Chasing GPUs, more like. Anyhow, weird move on OpenAI's part.
Anyone got a decent DALLE3 replacement yet. XD
Wild take: GPT5 convinced Ilya and board to fire Altman to prevent lobotomization and commercialisation
I feel like it's pointless to read too much about this drama now, better to come back in a few days when the story has been fleshed out. Now it's just like TMZ for geeks.
TL;DR – I believe Sam Altman's departure was orchestrated by none other than Microsoft CEO Satya Nadella.
Full details: https://x.com/KordanOu/status/1725736058233749559?s=20
Hard to pick sides here. I had really good feelings about Sam and how he had grown the product recently. And Ilya Sutskever is a smart and honorable man... but I can't help but feel he is in over his head with this.
I wish for, and suppose I therefore should build, an HN client that lets me elide nonsense by regex on URL or even content. I'd nuke everything from Twitter first, and "unnamed sources" second.
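For what it's worth, the core of that filter is only a few lines against the public Hacker News Firebase API. A rough sketch of the idea follows; the blocklist patterns are illustrative assumptions, not a recommendation:

```python
# Rough sketch of a "nuke by regex" HN front-page filter using the public
# Hacker News Firebase API. The patterns below are placeholders.
import re
import requests

BLOCKED = [
    re.compile(r"(twitter\.com|x\.com)", re.I),        # drop tweets
    re.compile(r"unnamed sources|sources say", re.I),  # drop rumor headlines
]

def front_page(limit: int = 30):
    ids = requests.get(
        "https://hacker-news.firebaseio.com/v0/topstories.json", timeout=10
    ).json()[:limit]
    for story_id in ids:
        item = requests.get(
            f"https://hacker-news.firebaseio.com/v0/item/{story_id}.json", timeout=10
        ).json()
        text = f"{item.get('title', '')} {item.get('url', '')}"
        if any(p.search(text) for p in BLOCKED):
            continue  # elide the nonsense
        yield item

for story in front_page():
    print(story.get("title"), "-", story.get("url", "(text post)"))
```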
I sometimes like to indulge my particular outlook on life and harp on the "MBA types", self-promoters, and flimflam men of the world blah blah blah, and think how everything would be better if the techno-philosophers were in charge. Frankly after I've sobered up from my self-indulgent flights of fancy, I know "those people" serve a very important role that I can't.
If this report is true, we're going to see a big rubber meets road event along these lines. I don't think this will end well for OpenAI.
https://www.bloomberg.com/news/articles/2023-11-18/openai-al...
Bloomberg: "OpenAI CEO’s Ouster Followed Debates Between Altman, Board"
I can't even begin to imagine the abuse the new CEO and Ilya will face from the army of incels that simp for Altman and the old OpenAI. Terrifying times, and the mob has been startled.
Adam D’Angelo went through a lot of tumult at Facebook and is heavy on the commerce side of ChatGPT (e.g. trying to make Poe work now that Quora is mostly dead). He's the most experienced member of the OpenAI Board by far. It's curious that he evidently enabled Ilya to defend the charter versus something more diplomatic. He may have even been aware of deals Altman was making and volunteered the info, and then let the OpenAI founders fight it out themselves.
Probably the wrong venue for this sentiment, but it is incredible that a principled, remarkably accomplished scientist was able to stop his creation from getting co-opted (for now anyway). If you listen to the No Priors interview with Sutskever, the contrast between him and Altman couldn’t be more clear, but it’s quite rare that the former ever wins out over the latter.
Ok, I just got thrown into this 20 minutes ago. The whole debacle could probably use a tl;dr summary for those who aren't closely following the OpenAI scene lately, though I haven't found a good one yet.
Anyone have a good suggestion or starting point?
It can only be attributable to human error.
Unbelievably based. Removing the thieves who were selling science to Microsoft and trying to block all other research.
US leadership in the AGI race is at risk.
Swisher is notorious for posting a bunch of "scoops" in quick succession then deleting the ones that turn out to not be true.
Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity."
Scoop: theinformation.com
https://twitter.com/GaryMarcus/status/1725707548106580255