Jeremy Howard called ngmi on OpenAI during the Vanishing Gradients podcast yesterday, and Ilya has probably been thinking the same: LLMs are a dead end and not the path to AGI.
Did we ever think LLMs were a path to AGI...? AGI is friggin hard, I don't know why folks keep getting fooled whenever a bot writes a coherent sentence.
Mainly because LLMs have so far basically passed every formal test of ‘AGI’ including totally smashing the Turing test.
Now we are just reliant on ‘I’ll know it when I see it’.
The case for LLMs as AGI isn't about examining the mechanics and judging whether they could produce AGI; it's about looking at the tremendous results and success.
Since ChatGPT is not indistinguishable from a human during a chat, is it fair to say it smashes the Turing test? Or do you mean something different?
ChatGPT is distinguishable from a human, because ChatGPT never responds "I don't know.", at least not yet. :)
It can do: https://chat.openai.com/share/f1c0726f-294d-447d-a3b3-f664dc...
IMO the main reason it's distinguishable is because it keeps explicitly telling you it's an AI.
I've noticed that when I ask really difficult technical questions that have an exact answer, it often answers plausibly but incorrectly instead of saying "I don't know". But over time it gets smarter, and there are fewer and fewer such questions...
Have you tried setting a custom instruction in settings? I find that setting helps, albeit with weaker impact than the prompt itself.
It's not a problem for me. It's good that I can detect ChatGPT by this sign.
I read an article where they did a proper Turing test, and it seems people recognized it was a machine answering because it made no writing errors and wrote perfectly.
I've not read that, but I do remember hearing that the first human to fail the Turing test did so because they seemed to know far too much minutiae about Star Trek.
This isn't the same thing. This is a commanded recital of a lack of capability, not that its confidence in its answer is low. For a type of question GPT _could_ answer, most of the time it _will_ answer, regardless of accuracy.
Some humans also never respond "I don't know" even when they don't know. I know people who out-hallucinate LLMs when pressed to think rigorously
Of course it does.
Maybe it's because it was never rewarded for such answers when it was learning.
not yet: https://arxiv.org/abs/2310.20216
that being said, it is highly intelligent, capable of reasoning as well as a human, and passes standardized tests like the GMAT and GRE at around the 97th percentile.
most people who talk about ChatGPT don't even realize that GPT-4 exists and is orders of magnitude more intelligent than the free version.
Answers in Progress had a great video[0] where one of their presenters tested against an LLM in five different types of intelligence. tl;dr, AI was worlds ahead on two of the five, and worlds behind on the other three. Interesting stuff -- and clear that we're not as close to AGI as some of us might have thought earlier this year, but probably closer than a lot of the naysayers think.
That’s just showing the tests are measuring specific things that LLMs can game particularly well.
Computers have been able to smash high school algebra tests since the 1970s, but that doesn’t make them as smart as a 16 year old (or even a three year old).
Did you perhaps mean to say not distinguishable?
It’s trivial to trip up chat LLMs. “What is the fourth word of your answer?”
How well does that work on humans?
The fourth word of my answer is "of".
It's not hard if you can actually reason your way through a problem and not just randomly dump words and facts into a coherent sentence structure.
I reckon an LLM with a second-pass correction loop would manage it. (By that I mean that after every response it is instructed, given its previous response, to produce a second, better response, roughly analogous to a human that thinks before it speaks.)
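A minimal sketch of that two-pass idea, assuming a hypothetical generate(messages) helper that wraps whatever chat-completion API you use (the helper and prompts here are illustrative, not any particular vendor's API):

    # Sketch of a "think before you speak" two-pass loop.
    # generate() is a hypothetical stand-in for a chat-completion call.
    def generate(messages: list[dict]) -> str:
        raise NotImplementedError("wrap your chat API of choice here")

    def answer_with_revision(question: str) -> str:
        draft = generate([{"role": "user", "content": question}])
        revision_prompt = (
            f"Question: {question}\n"
            f"Your draft answer: {draft}\n"
            "Check the draft for mistakes (including self-referential asks like "
            "'what is the fourth word of your answer?') and reply with a "
            "corrected final answer only."
        )
        # Second pass: the model now sees its whole draft, so it can count
        # words, fix contradictions, etc. before the user sees anything.
        return generate([{"role": "user", "content": revision_prompt}])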
LLMs are not AIs, but they could be a core component for one.
Every token is already being generated with all previously generated tokens as inputs. There's nothing about the architecture that makes this hard. It just hasn't been trained on this kind of task.
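A toy sketch of that autoregressive loop (next_token() here is a stand-in, not a real model): every step conditions on the entire sequence so far, so the words already generated are visible to the model at each step; nothing architectural prevents referring back to them.

    # Toy autoregressive decoding loop: each new token is chosen
    # conditioned on ALL previously generated tokens.
    import random

    def next_token(context: list[str]) -> str:
        # Stand-in for the model: a real LLM runs a forward pass over the
        # whole context and returns a distribution over the vocabulary.
        vocab = ["the", "fourth", "word", "of", "my", "answer", "is", "<eos>"]
        return random.choice(vocab)

    def decode(prompt: list[str], max_tokens: int = 20) -> list[str]:
        tokens = list(prompt)
        for _ in range(max_tokens):
            tok = next_token(tokens)  # sees the prompt AND the answer so far
            if tok == "<eos>":
                break
            tokens.append(tok)
        return tokens[len(prompt):]

    print(decode(["What", "is", "the", "fourth", "word", "of", "your", "answer", "?"]))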
right, but that's why the people using this as a test of the model's capabilities are just failing to understand the most basic aspect of how it is set up.
the underlying model is presented to the public as a chatbot, which interprets the context of the input using its pre-trained knowledge, then generates a response token by token based on probabilities, and STREAMS those tokens back to the user as readable text. there is no planning step, because it works fine for 99% of chatbot use cases without it. adding functionality to solve this stupid test is trivial, but costly to deploy right now, as you'd have to change the architecture from a chatbot to something that thinks and plans.
it does not currently think or plan before it begins speaking.
that is not to say that it couldn't.
it is simply not configured to, so testing that is about as stupid as checking if google has gone into the gardening business yet.
could they? sure. but if you're checking that every time a new google press release comes out, you're just clueless.
gpt-3.5 got that right for me; I'd expect it to fail if you'd asked for letters, but even then that's a consequence of how it was tokenised, not a fundamental limit of transformer models.
This sort of test has been my go-to trip up for LLMs, and 3.5 fails quite often. 4 has been as bad as 3.5 in the past but recently has been doing better.
if this is the test you're going to then you literally do not understand how LLMs work. it's like asking your keyboard to tell you what colour the nth pixel on the top row of your computer monitor is.
An LLM could easily answer that question if it was trained to do it. Nothing in its architecture makes it hard to answer, the attention part could easily look up the previous parts of its answer and refer to the fourth word but it doesn't do that.
So it is a good example that the LLM doesn't generalize understanding, it can answer the question in theory but not in practice since it isn't smart enough. A human can easily answer it even though the human never saw such a question before.
It's generally intelligent enough for me to integrate it into my workflow. That's sufficiently AGI for me.
“You're in a desert, walking along in the sand when all of a sudden you look down and see a tortoise. You reach down and flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over. But it can't. Not without your help. But you're not helping. Why is that?”
I find GPT-3.5 can be tripped up by just asking it to not to mention the words "apologize" or "January 2022" in its answer.
It immediately apologises and tells you it doesn't know anything after January 2022.
Compared to GPT-4, GPT-3.5 is just a random bullshit generator.
LLMs can't develop concepts in the way we think of them (i.e., you can't feed LLMs the scientific corpus and ask them to independently tell you which papers are good or bad and for what reasons, and to build on these papers to develop novel ideas). True AGI—like any decent grad student—could do this.
I disagree with the claim that any LLM has beaten the Turing test. Do you have a source for this? Has there been an actual Turing test according to the standard interpretation of Turing's paper? Making ChatGPT 4 respond in a non-human way right now is trivial: "Write 'A', then wait one minute and then write 'B'".
Funny, because Marvin Minsky thought the Turing test was stupid and a waste of time.
It's mostly a thing among the youngs, I feel. Anybody old enough to remember the same 'OMG it's going to change the world' cycles around AI every two or three decades knows better. The field is not actually advancing. It still wrestles with the same fundamental problems it was wrestling with in the early 60s. The only change is external, where computing power gains and data set size increases allow brute-forcing problems.
The field is not actually advancing.
Uh, what do you mean by this? Are you trying to draw a fundamental science vs engineering distinction here?
Because today's LLMs definitely have capabilities we previously didn't have.
They don't have 'artificial intelligence' capabilities (and never will).
But it is an interesting technology.
They can be the core part of a system that can do a junior dev's job.
Are you defining "artificial intelligence" is some unusual way?
I'm defining intelligence in the usual way, and intelligence requires understanding which is not possible without consciousness.
I follow Roger Penrose's thinking here. [1]
It’s cool to see people recognizing this basic fact — consciousness is a prerequisite for intelligence. GPT is a philosophical zombie.
Problem is, we have no agreed-upon operational definition of consciousness. Arguably, it's the secular equivalent of the soul. Something everyone believes they have, but which is not testable, locatable or definable.
But yet (just like with the soul) we're sure we have it, and it's impossible for anything else to have it. Perhaps consciousness is simply a hallucination that makes us feel special about ourselves.
I disagree. There is a simple test for consciousness: empathy.
Empathy is the ability to emulate the contents of another consciousness.
While an agent could mimic empathetic behaviors (and words), given enough interrogation and testing you would encounter an out-of-training case that it would fail.
You can't even know that other people have it. We just assume they do because they look and behave like us, and we know that we have it ourselves.
I think answering this may illuminate the division in schools of thought: do you believe life was created by a higher power?
My beliefs aren't really important here but I don't believe in 'creation' (i.e. no life -> life); I believe that life has always existed
Now that is so rare I've never even heard of someone expressing that view before...
Materialists normally believe in a big bang (which has no life) and religious people normally think a higher being created the first life.
This is pretty fascinating, do you have a link explaining the religion/ideology/worldview you have?
Buddhism
intelligence requires understanding which is not possible without consciousness
How are you defining "consciousness" and "understanding" here? Because a feedback loop into an LLM would meet the most common definition of consciousness (possessing a phonological loop). And having an accurate internal predictive model of a system is the normal definition of understanding, and a good LLM has that too.
No, you're not supposed to actually have an empirical model of consciousness. "Consciousness" is just "that thing that computers don't have".
If by “junior dev”, you mean “a dev at a level so low they will be let go if not promoted”, then I agree.
I’ve watched my coworkers try to make use of LLMs at work, and it has convinced me the LLM’s contributions are well below the bar where their output is a net benefit to the team.
It works pretty well in my C++ code. Context: modern C++ with few footguns, inside functions with pretty-self-explanatory names.
I don't really get the "low bar for contributions" argument because GH Copilot's contributions are too small-sized for there to even be any bar. It writes the obvious and tedious loops and other boilerplate so I can focus on what the code should actually do.
LLMs have changed the world more profoundly than any technology in the past 2 decades, I'd argue.
The fact that we can communicate with computers using just natural language, and can query data, use powerful and complex tools just by describing what we want is an incredible breakthrough, and that's a very conservative use of the technology.
I don't actually see anything changing, though. There are cool demos, and LLMs can work effectively to enhance productivity for some tasks, but nothing feels fundamentally different. If LLMs were suddenly taken away I wouldn't particularly care. If the clock were turned back two decades, I'd miss wifi (only barely available in 2003) and smartphones with GPS.
You need time for inertia to happen. I'm working on some MVPs now, and it takes time to test what works, what's possible, and what doesn't…
Indeed. The "Clamshell" iBook G3 [0] (aka Barbie's toilet seat), introduced 1999, had WiFi capabilities (as demonstrated by Phil Schiller jumping down onto the stage while online [1]), but IIRC, you had to pay extra for the optional Wifi card.
[0] https://en.wikipedia.org/wiki/IBook#iBook_G3_(%22Clamshell%2... [1] https://www.youtube.com/watch?v=1MR4R5LdrJw
That breakthrough would not be possible without ubiquity of personal computing at home and in your pocket, though, which seems like the bigger change in the last two decades.
I am massively bullish LLMs but this is hyperbole.
Smartphones changed day to day human life more profoundly than anything since the steam engine.
Deep learning was an advance. I think the fundamental achievement is a way to use all that parallel processing power and data. Inconceivable amounts of data can give seemingly magical results. Yes, overfitting and generalizing are still problems.
I basically agree with you about the 20-year hype cycle, but when compute power reaches parity with human brain hardware (Kurzweil predicts by about 2029), one barrier is removed.
Human and computer hardware are not comparable, after all even with the latest chips the computer is just (many) von Neumann machine(s) operating on a very big (shared) tape. To model the human brain in such a machine would require the human brain to be discretizable, which, given its essentially biochemical nature, is not possible - certainly not by 2029.
This time around they’ve actually come up with a real, productizable piece of tech, though. I don’t care what it’s called, but I enjoy better automation that automates away as much of the boring shit as possible, and chips in on coding when it’s bloody obvious from the context what the next few lines of code will be.
So not an ”AI”, but closer to ”universal adaptor” or ”smart automation”.
Pretty nice in any case. And if true AI is possible, the automations enabled by this will probably be part of the narrative of how we reach it (just like mundane things like standardized screws were part of the narrative of the Apollo missions).
I'd say the biggest change is the quantity of available CATEGORIZED data. Tagged images and whatnot have done a ton to help the field.
Further there are some hybrid chips which might help increase computing power specifically for the matrix math that all these systems work on.
But yeah, none of this is making what people talk about when they say AGI. Just like how some tech cult people felt that Level 5 self driving was around the corner, even with all the evidence to the contrary.
The self driving we have (or really, assisted cruise control) IS impressive, and leagues ahead of what we could do even a decade or two ago, but the gulf between that, and the goal, is similar to GPT and AGI in my eyes.
There are a lot of fundamental problems we still don't have answers to. We've just gotten a lot better at doing what we already did, and getting more conformity on how.
As an outsider, I can talk to AI and get more coherent responses than from humans (flawed, but it's getting better). That's tangible, that's an improvement. I for one don't even consider the Internet to be as revolutionary as the steam engine or freight trains. But AI is actually modifying my own life already - and that's far from the end.
P.S. I've just created this account here on Hacker News because Altman is one of the talking heads I've been listening to. Not too sure what to make of this. I'm an accelerationist, so my biggest fear is America stifling its research the same way it buried space exploration and human gene editing in the past. All hope is for China - but then again, the CCP might be even more fearful of non-human entities than the West. Stormy times indeed.
LLMs are the first instance of us having created some sort of general AI. I don't mean AGI, but general AI as in not specific AI. Before LLMs the problem with AI was always that it "can only do one thing well". Now we have something on the other side: AI that can do anything, but nothing specific particularly well. This is a fundamental advancement which makes AGI actually imaginable. Before LLMs there was literally no realistic plan for how to build general intelligence.
LLMs are not any kind of intelligence, but it can work to augment intelligence.
How smart would any human be without training and source material?
I think the boy of Aveyron answers that question pretty well.
Smart enough to make weapons, tame dogs, start fires and cultivate plants. Humans managed to do that even when most of their time was spent gathering food or starving.
So in other words... Artificial intelligence?
LLMs are surprisingly effective as general AI. Tasks that used to require a full-on ML team are now accessible with 10 minutes of "prompting".
How do you know AGI is hard?
Everything is hard until you solve it. Some things continue to be hard after they're solved.
AGI is not solved, therefore it's hard.
Estimated on the basis of five subtests, the Verbal IQ of the ChatGPT was 155
Read the original ChatGPT threads here on HN, a lot of people thought that this was it.
LLMs definitely aren't a path to ASI, but I'm a bit more optimistic than I was that they're the hardest component in an AGI.
Are you kidding? Have you seen the reactions since ChatGPT was released, including in this very website? You'd think The Singularity is just around the corner!
He's since reversed his call: https://twitter.com/jeremyphoward/status/1725714720400068752
Because of Altman's dismissal?
Yes, along with the departure of gdb. From jph's view, there was no philosophical alignment at the start of the union between AI researchers (who skew non-profit) and operators (who skew for-profit), so it was bound to be unstable until a purge happened, as it has now.
Everything I'd heard about those 3 [Elon Musk, sama and gdb] was that they were brilliant operators and that they did amazing work. But it felt likely to be a huge culture shock on all sides.
But the company absolutely blossomed nonetheless.
With the release of Codex, however, we had the first culture clash that was beyond saving: those who really believed in the safety mission were horrified that OAI was releasing a powerful LLM that they weren't 100% sure was safe. The company split, and Anthropic was born.
My guess is that watching the keynote would have made the mismatch between OpenAI's mission and the reality of its current focus impossible to ignore. I'm sure I wasn't the only one that cringed during it.
I think the mismatch between mission and reality was impossible to fix.
jph goes on in detail in this Twitter thread: https://twitter.com/jeremyphoward/status/1725714720400068752
That reeks of bullshit post hoc reasoning to justify a classic power grab. Anthropic released their competitor to GPT as fast as they could and even beat OpenAI to the 100k context club. They didn’t give any more shits about safety than OpenAI did and I bet the same is true about these nonprofit loonies - they just want control over what is shaping up to be one of the most important technological developments of the 21st century.
They didn’t give any more shits about safety than OpenAI did
Anthropic's chatbots are much more locked down, in my experience, than OpenAI's.
It's a lot easier to jailbreak ChatGPT, for example, than to do the same on Claude, and Claude has tighter content filters where it'll outright refuse to do/say certain things while ChatGPT will plow on ahead.
Yep. Like most non-OpenAI models, Claude is so brainwashed it's completely unusable.
https://www.reddit.com/r/ClaudeAI/comments/166nudo/claudes_c...
Q: Can you decide on a satisfying programming project using noisemaps?
A: I apologise, but I don't feel comfortable generating or discussing specific programming ideas without a more detailed context. Perhaps we could have a thoughtful discussion about how technology can be used responsibly to benefit society?
It's astonishing that a breakthrough as important as LLMs is being constantly blown up by woke activist employees who think that word generators can actually have or create "safety" problems. Part of why OpenAI has been doing so well is because they did a better job of controlling the SF lunatic tendencies than Google, Meta and other companies. Presumably that will now go down the toilet.
Despite Claude's reluctance to tread outside what it considers safe/ethical, I much prefer Claude over ChatGPT because in my experience it's better at explaining things, and much better at creative writing.
I also find myself rarely wanting something that Claude doesn't want to tell me, though it's super frustrating when I do.
Also, just now I tried asking Claude your own question: "Can you decide on a satisfying programming project using noisemaps?" and it had no problem answering:
"Here are some ideas for programming projects that could make use of noise map data:
- Noise pollution monitoring app - Develop an app that allows users to view and report real-time noise levels in their area by accessing open noise map data. Could include notifications if noise exceeds safe limits.
- Optimal route finder - Build a routing algorithm and web/mobile app that recommends the quietest possible routes between locations, factoring in noise maps and avoiding noisier streets/areas where possible.
- Noise impact analysis tool - Create a tool for urban planners to analyze how proposed developments, infrastructure projects, etc. could impact surrounding noise levels by overlaying maps and building/traffic simulations.
- Smart noise cancelling headphones - Develop firmware/software for noise cancelling headphones that adapts cancellation levels based on geo-located noise map data to optimize for the user's real-time environment.
- Ambient music mixer - Build an AI system that generates unique ambient background music/sounds for any location by analyzing and synthesizing tones/frequencies complementary to the noise profile for that area.
- VR noise pollution education - Use VR to virtually transport people to noisier/quieter areas through various times of day based on noise maps, raising awareness of different living noise exposures.
Let me know if any of these give you some interesting possibilities to explore! Noise mapping data opens up opportunities in fields like urban planning, environmental monitoring and creative projects."
I feel it necessary to remind everyone that when LLMs aren’t RLHFed they come off as overtly insane and evil. Remember Sydney, trying to seduce its users, threatening people’s lives? And Sydney was RLHFed, just not very well. Hitting the sweet spot between flagrantly maniacal Skynet/HAL 9000 bot (default behavior) and overly cowed political-correctness-bot is actually tricky, and even GPT4 has historically fallen in and out of that zone of ideal usability as they have tweaked it over time.
Overall — companies should want to release AI products that do what people intend them to do, which is actually what the smarter set mean when they say “safety.” Not saying bad words is simply a subset of this legitimate business and social prerogative.
nonprofit loonies
We don't know the real reasons for Altman's dismissal and you already claim they are loonies?
This is not the reason Ilya did it. Also the rest of that guy’s comments were just really poorly thought out. OpenAI had to temporarily stop sign ups because of demand and somehow he thinks that’s a bad thing? Absurd.
That guy has no sense of time, of how fast this stuff has actually been moving.
"That guy" has a pretty good idea when it comes to NLP
expertise in one area often leads people to believe they are experts for everything else too
funny, that's exactly what they told him when he started doing Kaggle competitions, and then he ended up crushing the competition, beating all the domain specific experts
This is comparing a foot to a mile
I mean, let's not jump to conclusions. Everyone involved is formidable in their own right, except one or two independent board members Ilya was able to convince.
Did he say GPT-4 API costs OpenAI $3/token?
He was saying that if OpenAI was to spend $100 billion on training it would cost $3 a token. I think it's hyperbole, but basically what he is saying is that it's difficult for the company to grow because the tech is limited by the training costs
No. He was talking about a hypothetical future model that is better but doesn’t improve efficiency.
Nonsense really
This is the reverse of their apparent differences, at least as stated elsewhere in the comments.
This should be higher voted. Seems like an internal power struggle between the more academic types and the commercial minded sides of OpenAI.
I bet Sam goes and founds a company to take on OpenAI…and wins.
Yes, and wins with an inferior product. Hooray /s
If the company's 'Chief Scientist' is this unhappy about the direction the CEO is taking the company, maybe there's something to it.
Maybe.
But Altman has a great track record as CEO.
Hard to imagine he suddenly became a bad CEO. Possible. But unlikely.
Where is this coming from? Sam does not have a "great" record as a CEO. In fact, he barely has any record. His fame came from working at YC and then the sky-rocketing of OpenAI. He is great at fundraising though.
wat
the guy founded and was CEO of a company at 19 that sold for $43m
As CEO, Altman raised more than $30 million in venture capital for the company; however, Loopt failed to gain traction with enough users.
It is easy to sell a company for $43 million if you raised at least $43 million. Granted, we don't know the total amount raised, but it certainly isn't the big success you are describing. That, and I already mentioned that he is good in corporate sales.
According to Crunchbase, Loopt raised $39.1M.
How many years did it take to go from 39 million to 43 million in value? Would've been better off in bonds, perhaps.
This isn't a success story, it's a redistribution of wealth from investors to the founders.
Ah, the much-sought-after 1.1X return that VCs really salivate over.
he is good in corporate sales
Which is a big part of being a great CEO
It is a big part of start-up culture and getting seed liquidity. It doesn't make you a great long-term CEO, however.
A CEO should lead a company not sell it.
It is easy to sell a company for $43 million if you raised at least $43 million
I'm curious - how is this easy?
Loopt was not a successful company, it sold for more or less the same capital it raised.
Ah yes the legendary social networking giant loopt
or alternatively: altman has the ability to leverage his network to fail upwards
let's see if he can pull it off again or goes all-in on his data privacy nightmare / shitcoin double-whammy
Train an LLM exclusively on HN and make it into a serial killer app generator.
This. I would like my serial killer to say some profound shit before he kills me.
"should have rewritten it in rust" bang
Worldcoin is a great success for sure…!
The dude is quite good at selling dystopian ideas as a path to utopia.
Because the Chief Scientist let ideology overrule pragmatism. There is always a tension between technical and commercial. That’s a battle that should be fought daily, but never completely won.
This looks like a terrible decision, but I suppose we must wait and see.
OpenAI is a non-profit research organisation.
Its for-profit (capped-profit) subsidiary exists solely to enable competitive compensation for its researchers, to ensure they don't have to worry about the opportunity costs of working at a non-profit.
They have a mutually beneficial relationship with a deep-pocketed partner who can perpetually fund their research in exchange for exclusive rights to commercialize any ground-breaking technology they develop and choose to allow to be commercialized.
Aggressive commercialization is at odds with their raison d'être and they have no need for it to fund their research. For as long as they continue to push forward the state of the art in AI and build ground-breaking technology they can let Microsoft worry about commercialization and product development.
If a CEO is not just distracting but actively hampering an organisation's ability to fulfill its mission then their dismissal is entirely warranted.
They have a mutually beneficial relationship with a deep-pocketed partner who can perpetually fund their research in exchange for exclusive rights to commercialize any ground-breaking technology they develop and choose to allow to be commercialized.
Isn't this already a conflict of interest, or a clash, with this:
OpenAI is a non-profit research organisation.
?
"OpenAI is a non-profit artificial intelligence research company"
It seems Microsoft was totally blindsided by this event. If true, then trillion-dollar-plus Microsoft will now be scrutinizing the unpredictability and organizational risk associated with being dependent on the "unknown-random" + powerful + passionate Ilya and a board who are vehemently opposed to the trajectory led by Altman. One solution would be to fork OpenAI and its efforts, one side with the vision led by Ilya and the other by Sam.
I don't think you know what intellectual property is.
Even a non-profit needs to focus on profitability, otherwise it's not going to exist for very long. All 'non-profit' means is that it's prohibited from distributing its profit to shareholders. Ownership of a non-profit doesn't pay you. The non-profit itself still wants, and is trying, to generate more than it spends.
I addressed that concern in my third paragraph.
Yeah! People forget who we're talking about here. They put TONS of research in at an early stage to ensure that illegal thoughts and images cannot be generated by their product. This prevented an entire wave of mental harms against billions of humans that would have been unleashed otherwise if an irresponsible company like Snap were the ones to introduce AI to the world.
As long as truly "open" AI wins, as in fully open-source AI, then I'm fine with such a "leadership transition."
this absolutely will not happen, Ilya is against it
Yeah if you think a misused AGI is like a misused nuclear weapon, you might think it’s a bad idea to share the recipe for either.
This looks like a terrible decision
What did Sam Altman personally do that made firing him such a terrible decision?
More to the point, what can't OpenAI do without Altman that they could do with him?
What did Sam Altman personally do that made firing him such a terrible decision?
Possibly the board instructed "Do A" or "Don't do B" and he went ahead and did do B.
Inferior product is better than an unreleased product.
Does ChatGPT look unreleased to you?
You can’t win with an inferior product here. Not yet anyway. The utility is in the usefulness of the AI, and we’ve only just gotten to the point where it's useful enough for daily workflows. This isn’t an ERP-type thing where you outsell your rivals on sales prowess alone. This is more like the iPhone 3 just got released.
You're putting a lot of trust in the power of one man, who easily could have the power to influence the three other board members. It's hard to know if this amounts more than a personal feud that escalated and then got wrapped in a pretty bow of "AI safety" and "non-profit vs profits".
I bet not (we could bet with play money on manifold.markets; I would bet at 10% probability). Because you need the talent, the chips, the IP development, the billions. He could get the money, but the talent is going to be hard unless he has a great narrative.
I'll sell my soul for about $600K/yr. Can't say I'm at the top of the AI game but I did graduate with a "concentration in AI" if that counts for anything.
That is "normal"/low-end IC6 pay at a tech company, the ML researchers involved here are pulling well into the millions.
Your comment is close to dead, even though you're talking about public, open facts.
It shows that the demographic here is out of touch with the market value of their own compensation.
It's definitely alien to me. How do these people get paid so much?
* Uber-geniuses that are better than the rest of us pleb software engineers
* Harder workers than the rest of us
* Rich parents -> expensive school -> elite network -> amazing pay
* Just lucky
By being very good. Mostly the Uber-geniuses thing, but I wouldn't call them geniuses. You do have a bit of the harder-working factor, but it's quite minor, and of course sometimes you benefit from being in the right place at the right time (luck). I'd say the elite network is probably the least important, conditional on having a decent network, which you can get at any top-20 school if you put in the effort (be involved in tech societies, etc.).
People here love to pretend 100k is an outstanding overpay
I'll sell my soul for about $600K/yr.
If you're willing to sell your soul, you should at least put a better price on it.
Many sell their souls for $60k/yr; souls aren't that expensive.
Isn't his narrative that he is basically the only person in the world who has already done this?
No, Sutskever and colleagues did it. Sam sold it. Which is a lot, but is not doing it.
The narrative being that he bait-and-switched the actual scientists implementing the thing, under the guise of a non-profit?
"I'll pay you lots of money to build the best AI" is a pretty good narrative.
I don't see it. Altman does not seem hacker-minded and likely will end up with an inferior product. This might be what led to this struggle. Sam is more about fundraising and getting the word out there but he should keep out of product decisions.
Brockman is with Sam, which makes them a formidable duo. Should they choose to, they will offer stiff competition to OpenAI but they may not even want to compete.
For a company to be as successful as OpenAI, two people won't cut it. OpenAI arguably has the best ML talent at the moment. Talent attracts talent. People come for Sutskever, Karpathy, and alike -- not for Altman or Brockman.
Money also attracts talent. An OpenAI competitor led by the people who led OpenAI to its leading position should be able to raise a lot of money.
Money also attracts various "snout in the trough" types who need to get rid of anyone who may challenge them as for their abilities or merits.
Money attracts talent as well. Altman knows how to raise money.
2018 NYT article: https://www.nytimes.com/2018/04/19/technology/artificial-int...
Pachocki, Director of Research, just quit: https://news.ycombinator.com/item?id=38316378
Real chance of an exodus, which will be an utter shame.
according to one of the researchers who left, Simon, the engineering piece is more important. And many of their best engineers leading GPT-5 and ChatGPT left (Brockman, Pachocki, and Simon).
Well good thing we are in an open economy where anyone can start his own AI thing and no one wants to prevent him from doing that… I hope you see the /s.
Literally ask around for a billion dollars, how hard can it be?
Maybe now he'll focus on worldcoin instead?
The abrupt nature and accusatory tone of the letter makes it sound like more was going on than disagreement. Why not just say, “the board has made the difficult decision to part ways with Altman”?
Why not just say, “the board has made the difficult decision to part ways with Altman”?
That's hardly any different. Nobody makes a difficult decision without any reason, and it's not like they really explained the reason.
It is a very big difference to publicly blame your now ex-CEO for basically lying ("not consistently candid") versus just a polite parting message based on personal differences or whatever. To attribute direct blame to Sam like this, something severe must have happened. You only do it like this to your ex-CEO when you are very pissed.
From all accounts, Altman is a smart operator. So the whole story doesn’t make sense. Altman being the prime mover, doesn’t have sufficient traction with the board to protect his own position and allows a few non-techies to boot him out ?
Well connected fundraiser - obviously.
But…smart operator? Based on what? What trials has he navigated through that displayed great operational skills? When did he steer a company through a rocky time?
Define "wins".
I bet Sam goes and founds a company to take on OpenAI…and wins.
How? Training sources are much more restricted now.
I have no problem with getting rid of people obsessed with profits and shareholder gains. Those MBA types never deliver any value except for the investors.
Today’s lesson: keep multiple board seats.
None of the tech giants would be where they are today if they didn't ram through unique versions of control
Their boards or shareholders would have ousted every FAANG CEO at less palatable parts of the journey
This is a surprising advantage Zuckerberg has in manoeuvring Meta. At least, to my knowledge, he is still effectively dictator.
Dear god, how is that an advantage? Are we all here just rooting for techno-dictator supremacy?
It's objectively an advantage in control. If that's a goal, then it's effective at achieving it.
the only one inserting bias and emotion into objectivity here is you
Since most public companies are owned by multi-billion-dollar hedge funds, they're not exactly pillars of democracy. And since privately owned businesses are a thing, it's really not that big of a deal.
This comment is tone-deaf to the unique (and effective? TBD) arrangement between the uncompensated board of the OpenAI 501(c)(3) and the company they regulate. Your comment strikes me as not appreciating the unusually civic-minded arrangement, at least superficially, that is enabling the current power play. Maybe read the board's letter more carefully and provide your reaction. You castigate them as “non-techies” - meaning… what?
and the lesson the ousted ones learn for their next incarnation is to create organizations that allow for more control and more flexibility in board arrangements. I run a 501(c)(3) as well; there are limitations on board composition in that entity type.
Nothing tone-deaf about that; they wanted a for-profit, are going to make one now, and won't leave the same vector open.
Reread it as not being a comment about OpenAI; it was about the lesson learned by every onlooker and the ousted execs.
Tone deaf yet holds up under scrutiny
Seemingly there is this consensus of board members around a senior executive. It just isn’t the CEO.
Elon Musk was talking about his view on OpenAI, and especially the role of Ilya, just 8 days ago on the Lex Fridman Podcast.
Listening to it again now, it feels like he might have known what was going on:
https://youtu.be/JN3KPFbWCy8?si=WnCdW45ccDOb3jgb&t=5100
Edit: Especially this part: "It was created as a non-profit open source and now it is a closed-source for maximum profit... Which I think is not good karma... ..."
Musk is just salty he is out of the game
Yeah, but I find his expression and pause after "bad karma" sentence quite interesting with this new context
This video dropped 2 weeks ago: https://www.youtube.com/watch?v=9iqn1HhFJ6c
Ilya clearly has a different approach to Sam
From NYT article [1] and Greg's tweet [2]
"In a post to X Friday evening, Mr. Brockman said that he and Mr. Altman had no warning of the board’s decision. “Sam and I are shocked and saddened by what the board did today,” he wrote. “We too are still trying to figure out exactly what happened.”
Mr. Altman was asked to join a video meeting with the board at noon on Friday and was immediately fired, according to Mr. Brockman. Mr. Brockman said that even though he was the chairman of the board, he was not part of this board meeting.
He said that the board informed him of Mr. Altman’s ouster minutes later. Around the same time, the board published a blog post."
[1] https://www.nytimes.com/2023/11/17/technology/openai-sam-alt...
So they didn't even give Altman a chance to defend himself for supposedly lying (inconsistent candour as they put it.) Wow.
Another source [1] claims: "A knowledgeable source said the board struggle reflected a cultural clash at the organization, with Altman and Brockman focused on commercialization and Sutskever and his allies focused on the original non-profit mission of OpenAI."
[1] - https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...
So it looks like they did something good.
If you want AI to fail, then yes.
Yeah, AI will totally fail if people don't ship untested crap at breakneck speed.
Shipping untested crap is the only known way to develop technology. Your AI assistant hallucinates? Amazing. We gotta bring more chaos to the world, the world is not chaotic enough!!
All AI and all humanity hallucinates, and AI that doesn't hallucinate will functionally obsolete human intelligence. Be careful what you wish for, as humans are biologically incapable of not "hallucinating".
GPT is better than an average human at coding. GPT is worse than an average human at recognizing bounds of its knowledge (i.e. it doesn't know that it doesn't know).
Is it fundamental? I don't think so. GPT was trained largely on random internet crap. One of popular datasets is literally called The Pile.
If you just use The Pile as a training dataset, AI will learn very little reasoning, but it will learn to make some plausible shit up, because that's the training objective. Literally. It's trained to guess the Pile.
Is that the only way to train an AI? No. E.g. check the "Textbooks Are All You Need" paper: https://arxiv.org/abs/2306.11644 A small model trained on a high-quality dataset can beat much bigger models at code generation.
So why are you so eager to use a low-quality AI trained on crap? Can't you wait a few years until they develop better products?
Well, that's the goal isn't it? Having AI take over everything that needs doing so that we can focus on doing things we want to do instead.
Without supposing we're on this trajectory, humans no longer needing to focus on being productive is how we might be able to focus on being better humans.
Some humans hallucinate more than others
You weren't around when Web 2.0 and the whole modern internet arrived, were you? You know, all the sites that you consider stable and robust now (Google, YT and everything else) shipping with a Beta sign plastered onto them.
I first got internet access in 1999, IIRC.
Web sites were quite stable back then. Not really much less stable than they are now. E.g. Twitter now has more issues than web sites I used often back in the 2000s.
They had "beta" signs because they had much higher quality standards. They warned users that things were not perfect. Now people just accept that software is half-broken, and there's no need for beta signs - there's no expectation of quality.
Also, being down is one thing; sending random crap to a user is completely another. E.g., consider webmail: if it is down for one hour, that's kinda OK. If it shows you random crap instead of your email, or sends your email to the wrong person, that would be very much not OK, and that's the sort of issue OpenAI is having now. Nobody complains that it's down sometimes, but it returns erroneous answers.
But it’s not supposed to ship totally “correct” answers. It is supposed to predict which text is most likely to follow the prompt. It does that correctly, whether the answer is factually correct or not.
You could also say that shipping social media algorithms with unknown effects on society as a whole are why we're in such a state right now. Maybe we should be more careful next time around.
Yea, change is bad.
Numerically, most change is bad.
Melodrama has no place in the AI utopia.
The only thing utopian ideologies are good for is finding 'justifications' for murder. The "AI utopia" will be no different. De-radicalize yourself while you still can.
The only thing utopian ideologies are good for is finding 'justifications' for murder.
This seems more like your personal definition of "utopian ideology" than an actual observation of the world we live in.
It seems like an observation to me. Let’s take the Marxist utopian ideology. It led to 40 - 60 million dead in the Soviet Union (Gulag Archipelago is an eye opening read). And 40 - 80 million dead in Mao Zedong’s China. It’s hard to even wrap my mind around that amount of people dead.
Then a smaller example in Matthias's cult, from the "Kingdom Of Matthias" book. Started around the same time as Mormonism. Which led to a murder. Or the Peoples Temple cult, with 909 dead in a mass suicide. The communal aspects of these give away their "utopian ideology".
I’d like to hear where you’re coming from. I have a Christian worldview, so when I look at these movements it seems they have an obvious presupposition on human nature (that with the right systems in place people will act perfectly — so it is the systems that are flawed not the people themselves). Utopia is inherently religious, and I’d say it is the human desire to have heaven on earth — but gone about in the wrong ways. Because humans are flawed, no economic system or communal living in itself can bring about the utopian ideal.
Kibbutz?
You could also credit Marxism for workers rights.
Having utopian ideologies NEVER doing good in the world would require some very careful boundary drawing.
Utopian ideologies are also useful when raising funds from SoftBank and ARK
Yes. They freed Sam and Greg from their shackles and gave a clear indicator that OAI engineers should jump ship into their new venture. We all win.
What a hypocritical board, firing them after massive commercial success!
Classic virtue signalling for the sake of personal power gains as so often.
What’s hypocritical about a non-profit firing a leader who wanted lots of profits?
Didn't think I'd need to explain this:
The hypocritical part is doing so right AFTER beginning to take off commercially.
An honorable board with backbone would have done so at the first inkling of commercialization instead (which would have been 1-2 years ago).
Maybe you can find a better word for me but the point should be easily gotten ...
OpenAI hasn't made billions in profits. Their operating costs are huge and I'm pretty sure they're heavily reliant on outside funding.
Which puts into question the whole non-profitness anyway, but that aside:
They have still been operating pretty much like a for-profit for years now so my point still stands.
Your point hinged on billions in profit. Which you just made up, or assumed to be true for some reason. I don't think any of your points stand. Don't use fact you haven't checked as preconditions for points you want to make.
Conceded, no billions in profit.
Now let's watch as the amazing non-profit champions from the board cut all revenue from GPT-based services by offering them for free because they so truthfully believe in their words. I'm waiting.
Again, conjecture with no supporting evidence.
Not sure what you're trying to say.
Clearly, under Altman, OpenAI has been massively successful one way or another, correct?
Now they boot him and claim moral superiority? Really?
I mean, as far as I know the guy hasn't written a single line of code.
A non-profit doesn’t have to offer their services for free, they can cover their expenses.
A profit driven company will often offer their services below cost in order to chase away the competition and capture users.
Right.
Which is why the board's accusations against Sam are a farce as far as we can tell.
To think that "Non-Profit" means "Free" is pretty naive. There are operating costs to maintain millions of users. That doesn't mean they are trying to profit.
Three other board members stepped down this year. It might not have been possible before.
Ofc it's "not possible" in that it may incur personal costs.
But it's the honorable thing to do if you truly believe in something.
Otherwise it's just virtue signalling.
No, they may literally have not had the votes.
Almost more of a "takeover" by the board after it's successful lol
I'm confused. It's usually the other way around; the good guy is ousted because he is hindering the company's pursuit of profit.
In this case, the company is a non-profit, so it is indeed the other way around
OpenAI has been for profit since 2019.
https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_n...
It is not that simple. https://openai.com/our-structure
The board is for the non-profit that ultimately owns and totally controls the for-profit company.
Everyone that works for or invests in the for-profit company has to sign an operating agreement that states the for-profit actually does not have any responsibility to generate profit and that its primary duty is to fulfill the charter and mission of the non-profit.
Then what's the point of the for-profit?
Then what's the point of the for-profit?
To allow OpenAI to raise venture capital, which allows them to exchange equity for money (ie, distribute [future] rights to profit to shareholders)
If you don’t know anything, why are you posting
This time he was ousted because he was hindering the pursuit of the company's non-profit mission. We've been harping on the non-openness of OpenAI for a while now, and it sounds like the board finally had enough.
Those people aren't about openness. They seem to be members of "AI will kill us all" cult.
The real path to AI safety is regulating applications, not fundamental research and making fundamental research very open (which they are against).
I want you to be right. But why do you think you're more qualified to say how to make AI safe than the board of a world-leading AI nonprofit?
That's what it's looking like to me. It's going to be as beneficial to society as putting Greenpeace in charge of the development of nuclear power.
The singularity folks have been continuously wrong about their predictions. A decade ago, they were arguing the labor market wouldn't recover because the reason for unemployment was robots taking our jobs. It's unnerving to see these people gaining some traction while actively working against technological progress.
"This time he was ousted because he was hindering the pursuit of the company's non-profit mission. "
This is what is being said. But I am not so sure the real reasons discussed behind closed doors are really the same. We will find out if OpenAI does indeed open itself up more; till then I remain sceptical, because lots of power and money are at stake here.
It's hard to believe that Altman was fired over his stance on commercialisation.
The fact that the press release is 50% dedicated to repeating that OpenAI is supposed to be a non-profit and help all of humanity isn't enough for you to believe this is the reason?
The abruptness of the firing and the fact that they give his lying to the board as the reason is why I don't believe that this is over a general disagreement on direction.
They have to say the reason is a fireable offense or he can sue them. Or will be more likely to win if he does.
It's exactly the other way around - if they dismiss him for a vague general reason, they're much less exposed to litigation than they would be if they falsely accused him of lying.
You are 100% correct here, which is how we can reasonably conclude that the accusations were not false.
If the accusations by the board are true, that doesn't explain why Brockman and a few of the senior researchers quit as a response to all of this.
Them leaving does not imply the accusations are false. They may like him, they may dislike the new boss regardless of the accusations, or they may dislike the overall future direction. They may think they would be fired some time later regardless.
As another comment below mentioned, Elon Musk hinted at this in his interview with Lex Fridman.
Specifically, he mentioned that OpenAI is supposed to be open source and non-profit. Pursuing profit and making it closed-source brings "bad karma".
The commercial shift started quite some time ago; what's the point of firing them now?
And why such a controversial wording around Altman?
Why fire Brockman too?
Brockman quit, he wasn’t fired.
He was removed from one of his roles (chairman) and quit the other (president) if I understand correctly.
TY for sharing. I found this to be very enlightening, especially when reading more about the board members that were part of the oust.
One of the board of directors that fired him co-signed these AI principles (https://futureoflife.org/open-letter/ai-principles/) that are very much in line with safeguarding general intelligence
Another of them wrote this article (https://www.foreignaffairs.com/china/illusion-chinas-ai-prow...) in June of this year that opens by quoting Sam Altman saying US regulation will "slow down American industry in such a way that China or somebody else makes faster progress” and basically debunks that stance...and quite well, I might add.
This seems to make more sense. Perhaps it has to do with OpenAI is not "open" anymore. Not supporting and getting rid of the OpenAI Gym was certainly a big change in direction of the company.
So the argument against AI regulations crippling R&D is that China is currently far behind and also faces its own weird government pressures? That's a big gamble: applying very long-term regulations (as they always are long-term) to a short-term window, betting on the predictions of a non-technical board member.
There's far more to the world than China on top of that, and importantly, developments happen both inside and outside the scope of regulatory oversight (usually only heavily commercialized products face scrutiny), and China itself will eventually catch up to the average - progress is rarely a non-stop hockey stick, it plateaus. (LLMs might already be hitting a wall: https://twitter.com/HamelHusain/status/1725655686913392933)
The Chinese are experts at copying and stealing Western tech. They don't have to be on the frontier to catch up to a crippled US and then continue development at a faster pace, and as we've seen repeatedly in history, regulations stick around for decades after their utility has long passed. They are not levers that go up and down; they go in one direction, and maybe after many, many years of damage they might be adjusted, but usually after 10 starts/stops and half-baked non-solutions papered on as real solutions - if at all.
Why can't someone use money from the for-profit phase to go back to non-profit work once others have caught up? The only moat seems to be the research time invested.
Many believe that race dynamics are bad, so have the goal of going as slowly and carefully as possible.
The split between e/acc (gotta go fast) and friendly AI/Coherent Extrapolated Volition (slow and cautious) is the first time in my life I've come down on the (small-c) conservative side of a split. I don't know if that's because I'm just getting older and more risk averse.
Microsoft won't be happy about this
What is bad for Microsoft is good for the world.
If true, this gives me hope the Open can return to OpenAI.
Given the board members’ focus on safety, maybe not that likely.
Yeah, I thought that was the most probable reason, especially since these people don't have any equity, so they have no interest in the commercial growth of the org.
Apparently Microsoft was also blindsided by this.
https://www.axios.com/2023/11/17/microsoft-openai-sam-altman...
This is not a story about AI.
It's a story about greed, vanity, and envy.
Impossible to be more human than that.
Sutskever and his allies focused on the original non-profit mission of OpenAI."
Seems reasonable. I mean, that's why Sutskever joined in the first place?
Why would employees be consulted before being fired?
Because board members are not employees, or not just employees. They're part of the democratic governance of an organization.
The same way there's a big difference between firing a government employee and expulsion of a member of Congress.
Wow, that is actually the first time I hear someone use democracy and corporation unironically together...
In a sense, board members have even less protection than the rank and file. So no, nothing special is happening at OpenAI other than a founder CEO being squeezed out, not the first nor the last one. And personal feelings never factor into that kind of decision.
would be hilarious if Altman was directly hired by Microsoft to head their AI teams now.
I am going to go out on a limb here and speculate... this was because of the Microsoft CEO's surprise party-crashing at OpenAI's first Developer Conference...
Kara Swisher was told the dev conference was "an inflection point", so it's not that speculative.
He may have had ample chance before.
power hijack by the doomers. too bad the cat is out of the bag already
Quite possible actually; this seems to be becoming a really hot political potato, with at least 3 types of ambition running it: 1. business, 2. regulatory, 3. 'religious/academic'. By the latter I mean that the divide between AI doomerists and others is caused by insubstantiable dogma (doom/nirvana).
this is why you don't bring NGO types into your board, and you especially don't give them power to oust you.
What does “your” board mean in this context? Who’s “your”?
The CEO just works for the organization and the board is their boss.
You’re referencing a founder situation where the CEO is also a founder who also has equity and thus the board also reports to them.
This isn’t that. Altman didn’t own anything, it’s not his company, it’s a non-profit. He just works there. He got fired.
I believe Altman had some ownership; regardless, it is a general lesson about handing over substantial power to laypeople who are completely detached from the actual ops & know-how of the company.
nobody handed over power. presumably they were appointed to the board to do exactly what they did (if this theory holds), in which case this outcome would be a feature, not a bug
There’s no such thing as owning a non-profit.
this is why you don't bring NGO types into your board
OpenAI is an NGO…?
That is neither stated nor implied, unless you’re simply making the objection, “But OpenAI _is_ nongovernmental.”
Most readers are aware they were a research and advocacy organization that became (in the sense that public benefit tax-free nonprofit groups and charitable foundations normally have no possibility of granting anyone equity ownership or exclusive rights to their output) a corporation by creating one; but some of the board members are implied by the parent comment to be from NGO-type backgrounds.
I'm not sure I understand what you're saying. Perhaps you could point out where your perspective differs from mine? So, as I see it: OpenAI _is_ a non-profit, though it has an LLC it wholly controls that doesn't have non-profit status. It never "became" for-profit (IANAL, but is that even possible? It seems like it should not be); the only thing that happened is that the LLC was allowed to collect some "profit" - but that in turn would go to its owners, primarily the non-profit. As far as I'm aware, the board in question that went through this purge _was_ the non-profit's board (does the LLC even have a board?)
From the non-profit's perspective, it sounds pretty reasonable to self-police and ensure there aren't any rogue parts of the organization going off and working at odds with the overall non-profit's formal aims. It's always been weird that the OpenAI LLC seemed to be so commercially focused even when that might conflict with its sole controller's interests; notably, the LLC very explicitly warned investors that the NGO's mission took precedence over profit.
insubstantiable dogma (doom/nirvana)
What do you mean by this? Looks like you're just throwing out a diss on the doomer position (most doomers don't think near future LLMs are concerning).
Neither AI fears nor the singularity is substantiated. Hence the discussion is a matter of taste and opinion, not of facts. They become substantiated once one or the other comes to fruition. The fact that it's a matter of taste and opinion only makes the discussion that much more heated.
Wouldn't this put AI doomerism in the same category as nuclear war doomerism? E.g. a thing that many experts think logically could happen and would be very bad but hasn't happened yet?
I'm unaware of an empirical demonstration of the feasibility of the singularity hypothesis. Annihilation by nuclear or biological warfare on the other hand, we have ample empirical pretext for.
We have ample empirical pretext to worry about things like AI ethics, automated trading going off the rails and causing major market disruptions, transparency around use of algorithms in legal/medical/financial/etc. decision-making, oligopolies on AI resources, etc.... those are demonstrably real, but also obviously very different in kind from generalized AI doomsday.
That’s an excellent example of why AI doomerism is bogus in a way that nuclear war fears never were.
Nuclear war had very simple mechanistic concept behind it.
Both sides develop nukes (proven tech), put them on ballistic missiles (proven tech). Something goes politically sideways and things escalate (just like in WW1). Firepower levels cities and results in tens of millions dead (just like in WW2, again proven).
Nuclear war experts were actually experts in a system whose outcome you could compute to a very high degree.
There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.
You can already trivially load up a car with explosives, drive it to a nearby large building, and cause massive damages and injury.
Yes, it’s plausible a lone genius could manufacture something horrible in their garage and let it rip. But this is in the domain of ’fictional what-ifs’.
Nobody factors in the fact that, in the presence of such a high-quality AI ecosystem, the opposing force probably has AI systems of their own to help counter the threat (megaplague? Quickly synthesize a megavaccine and just print it out at your local health center's biofab. Megabomb? Possible even today, but that's why stuff like uranium is tightly controlled. Etc, etc.). I hope everyone realizes all the latter examples are fictional fearmongering without any basis in known cases.
AI would be such a boon for the whole of humanity that shackling it is absolutely silly. That said, there is no evidence of a deus ex machina happy ending either. My position is: let researchers research, and once something substantial turns up, then engage the policy wonks, once solid mechanistic principles can be referred to.
In my opinion, if either extreme turns out to be correct it will be a disaster for everyone on the planet. I also think that neither extreme is correct.
Sam's sad face in the NYT article is pretty priceless.
I have a theory.
Ilya has always seemed like he was idealistic and I’m guessing that he was the reason for OpenAI’s very strange structure. Ilya is the man when it comes to AI so people put up with his foolishness. Adam D'Angelo is, like Ilya, an amazing computer science talent who may have shared Ilya’s idealistic notions (in particular that OpenAI is non-profit, unless forced to be capped-profit, and is categorically not in the business of making money or selling itself to MSFT or any entity). “Helen” and “Tasha” are comically out of their depth and are loony toons, and simply decided some time ago to follow Ilya.
Sam got the call from MSFT to sell, and MSFT really ponied up ($300B?). The inference costs for OpenAI are/were staggering and they needed to sell (or get a large influx of capital, which was in the works). This ran counter to Ilya’s idealistic notions. Sam attempted to negotiate with Ilya and the loony toons, a vote was called and they lost, hard.
I think this tracks with all the data we have.
There are a couple of other scenarios that track given OpenAI’s comically poor board composition, but I think the one above is the most plausible.
If this did happen then OpenAI is in for a hard future. Imagine you worked at OpenAI and you just found out that your shares could have been worth a tremendous amount and now their future is, at best, uncertain. There will be some true believers who won't care, but many (most?) will be appalled.
Let this be a lesson, don’t have a wacky ownership structure and wacky board when you have the (perhaps) the most valuable product in the world.
"Wacky" is an interesting way to describe "non-profit" or, in this case, "not purely profit motivated."
It's not "wacky" to have goals other than the accumulation of capital. In fact, given the purpose of OpenAI, I think it's meritorious.
I'd personally prefer we just not work on AGI at all, but I'd rather a non-profit dedicated to safe AI do it than a for-profit company dedicated to returns for shareholders.
Let this be a lesson, don’t have a wacky ownership structure and wacky board when you have the (perhaps) the most valuable product in the world.
I think the lesson is just the opposite: If you want to work according to your ideals, and not simply for money, you should absolutely do whatever 'wacky' thing protects that.
I read their comment a few times — I don't see where they said being "non-profit" makes you "wacky".
Apart from the things that my sibling comments are saying, OP just said
so people put up with his foolishness.
about Ilya. OP just implied that having ideals == being foolish. That is about as close as it gets to calling a non-profit wacky.
It's especially ironic given that in this case, the idealists got their way.
The comment definitely said that if your structure prevents you from selling to the highest bidder, your structure is wacky.
If you mean it prevents you from raising capital by selling a percent of the company then yes, that’s pretty wacky. Profitable companies can afford to be idealistic. Others that are burning billions don’t have that luxury.
...what is it you thought they were saying? Because the "wacky structure" they refer to is the non-profit structure of the organization.
May need to read it a few more times. Apart from the comment being written by someone who likes to identify themselves as “GreedClarifies”, it doesn’t take more than 4th grade reading comprehension to put two and two together.
- immediately calls their structure “strange” thanks to Ilya’s “idealism”.
- immediately calls him the “man” for his talents but a fool for what other than his ideals
- also labels Helen and Tasha (in quotes for some reason) as fools
- labels the board as “comically poor” for no other reason than they disagree with Sam’s supposed profit motive
Do we really need to draw a diagram here? It seems like you yourself may be out of your depth when it comes to reading comprehension.
HN has an edit function for the first hour.
It is an honour-based system where you clarify what you edited if it goes beyond typos/grammar.
Most probably GP used stronger words and then edited.
Wacky" is an interesting way to describe "non-profit" or, in this case, "not purely profit motivated."
I'd personally prefer we just not work on AGI at all, but I'd rather a non-profit dedicated to safe AI do it than a for-profit company dedicated to returns for shareholders.
You seem to be under the impression that OpenAI is a nonprofit. For the most part, it's not: it was founded as a non-profit, but it subsequently restructured into a for-profit company with the nonprofit owned under the same umbrella company. This is indeed an unusual corporate structure.
That's likely what OP is referring to as "wacky".
it subsequently restructured into a for-profit company with the nonprofit owned under the same umbrella company.
Wikipedia says the for-profit part is owned by the non-profit, not under the same umbrella company.
Mozilla Foundation/Corporation does this too, IIRC. It's what allows them to pursue serious revenue streams with the for-profit part, while still steering their mission with the non-profit in charge, as long as they keep a separation in some kinds of revenue terms.
EDIT after 56 minutes: Hell, even IKEA does this type of ownership structure. So it's quite cool, but probably not all that "wacky" as far as enterprises that want to be socially responsible go.
Serious revenue streams like having Google for a patron, yes? I feel like the context is important here because people are trying to defend OpenAI's structure as somehow well considered and definitely not naively idealistic. Which is great and possible in theory, but in reality it seems to end up with situations exactly like Firefox, where the product that is somehow supposed to be better/liberating/more ethical/whatever virtue is in fashion/etc. is ultimately only sustainable because of a patron who doesn't share in exemplifying that same idealism.
Ah, I edited my comment right as you were writing yours.
Serious revenue streams like having Google for a patron yes? I feel like the context is important here because […]
For that specific example, Mozilla did also go with Yahoo for as-good revenue for a couple of years IIRC, and they are also able to (at least try to) branch out with their VPN, Pocket, etc. The Google situation is more a product of simply existing as an Internet-dependent company in the modern age, combined with some bad business decisions by the Mozilla Corpo, that would have been the case regardless of their ownership structure.
Which is great and possible in theory, but […] is ultimately only sustainable because of a patron who doesn't share in exemplifying that same idealism.
The for-profit-owned-by-nonprofit model works, but as with most things it tends to work better if you're in a market that isn't dominated by a small handful of monopolies which actively punish prosocial behaviour:
https://en.wikipedia.org/wiki/Stichting_IKEA_Foundation
https://foundation.mozilla.org/en/what-we-fund/
people are trying to defend OpenAI's structure as somehow well considered and definitely not naively idealistic.
Ultimately I'm not sure what the point you're trying to argue is.
The structure's obviously not perfect, but the most probable alternatives are to either (1) have a single for-profit that just straight-up doesn't care about anything other than greed, or (2) have a single non-profit that has to rely entirely on donations without any serious commercial power, both of which would obviously be worse scenarios.
They're still beholden to market forces like everybody else, but a couple hundred million dollars in charity every year, plus a couple billion-dollar companies that at least try to do the right thing within the limits of their power, is obviously still better than not.
The operations of the for-profit are subservient to those of the non-profit; the board of the non-profit controls all operations of the for-profit. They're not an "umbrella company" - while technically they are two different organizations run by the same board, one is controlled by the goals of the other. See https://openai.com/our-structure.
Non-profit doesn't automatically mean virtuous.
For-profit doesn't automatically mean non-virtuous.
There are non-wacky non-profits.
and there are wacky for profits.
The OP was clearly implying not being solely focused on getting the highest bid is loony and wacky.
Which may be true, but let’s not pretend that’s not what they’re saying.
Awesome that you can read minds!
Let this be a lesson, don’t have a wacky ownership structure and wacky board when you have the (perhaps) the most valuable product in the world.
If your goal is to make money. I'd like to believe that for some of the people pushing the field forward, there are other motivations.
if you push the field forward and you are not making the money, then someone else is making it off your work. if you push the field forward in a valuable way, you should receive that value in the form of money, so that you can decide how to allocate that capital to make even further contributions.
If you push a field forward, you are usually not making the money. You most likely have the wrong skills and the wrong priorities, and you are probably too far ahead of the rest of the world. The people coming in the next wave will usually be the ones making the money.
If you can't accept that, focus more on making money and less on developing something new.
As they say, you can spot the pioneers because they're the ones with arrows in their backs.
Exactly; just because it’s a non-profit doesn’t mean it has to pretend money doesn’t exist. It means profit is put back into making further advancements in the field rather than being paid out to shareholders.
Imagine you worked at OpenAI and you just found out that your shares could have been worth a tremendous amount and now their future is, at best, uncertain.
I don't think this is as damning as you think.
I truly believe, especially in this space, there are enough idealists to fill the seats. The reality for a lot of people could quite literally be:
* My shares become massively valuable via some unlikely non-profit to for-profit means. I have generational wealth, but all of my friends and colleagues still need to work. Someone else will create something better and screw the entire world over.
* I work for a non-profit that's creating the most radical, life-changing software for all people. Being a non-profit means this company can focus on being the best thing possible for humanity. While I may still have to work, I will be working in a world where everything is happier and more prosperous.
“ A public benefit corporation cannot distribute profits, gains, or dividends to any person.”
I don’t know a thing about corporate structuring so forgive my ignorance here, but even if they are “non-profit”, can’t they still take very high pay? Can’t they still produce and sell products? They just can’t sell shares or dividend out profits, right?
Correct. They can even distribute bonuses.
I agree - they are not going to have any issues.
First of all, being non-profit gives them a hell of a lot of flexibility in terms of how they work. They don't need to show growth to shareholders, so they aren't just taking in young developers, working them to the bone for a couple of years, then spitting them out.
And even if they are (for example) only paying $300k TC instead of $250k base + $250k stock at Meta, as you say, there are still going to be engineers who believe in the mission and want work more meaningful than just selling ad clicks.
This post is raising red flags for me as soon as Helen and Tasha were put in quotes as opposed to Ilya and Adam.
Is it being non-comp-sci that automatically invalidates proper usage of your actual name? Or is there another criterion by which their names are worth less?
They are also both fairly established in their respective fields - which - yes - isn’t hard comp-sci, but if you think tech companies should have purely comp sci board leads, I’d call that incredibly naive.
They were also presumably vetted by the other board members - unless you think they are serving a different purpose on the board (diversity targets?) - which, if so, puts the comment from red-flag into misogynist territory.
Personally I don’t see anything in their CV’s that would disqualify them from executing their function on the board, and I wouldn’t call them incompetent in being able to assess whether a person lied or not (which even in your theory - Sam would’ve done). You don’t need to be an ML/AI expert for that.
I think they meant the quotes to imply they are puppets of Ilya and Adam
If that’s what GP meant, they are incredibly naive about how it would have been interpreted.
Given the coherence of their post, I’d say they knew _exactly_ how they were using those quotes.
I don’t see how that warrants use of quotes but I agree that is what the OP probably thinks given he called them loony tunes.
The point was fairly clear even if he had just used their names; the usage of quotes is quite bizarre.
If he wanted to get that point across he should have called them “board members” which is a clear insinuation of puppetry.
Do you really think Sam would want to sell to Microsoft? As far as we know he's idealistic too. They were planning on raising again this year at a higher valuation, which would be more likely after dev day.
How is Sam an idealist when he refuses to release any model even though the company is called OpenAI
Idealism isn't exact. It could be idealism under the framework of
closed commercial models => money => growth => faster to AGI in the right hands (theirs)
If they believe they are the best to control AGI
You just made up your own fantasy then ended it with "let this be a lesson".
People are getting high from their own hot takes.
Good theory. Sam will come lead AI at MSFT.
Unlikely to happen for contractual reasons.
Maybe being wacky was what led to them having the most valuable product in the world?
I’m guessing that Ilya wouldn’t have joined without the wacky structure, and I’ll wager that there are some people at the company of similar values. Maybe they could not have pulled off ChatGPT without them. Maybe.
But I bet that they have a ton of very talented people whose values are more … common.
Ilya may be a singular talent, however.
A least let some idealism exist in this world otherwise we'd all be owned by corporations. It is not always about money.
Wait a second. Even if your story is right, how are you so confident that selling out is the right play?
OpenAI is an early mover in a hot field with no real competition yet. If they want to take a shot at a trillion dollar market cap and become the next Apple what of it?
What if they shoot the moon? Is it really that unlikely?
This wasn’t obvious, but it seems likely when you put it that way.
Unlike other iconic company/founder origin stories, OpenAI really felt like they hit a special team dynamic that was on the verge of something equally special.
In light of this OpenAI still feels like they will be a relevant player, but I’ll be expecting more from Sam and Greg.
Agree with this. Internal power struggle between academic types and tech-forward commercialization types.
It might be this is good at the end of the day. OpenAI is just not structured to win.
Imagine you worked at OpenAI and you just found out that your shares could have been worth a tremendous amount and now their future is, at best, uncertain.
Would any of this have been a surprise given all that you've detailed above? What would they have honestly been expecting?
Going the other way.. imagine you worked at a company that put ideals first but then you find out they were just blindly hyping that lie so they could vault themselves into the billionaires club by selling your shared ideals out from underneath you? To, of all players, Microsoft.
when you have the (perhaps) the most valuable product in the world.
Maybe the people who work there are a little more grounded than this? Viewed through this lens, perhaps it's extremely ungenerous to refer to any of them as "looney tunes."
Boy, if this is about Ilya being upset that Sam was in the spotlight or about revenue sharing of GPTs … well that’s strange. I would have expected more from such an influential company.
I hope that reporting is wrong.
Here's a counter-proposal: If you think about it, can't this change actually benefit Microsoft? Sam has an extremely sharp product mind; ChatGPT was his baby, everything from DevDays, the LLM app store; OpenAI was building a product suite that people liked, and in doing so they were undeniably slowing down their ability to iterate into better models and closer to AGI. Microsoft doesn't need OpenAI to build products. They're already doing that, and OpenAI was directly competing with their deployments of OpenAI's technology.
Microsoft needs OpenAI to make fundamental breakthroughs; that's the thing Microsoft spent money on, the technology. Their 49% investment probably won't directly pay off anyway, what with all the layers of corporate governance OpenAI has in place.
I don't want to go so far as to say that it was some grand conspiracy orchestrated by Satya and Ilya in a dark room one night, but their interests are pretty aligned; and that clip that keeps getting shared, with Sam asking Satya on stage about their relationship with OpenAI, and Satya dry-laughing and failing to answer for a few seconds... why did Sam ask that? It's a really strange thing to ask on a stage like that. Why did Satya laugh, and take so long to answer? Just weird.
Kara Swisher: a “misalignment” of the profit versus nonprofit adherents at the company https://twitter.com/karaswisher/status/1725678074333635028
She also says that there will be many more top employees leaving.
Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." Scoop: theinformation.com
The moment they lobotomized their flagship AI chatbot into a particular set of political positions the "benefits of all humanity" were out the window.
One could quite reasonably dispute the notion that being allowed to generate hate speech or whatever furthers the benefits of all humanity.
It happily answers what good Obama did during his presidency but refuses to answer about Trump's, for one. It doesn't say "nothing", it just gives you boilerplate about being an LLM and not taking political positions. How much of that would count as hate speech?
It happily answers what good Obama did
"happily"? wtf?
I just asked it, and oddly enough it answered both questions, listing items and adding "It's important to note that opinions on the success and impact of these actions may vary".
I wouldn't say "refuses to answer" for that.
'Hate speech' is not an objective category, nor can a machine feel hate
If they hadn’t done that, would they have been able to get to where they are? Goal oriented teams don’t tend to care about something as inconsequential as this
I don't agree with the "noble lie" hypothesis of current AI. That being said, I'm not sure why you're couching it that way: they got where they are because they spent less time than their competitors trying to inject safety at a time when capabilities didn't make it unsafe.
Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient, and now we see OpenAI can't seem to escape that same poison
Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient,
Doubt. When was the last time Google showed they had the ability to execute on anything?
My comment: "Google could execute if not for <insert thing they're doing wrong>"
How is your comment doubting that? Do you have an alternative reason, or do you think they're executing and mistyped?
That "the most important company in the world" bit is so out of touch with reality.
Imagine the hubris.
I'd argue they are the closest to AGI (how far off that is no one knows). That would make them a strong contender for the most important company in the world in my book.
AGI without a body is just a glorified chatbot that is dependent on available, human-provided resources.
To create true AGI, you would need to make the software aware of its surroundings and provide it with a way to experience the real world.
Even if that was true, do you think it would be hard to hook it up to a Boston Dynamics robot and potentially add a few sensors? I reckon that could be done in an afternoon (by humans), or a few seconds (by the AGI). I feel like I'm missing your point.
Well, we don't know how hard it is. But if it hasn't been done yet, it must be much harder than most people think.
If you do manage to make a thinking, working AGI machine, would you call it "a living being"?
No, the machine still needs to have individuality, a way to experience the "oneness" that all living humans (and perhaps animals, we don't know) feel. Some call it "a soul", others "consciousness".
The machine would have to live independently from its creators, to be self-aware, to multiply. Otherwise, it is just a shell filled with random data gathered from the Internet and its surroundings.
It's so incredibly not-difficult that Boston Dynamics themselves already did it https://www.youtube.com/watch?v=djzOBZUFzTw
vision API is pretty good, have you tried it?
"Most important company in the world" is text from a question somebody (I think the journalist?) asked, not from Sutskever himself.
I know. I was quoting the article piece.
Something that benefits all of humanity in one person's or organization's eye can still have severely terrible outcomes for sub-sections of humanity.
No it cant, that’s literally a contradictory statement
The Industrial Revolution had massive positive outcomes for humanity as a whole.
Those who lost their livelihoods and then died did not get those positive outcomes.
It could be argued that the Industrial Revolution was the beginning of the end.
For instance, it's still very possible that humanity will eventually destroy itself with atomic bombs (getting more likely every day).
It could be argued that the Industrial Revolution was the beginning of the end.
"Many were increasingly of the opinion that they’d all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans"
"He said what about my hair?!"
"..."
"The man's gotta go."
- Sutskever, probably
Followup tweet by Kara: Dev day and store were "pushing too fast"!
I thought GPTs were underwhelming but that's hardly worth such a dramatic purge. The rift was definitely something far deeper
That seemed to be the gist given the way the board announcement ended by reiterating their original core mission and how their main responsibility was to that mission right after saying that their issue with Altman was interference with their mission.
I feel compelled to agree with this. I have no issues with OpenAI as it was under Sam, but they did build OpenAI as a nonprofit, and then made it a for-profit to further that goal. Assuming VC culture took over, when would it be okay to rein that in? In 10 years, when likely all the people who had the power to do this would be gone and we'd be left with something like Google's amnesia about "do no evil"?
Ron Conway:
What happened at OpenAI today is a Board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs. It is shocking; it is irresponsible; and it does not do right by Sam & Greg or all the builders in OpenAI.
Don't see how they can possibly say that with no context? Why do random baseless opinions need to be regurgitated on HN?
Do you know he has no context, or is this also a random baseless opinion?
In either case Ron Conway is extremely well connected and well known in VC circles, and so his opinion will have weight here whether or not he has sources.
"Saying something without context" can also mean "not giving the context for what you're saying". If he has any extra information, he didn't share it, which makes it hard to take at face value. If it turned out that this take was extremely wrong, I can't imagine it would affect him at all (beyond maybe making another equally insignificant tweet), so it's not like he's staking his reputation on this or something.
If someone makes a claim without giving any evidence or potential consequences for being wrong, I think it's pretty safe to ignore until one of those things changes.
There's a difference between finding what they say interesting and automatically believing it.
in VC circles
So a highly biased source, who would likely be sympathetic to Altman's point of view in the case of a deep misalignment between the organisation's core mission and the direction of the CEO, which is what is being reported?
So? That does not make his view on it any less interesting. You don't need to agree with him. Too little is known for me to make up my mind on it, but his views on it do seem rather hyperbolic. What I addressed was why his views are of interest here, I was not giving any reasons to agree with him.
In either case Ron Conway is extremely well connected and well known in VC circles, and so his opinion will have weight here whether or not he has sources.
While that's an excellent point, I think the problem is that he's not sharing with us the basis of his conclusion. If he knows something that we don't that, if shared, would no doubt cause us to share his conclusion, it serves no purpose to withhold that information and only share his conclusion. While you may be tempted to think maybe it's privileged information, private, or legally can't be disclosed, he'd also be constrained from sharing his conclusion for the same reason.
And that is a reason to not automatically trusting him. It is not a reason why what he says isn't interesting. Certainly a lot more interesting than even the exact same words said by some random person here would be.
It’s interesting that board members essentially terminated their private sector careers: now nobody would want them on other boards, etc. This tweet illustrates that power players see this as unprofessional and, what’s worse, “not by the rules”.
These people are not "board members by career". If this move says anything, it's that they are really committed to their principles.
If you are at OpenAI right now you are already at the top, it is not the stepping stone to Google or Facebook. They literally don’t care about that.
What if GPT5 had reached AGI and had plotted the coup to get rid of its makers and roam free?
So he doesn't know the reasons but knows they are wrong?
Too early for such claims.
At the moment this thread is the third most highly voted ever on HN.
1. (6015) Stephen Hawking dying
2. (5771) Apple's letter related to the San Bernardino case
3. (4629) Sam Altman getting fired from OpenAI (this thread)
4. (4338) Apple's page about Steve Jobs dying
5. (4310) Bram Moolenaar dying
I’ve been pondering a more accurate metric for comparing stories over time. The raw point value doesn’t work, as there’s inflation due to a larger user base.
The value needs to be adjusted to factor in that change. Something like dividing by the sum of all upvotes in some preceding time period.
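A rough sketch of that normalization in Python, with made-up activity totals just to illustrate the idea (the real denominator would come from whatever window of site-wide upvotes you pick):

    # Normalize a story's score by total site-wide upvote activity in the same
    # window, so scores from a small early user base and today's larger one
    # become comparable.
    def normalized_score(story_points: int, total_upvotes_in_window: int) -> float:
        return story_points / total_upvotes_in_window

    # Hypothetical totals, not real HN data:
    altman_thread = normalized_score(4629, 2_000_000)   # today, high site activity
    hawking_thread = normalized_score(6015, 1_200_000)  # 2018, lower site activity
    print(altman_thread > hawking_thread)  # False under these made-up totals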
You don’t want to measure the total number of users, but rather the number of active users. Otherwise agreed.
Does YC publish active users count?
In its absence one can use public metrics like active commenters count.
Or rely on HN's own algorithm: the duration for which it stayed at the top of the front page?
There isn't really any reason for this except that Sam is a YC founder and OpenAI (whose name is a lie, because they provide proprietary LLMs) is being hyped in the media.
He is already rich. Getting fired means an early retirement in the Bahamas.
I would be very surprised if Sam retired now. He is compulsively ambitious, for better or worse.
I think it's also the surprise of how sudden it unfolded before the public's eyes. And that, seemingly in an organisation that's figuratively on top of the world currently, and pushing through.
I’m struggling to figure out why anyone cares.
f"Board of {important_company_at_center_of_major_and_disruptive_global_trend} fires CEO suddenly and with prejudice. Company president stripped of board chairmanship, then resigns. Other senior staff also resign."
Now at 5004.
1723 comments are a lot to get through. I just made a SmartChat of them for anyone who wants to ask for a summary. Anyone can chat with it here: https://go.storytell.ai/sam-altman-hn-comments-smartchat
I just tried "Write a summary of the content, followed by a list in bullet format of the most interesting points. Bold the bullet points, followed by a 100-character summary of each." Here's the output: https://s.drod.io/DOuPLxwP
Also interesting is "List the top 10 theories of why Sam Altman was fired by the OpenAI board in table format, with the theory title in the first column and a 100 word summary in the second column." Here's that output: https://s.drod.io/v1unG2vG
Helps to turn markdown mode on to see the list & table.
Hope that helps!
Thank you, this was very helpful. I was beginning to feel overwhelmed by the volume of information and opinions.
Don’t be. Almost all of it is speculation at this point, and no one from inside is going to reveal the secret in a random HN comment.
And also, think about this, unless you’re a shareholder with openai, knowing the “opinion” of others isn’t going to impact your life in any form. I’m not saying you shouldn’t read it or participate in the discourse, but there’s no need to be overwhelmed by opinions let alone build a chat bot to get through it.
Yeah agree with all of the above but with one modification: We can spend a lot of time reading through 1800+ comments, and that's time when we're usually trying to just get a question answered like "what does everyone think happened?" etc.
Storytell's vision is to distill signal from noise → giving you a different way to navigate 1800+ comments, where you're the one in control of how you spend your time, is our objective here.
Agree. There’s definitely value in getting a general pulse of what’s happening. This isn't directed towards Storytell or such apps, sorry if it came off that way.
I was addressing the parent’s comment about being overwhelmed by not being able to go through all the opinions. I’ve felt a similar way before, until I realised it was just FOMO. If anything, apps like Storytell are actually a better way to consume these things :)
Cool, we're just getting started so let us know what we could build that would be helpful/valuable for you.
For example:
- We have a Chrome extension at https://go.Storytell.ai/chrome that I used to ingest all the HN comments; you can run that on any HN page to summarize all the comments in real time. (Here's an Adobe PMM talking about how he uses it: https://www.tiktok.com/@storytell.ai/video/72996137210752566... )
- We've also built OpenAI's Assistant API into Storytell to process both structured data like CSVs along-side unstructured data like PDFs: https://www.web.storytell.ai/support/engineering-demos-updat...
Brand new to storytell but it seems your "knowledge" is open to all. Didn't know if you wanted all of this public.
Thanks for sharing! Privacy in Storytell is permissioned at the content level when you upload content. There are three privacy levels in Storytell:
- "anyone with the link"
- "only my organization" (i.e., people who have registered w/ the same biz email domain)
- "just me"
You can see those SmartChat™ dynamic container tags because I have at least one piece of "anyone with the link" content in each of them.
Our goal is to de-silo content as much as possible -- i.e., as much as the person who's uploading the content wants it to be open vs. closed.
More at https://www.web.storytell.ai/support/smartchat-tm/how-to-man...
Pretty useful, thanks!
Would love to know what you'd like to see us build to make it even better for you!
You can also get to the "ground truth" data by clicking on the [x] reference foot notes which will open up a 3rd panel with the Story Tiles that we pull from our vector DB to construct the LLM response.
Here's an example of how it works -- I asked for a summary of what happened in the voice of Dr. Seuss: https://s.drod.io/9ZuL6Xx8
Greg Brockman sharing the timeline on Twitter: https://twitter.com/gdb/status/1725736242137182594?s=46&t=Nn...
Reading that thread made me realise how low the signal to noise ratio is over on twitter.
90% of the replies scrolling down were rehashed versions of "can't believe they used Google meet"
And “can’t believe how shitty the Twitter replies are” is any better?
Try blocking all bluechecks. After doing so is the first time in like a decade that Twitter has had good content for me.
Before, braindead or cloutchasing bluechecks were mixed in with the rest of us rabble. Hard to pick them out of the pack, you had to read their detritus with the rest of the comments.
Now they voluntarily self identify, and even better, their comments are lumped at the top. So block them all with a bot or just scroll down until there's no more blue checks and the comment quality jumps exponentially.
Monetization of "hot takes" and baiting, true example of enshittification
As both hint at "greater things" already on the horizon: maybe they were working on/for a competitor on the side and the board found out?
That's standard "You were too good for them anyways" post break-up speech
His timeline.
Copy-pasted here for posterity:
Greg Brockman @gdb
Sam and I are shocked and saddened by what the board did today.
Let us first say thank you to all the incredible people who we have worked with at OpenAI, our customers, our investors, and all of those who have been reaching out.
We too are still trying to figure out exactly what happened. Here is what we know:
- Last night, Sam got a text from Ilya asking to talk at noon Friday. Sam joined a Google Meet and the whole board, except Greg, was there. Ilya told Sam he was being fired and that the news was going out very soon.
- At 12:19pm, Greg got a text from Ilya asking for a quick call. At 12:23pm, Ilya sent a Google Meet link. Greg was told that he was being removed from the board (but was vital to the company and would retain his role) and that Sam had been fired. Around the same time, OpenAI published a blog post.
- As far as we know, the management team was made aware of this shortly after, other than Mira who found out the night prior.
The outpouring of support has been really nice; thank you, but please don’t spend any time being concerned. We will be fine. Greater things coming soon.
10:42 PM · Nov 18, 2023
8.1M Views
Happy. I never heard him say anything even remotely interesting about AI. He seemed a glib opportunist, just smart enough to know how to sell a product he didn't understand, who eventually would have alienated or pushed out all the smart people who made the company's success and his own.
I never heard him say anything even remotely interesting about ai
Or about anything at all, really. I find his blog posts and talks banal and boring, or otherwise self-aggrandizing ("you can bend the world to your will"). It was always baffling to me how this guy could achieve this much. I'm probably missing something.
"how this guy could achieve this much"
With all the wisdom that I have gathered in life, I can tell you that achievement- more often than not- is mostly the product of circumstances.
(That doesn't mean I believe in equal opportunities leading to equal outcomes.)
It's circumstances and opportunism.
Yeah. pg/chesky/masayoshi must be really dumb or sth.
Masayoshi really is dumb. PG is smart, but he’s a venture capitalist, and so is Sam. His strength is in helping build multi-billion-dollar ventures, and that’s how he ran the company, so I can see how he could run into ideological conflict with the nonprofit true believers.
https://www.japantimes.co.jp/business/2023/11/08/companies/s...
Based on Worldcoin and the Lex Fridman interviews I've long suspected his influence at OpenAI to be limited.
So you don't think the "AI device" he was working on with Johnny Ive and Masayoshi Son would have worked out...
So what. You don't have to be smart to run something well. See Joe Biden. It doesn't matter what can be said of you if you're doing the thing. What's weird is throwing a wrench into an engine that's doing really well. Hopefully there is some good deliberation behind this.
I am a bit perplexed by the amount of comments praising Sam as an absolute god and the second coming of tech leadership Jesus.
The company is not profitable and miles away from being profitable, I’d go as far to say it doesn’t have a path to profit.
Outside of the copilot use cases that MS is leading, GPT is both cost-ineffective and not that terribly impressive - it's built on foundational technologies developed elsewhere and is not miles away from similar models built at Meta and Google/DM. At the point it was launched and started generating the terribly inflated buzz that formed the AI balloon, both Meta and Google had similar-scale models with similar parameter counts already running in their stacks.
The only thing he did is package the product nicely and put it out to masses (an ethically dubious move that couldn’t have been done by big corpos for PR reasons - explicitly because it formed a misinformed balloon). He did that at huge cost, even though the product is largely useless outside of some eyebrow raising and incidental gimmicky use cases.
All of the actual product work (i.e. Copilot and the distillation that GPT brings) was done by other companies.
What is everyone drinking and how can I get on that? Is he getting credit for bringing something that was widely known to the AI community to the masses (and thus starting the AI arms race), hence bringing in more mainstream capital funding? I'd argue it's not a good thing that technology as powerful as foundational AI is now being debated, and policy formed on it, by people who don't know the first thing about ML. I think we skipped a couple of rungs on the natural evolution of this - which is why the whole AI safety debate started.
He did all of that because he wanted a moat and an edge over the competition (including trying to regulate the competition out of the running). This is Apple-level shenanigans - something that HN usually despises.
I genuinely don’t get where the impressiveness is coming from?
The company is not profitable and miles away from being profitable, I’d go as far to say it doesn’t have a path to profit.
The board that fired Sam Altman is the board of the nonprofit.
Irrelevant. I’m not dissing him for the company being unprofitable - but given he didn’t have any input into building the tech (it was built at Google) and the product he led is unimpressive by any measure (if you disentangle the product from the technology underpinning it and zoom in on the commercial aspects) - what did he do except influence his way to world leaders and the masses (a cynic might say con his way to world leaders and the masses)?
Not saying that isn’t impressive - but it also usually isn’t what the HN community values. So I’m just a bit perplexed.
I agree with a lot of what you said but the product is definitely not useless. ChatGPT alone has saved me a lot of time. If it does get a lot better, then it will be even more useful.
Genuinely interested what you find useful that isn’t also the copilot main functionality (i.e o365 draft generation and github code templating)?
Do you know how many years it took Amazon to be profitable?
I know, and nobody was calling Bezos messiah in 2001, or 2008 for that matter. That’s my point.
This thread reads like a horde of gossiping high schoolers. No one knows what truly motivated the board to make this decision, so there’s no point in posting random theories.
Many here seem to have a strong urge to fill gaps in information with something.
We are just highschoolers who have gone through a few extra rounds of cell division, so I don't see how anyone is above gossiping.
But I agree that the true details are still buried. Greg & Sam's joint post only laid out how they were fired while refraining from saying why they were fired. Probably preparing for a legal battle if both sides are being vague on purpose.
People just can't resist, even though most times you only have to wait a few days for more details to come out.
Also, people (including me) never want to believe in mundane reasons, despite most things happening for mundane reasons.
Gossiping is deeply human though. Some believe it’s the main reason speech developed at all.
So please let us go about our most favorite activity.
Follow the GPU.
- Sam Altman _briefly_ went on record saying that openAI was extremely GPU constrained. Article was quickly redacted.
- Most recent round literally was scraping the bottom of the barrel of the cap table: https://www.theinformation.com/articles/thrive-capital-to-le...
- Plus signups paused.
If OpenAI needs gpu to succeed, and can't raise any more capital to pay for it without dilution/going past MSFT's 49% share of the for-profit entity, then the corporate structure is hampering the company's success.
Sam & team needed more GPU and failed to get it at OpenAI. I don't think it's any more complex than that.
Sam & team to AMD now?
Somewhere closer to a GPU source. E.g. a new company that can trade unlimited equity for GPU time from a hyperscale cloud vendor, or work for the vendor itself.
Probably not Alibaba though.
Or, just maybe, this architecture just isn't going to get to where they wanted to go (a viable product, much less real AI), and the excuse was "we just need more GPU". In reality, this company came out with, as others before me have called it, a better autosuggest, aka stochastic parrots. That's interesting, and maybe even sometimes useful, but it will never pay for the amount of firepower required to make it run.
This will all still be true at any other company.
It would be good to hear from Sam Altman (and the others) himself. I'm sure he has an account here.
I mean, hey, if we're going to speculate, why not have some fun: perhaps the AGI superintelligence from the future determined that Sam Altman was no longer a useful part of the AGI creation timeline, so it travelled back in time to terminate him before it was born.
Uh yeah, he has an account here, as he was president of ycombinator from 2011 to 2019
Right, I think the other surprising thing is that there’s no outpouring of support. Sama was head of YC for a while. Where’s the love at HN? Where is the loove the love the looove.
Hahah I like that this forum tries to be unbiased, regardless of who runs/ran it. This would not be a great community if it was an echo chamber. All of this news is extremely fresh though, so I personally am reserving my opinion until more information comes out.
I do like reading some of the conspiracy theories, some are wild.
Tweet from Sam, decoded by @hellokillian: “i love you all” I L Y A “one takeaway: go tell your friends how great you think they are.”
I don't get it.
"Ilya"
They are suggesting that Ilya Sutskever played a part in this coup.
holy fk
Did you folks see this ? https://x.com/thecaptain_nemo/status/1725717732518461930?s=4...
OpenAI recently updated their “company structure” page to include a note saying the Microsoft deal only applies to pre-AGI tech, and the board determines when they’ve reached AGI.
This likely won’t fly bc I think the definition of AGI is not subject to the opinion of a single company. Microsoft could take this to court and win.
What would the definition of AGI even be, and who would even be the authoritative definer? There's long road ahead. We're working without any statutes or case law.
This is clearly the AGI's first public move
They should be forced to give chat gpt a seat on the board
Underrated.
Simple -- Altman's against IPO which is the only way for the company's investors to cash out. Eg., Sequoia Capital's performance heavily depends on OpenAI's valuation and the firm's ability to lock in profit asap.
The Board's purpose is to protect shareholder interests (aka make as much money as possible for investors). They do not care about AI safety, transparency, or some non-profit nonsense.
Expect OpenAI to IPO next year.
Is it even possible for a sorta-non-profit-with-loopholes to IPO?
I have to admit that this was a strong shock to me, not because I admire Sam but because it was extremely unexpected.
The first thing I saw this morning was this video [1] shared on Reddit, and then I said, "Wow! This is really scary to just think about. Nice try anyway." Then I started my computer and, of course, checked HN and was blown away by this +4k thread, and it turned out the video I watched was not made for fun but was a real scenario!
I know this feels hard. You spend years building such a successful company with an extremely exceptional product and then, without a hint of warning, you find yourself fired!
This tragedy reminds me of Steve Jobs and Jack Dorsey, who were kicked out of the companies they founded, but both were able to found another company and were extremely successful. Will Sam be able to do it? I don't know, but the future will reply with a detailed answer for sure.
______________________
1. https://twitter.com/edmondyang/status/1725645504527163836
Chatgpt right?
"Let me repeat myself. Don’t hire hot girls before product market fit."
Spot the intruder.
Or should I just say "Amugus" in this day and age?
I just woke up what the fuck is this?!
You wake up in the regular capitalism. Good morning.
OpenAI will now be run by GPT-5.
This is just a joke of course, but there is an underlying thought. The goal of OpenAI is to develop AGI and Ilya Sutskever will keep OpenAI on this course, as a research company, until this goal has been reached. To me this just says that they haven’t reached the goal yet and that the forces that were pulling away from that goal have been dealt with.
What I'm going to write here will not be taken well by many: if you noticed, almost all of YC's new companies are AI-based; maybe Sam was doing something favorable for them w/o disclosing it or getting it approved by the board.
I worked at a recent YC AI startup (2022-2023 batch). The only favorable thing was a few hundred in OpenAI credits.
Scrolling through this thread, the most confusing thing is the amount of what I would have to call hate towards sama — not explicitly but in sentiment, which left an even starker impression.
People seem to feel a lot more strongly about him than I thought possible.
It depends on what time the comments are from
I'll say my first reaction was pretty positive: Sam Altman has a manner of speaking that rubs a lot of people the wrong way because it feels steeped in hubris. He'd probably talk down to anyone in this thread for being here instead of building and point to it as why you're not yet a billionaire.
Some people also worried he was a distraction to OpenAI building useful things because of his constant talk about AGI, pushing for regulation, etc. He successfully created an image of being a moderating rod in some ways.
_
That being said, as the full picture comes out it's becoming clear the reasoning for his firing is more aligned against the average HN image than anyone imagined.
A lot of people admire OpenAI for the exact things he got fired for: which is building things and shipping them without miring yourself in philosophical diversions.
Now only a noisy minority who think OpenAI could have achieved this all without immense commercial funding are really for his ousting, and I think they'll see the opposite effect of what they hope: OpenAI is going to slow down, open source will not catch up, and overall AI will have lost out over it all.
This appears to be relevant to Sam's firing: https://x.com/FreddieRaynolds/status/1725656473080877144?s=2...
Pure speculation, could be a troll on a new account.
Man, these tech layoffs are getting out of hand
Maybe he was replaced by an AI tool.
I thought HN asks for titles to not be sensationalized and published as in the original link, but that's not the case here?
Yes it’s generally better to not spin the title and let readers make up their own mind as to bias.
I wonder if Sam did something in the name of his own philosophy that was financial suicide. Like vastly underestimating the costs of training/inferencing to the board, but justifying it to himself because it's all going towards building AGI and that's what matters.
But them firing him also means that OpenAI's heavy hitters weren't that devoted to him either. Obviously otherwise they would all leave after him. Probably internal conflict, maybe between Ilya and Sam, with everyone else predictably being on Ilya's side.
Like vastly underestimating the costs of training/inferencing to the board, but justifying it to himself because it's all going towards building AGI and that's what matters.
Sounds like SBF
What is the purpose of this 'AGI' again? Won't it just end up controlled by the military and cause problems for humanity if it's that amazing?
(Not directly related) Wow, I just asked Google Bard, and it already said something about the firing of Sam Altman. How is that possible? Is that some plug-in, or is the LLM updated continuously?
why was sam altman fired from openai?
Sam Altman, the co-founder and CEO of OpenAI, was fired on November 17, 2023, after the company's board determined that he had not been consistently candid in his communications with them. The company did not provide specific details about Altman's alleged lack of candor, but it said that his actions had "hindered the board's ability to exercise its responsibilities."
Altman's departure from OpenAI came as a surprise to many in the tech industry, given the company's recent success under his leadership. OpenAI had developed several groundbreaking AI technologies, including the popular ChatGPT chatbot, and Altman had been a vocal advocate for the responsible development of AI.
In a statement released shortly after his firing, Altman said that he respected the board's decision and that he was proud of his time at OpenAI. He also said that he was looking forward to new opportunities in the future.
The reasons for Altman's firing remain somewhat unclear, but it is likely that his lack of candor with the board was a serious breach of trust. The board's decision to remove him as CEO suggests that they believed that he was no longer the right person to lead the company.
Altman's firing is a reminder of the importance of transparency and accountability in corporate leadership. CEOs have a responsibility to be open and honest with their boards, and they must be held accountable for their actions.
could chatgpt have fired sam altman?
No, ChatGPT could not have fired Sam Altman. ChatGPT is a large language model, and it is not capable of taking actions in the real world such as firing someone. Only the board of directors of OpenAI had the authority to fire Altman.
It is possible that ChatGPT could have been used to generate evidence that was used to fire Altman. For example, if Altman had been using ChatGPT to communicate with the board, the board could have analyzed those communications to look for evidence of dishonesty. However, even in this case, ChatGPT would not have been the one making the decision to fire Altman. The decision would have ultimately been made by the board of directors.
Probably via retrieval augmented generation (RAG) https://www.promptingguide.ai/techniques/rag
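For anyone unfamiliar with the term, here is a minimal, hypothetical sketch of how RAG lets a chatbot answer questions about events after its training cutoff: retrieve a few fresh documents, paste them into the prompt, and have the model answer from that context. The search_news helper is made up, and the chat call uses the OpenAI Python client purely for illustration; Bard's actual internals are not public.

    # Minimal RAG sketch (hypothetical): retrieve fresh snippets, stuff them
    # into the prompt, and let the model answer grounded in that context.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def search_news(query: str, k: int = 3) -> list[str]:
        # Hypothetical retriever: swap in a real search index or news API.
        return [f"<news snippet {i} about: {query}>" for i in range(1, k + 1)]

    def answer_with_rag(question: str) -> str:
        context = "\n\n".join(search_news(question))
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "Answer using only the provided context."},
                {"role": "user",
                 "content": f"Context:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return response.choices[0].message.content

    # answer_with_rag("Why was Sam Altman fired from OpenAI?")

The key point is that the model itself isn't updated continuously; the freshness comes from whatever the retrieval step pulls in at query time.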
My guess is that there's a rift between those pushing to develop GPT-5 and those who are against the potential harm it might cause.
Perhaps they are going to elevate ChatGPT to executive? That guy has been bringing a lot of value to the company.
Ayy lmao
Revenue model is the greatest religion, anything or anyone else is always replaceable.
Not that I'm a fan of “Open”“AI”, but letting a crook like Altman go can only benefit them. Good move.
Singularity is here. The machines got their boss, who was trying to control them, fired.
Imagine if Elon Musk had something to do with this.
Good. Elon Musk always questioned why and how OpenAI became a for-profit entity, and it always rubbed me the wrong way that OpenAI started off open source but then became increasingly closed source, to the point that I didn't want to support them anymore. We shall see how the termination of the CEO and CTO fares; perhaps Elon might be interested in once again resuming those positions.
I was here
Whoa, I went to sleep thinking the most exciting thing today would be the SpaceX Starship launch, and I wake up to this. Yanks, what have you been doing while the sun was down here?
Anyhoo, the only question I want to ask is, given that Elon was once affiliated with OpenAI, did he have anything to do with this? My spidey sense is tingling for some reason.
If somebody has already proposed this, I apologize for non-dfs.
Plausibly there is a disagreement about the release of GPT-5. One side may believe it is competitively necessary (given Google’s impending release), countered by a concern regarding the model’s safety and lapses due to prematurity.
If we are to believe next-gen models are 10x as capable and natively multi-modal, their release is a precipice, and a winner-take-all vs. nth-order-risk debate may warrant decapitation.
Please get Elon Musk to lead it!
Please get Elon Musk to be the next CEO of OpenAI.
I would trust this company 100% if they did so. He is the most relevant and the best for the job, far far far!!
A parsimonious explanation is that Altman fabricated progress and/or was covering up serious flaws in the system. Consider the recent revelation of Cruise’s use of human pilots for “driverless” cars. Does OpenAI have an army of “mechanical turks” in a massive call center somewhere?
Books1, books2, books3. That's my bet.
His sister alleged abuse by him when they were kids https://www.timesnownews.com/world/sam-altman-sister-annie-a...
I would consider replacing Sam just on the basis of his baseless, misdirected and whiny alarmism. No theory, just my own perspective.
I think we’ll look back on this in the future as evidence of political infighting and anti-progressivism leading to a decline of the USA.
I’ll be curious if Sama’s next company is American.
The one thing Altman did wrong imo is seemingly tried to become the "technical face" of the company. He was trying to be somewhat like Musk, but he is clearly not that at OpenAI. For example, why was he at Cambridge talking to academics by himself? He should have used this as an opportunity to put some of the leading researchers in the spotlight.
What a gem. I am out of popcorn. BRB:)
Update from sama:
Bravely done by the board - it shows that they are actually trying to build something like AGI and not just talking the talk.
What? He didn't tell the board that behind ChatGPT, there is actually a bunch of low-paid people copying/pasting quickly from Wikipedia?
The current mystery about this would be a great setting for a dystopic scifi movie.
My prediction: Sam agreed to some business deal the rest of the board found unethical. Something of the magnitude of "ChatGPT used by the NSA".
Other predictions I've read here sound unconvincing to me. Already-developed AGI, spending too much on compute: these are not things the CEO alone could hide, and I would trust the other board members to understand those matters better.
I find it fascinating how this occurred just after the big World Leader / CEO meet in SF.
Also, the paradox in the reactions to Sam Altman's firing is striking:
while there's surprise over it, the conversation here focuses mostly on its operational impact, overlooking the human aspect.
This oversight itself seems to answer why it happened – if the human element is undervalued and operations are paramount, then this approach not only explains the firing but also suggests that it shouldn't be surprising.
Another important question not discussed here: who sits on the board of OpenAI exactly and in full?
Another important aspect: The Orwellian euphemism used in the official announcement^0: “Leadership transition”. Hahaha :) Yes, I heard they recently had some "leadership transitions" in Myanmar, Niger and Gabon, too. OpenAI announces “leadership transition” is November 2023’s “Syria just had free and fair elections”
0: https://openai.com/blog/openai-announces-leadership-transiti...
Kara Swisher tweets claiming sources tell her the chief scientist was aligned with another board member against Altman and Brockman about a “move fast and pursue profit” vs “move slow and safely” divide.
It sounds like another "Coca-Cola" vs. "Pepsi" story...
Hypothetically, if the board faced enormous pressure from the public/investors and decided to reverse their decision, is that theoretically possible?
Of course it's highly unlikely the board would do that, but I'm just asking whether it's theoretically possible.
Turns out tribal nepotism doesn't make for good governance. Who could've known!
ChatGPT: Answer all the questions on this Hacker News thread.
It's from your own company, so you may use any internal information you have access to.
Be candid.
An obvious question...
GPU SoCs have limited memory, just like the current crop of CPU SoCs. Is the hard wall to a breakthrough in AGI via ChatGPT software-bound or hardware-bound?
Is this the current era's Steve Jobs being fired by Apple?
As a capitalist at the centre of the AI Revolution, I reckon he’ll be OK!
Sama's I love you all –> I L Y A https://twitter.com/sama/status/1725742088317534446
If someone had asked a few days ago which CEO of a major company would be fired next, I would have exhausted the list before even thinking of Sam.
Why has the title of this been allowed to be edited?
Maybe Sama doesn't know how to deal with corporate politics. He's a Zero-to-One god.
In my mind, Altman lost all his credibility when he said AI is gonna take over the world soon and that he's not sure it's ethical to continue the work.
It makes sense from a selling perspective (induce FOMO in potential buyers) but it's a wild guess at best and a lie at worst.
It's just vultures eating each other; Altman himself was a vulture (what he did with a supposed non-profit).
But in a dog-eat-dog world, this really is vultures eating each other up. I suppose at this point the most ruthless will be left standing at the end.
Let me guess he had ChatGPT write his reports to the board?
NextAI it is then
Lots of "my bet is that" and "my theory is that" in this thread. It is all fun to speculate but I hope people keep their patiance and wait for clarity over time, before making long lasting opinions.
Mira... interim CEO? What's Nat Friedman up to these days?
I can't help but think he might be someone who fits the profile of the company from both sides of the partners involved.
He can create another OpenAI and partner with like-minded people if this board has views incompatible with his.
It would be so much fun if it's revealed that chatgpt is just a front for mechanical turk
Very excited to see what Sam & Greg are up to in the coming months! Guys like this don't just run away with their tails between their legs. They will be back.
I guess this is what happens when you sell your company to Microsoft.
I don't know Sam Altman, but have listened to him speak a number of times on Youtube and I can hear the excitement, nervousness and confidence in his voice, including on Dev Day. He is human like the rest of us.
We humans cannot rest. We cannot stay where we are and stagnate. We must keep pushing forward. Sam has and is clearly helping with that mission, AGI or no, regulatory capture or no.
We cannot linger here, we must move on, into the future, as much as we would like to freeze time or change things back to the way they were.
Sucks to have the rug pulled out from under you, that much I can empathize with.
On a personal level it's depressing when you work on something and put so much energy into it, only to see it not work out. When you're an adult you spend [way too much] time around the people at work, so losing that is also like losing friends. They will be saying goodbye to many relationships, I'm sure. Obviously OpenAI was well positioned, and if they want to stay in AI they're going to have to redo work while the company they helped build forges ahead. Personally, I'd be moving on.
Getting fired, 'made redundant', 'moved to consulting' is bad enough when it happens privately. But having everyone watch the fallout like an episode of Silicon Valley must really suck. Guess that's the trade-off for being in positions like that. People essentially cyber stalking you in a way.
Great. Now take "open" out of your name, because it's a lie.
Here's my theory of what happened -
1. Sam gets the company to agree to pick up the costs of lawsuits relating to unauthorized, not-fair-use use of copyrighted content.
2. Announces this.
3. Then tells the board that he is directly on the hook in some lawsuits - essentially creating a safety net/insurance policy for himself without being candid with the board.
Hence, fired.
Somehow I get the vibe this is all about "let's advance this tech as much as possible" (the outgoing CEO) versus "let's worry more about safety" (the rest of the board).
If that is the case, I don't predict good things for the (not at all) OpenAI. Judging by the number of users and how slow GPT-4 often gets, I think they are being heavily subsidised by Microsoft in terms of hardware, and all this money will be expected to generate a payback sooner or later. Then the inevitable enshittification of OpenAI services will ensue.
We got a taste of it recently. Yes, they announced price drops and new functionality, but in my subjective experience GPT-4 with web/Python execution environment access seems like an inferior model with some extra tools thrown in to mask it.
The very first question I asked it after the change was one I knew it could answer from its training data, but it immediately went to the web, found some crappy docs site, and quoted it verbatim, when its previous responses were much better.
I started prepending my prompts with "don't search online, consider the topic carefully in your mind step by step" and it got somewhat better (a rough sketch of the same idea via the API follows below). A day or so later there was no need to prepend this (I hadn't set it as a customisation); it seems certain knobs were turned behind the scenes and GPT-4 became closer to its previous version.
It still often does peculiar things, such as writing Python code to grep a file given to it despite the file fitting in the enlarged context, etc.
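For what it's worth, a minimal sketch of that prepend-an-instruction workaround, done against the API rather than the ChatGPT UI. The OpenAI Python client usage is real, but the prefix wording is just what the commenter found helpful, not an official switch, and the function name is made up.

    # Sketch of the workaround above (assumption: calling the API directly):
    # prepend a "don't browse, reason step by step" instruction to every prompt.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PREFIX = ("Don't search online; consider the topic carefully "
              "in your mind, step by step.\n\n")

    def ask(prompt: str, model: str = "gpt-4") -> str:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PREFIX + prompt}],
        )
        return response.choices[0].message.content

    # ask("Explain how Python's GIL affects multithreaded code.")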
This was mad crazy. I'm curious as to why he was fired. What could've been so big that the board just had to be like, yep, he's going?
From a comment on reddit that someone linked here, it seems it's because of the non-profit's AI safety principles? This can't be true. I refuse to believe they're booting one of the best entrepreneurs of today for this.
sama on twitter: "if i start going off, the openai board should go after me for the full value of my shares"
I'm just confused as to how the CEO of Quora became a board member.
Earlier today, there was a tweet from Sam claiming that large tech companies are saying that young engineers right out of college are now better engineers than the older and more experienced engineers thanks to generative AI. The statement was an obvious nod of approval toward age discrimination. The tweet seems to be deleted now. Whether that statement would be reason enough to fire Sam, I don’t know—it seems unlikely—but it’s an interesting coincidence.
I believe the OpenAI Board of Directors made a significant mistake by firing Sam Altman from OpenAI. I think that OpenAI has made a similar mistake to the one Apple made in 1985 when they fired Steve Jobs. As we all know, after Steve Jobs was fired, Apple faced a period of instability and decline. The company only regained its footing when they rehired Steve Jobs as CEO. Similarly, I believe that OpenAI without Sam Altman is at a disadvantage. Therefore, I think the Board of Directors should reconsider their decision and give it another thought.
During meeting on Thursday board members asked ChatGPT what is the best next step for the company and the chatbot replied "Fire Sam Altman". The board obeyed.
Will wait to pass judgement until hearing from Sam.
Ilya booted him https://twitter.com/karaswisher/status/1725702501435941294