Greg Brockman quits OpenAI

johnwheeler
156 replies
16h40m

Edit: I called it

https://twitter.com/karaswisher/status/1725682088639119857

It has nothing to do with dishonesty. That’s just the official reason.

———-

I haven’t heard anyone commenting on this, but consider the two main figures here: this MUST come down to a disagreement between Altman and Sutskever.

Also interesting that Sutskever tweeted a month and a half ago

https://twitter.com/ilyasut/status/1707752576077176907

The press release about candid talk with the board… It’s probably just a cover-up for some deep-seated philosophical disagreement. They found a reason to fire him that doesn’t necessarily reflect why they are firing him. He and Ilya no longer saw eye to eye, and it reached a fever pitch with GPT-4 Turbo.

Ultimately, it’s been surmised that Sutskever had all the leverage because of his technical ability. With Sam being the consummate businessperson, they probably got into some final disagreement, and Sutskever reached his tipping point and decided to use said leverage.

I’ve been in tech too long and have seen this play out. Don’t piss off an irreplaceable engineer or they’ll fire you. Not taking any sides here.

PS most engineers, like myself, are replaceable. Ilya is probably not.

lenerdenator
40 replies
16h7m

I think that if there were a lack of truth to him being less-than-candid with the board, they would have left that part out. You don’t basically say that an employee (particularly a c-suiter with lots of money for lawyers) lied unless you think that you could reasonably defend that statement in court. Otherwise, it’s defamation.

johnwheeler
39 replies
16h6m

I’m not saying there is lack of truth. I’m saying that’s not the real reason. It could be there’s a scandal to be found, but my guess is the hostility from OpenAI is just preemptive.

There’s really no nice way to tell someone to fuck off from the biggest thing. Ever.

anigbrowl
31 replies
15h26m

John, I don't think you understand how corporate law departments work. It's not like a romantic or friend breakup where someone says a mean remark about the other to underline that it's over; there's a big legal risk to the corporate entity from carelessly damaging someone's reputation like that, so it's smarter to just keep the personality/vision disagreements private and limit public statements to platitudes.

johnwheeler
17 replies
15h11m

Please don’t patronize me. It indeed looks like the press release from OpenAI is under scrutiny. What you fail to understand is human nature and the way people really do things™

https://twitter.com/karaswisher/status/1725685211436814795

anigbrowl
15 replies
14h18m

I'm not patronizing you, I'm just responding on the same level as the post I replied to. There's an endless supply of examples of corporate/legal decisions and communication being made on very different criteria from interpersonal interactions.

Of course the press release is under scrutiny, we are all wondering What Really Happened. But careless statements create significant legal (and thus financial) risk for a big corporate entity, and board members have fiduciary responsibilities, which is why 99.99% of corporate communications are bland in tone, whatever human drama may be taking place in conference rooms.

Jerrrry
14 replies
13h7m

John

I'm not patronizing you

(A)ssuming (G)ood (F)aith: referring to someone online by their name, even in the edge case where their username is their name, is considered patronizing, as it is difficult to convey a tone via a text medium that isn't perceived as mockery or a veiled threat.

This may be a US-internet thing, analogous to how getting within striking distance with a raised voice can be a capital offense in the US while being completely normal in some parts of the Middle East.

lijok
12 replies
11h51m

referring to someone online by their name is considered patronizing

This has to be a joke, right?

jrockway
6 replies
11h38m

I don't think it's a joke. I would find it patronizing unless I'm already on a first name basis with the commenter through some prior relationship.

Filligree
3 replies
9h46m

Really? Referring to someone by first name is perfectly ordinary where I’m from, regardless of relationship. If someone doesn’t want me to do that, I’d expect them to introduce themselves as “Mr. so-and-so”, instead.

TeMPOraL
1 replies
9h31m

It's not the first name alone, it's also the sentence structure. "Hey John, did you hear about..." sounds perfectly normal even when talking on-line to strangers. "John, you misunderstand..." is appropriate if you're their parent or spouse or otherwise in some kind of close relationship.

jrockway
0 replies
9h20m

You have explained this much more concisely than me.

jrockway
0 replies
9h21m

In person, sure, that's totally normal. It's unusual on a forum for a few reasons:

1) The comments are meant to be read by all, not just the author. If you want to email the author directly and start the message with a greeting containing their name ("hi jrockway!"), or even just their name, that's pretty normal.

2) You don't actually know the person's first name. In this case, it's pretty obvious, since the user in question goes by what looks like <firstname><lastname>. But who knows if that's actually their name. Plenty of people name their accounts after fictional people. It would be weird to everyone if your HN comment to darthvader was "Darth, I don't think you understand how corporate law departments work." Darth is not reading the comment. (OK, actually I would find that hilarious to read.)

3) Starting a sentence with someone's name and a long pause (which the written comma heavily implies) sounds like a parent scolding a child. You rarely see this form outside of a lecture, and the original comment in question is a lecture. You add the person's name to the beginning of the comment to be extra patronizing. I know that's what was going on and the person who was being replied to knows that's what was going on. The person who used that language denies that they were trying to be patronizing, but frankly, I don't believe it. Maybe they didn't mean to consciously do it, but they typed the extra word at the beginning of the sentence for some reason. What was that reason? If to soften the lecture, why not soften it even more by simply not clicking reply? It just doesn't add up.

4) It's Simply Not Done. Open any random HN discussion, and 99.99% of the time, nobody is starting replies with someone's name and a comma. It's not just HN; the same convention applies on Reddit. When you use style that deviates from the norm, you're sending a message, and it's going to have a jarring effect on the reader. Doubly jarring if you're the person they're naming.

TL;DR: Don't start your replies with the name of the person you're replying to. If you're talking with someone in person, sure, throw their name in there. That's totally normal. In writing? Less normal.

lijok
0 replies
5h8m

Is it the first name or the personal touch that would make you feel patronized? What if you read a reply “… a 24-year-old, such as yourself, will know …”?

EFreethought
0 replies
11h31m

It happened to me recently on a list where I post under my real name, and yes, it's irritating, especially if it is someone you never met, and they are disagreeing with you.

tsimionescu
0 replies
9h52m

Perhaps the wording here is a bit confusing, but I think it's unambiguous that responding to a comment using the commenter's name ("John, you misunderstand") comes off as patronizing.

The commenter above doesn't mean that any reference to someone else by name ("Sam Altman was fired") is patronizing.

oooyay
0 replies
9h31m

No. Look at examples where people hurl veiled threats at dang. They almost always use his real first name. It's a form of subtle intimidation. That kind of intimidation, whether the user's real name is incorporated into their username in some way or they're using other open-source intel, goes back to the early days of the internet.

jlpom
0 replies
4h10m

- It means you answer more to the person than to their argument (ad hominem).
- It is unnecessary, and 9 times out of 10 when used in a disagreement, especially at the beginning of a response, it is meant to be patronizing.

jholman
0 replies
10h26m

It's not the "online" that's the issue exactly. I think Jerrrry didn't describe it exactly right, but it's still correct. I, too, personally, thought it was very clear that the "John, " was ... I dunno if it was patronizing or what, but marginally impolite or condescending or something. Unless, unbeknownst to us, anigbrowl and johnwheeler are old personal associates (probably offline), in which case it would mean "remember that I know you", and the implication of that would depend on the history of the relationship.

I recognize that the above para sort of sounds like I think I have some authority to mediate between them, which is not true and not what I think. I'm just replying to this side conversation about how to be polite in public, just giving my take.

The broad pattern here is that there are norms around how and when you use someone's name when addressing them, and when you deviate from those norms, it signals that something is weird, and then the reader has to guess what is the second most likely meaning of the rest of the sentence, because the weird name use means that the most likely meaning is not appropriate.

TeMPOraL
0 replies
9h34m

No. More than that, it comes off as patronizing to start a comment with the other person's first name even when speaking off-line, face-to-face, unless you're their spouse, parent, or in some other close relationship.

14u2c
0 replies
8h39m

Jerrrry, thank you for your opinion.

majormajor
0 replies
15h5m

One imagines "human nature" cuts both ways here - sometimes damage control is just damage control.

williamcotton
11 replies
12h55m

What’s the legal risk? Their investors sue them for..? Altman sues for..?

How is the language “we are going our separate ways” compared with “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI” going to have a material difference in the outcome of the action of him getting fired?

How do the complainants show a judge and jury that they were materially harmed by the choice of language above?

lenerdenator
9 replies
12h7m

The legal risk comes if Altman decides he wants a similar job and can't find it over the next few months or years, and has reason to believe that OpenAI's statements tainted his reputation.

OpenAI's board's press release could very easily be construed as "Sam Altman is not trustworthy as a CEO", which could lead to his reputation being sullied among other possible employers. He could argue that the board defamed his reputation and kept him from what was otherwise a very promising career in an unfathomably lucrative field.

edgyquant
8 replies
11h53m

It’s not defamation if it’s true

adastra22
5 replies
10h27m

The onus is on OpenAI to prove that in a court of law, in front of a jury.

tsimionescu
4 replies
9h56m

No, the onus would be on Sam Altman to prove that the statement was materially false, AND intended to slander him, AND actually succeeded in affecting his reputation.

When you're a public person, the bar for winning a defamation case is very high.

rich_sasha
3 replies
7h10m

I don't know. The board statement, peeling away the pleasantries, says he lied to the board repeatedly. That's a very serious accusation. I don't know how US law works here, but in the UK you can sue and win over defamation for far milder infractions.

tsimionescu
2 replies
6h1m

Even in the UK, if you sue, it is on you to prove that you didn't lie, not on the person you're suing to prove that you did.

Also, as long as you are a public person, defamation has a very high bar in the USA. It is not enough for the statement to be false; you have to actually prove that the person you're accusing of defamation knew it was false and intended it to hurt you.

Note that this is different from an accusation of perjury. They did not accuse Sam Altman of performing illegal acts. If they had, things would have been very different. As it stands, they simply said that he hasn't been truthful to them, which it would be very hard to prove is false.

rich_sasha
0 replies
5h54m

In a specific case, perhaps. But surely, I can't go out, make a broad statement like, "XYZ is a liar and fornicator" and leave it there. And how would XYZ go around proving they are not a liar and fornicator? Talk to everyone in the world and get them to confirm they were not lied to or sexually involved?

Surely, at some level, you can be sued for making unfounded remarks. But then IANAL so, meh.

notahacker
0 replies
3h56m

Even in the UK, if you sue, it is on you to prove that you didn't lie, not on the person you're suing to prove that you did.

No, in the UK it's unambiguously the other way round. The complainant simply has to persuade the court that the statement seriously harmed or is likely to seriously harm their reputation. Truth is a defence, but for that defence to prevail the burden of proof is on the defendant to prove that it was true (or to mount an "honest opinion" defence on the basis both that the statement would reasonably be understood as one of opinion rather than fact and that they did honestly hold that opinion).

lenerdenator
1 replies
11h29m

Truth is subjective, and if there is anything that could suggest another motive, as I said earlier, it would be open to interpretation by a jury.

Really they should have just said something to the effect of, "The board has voted to end Sam Altman's tenure as CEO at OpenAI. We wish him the best in his future endeavors."

watwut
0 replies
8h35m

Meh, they don't need to prove that much. It would be Altman that had to prove a lot, because the law favors defendant in this situation. To protect the speech, actually.

adastra22
0 replies
10h28m

How much total compensation could Altman have gotten from another company, if not for this slander? Yeah, no one knows for sure, but how much could he argue? He's a princeling of Silicon Valley, and just led a company from $0 to $90 billion. I'm guessing that's going to be a very, very big number.

Unless OpenAI can prove in a court of law that what they said was true, they're on the hook for that amount in compensation, perhaps plus punitive damages and legal costs.

svnt
0 replies
13h48m

None of these people seem to be typical corporate board members, except maybe Altman.

lenerdenator
6 replies
15h40m

I mean I'm not a lawyer (of the big city or simple country varieties, or any other variety) but if you talk to most HR people they'll tell you that if they ever get a phone call from a prospective employer to confirm details about someone having worked there previously, the three things they'll typically say are:

1) a confirmation of the dates of employment

2) a confirmation of the role/title during employment

3) whether or not they would rehire that person

... and that's it. The last one is a legally-sound way of saying that their time at the company left something to be desired, up to and including the point of them being terminated. It doesn't give them exposure under defamation because it's completely true, as the company is fully in charge of that decision and can thus set the reality surrounding it.

That's for a regular employee who is having their information confirmed by some hiring manager in a phone or email conversation. This is a press release for a company connected to several very high-profile corporations in a very well-connected business community. Arguably it's the biggest tech exec news of the year. If there's ulterior or additional motive as you suggest, there's a possibility Sam goes and hires the biggest son-of-a-bitch attorney in California to convince a jury that the ulterior or additional motive was _the only_ motive, and that calling Sam a liar in a press release was defamation. As a result, OpenAI/the foundation, would probably be paying him _at least_ several million dollars (probably a lot more) for making him hard to hire on at other companies.

Either he simply lied to the board and that's it, or OpenAI's counsel didn't do their job and put their foot down over the language used in the press release.

wavemode
4 replies
15h7m

Someone at OpenAI hates the man's guts. It's that simple.

Even with very public cases of company leaders who did horrible things (much worse than lying), the companies that fired them said nothing officially. The person just "resigned". There's just no reason to open up even the faintest possibility of an expensive lawsuit, even if they believe they can win.

So yeah, someone definitely told the lawyers to go fuck themselves when they decided to go with this inflammatory language.

staticman2
0 replies
14h10m

You can't say a person resigned if they refused to resign, correct? If the person says they refuse to resign you have to fire them. So that's one scenario where they would have to say they fired him.

You also wouldn't try to avoid a lawsuit if you believed (hypothetically) it was impossible to avoid a lawsuit.

rootusrootus
0 replies
14h51m

I don't know that this is always the case. For example, when BK was forced to resign from Intel, the board's announcement was quite specific on why.

lenerdenator
0 replies
15h2m

Well, for their sake, I hope they either issue a retraction soon, have good lawyers and documentation of their decision, or Sam turns out to be a forgiving person.

I wouldn't put money on the last one, though.

adastra22
0 replies
10h32m

So yeah, someone definitely told the lawyers to go fuck themselves when they decided to go with this inflammatory language.

You're assuming they even consulted the lawyers...

satvikpendem
0 replies
9h37m

There is no legal justification for ever saying those dates, much less their department and role. I have never heard of any HR department saying anything of the sort, even if this is an oft-quoted meme about HR. I suspect you have never actually worked in HR providing such statements; you are merely speculating.

cowl
38 replies
16h7m

That doesn't justify the hostile language and the urgent, last-minute timing (partners were notified just minutes before the press release). They didn't even wait 30 minutes for the market to close, causing MSFT to drop billions in that time.

A mere disagreement over direction would have been handled with "Sam is retiring after 3 months to spend more time with his family. We thank him for all his work," and surely would have been decided months in advance of being announced.

nostrademons
13 replies
14h41m

Yeah, this is more abrupt and more direct than any CEO firing I've ever seen. For comparison, when Travis Kalanick was ousted from Uber in 2017, he "resigned" and then was able to stay on the board until 2019. When Equifax had their data breach, it took 4 days for the CEO to resign, and then the board retroactively changed it to "fired for cause". With the Volkswagen emissions scandal, it took 20 days for the CEO to resign (again, not fired) despite the threat of criminal proceedings.

You don't fire your CEO and call him a liar if you have any choice about it. That just invites a lawsuit, bad blood, and a poor reputation in the very small circles of corporate executives and board members.

That makes me think that Sam did something on OpenAI's behalf that could be construed as criminal, and the board had to fire him immediately and disavow all knowledge ("not completely candid") so that they don't bear any legal liability. It also fits with the new CEO being the person previously in charge of safety, governance, ethics, etc.

That Greg Brockman, Eric Schmidt, et al are defending Altman makes me think that this is in a legal grey area, something new, and it was on behalf of training better models. Something that an ends-justifies-the-means technologist could look at and think "Of course, why aren't we doing that?" while a layperson would be like "I can't believe you did that." It's probably not something mundane like copyright infringement or webscraping or even GDPR/CalOppa violations though - those are civil penalties, and wouldn't make the board panic as strongly as they did.

jonas_kgomo
7 replies
14h35m

what examples are you considering here, bioweapons?

lucubratory
4 replies
14h16m

I don't think the person you are replying to is correct, because the only technological advancement where a new OpenAI artifact provides schematics that I think could qualify is Drexler-wins-Smalley-sucks style nanotechnology that could be used to build computation. That would be the sort of thing where if you're in favour of building the AI faster you're like "Why wouldn't we do this?" and if you're worried the AI may be trying to release a bioweapon to escape you're like "How could you even consider building to these schematics?".

I don't think it's correct not because it sounds like a sci-fi novel, but because I think it's unlikely that it's even remotely plausible that a new version of their internal AI system would be good enough at this point in time to do something like that, considering all the many problems that need to be solved for Drexler to be right.

I think it's much more likely that this was an ideological disagreement about safety in general rather than a given breakthrough or technology in specific, and Ilya got the backing of US NatSec types (apparently their representative on the board sided with him) to get Sam ousted.

Apocryphon
3 replies
14h2m

I don't think it's correct not because it sounds like a sci-fi novel, but because I think it's unlikely that it's even remotely plausible that a new version of their internal AI system would be good enough at this point in time to do something like that

Aren't these synonymous at this point? The conceit that you can point AGI at any arbitrary speculative sci-fi concept and it can just invent it is a sci-fi trope.

lucubratory
2 replies
12h16m

No, not really. Calling something "science fiction" at the present moment is generally an insult meaning something like "You're an idiot for believing this made-up children's story could be real; it's like believing in fairies." That is of course a really dumb thing to say, because science fiction has a very long history of predicting technological advances (the internet, tanks, video calls, not just phones but flip phones, submarines, television, the lunar landing, credit cards, aircraft, robotics, drones, tablets, bionic limbs, antidepressants). So the idea that because something appears in science fiction it is therefore stupid to think it is a real possibility for separate reasons is really, really dumb. It would also be dumb to think something is possible only because it exists in science fiction, like how many people think about faster-than-light travel, but science fiction is not why people believe AGI is possible.

Basically, there's a huge difference between "I don't think this is a feasible explanation for X event that just happened for specific technical reasons" (good) and "I don't think this is a possible explanation of X event that just happened because it has happened in science fiction stories, so it cannot be true" (dumb).

About nanotechnology specifically, if Drexler from Drexler-Smalley is right then an AGI would probably be able to invent it by definition. If Drexler is right that means it's in principle possible and just a matter of engineering, and an AGI (or a narrow superhuman AI at this task) by definition can do that engineering, with enough time and copies of itself.

Apocryphon
1 replies
12h0m

How would a superhuman intelligence invent a new non-hypothetical actually-working device without actually conducting physical experiments, building prototypes, and so on? By conducting really rigorous meta-analysis of existing research papers? Every single example you listed involved work IRL.

with enough time and copies of itself.

Alright, but that’s not what the previous post was hypothesizing, which is that OpenAI was possibly able to do that without physical experimentation.

lucubratory
0 replies
10h41m

Yes, the sort of challenges you're talking about are pretty much exactly why I don't consider it feasible that OpenAI has an internal system that is at that level yet. I would consider it to be at the reasonable limits of possibility that they could have an AI that could give a very convincing, detailed, & feasible "grant proposal" style plan for answering those questions, which wouldn't qualify for OPs comment.

With a more advanced AI system, one that could build better physics simulation environments, write software that's near maximally efficient, design better atomic modelling tools than currently exist, and put all of that into thinking through a plan to achieve the technology (effectively experimenting inside its own head), I could maybe see it being possible for it to make it without needing the physical lab work. That level of cognitive achievement is what I think is infeasible for OpenAI to have internally right now, for several reasons. One, it's extremely far ahead of everything else, to the point that I think they'd need recursive self-improvement to have gotten there, and I know for a fact there are many people at OpenAI who would rebel before letting a recursively self-improving AI get to that point. And two, if they lucked into something that capable by some freak accident, they wouldn't be able to keep it quiet for a few days, let alone a few weeks.

Basically, I don't think "a single technological advancement that product wants to implement and safety thinks is insane" is a good candidate for what caused the split, because there aren't that many such single technological advancements I can think of and all of them would require greater intelligence than I think is possible for OpenAI to have in an AI right now, even in their highest quality internal prototype.

hypercube33
0 replies
14h9m

Well OpenAI gets really upset when you ask it to design a warp drive so maybe that was it.

VirusNewbie
0 replies
10h19m

Promising not to train on Microsoft's customer data, and then training on MSFT customer data.

wly_cdgr
1 replies
14h29m

Human cloning

koolba
0 replies
14h19m

Actual humans or is this a metaphor for replicating the personas of humans via an LLM?

username332211
1 replies
9h5m

You are comparing corporate scandals, but the alternative theory in this forum seems to be a power struggle, and power struggles have completely different mechanics.

Think of it as the difference between a vote of no confidence and a coup. In the first case you let things simmer for a bit to allow you to wheel and deal and to arrange for the future. In the second case, even in the case of a parliamentary coup like the 9th of Thermidor, the most important thing is to act fast.

notahacker
0 replies
3h24m

A boardroom coup isn't remotely like a real one, where you look for the gap where the guards and guns aren't and worry about the deposed leader being reinstated by an angry mob.

If they had the small majority needed to get rid of him over mere differences of future vision, they could have done so on whatever timescale they felt like, with no need to rush the departure and certainly no need for the goodbye to be inflammatory and potentially legally actionable.

watwut
0 replies
8h31m

Yeah, but Uber is a completely different organization. The boards you mention were likely complicit in the stuff they kicked their CEOs out over.

mark_l_watson
10 replies
15h49m

Great point. It was rude to drop a bombshell during trading hours. That said, the chunk of value Microsoft dropped today may be made back tomorrow, but maybe not: if OpenAI is going to slow down and concentrate on safe/aligned AI then that is not quite as good for Microsoft.

mcmcmc
5 replies
14h12m

Why is OpenAI responsible for protecting Microsoft’s stock price?

wordpad25
2 replies
13h35m

Well, if nothing else, Microsoft is their biggest partner and investor.

mcmcmc
0 replies
12h13m

Yeah and if antitrust regulators weren’t asleep at the wheel they’d be competitors

conductr
0 replies
12h2m

Even Microsoft themselves shouldn’t care about the traders that react to this type of headline so quickly.

This will end up being a blip that corrects once it’s actually digested.

Although, the way this story is unfolding, it’s going to be hilarious if it ends up that the OpenAI board members had taken recent short positions in MSFT.

cowl
1 replies
2h8m

It's not that OpenAI is responsible, but those board members have burned a lot of bridges with investors with this behaviour. The investor world is not big, so self-serving interest would dictate that you at least take their interests into consideration before acting, especially on something as simple as waiting an hour before the press release. No board would want them now because they are a poisoned apple for investors.

swatcoder
0 replies
2h1m

Alternately, there may be mission-minded investors and philanthropists who were uncomfortable with Microsoft's sweetheart deals and feel more comfortable after the non-profit board asserted itself and booted the mission-defying VC.

We won't know for a while, especially since the details of the internal dispute and the soundness of the allegations against Altman are still vague. Whether investors/donors-at-large are more or less comfortable now than they were before is up in the air.

That said, startups and commercial partners that wanted to build on recent OpenAI, LLC products are right to grow skittish. Signs are strong that the remaining board won't support them the way Altman's org would have.

dclowd9901
3 replies
15h41m

It only dropped 2% and it’s already halfway back in after-hours trading. I don’t think the market thinks it’s Altman who’s the golden boy here.

Sebb767
2 replies
15h31m

It's still a completely unnecessary disturbance of the market. You also don't want to bite the hand that feeds you. This would be taking a personal disagreement to Musk-levels of market impact.

lotsofpulp
1 replies
14h32m

What is all this nonsense about MSFT stock price? Nothing material has happened to it.

https://www.google.com/finance/quote/MSFT:NASDAQ

jrockway
0 replies
12h25m

People zooming in too far. If you look at the 1d chart, yeah, something happened at 3:30. If you look at the 1m chart, today is irrelevant.

matwood
8 replies
15h31m

last-minute timing

Only feels last minute to those outside. I've seen some of these go down in smaller companies and it's a lot like bankruptcy - slowly, then all at once.

vikramkr
5 replies
12h22m

Announcing something huge like this before market close is not something that can be interpreted as anything other than either a huge timing mistake or a massive feeling of urgency

Paul-Craft
4 replies
9h50m

I find it hard to believe that the board of OpenAI isn't smart, savvy, and self-interested enough to know that not delaying the announcement by an hour or so was the wrong move. That leads me to believe that yes, this was something big and worthy enough to be announced with that timing, and that it was probably not a mistake.

vikramkr
2 replies
9h44m

They also said Greg was going to stay at the company, and then he immediately quit. I find it very hard to believe that smart, savvy, and self-interested are adjectives that apply to a board that doesn't know what its own chairman thinks.

Paul-Craft
1 replies
9h37m

Even smart, savvy, and self interested people can't always predict what individual humans are going to do. It's certainly an interesting wrinkle, but I don't think it's relevant to the limited scope of the analysis I've presented here.

vikramkr
0 replies
7h2m

He was the chair of the board. And they were wrong very quickly. It very much sounds like they spoke for him. Or he pretended that he was going to stay and then backstabbed them. Which, given how strongly aligned with Altman he seems to be, is not really a surprise. I have yet to see a single action from them that leans towards savvy rather than incompetent.

manojlds
0 replies
9h12m

Take away Sam, Greg, and Ilya, and who else is even on the board? Doesn't inspire any confidence.

johnwheeler
0 replies
14h58m

Exactly. They call it: The shit hitting the fan.

capableweb
0 replies
14h40m

Everything points towards this being last minute both for people outside and people inside. Microsoft caught with their pants down, announcement before markets closed rather than just waiting a bit, and so on.

mandeepj
2 replies
14h39m

They didn't even wait 30 minutes for the market to close, causing MSFT to drop billions in that time

Ha! Tell me you don't know about markets without telling me! Stock can drop after hours too.

mahkeiro
0 replies
10h21m

After-market prices are just a potential trend, as the volume traded is very small and easily manipulated.

gautamgxtv
0 replies
12h37m

Not as much tho right?

hartator
0 replies
15h35m

MSFT is still up this week.

siva7
14 replies
12h9m

Ah, the old myth of the irreplaceable engineer and the dumb suit. Ask Wozniak about that. I don't think he believes Apple would be what it is without Steve Jobs.

leptons
9 replies
11h10m

Steve Jobs would be nothing without Wozniak to design something people wanted.

siva7
5 replies
11h8m

vice versa ;)

smoldesu
4 replies
9h22m

If Steve Jobs couldn't claim Wozniak's work as his own, he wouldn't have landed "his" Atari contract. Who knows where things would have gone after that, but I have a hard time tracing Apple's history of success without, say, the Apple II.

fsloth
3 replies
4h30m

The milieu in which Apple came to fruition was full of small, young microcomputer shops. So it's not like Woz invented the microcomputer for the masses; his contribution was critical for early Apple, for sure, but the market had tons of alternatives as well. Without the Apple II it's hard to say where Jobs and Wozniak would have ended up. Jobs is such a unique, driven figure that I'm fairly sure he would have made a lasting impression on the market even without Woz. This is not to say Woz was insignificant, but rather that 1970s Silicon Valley had tons of people with Woz's acumen (not to disparage their achievements) and only a few tech luminaries who obviously were not one-trick ponies but managed to build their industrial legacy over decades.

fidotron
2 replies
2h31m

I don’t believe the evidence backs this up. Woz is as much a one-off as Jobs.

Woz did two magic things in the Apple II alone which no one else was close to: the hack for NTSC color, and the disk drive not needing a completely separate CPU. In the late 70s that ability is what enabled the Apple II to succeed.

The point is Woz is a hacker. Once you build a system more properly, with pieces used how their designers explicitly intended, you end up with the Mac (and things like Sun SPARCstations), which does not have space for Woz to use his lateral-thinking talents.

fsloth
1 replies
1h14m

I’ve always in my mind compared the Apple II to the likes of the Commodore 64, ZX Spectrum, etc.: 8-bit home computers. The Apple II predates these by several years, of course. You could most certainly create an 8-bit home computer without Woz. I haven’t really geeked out on the complete tech specs and history of these; it’s possible the Apple II was more fundamental than that (and I would love to understand why).

smoldesu
0 replies
57m

Apple already had competitors back then in the likes of Franklin Computers. From a pure "can you copy Woz" perspective, it's not really even a question; of course you could. It was always a matter of dedication and time.

It's foolish for any of us to peer inside the crystal ball of "what would Jobs be without Woz", but I think it is important to acknowledge that the Apple II and IIc pretty much bankrolled Apple through their pre-Macintosh era. Without those first few gigs (which Woz is almost single-handedly responsible for), Apple Computer wouldn't have existed as early (or as successfully) as it did. Maybe we still would have gotten an iPhone later down the line, but that's frankly too speculative for any of us to call.

ascorbic
2 replies
9h4m

But the point is that Woz was replaceable, because he was replaced. Jobs, on the other hand, was replaced and the company nearly died; he came back and turned it around. Of course, it only became a trillion-dollar company after Tim Apple took over, which I guess just shows that nobody is irreplaceable.

CapitalistCartr
1 replies
6h11m

Tim Cook?

derrasterpunkt
0 replies
4h41m

Trump said Tim Apple instead of Tim Cook: https://knowyourmeme.com/memes/tim-apple

sunshinerag
1 replies
9h30m

Apple without Woz did fine. Apple without Jobs not so much.

croes
0 replies
5h42m

The stock market is full of dumb suits falling for salesmen.

OscarTheGrinch
1 replies
9h56m

Sam Altman is no Steve Jobs.

karmasimida
0 replies
9h9m

OpenAI is no Apple.

jeron
9 replies
16h36m

The theory that Altman did something in bad faith means it might not be a disagreement but something that forced Sutskever to vote against Sam.

duringmath
8 replies
16h32m

The theory is Altman gave his eye-scanning crypto company early access to OpenAI tech without telling anyone, and what ensued is just FAFO in action.

woeirua
5 replies
16h21m

That’s not big enough to immediately terminate him.

dragonwriter
4 replies
16h7m

Self-dealing like that really is. Not saying I see any reason to suspect it's that and not something else, but yeah, doing that and concealing it absolutely would be a reason for both firing him and making the statement they made.

Unless Brockman was involved, though, firing Brockman doesn't really make sense.

fragmede
3 replies
15h52m

Yeah, looking at the self-dealing going on with WeWork and Adam Neumann, it's not that.

ssnistfajen
0 replies
15h28m

Not sure what you mean here. They tried to IPO in 2019 and all the dirty laundry came out, scuttled the IPO and Neumann got ousted.

lmm
0 replies
15h37m

The board knew and agreed to it in that case.

dragonwriter
0 replies
15h26m

There's a difference between self-dealing you sell the board on and self-dealing you conceal from the board (it's also different where it's a pure for-profit in which the self-dealing happens versus where there is a non-profit involved, because the latter not only raises issues of conflict of interest with the firm, but also potential violations of the rules governing non-profits).

nothrowaways
0 replies
15h57m

No way, no Sam - crypto scandal again.

moralestapia
0 replies
13h59m

I'm also inclined to believe something like this happened.

concurrentsquar
9 replies
14h30m

More new information from Swisher:

"More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."

"The developer day and how the store was introduced was in inflection moment of Altman pushing too far, too fast. My bet: He’ll have a new company up by Monday."

[source: https://twitter.com/karaswisher/status/1725702501435941294]

Sounds like you exactly predicted it.

refurb
5 replies
11h20m

No doubt Sam will have another AI company up in no time.

Which is good!

mirekrusin
3 replies
11h14m

It's very good, competition is all you need.

TeMPOraL
2 replies
10h29m

I thought that [regulatory] attention is all you need.

schleck8
1 replies
9h2m

That too, otherwise ASI could be another PTFE and asbestos moment but on crack

croes
0 replies
5h50m

I doubt that there will be any ASI in the near future, with or without regulation.

leobg
0 replies
9h31m

Well, he tweeted that once he “goes off” the board won’t be able to do anything about it, because he never owned any equity. That’s how I read it.

mirekrusin
1 replies
11h21m

ClosedAI?

My bet: He’ll have a new company up by Monday.

m463
0 replies
9h49m

Apart from some piddly tech, Silicon Valley startups primarily sell stock. And a Monday company will be free to capitalize on the hype and sell stock that won't have its shoelaces tied like a non-profit.

baq
0 replies
9h14m

With VCs catfighting in the queue, and sama just sticking it to them by bootstrapping independently.

I don’t like this whole development one bit, actually. He lost his brakes and I’m sure he doesn’t see it this way at all.

foobiekr
7 replies
14h56m

I think you're completely backward. A board doesn't do that unless they absolutely have to.

Think back in history. For example, consider the absolutely massive issues at Uber that had to go public before the board did anything. There is no way this is over some disagreement; there has to be serious financial, ethical, or social wrongdoing for the board to rush it like this and put a company worth tens of billions of dollars at risk.

tsimionescu
4 replies
10h16m

Per other profiles of OpenAI, this is an organization of true believers in the benefits and dangers of AGI. It's also a non-profit, not a company.

All this to say that the board is probably unlike the boards of the vast majority of tech companies.

danbmil99
3 replies
8h48m

This. There were no investors on the board -- as Jason @ all-in said "that's just crazy".

bananapub
2 replies
7h11m

as Jason @ all-in said

lol

"that's just crazy".

Why is it crazy? The purpose of OpenAI is not to make investors rich - having investors on the board trying to make money for themselves would be crazy.

Rastonbury
1 replies
6h41m

Exactly. If we assume the issue was Altman wanting to pursue commercialization at the cost of safety, the board did its job by advancing its mandate of "AI for the benefit of humanity", although I'm not sure why they went with the nuclear option.

tsimionescu
0 replies
5h54m

Very true.

Though I would go further than that: if that is indeed the reason, the board has proven themselves very much incompetent. It would be quite incompetent to invite this type of shadow of scandal for something that was a fundamentally reasonable disagreement.

apstls
1 replies
12h27m

The board, like any, is a small group of people, and in this case a small group divided into two sides defined by conflicting ideological perspectives. I imagine these board members have much broader and longer-term perspectives and considerations factoring into their decision-making than the significant, significant majority of other companies/boards. Generalizing doesn't seem particularly helpful.

foobiekr
0 replies
10h29m

Generalizing is how we reason, and having been on boards and worked with them closely, I can straight up tell you that's not how it works.

In general, everyone is professional unless there's something really bad. This was quite unprofessionally handled, and so we draw the obvious conclusion.

cowl
6 replies
16h2m

What is confusing here is why Greg would have agreed to the language in the press release (that he would be staying in the company and reporting to the CEO) only to resign an hour later. Surely the press release would not have contained that information without his agreement that he would be staying.

dekhn
2 replies
12h35m

He didn't. Greg was informed after Sam (I'm assuming the various bits being flung about by Swisher are true; she gets a free pass on things like this), so I think the sequence was: a subset of the board meets, forms a quorum, votes to terminate Sam and remove Greg as chair (without telling him). Then they write the PR and, around the same time, let Sam and then Greg know. If OpenAI were a government, this would be called a coup.

brohee
1 replies
10h24m

Governments fall all the time without a coup. They just lose their majority.

username332211
0 replies
9h2m

Rushing out a press release with vague accusations and without consulting the relevant parties certainly feels more like a coup than a traditional vote of no confidence.

tsunamifury
0 replies
15h56m

Ha! Most people don’t know how slipshod these things go. Succession had it right when people were always fighting over the PR release, trying to change each other’s statements.

mrandish
0 replies
15h24m

Why would Greg have agreed to the language in the press release

We have no evidence he agreed or didn't agree to the wording. A quorum of the board met, probably without the chairman, and voted the CEO of the company and the chairman of the board out. The chairman also happens to have a job as the President of the company. The President role reports to the CEO, not the board. Typically, a BOD would not fire a President, the CEO would. The board's statement said the President would continue reporting to the CEO (now a different person) - clarifying that the dismissal as board chairman was separate from his role as a company employee.

Based on the careful wording of the board's statement as well as Greg's tweet, I suspect he wasn't present at the vote nor would he be eligible to vote regarding his own position as chairman. Following this, the remaining board members convened with their newly appointed CEO and drafted a public statement from the company and board.

mkl
0 replies
10h55m

He didn't. From https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...:

When Altman logged into the meeting, Brockman wrote, the entire OpenAI board was present—except for Brockman. Sutskever informed Altman he was being fired.

Brockman said that soon after, he had a call with the board, where he was informed that he would be removed from his board position and that Altman had been fired. Then, OpenAI published a blog post sharing the news of Altman’s ouster.

AmericanOP
5 replies
15h43m

So I should cancel any plans to build on the GPT platform? Because it doesn't align with the values of Ilya and Helen?

OpenAI, we need clarity on your new direction.

jay_kyburz
3 replies
15h33m

Do you really want to build a business on a platform with 100% lock-in?

It's not like you can just move to another AI company if you don't like their terms.

RetpolineDrama
2 replies
15h11m

My OpenAI bill was (opens dashboard...) $43,xxx last month.

First thing tomorrow I'm kicking off another round of searching for alternatives.

7thaccount
1 replies
11h40m

Just out of curiosity, what are you using it for that makes it that valuable?

nurettin
0 replies
10h40m

It's probably the GPT cost of a metered product.

bitcharmer
0 replies
7h43m

I think you're confused. If Altman was allowed to continue you'd end up in a vendor lock-in situation with that guy endlessly bumping the fees.

eslaught
3 replies
15h1m

This doesn't make any sense. If it was a disagreement, they could have gone the "quiet" route and just made no substantive comment in the press release. But they made accusations that are specific enough to be legally actionable if they're wrong, and in an official statement no less.

If their case isn't 100% rock solid, they just handed Sam a lawsuit that he's virtually guaranteed to win.

croes
0 replies
5h49m

I doubt that a quiet route is possible on this matter.

So better be the first to set the narrative.

YetAnotherNick
0 replies
9h16m

Even if their case is 100% solid, they wouldn't have said it publicly unless they hated Sam for doing something. So it's not just the direction of the company or something like that; it's something bigger.

Paul-Craft
0 replies
10h0m

I agree. None of this adds up. The only thing that makes any sense, given that OpenAI has any sense and self-interest at all, is that the reason they let Altman go may have been even bigger than what they were saying, and that there was some lack of candor in his communications with the board. Otherwise, you don't make an announcement like that 30 minutes before markets close on a Friday.

wslh
2 replies
15h29m

broken_clock
0 replies
15h25m

That's a parody account

brap
0 replies
15h20m

Wait, this guy really landed an AI lead job in a few hours?

Edit: Ok seems to be a joke account. I guess I’m getting old.

karmasimida
1 replies
12h16m

Without Sam running the company, OpenAI couldn’t have gotten to where it is today.

And this time around he would have the sympathy of the crowd.

Regardless, this is very detrimental to the OpenAI brand. Ilya might be the genius behind ChatGPT, but he couldn’t have done it by himself.

The war between OpenAI and Sam’s AI is just the beginning.

arthurcolle
0 replies
12h15m

Alec Radford is always left out of these convos; curious.

aliston
1 replies
15h2m

If it was really just about seeing eye to eye, why would the press release say anything about Sam not being "consistently candid in his communications"? That seems pretty unnecessary if it were fundamentally a philosophical disagreement. Why not instead say something about differences in forward-looking vision?

notahacker
0 replies
4h14m

Which they can do in a super polite "wish him all the best" way or an "it was necessary to remove Sam's vision to save the world from unfriendly AI" way as they see fit. Unlike an accusation of lying, this isn't something that you can be sued for, and provided you're clear about what your boardroom battle-winning vision is it probably spooks stakeholders less than an insinuation that Sam might have been covering up something really bad with no further context.

ramraj07
0 replies
13h29m

So, in this round, Woz won?

pfannkuchen
0 replies
11h30m

Or there was a disagreement about whether the dishonesty was over the line? Dishonesty happens all the time and people have different perspectives on what constitutes being dishonest and on whether a specific action was dishonest or not. The existence of a disagreement does not mean that it has nothing to do with dishonesty.

moralestapia
0 replies
14h5m

https://twitter.com/ilyasut/status/1707752576077176907

Dang! He left @elonmusk on read. Now that's some ego at play.

jstummbillig
0 replies
10h0m

Ilya is probably not.

If he is, at this point, so irreplaceable that he has enough leverage to strong-arm the board into firing the CEO over a disagreement with the CEO, then that would for sure be the biggest problem OpenAI has.

johnwheeler
0 replies
14h35m
croes
0 replies
5h52m

nothing to do with dishonesty

Who knows, maybe they settled a difference of opinion and Altman went ahead with his plans anyway.

cjbgkagh
0 replies
15h17m

That may be the case, but I have a feeling that it will end up being presented as alignment and ethics versus all-in on AGI, consequences be damned. I'm sure OpenAI has gotten a lot of external pressure to focus more on alignment and ethics, and this coup signals that OpenAI will yield to that pressure.

bitcharmer
0 replies
7h47m

Purging a non-profit organization of a greedy MBA aggressively focused on sales and nothing else is always good riddance in my book.

PeterStuer
0 replies
9h37m

Feels like Gryffindor beheaded Slytherin right before Voldemort could make them his own. Hogwarts will be in turmoil, but that price was unavoidable given the existential threat?

crop_rotation
69 replies
16h52m

This all seems so weird, and the list of board members doesn't make it any easier to understand. Apart from the 3 insiders, there are 3 other board members. 2 of them seem like complete no-names and might not qualify for any important corporate board. In a for-profit, shareholders in theory control the board; in a non-profit, I am not even sure who really has control over things.

hnthrowaway0315
18 replies
16h41m

Maybe non-profits are just frontends of some three letter agencies :)

__loam
13 replies
16h37m

Are we going to start speculating about insane conspiracy theories now?

crop_rotation
8 replies
16h35m

That doesn't seem insane to me in this case. OpenAI is easily the most important non-profit for any government in the whole world.

krapp
5 replies
16h18m

Governments will have their own black-budget private LLM networks, they don't need OpenAI. The NSA probably has a whole cluster of them in its data center in Utah, trained on every public and private communication they've slurped up over the years, likely a generation or two ahead of what's available to the public.

chairhairair
4 replies
14h26m

This is immensely dumb. What secret cabal of researchers would they be hiring that would be capable of being ahead of Deepmind/OpenAI? Where exactly would they find these people? Shadow MIT? CalTech2?

krapp
2 replies
14h19m

Military and intelligence technology is almost always ahead of the private sector. Governments have practically infinite money and resources to throw at the problem, including for recruiting and industrial espionage.

chairhairair
1 replies
14h6m

The only people who think this are people who have never been associated with a top research org. You NEVER hear about anyone, let alone the top people, going to work for government. They all get scooped up with big tech salaries or stay in academia.

The military would need to be literally breeding geniuses and cultivating a secret scientific ecosystem to be ahead on AI right now.

krapp
0 replies
1h40m

The military does have a secret scientific ecosystem. Where do you think all of its advanced classified technology and cryptography comes from, the Hammacher Schlemmer catalog?

satvikpendem
0 replies
9h40m

I can tell you right now that the government agencies are ahead of for-profit ones. Whether you choose to believe it is up to you.

__loam
1 replies
16h31m

They don't need you to be their PR department. Their products are based on research done at Google and Meta; they're not the only ones working on this, and they're also one of the smaller players in the space.

crop_rotation
0 replies
16h26m

I never made any of the points you are contesting and my point still stands. And they are not a smaller player in this space, they are the most well known player.

zapataband1
1 replies
16h30m

CIA has been destabilizing and puppeteering governments around the world. Why are you so steadfastly assured that they wouldn't meddle in the US?

Not saying there is proof, but we just found out Ukraine blew up the Russian pipeline so it seems weird to just squash debate at the 'that's too crazy to ever happen'. Way crazier things have happened/are constantly happening.

timeon
0 replies
16h9m

Is the thing with the pipeline actually confirmed?

Anyway, if I was in the business of destabilizing governments around the world, I would not bother dealing with board meetings. But maybe that's just me.

askonomm
0 replies
16h34m

Knowing what some of those three letter agencies have gotten caught doing, I'm not so sure this particular one would be so insane.

arthurcolle
0 replies
16h34m

Yes. This is a very strange event, given its relationship to what we may call "cutting edge applied DL" technology, coming right after DevDay, with two key players dropping out. GDB leaving is pretty wild, IMO. It indicates something, maybe on the engineering level, wasn't above board. Anyways, we shall see. I think some conspiratorial thinking is fine, especially if it's backed up with some evidence. This comment isn't, but the fact remains this is pretty weird, and people should let their minds wander and connect dots that maybe they half-remember. IMO.

duringmath
2 replies
16h34m

You think too highly of the government

hnthrowaway0315
1 replies
16h30m

What if government is also some frontend?

Man, I'm drunk on conspiracy theories tonight. Between a huge layoff and the OpenAI fiasco, please allow me to indulge myself...

SAI_Peregrinus
0 replies
12h16m

The Government is a front for the Illuminati.

The Illuminati are a front for the Jews™ (not to be confused with Jewish people).

The Jews™ are a front for the Catholic Church.

The Catholic church is a front for the Lizard People.

The Lizard People are a front for the Government.

Nobody is in control. The conspiracy is circular. There is no conspiracy. Everything in this post is false. Only an idiot cannot place his absolute certainty in paradoxes.

aaomidi
0 replies
15h43m

More like four letter agencies. AKA the stock tickers of large companies.

dougmwne
14 replies
16h48m

The board does and they are not supposed to have a financial stake in the non-profit. Usually they just vote their friends on. Welcome to the loony tunes that is nonprofit management.

Clearly Microsoft staked its whole product roadmap on 4 random people with no financial skin in the game.

PaulDavisThe1st
11 replies
16h42m

Usually they just vote their friends on

You actually think that for-profit corporate boards are significantly different, especially in the startup/early IPO phase?

dougmwne
5 replies
16h40m

Sure, the investors own the company and the board answers to them. Nonprofits are significantly disconnected from their own financial incentives. I have witnessed it at every nonprofit I have worked for.

cthalupa
1 replies
15h28m

Sure, the investors own the company and the board answers to them.

Huh? Plenty of startups in the stage being referenced are still majority owned by the founders.

jdminhbg
0 replies
14h47m

Even if I only owned 1% of Google, I'd be very motivated to vote in the best financial interests of the company. If I owned 0%, not so much.

adastra22
1 replies
16h28m

In the early stage the investor does not own the startup. 20-30% stake would be typical. Hence why a Series A investor usually demands a board seat and special considerations.

manquer
0 replies
5h26m

Investor here doesn't mean someone who professionally puts cash in without running the company. Investor here means whoever owns the stock. There is always an investor in a company, even if it's just the founder owning 100% of the stock.

The board reports to the shareholders and the management reports to the board.

In early-stage companies it is possible, and likely, that all three are the same person; that doesn't change the different fiduciary responsibilities of each role they play.

foota
0 replies
13h23m

I mean... the OpenAI foundation is literally not motivated by profit. I guess the main question here is how the board was chosen, and why Sam didn't make sure they were friendly to him.

crop_rotation
3 replies
16h41m

But those are people who have some skin in the game right? And shareholders can change the board structure right?

PaulDavisThe1st
2 replies
16h40m

I was at AMZN when Jeff formed the first board. No skin in the game, and no shareholders with any votes. I gather this is pretty typical.

crop_rotation
1 replies
16h38m

But Jeff was the shareholder and those were his nominees, right? Not to mention he was mostly able to pick the board as needed. In a for-profit corporation there is clear ultimate ownership by the shareholders. No such thing here.

PaulDavisThe1st
0 replies
16h36m

The claim was that non-profits "just put their friends on the board". No difference.

kolinko
0 replies
15h3m

From Tom Perkins's biography: after serving on the boards of both big private companies and non-profits, he said that non-profits were much worse. His theory was that with no money at stake it's all about egos, and they cause weird situations to happen.

Also, I worked in startups and my ex-gf in various nonprofits, and the amount of drama she saw was way higher than in the commercial world.

crop_rotation
1 replies
16h45m

That does sound like loony tunes. If the board elects itself then I think it is a very very bad arrangement for something as important as OpenAI.

0xDEF
0 replies
16h19m

That is one problem with non-profits. They end up with completely unprofessional leadership because they hire their friends who are crazies just like themselves.

When things cool down in a few months we will learn Altman and Brockman were some of the few sane people on the board.

anon291
12 replies
16h44m

I am not even sure of who really has control over things.

Honestly, this is the big problem with Big Non Profit (tm). The entire structure of non-profits is really meant for ladies clubs, Rotary groups, and your church down the street, not openai and ikea.

adastra22
7 replies
16h27m

ikea is a non-profit?!?

underseacables
1 replies
16h22m
erk__
0 replies
11h4m

I think the Novo Nordisk Foundation is the largest now. It owns a majority of both Novo Nordisk and Novozymes.

https://en.wikipedia.org/wiki/Novo_Nordisk_Foundation

adw
1 replies
16h21m

Ikea has the wildest legal structure, but yes, a lot of IKEA is technically owned by a couple of "nonprofits" which happen to pay out a lot of money to the Kamprad family.

ipqk
0 replies
15h50m

The same way that Rolex is technically a non-profit. Complete bullshit legal wrangling.

u320
0 replies
15h2m

It's a foundation in Luxembourg, with a Dutch subsidiary that owns some offices in Sweden.

herval
0 replies
15h7m

So is Rolex!

betaby
0 replies
16h24m
Wowfunhappy
2 replies
15h55m

What are ladies clubs and rotary groups?

not_real_acct
0 replies
15h39m

The latter is for Mazdas, the former is something we can't discuss in a SFW forum

lokar
0 replies
15h36m

Small scale social groups

otikik
0 replies
16h21m

ladies clubs

Or lads clubs. Don't leave us out.

synaesthesisx
9 replies
16h27m

What I find incredibly odd is the lack of a Microsoft board seat, considering their large ownership in OpenAI. Something does not add up.

dragonwriter
8 replies
16h13m

Microsoft has zero ownership of the entity the board controls (the OpenAI nonprofit). A for-profit firm having seats on a nonprofit board, especially because it invested in a for-profit subsidiary of the nonprofit, would raise serious issues of the "nonprofit" being run for purposes incompatible with its status.

015a
7 replies
16h8m

Sure; but it's still weird that Microsoft agreed to the deal with the board in the state that it was in: not just no board seat, but three absolute outsiders, two of them extremely unqualified. We may look back on their decision to buy 49% of OpenAI as a big misstep.

philipov
2 replies
16h3m

How much did they pay for that 49% stake?

speedylight
1 replies
15h53m

Something like $10 billion.

campbel
0 replies
15h31m

Already a good investment then, even if this fundamentally changes how impactful OpenAI is going forward.

solardev
1 replies
14h52m

Wasn't it a Hail Mary for Microsoft? They're not doing anything else particularly earth shattering, and if this came out without them, they'd be even less relevant. If OpenAI fought this and won without them, Microsoft would have nothing to compete against Google and everyone else with.

Did Microsoft have any other route to AI relevance?

satvikpendem
0 replies
9h43m

Sure, they could simply copy the GPT papers (as they are entirely public) and implement them inside their own products, as they are doing already with GitHub and Office. There is really no need to hang onto OpenAI's word.

SXX
1 replies
15h18m

Microsoft got access to their IP and capitalized on it.

Likely it's already brought them more than the $10B they paid.

tyre
0 replies
13h14m

And much of that money is/will be spent on Azure. That’s incredibly valuable data and return on investment

0xDEF
7 replies
16h24m

There is one OpenAI board member who has an art degree and is part of some kind of cultish "singularity" spiritual/neo-religious thing. That individual has also never had a real job and is on the board of several other non-profits.

What the hell were they thinking? Just because you are a non-profit doesn't mean you should imitate other non-profits and put crazies on the board.

markus_zhang
3 replies
16h15m

Smells like XYZ agencies, or some white gloves.

astrange
2 replies
16h10m

No, all native Californians are like this (this just replaces hippie Buddhism with hippie computer worship) and the singularity stuff is the reason OpenAI was founded in the first place. And the reason Elon is mad at them, because they pivoted from it.

markus_zhang
0 replies
16h4m

Thanks, love the insight.

j45
0 replies
15h22m

Hippie computer worship seems like indirect self-worship, since they are the creators.

j45
0 replies
15h24m

Non-profits can too often be uniquely bureaucratic, undertrained in governance and efficiency, tied to deeply personal interpretations of their mission (or none at all), and open to bouts of oversimplification.

gkbrk
0 replies
7h36m

And their old CEO even runs a cryptocurrency scam. Truly an interesting bunch of people.

crop_rotation
0 replies
16h23m

The only explanation I can find is that their importance went through a step function at the launch of ChatGPT, and before that it didn't matter who was a board member.

underseacables
0 replies
16h25m

Sort of reminds you of the Silicon Valley Bank board.

nikcub
0 replies
16h35m

Three other people have left the board this year: Reid Hoffman, Will Hurd and the person from Neuralink.

indymike
0 replies
12h25m

in a non-profit I am not even sure of who really has control over things

The board is in absolute control in a not-for-profit. The loophole is that some have bylaws that make ad-hoc board meetings and management change votes very difficult to call for non-operating board members, and it can take months to get a motion to fire the CEO up for a vote.

In some not-for-profits, the board often even manages to recruit and seat new board members. Some not-for-profits operate as membership associations, where the organization’s membership elects the board members to terms.

On the few not-for-profits where I was a board member, we started every meeting with a motion to retain the Executive Director (CEO). If the vote failed, so did the Executive Director.

DebtDeflation
0 replies
14h30m

See my comment above. I don't think OpenAI's absurd corporate structure will survive this.

jumploops
38 replies
16h3m

Sam and Ilya have recently made public statements about AGI that appear to highlight a fundamental disagreement between them.

Sam claims LLMs aren't sufficient for AGI (rightfully so).

Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.

Obviously transformers are the core component of LLMs today, and the devil is in the details (a future model may resemble the transformers of today, while also being dynamic in terms of training data/experience), but the jury is still out.

In either case, publicly disagreeing on the future direction of OpenAI may be indicative of deeper problems internally.

abra0
20 replies
14h52m

rightfully so

How the hell can people be so confident about this? You describe two smart people reasonably disagreeing about a complicated topic

jumploops
18 replies
14h31m

The LLMs of today are just multidimensional mirrors that contain humanity's knowledge. They don't advance that knowledge; they just regurgitate it, remix it, and expose patterns. We train them. They are very convincing, and show that the Turing test may be flawed.

Given that AGI means reaching "any intellectual task that human beings can perform", we need a system that can go beyond lexical reasoning and actually contribute (on its own) to advancing our total knowledge. Anything less isn't AGI.

Ilya may be right that a super-scaled transformer model (with additional mechanics beyond today's LLMs) will achieve AGI, or he may be wrong.

Either way, something more than today's LLMs is needed to reach AGI; what that is, we don't yet know!

dboreham
13 replies
14h13m

Prediction: there isn't a difference. The apparent difference is a manifestation of the human brain's delusion about how human brains work. The Turing test is a beautiful proof of this phenomenon: such-and-such a thing is impossibly hard, only achievable via the magic capabilities of human brains... oops, no, actually it's easily achievable now, so we'd better redefine our test. This cycle will continue until the singularity. Disclosure: I've been a long-term skeptic about AI, but that writing is on the wall now.

mlyle
12 replies
13h36m

Clearly there's a difference, because the architectures we have don't know how to persist information or further train.

Without persistence outside of the context window, they can't even maintain a dynamic, stable higher level goal.

Whether you can bolt something small to these architectures for persistence and do some small things and get AGI is an open question, but what we have is clearly insufficient by design.

I expect it's something in-between: our current approaches are a fertile ground for improving towards AGI, but it's also not a trivial further step to get there.

darkerside
7 replies
13h17m

Without persistence outside of the context window, they can't even maintain a dynamic, stable higher level goal.

I mean, can't you say the same for people? We are easily confused and manipulated, for the most part.

mlyle
6 replies
13h11m

I can remember to do something tomorrow after doing many things in-between.

I can reason about something and then combine it with something I reasoned about at a different time.

I can learn new tasks.

I can pick a goal of my own choosing and then still be working towards it intermittently weeks later.

The examples we have now of GPT LLMs cannot do these things. Doing those things may be a small change, or may not be tractable for these architectures at all... but it's probably in-between: hard, but able to be "tacked on."

blackoil
3 replies
12h16m

That just proves we need real-time fine-tuning of the neuron weights. It is computationally intensive but not fundamentally different. A million-token context would look close to long short-term memory, and frequent fine-tuning would be akin to long-term memory.

I most probably am anthropomorphizing completely wrong. But the point is that humans may not be any more creative than an LLM; we just have better computation and inputs. Maybe creativity is akin to LLM hallucinations.

visarga
0 replies
7h25m

I most probably am anthropomorphizing completely wrong. But point is humans may not be any more creative than an LLM, just that we have better computation and inputs.

I think creativity is made of 2 parts - generating novel ideas, and filtering bad ideas. For the second part we need good feedback. Humans and LLMs are just as good at novel ideation, but humans have the advantage on feedback. We have a body, access to the real world, access to other humans and plenty of tools.

This is not something an android robot couldn't eventually have, and on top of that, AIs have the advantage of learning from massive data. They surpass humans when they can leverage it; see AlphaFold, for example.

mlyle
0 replies
11h49m

Real-time fine tuning would be one approach that probably helps with some things (improving performance at a task based on feedback) but is probably not well suited for others (remembering analogous situations, setting goals; it's not really clear how one fine-tunes a context window into persistence in an LLM). There's also the concern that right now we seem to need many, many more examples in training data than humans get for the machine to get passably good at similar tasks.

I would also say that I believe that long-term goal oriented behavior isn't something that's well represented in the training data. We have stories about it, sometimes, but there's a need to map self-state to these stories to learn anything about what we should do next from them.

I feel like LLMs are much smarter than we are in thinking "per symbol", but we have facilities for iteration and metacognition and saving state that let us have an advantage. I think that we need to find clever, minimal ways to build these "looping" contexts.

calf
0 replies
11h4m

Are there theoretical models that use real time weights? Every intro to deep learning focuses on stochastic gradient descent for neural network weights; as a layperson I'm curious about what online algorithms would be like instead.

MVissers
1 replies
11h6m

Former neuroscientist here.

Our brain actually uses many different functions for all of these things. Intelligence is incredibly complex.

But also, you don't need all of these to have real intelligence. People can problem solve without memory, since those are different things. People can intelligently problem-solve without a task.

And working towards long-term goals is something we actually take decades to learn. And many fail there as well.

I wouldn't be surprised if, just like in our brain, we'll start adding other modalities that improve memory, planning, etc etc. Seems that they started doing this with the vision update in GPT-4.

I wouldn't be surprised if these LLMs really become the backbone of the AGI. But this is science– You don't really know what'll work until you do it.

mlyle
0 replies
10h22m

I wouldn't be surprised if these LLMs really become the backbone of the AGI. But this is science– You don't really know what'll work until you do it.

Yes-- this is pretty much what I believe. And there's considerable uncertainty in how close AGI is (and how cheap it will be once it arrives).

It could be tomorrow and cheap. I hope not, because I'm really uncertain if we can deal with it (even if the AI is relatively well aligned).

visarga
1 replies
8h44m

But context windows are up to 100K now, RAG systems are everywhere, and we can cheaply fine-tune LoRAs for a price similar to inference, maybe 3x more expensive per token. A memory hierarchy made of LoRA -> Context -> RAG could be "all you need".

My beef with RAG is that it doesn't match on information that is not explicit in the text, so "the fourth word of this phrase" won't embed like the word "of", or "Bruce Willis' mother's first name" won't match with "Marlene". To fix this issue we need to draw chain-of-thought inferences from the chunks we index in the RAG system.

So my conclusion is that maybe we got the model all right but the data is too messy, we need to improve the data by studying it with the model prior to indexing. That would also fix the memory issues.

Everyone is over-focusing on models to the detriment of thinking about the data. But models are just stacked-up data gradients; we forget that. All the smarts the model has come from the data. We need data improvement more than model improvement.

Just consider the "textbook quality data" paper (Phi-1.5) and the Orca datasets: they show that diverse chain-of-thought synthetic data is 5x better than organic text.
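
To make that concrete, here's a minimal sketch of "studying the data with the model prior to indexing" (Python; `complete` and `embed` are placeholders for whatever LLM and embedding API is in use, not any specific library):

    from typing import Callable

    def augment_chunk(chunk: str, complete: Callable[[str], str]) -> list[str]:
        # Ask the model to restate the chunk's implicit facts explicitly, so
        # e.g. "Bruce Willis' mother's first name" can match "Marlene" later.
        prompt = ("List the facts implied by this passage as short, "
                  "standalone statements, one per line:\n\n" + chunk)
        inferred = [s.strip() for s in complete(prompt).splitlines() if s.strip()]
        return [chunk] + inferred

    def index_corpus(chunks, complete, embed, store):
        # Index the original chunk plus each drawn inference; retrieval can
        # then match on information that was never explicit in the raw text.
        for chunk in chunks:
            for text in augment_chunk(chunk, complete):
                store.append((embed(text), text))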

spangry
0 replies
6h2m

I've been wondering along similar lines, although I am for all intents and purposes here a layman so apologies if the following is nonsensical.

I feel there are potential parallels between RAG and how human memory works. When we humans are prompted, I suspect we engage in some sort of relevant memory retrieval process and the retrieved memories are packaged up and factored in to our mental processing triggered by the prompt. This seems similar to RAG, where my understanding is that some sort of semantic search is conducted over a database of embeddings (essentially, "relevant memories") and then shoved into the prompt as additional context. Bigger context window allows for more "memories" to contextualise/inform the model's answer.

I've been wondering three things:

(1) Are previous user prompts and model answers also converted to embeddings and stored in the embedding database as new "memories", essentially making the model "smarter" as it accumulates more "experiences"?

(2) Could these "memories" be stored alongside a salience score of some kind that increases the chance of retrieval, with the salience score probably some composite of recency and perhaps degree of positive feedback from the original user? (A rough sketch follows below.)

(3) Could you take these new "memories" and use them to incrementally retrain the model for, say, 8 hours every night? :)

Edit: And if you did (3), would that mean even with a temperature set at 0 the model might output one response to a prompt today, and a different response to an identical prompt tomorrow, due to the additional "experience" it has accumulated?
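
For what it's worth, (2) might look something like this (Python; the names, the 30-day half-life, and the 50/50 weighting are all invented for illustration, not taken from any real system):

    import math
    import time

    def salience(created_at: float, feedback: float, half_life_days: float = 30.0) -> float:
        # Recency decays exponentially; feedback is assumed normalized to [0, 1].
        age_days = (time.time() - created_at) / 86400
        recency = math.exp(-age_days / half_life_days)
        return 0.5 * recency + 0.5 * feedback

    def retrieval_score(query_similarity: float, created_at: float, feedback: float) -> float:
        # Blend semantic similarity with salience so recent or well-received
        # "memories" are more likely to be pulled into the prompt context.
        return query_similarity * (1.0 + salience(created_at, feedback))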

satvikpendem
0 replies
9h54m

Clearly there's a difference, because the architectures we have don't know how to persist information or further train. Without persistence outside of the context window, they can't even maintain a dynamic, stable higher level goal.

Nope, and not all people can achieve this either. Would you call them less than human, then? I assume you wouldn't, as it is not only sentience of current events that maketh man. If you disagree, then we simply have fundamental disagreements on what maketh man, and thus there is no way we'd have agreed in the first place.

Paul-Craft
0 replies
8h58m

Isn't RAG essentially the "something small you can bolt on" to an LLM that gives it "persistence outside the context window?" There's no reason you can't take the output of an LLM and stuff it into a vector database. And, if you ask it to create a plan to do a thing, it can do that. So, there you have it: goal-oriented persistence outside of the context window.

I don't claim that RAG + LLM = AGI, but I do think it takes you a long way toward goal-oriented, autonomous agents with at least a degree of intelligence.

satvikpendem
0 replies
9h57m

, they just regurgitate it, remix it, and expose patterns

Who cares? Sometimes the remixing of such patterns is what leads to new insights in us humans. It is dumb to think that remixing has no material benefit, especially when it clearly does.

mrangle
0 replies
2h3m

I agree with your premise.

You're right: I haven't seen evidence of LLM novel pattern output that is basically creative.

It can find and remix patterns where there are pre-existing rules and maps that detail where they are and how to use them (i.e. grammar, phonics, or an index). But it can't, whatsoever, expose new patterns. At least public-facing LLMs can't. They can't abstract.

I think that this is an important distinction when speaking of AI pattern finding, as the language tends to imply AGI behavior.

But abstraction (as perhaps the actual marker of AGI) is so different from what they can do now that it essentially seems to be futurism whose footpath hasn't yet been found let alone traversed.

When they can find novel patterns across prior seemingly unconnected concepts, then they will be onto something. When "AI" begins to see the hidden mirrors so to speak.

bitcharmer
0 replies
7h39m

They are very convincing, and show that the Turing test may be flawed

The only thing flawed here is this statement. Are you even familiar with the premise of the Turing test?

FeepingCreature
0 replies
13h22m

If LLMs can copy the symbolic behaviors that let humans generate new knowledge, it'll be there.

smilekzs
0 replies
11h56m

Maybe "rightfully so" meant "it is totally within Sam's right to claim that LLMs aren't sufficient for AGI"?

cedws
5 replies
11h41m

Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.

I thought this guy was supposed to know what he's talking about? There was a paper showing that LLMs cannot generalise[0]. Anybody who's used ChatGPT can see there are imperfections.

[0] https://arxiv.org/abs/2309.12288

tinco
1 replies
10h57m

Humans don't work this way either. You don't need the LLM to do the logic, you just need the LLM to prepare the information so it can be fed into a logic engine. Just like humans do when they shut down their system 1 brain and go into system 2 slow mode.

I'm in the definitely-ready-for-AGI camp. But it's not going to be a single model that does the AGI magic trick; it's going to be an engineered system consisting of multiple communicating models hooked up using traditional engineering techniques.

denton-scratch
0 replies
2h9m

You don't need the LLM to do the logic, you just need the LLM to prepare the information so it can be fed into a logic engine.

This is my view!

Expert Systems went nowhere, because you have to sit a domain expert down with a knowledge engineer for months, encoding the expertise. And then you get a system that is expert in one specific domain. So if you can get an LLM to distil a corpus (library, or whatever) into a collection of "facts" attributed to specific authors, you could stream those facts into an expert system that could make deductions and explain its reasoning.

So I don't think these LLMs lead directly to AGI (or any kind of AI). They are text-retrieval systems, a bit like search engines but cleverer. But used as an input-filter for a reasoning engine such as an expert system, you could end up with a system that starts to approach what I'd call "intelligence".

If someone is trying to develop such a system, I'd like to know.
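
If it helps, here's a toy sketch of that pipeline in Python: pretend the triples below were distilled by an LLM, then hand them to a tiny forward-chaining engine that derives new facts and prints its reasoning. The facts and rule are placeholders, not a real knowledge base:

    # Facts as (subject, verb, object) triples, e.g. distilled by an LLM.
    facts = {("socrates", "is", "human")}

    # One rule: anything that "is human" also "is mortal".
    rules = [((None, "is", "human"), (None, "is", "mortal"))]

    def forward_chain(facts, rules):
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for (premise, conclusion) in rules:
                for (s, v, o) in list(derived):
                    if (v, o) == premise[1:]:
                        new_fact = (s,) + conclusion[1:]
                        if new_fact not in derived:
                            derived.add(new_fact)
                            # The "explain its reasoning" part, crudely:
                            print(f"derived {new_fact} from {(s, v, o)}")
                            changed = True
        return derived

    forward_chain(facts, rules)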

MediumD
1 replies
10h24m

We provide evidence for the Reversal Curse by finetuning GPT-3 and Llama-1 on fictitious statements such as "Uriah Hawthorne is the composer of 'Abyssal Melodies'" and showing that they fail to correctly answer "Who composed 'Abyssal Melodies?'". The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation.

This just proves that the LLMs available to them, with the training and augmentation methods they employed, aren't able to generalize. It doesn't prove that future LLMs, or novel training and augmentation techniques, will be unable to generalize.

visarga
0 replies
9h37m

No, if you read this article it shows there were some issues with the way they tested.

The claim that GPT-4 can't make B-to-A generalizations is false, and not what the authors were claiming. They were talking about these kinds of generalizations from pre- and post-training.

When you divide data into prompt and completion pairs, and the completions never reference the prompts or even hint at them, you've successfully trained a prompt-completion "A is B" model, but not one that will readily go from "B is A". LLMs trained on "A is B" fail to learn "B is A" when the training data is split into prompt and completion pairs.

Simple fix: put prompt and completion together, and don't compute gradients just for the completion but also for the prompt. Or just make sure the model trains on data going in both directions by augmenting it pre-training.

https://andrewmayne.com/2023/11/14/is-the-reversal-curse-rea...
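
As a concrete illustration of that "simple fix" (Python; build_labels and augment_both_directions are invented names for this sketch, and -100 is the Hugging Face label convention for tokens that contribute no loss):

    IGNORE = -100  # Hugging Face convention: tokens labeled -100 get no loss

    def build_labels(prompt_ids, completion_ids, train_on_prompt=True):
        input_ids = list(prompt_ids) + list(completion_ids)
        if train_on_prompt:
            labels = list(input_ids)  # gradients flow through prompt AND completion
        else:
            # The usual fine-tuning setup the article describes: the prompt is
            # masked out, so the model only ever learns the A -> B direction.
            labels = [IGNORE] * len(prompt_ids) + list(completion_ids)
        return input_ids, labels

    def augment_both_directions(pairs):
        # Pre-training-style augmentation: state each relation both ways.
        out = []
        for a, b in pairs:  # e.g. ("Uriah Hawthorne", "the composer of 'Abyssal Melodies'")
            out.append(f"{a} is {b}.")
            out.append(f"{b} is {a}.")
        return out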

lijok
0 replies
11h21m

They should fire Ilya and get you in there

anvuong
4 replies
15h34m

That is just a disagreement on technical direction, which any well-functioning company should always have a healthy amount of within its leadership. The press release literally said he was fired because he was lying. I haven't seen anything like that from the board of a big company in a very long time.

kolja005
3 replies
15h6m

Additionally, OpenAI can just put resources towards both approaches in order to settle this dispute. The whole point of research is that you don't know the conclusions ahead of time.

capableweb
1 replies
15h0m

Seemingly, OpenAI's priorities shifted after the public ChatGPT release, and they are more and more geared towards selling to consumers, rather than being the research lab they initially seemed to be aiming for.

I'm sure this was part of the disagreement, as Sam is "capitalism incarnate" while Ilya gives off a much different feeling.

thelittleone
0 replies
13h2m

Maybe some promise was made by Sam to MS for the funding that the board didn't approve. He may have expected the board to accept the terms he agreed to but they fired him instead.

ilaksh
0 replies
3h23m

That might be part of it. They announced that they were dedicating compute to researching superintelligence alignment. When they launched the new stuff on Dev Day, there was not enough compute and the service was disrupted. It may have also interfered with Ilya's team's allocation and stopped their research.

If that happened (speculation) then those resources weren't really dedicated to the research team.

jprete
2 replies
15h0m

The question has enormous implications for OpenAI because of the specifics of their nonprofit charter. If Altman left out facts to keep the board from deciding they were at the AGI phase of OpenAI, or even to prevent them from doing a fair evaluation, then he absolutely materially misled them and prevented them from doing their jobs.

DebtDeflation
1 replies
14h37m

If it turns out that the ouster was over a difference of opinion re: focusing on open research vs commercial success, then I don't think their current Rube Goldberg corporate structure of a non profit with a for profit subsidiary will survive. They will split up into two separate companies. Once that happens, Microsoft will find someone to sell them a 1.1% ownership stake and then immediately commence a hostile takeover.

acmiyaguchi
0 replies
11h17m

Interesting, is this how they're currently structured? It sounds a lot like Mozilla with the Mozilla Foundation and Corporation.

calf
1 replies
11h2m

Did Ilya give a reason why transformers are theoretically sufficient? I've watched him talk in a CS seminar and he's certainly interesting to listen to.

kolja005
0 replies
9h45m

From the interviews with him that I have seen, Sutskever thinks that language modeling is a sufficient pretraining task because there is a great deal of reasoning involved in next-token prediction. The example he used was: suppose you fed a murder mystery novel to a language model and then prompted it with the phrase "The person who committed the murder was: ". The model would unquestionably need to reason in order to come to the right conclusion, but at the same time it is just predicting the next token.

MVissers
0 replies
11h13m

Rightfully so?

No-one knows. But I sure would trust the scientist leading the endeavor more than a business person that has interest in saying the opposite to avoid immediate regulations.

lexandstuff
26 replies
16h43m

What an unbelievable turn of events! To the outside observer, OpenAI is one of the most successful, well-oiled machines, shipping nearly weekly and doing an unbelievably good job marketing itself. Clearly, there's a lot of turmoil going on behind the scenes.

tempsy
8 replies
16h28m

Ehh the whole “I don’t have equity” thing was a bit strange to me.

s1artibartfast
6 replies
14h32m

what was strange about it? seemed pretty straight forward to me

moralestapia
5 replies
13h49m

what was strange about it?

There are 8 billion people on the planet nowadays; of those, about 7.9 billion would not lift a finger if there's no material benefit to them. Hence why it's strange.

s1artibartfast
4 replies
13h31m

I think the ratio is basically the opposite, but thanks for explaining.

moralestapia
3 replies
12h53m

Ok, now I'm curious, do you live in a monastery or a very small community?

s1artibartfast
2 replies
11h24m

no, I live in a US urban environment and my experience is that most people enjoy doing good and helping others, especially when it is simple and low effort.

manquer
1 replies
5h16m

simple and low effort

You would agree that OpenAI is neither? Then the comparison doesn't hold.

s1artibartfast
0 replies
4h24m

You set the bar for most people at lifting a finger, not running a global company.

If you want to talk about rarer cases, there are lots of examples of people who literally sacrifice their lives and die for no personal benefit.

mcenedella
0 replies
16h23m

So weird.

__loam
7 replies
16h35m

We don't really know anything about the internal finances at this point. The product is solid but who knows how fucked up things are. Maybe they were on track to run out of money. GPU compute ain't cheap.

crop_rotation
6 replies
16h30m

None of that can be a reason for a step like this. OpenAI could easily charge much more for its products, and there is a market even at extremely high prices (even if not as big), and given this is a non-profit, it doesn't even need to make billions of dollars.

CrazyStat
5 replies
16h17m

OpenAI exists both as a nonprofit and, for several years now, as a for-profit company [1] that has taken billions of dollars in investment. It needs to make billions of dollars to return to investors just as much as any other for-profit company does.

[1] https://openai.com/blog/openai-lp

crop_rotation
4 replies
16h12m

The non profit is the majority owner of the for profit, and there is no investor pressure here to make billions.

j45
3 replies
15h21m

Could that not change as the board changes?

jprete
2 replies
14h54m

I think the board is required to be a majority of non-equity-holders precisely because an equity-holding board will not keep to its non-profit mission.

j45
1 replies
2h40m

Since it's a private non-profit corp, the rules might be whatever they want them to be.

Arms-length neutrality on a Silicon Valley board might still work like everywhere else, as other comments have stated. Maybe someone can shed some light on it.

jprete
0 replies
1h36m

I’m presuming it was put into place as part of creating the capped-for-profit entity, to make sure the for-profit couldn’t itself permanently misalign the non-profit’s board.

soderfoo
6 replies
16h4m

And Sam was the face of AI to the general public, which makes these moves all the more perplexing.

cthalupa
4 replies
15h26m

I think you're likely vastly overestimating the amount of people in the general public who have any idea who Sam is at all.

strikelaserclaw
2 replies
14h35m

Just look at how this news is doing on Reddit (a service I conflate a little more with the general public than Hacker News, which leans towards Silicon Valley technology) and you can easily see the truth of your statement.

rchaud
1 replies
13h56m

Interestingly parts of this comment section are behaving in Reddit-y ways, posting board members' Linkedins and questioning their credentials, as if their jobs are to just rubber stamp the CEO's calls.

blackoil
0 replies
11h10m

That's not un-HN. No one has any clue and all are speculating, so it is one theory after another. We have a dozen posts on capitalization.

LoganDark
0 replies
15h6m
strikelaserclaw
0 replies
15h32m

He wasn't a very good face; most of my friends don't know who Sam is, but they know what ChatGPT is.

mvkel
0 replies
15h23m

Definitely lots of tape and bubblegum holding things together. Like any fast growing company! And this one is breaking speed records.

cedws
0 replies
15h43m

OpenAI is still riding on a fast wave of success from ChatGPT. Let's see how they're doing in a year.

DavidSJ
23 replies
16h56m

This suggests that Greg Brockman wasn't in the board meeting that made the decision, and only "learned the news" that he was off the board the same way the rest of us did.

dragonwriter
6 replies
16h54m

Well, yeah, he wouldn't be allowed to participate in deliberation about his own removal.

branweb
4 replies
16h41m

wait...isn't "the decision" referred to in the parent comment about the removal of Altman?

dragonwriter
1 replies
16h36m

He was removed as Chairman at the same time (close enough that they were announced together, and presumably linked in cause, though possibly a separate vote) as Altman was removed as CEO.

branweb
0 replies
16h19m

ah ok makes sense. I thought he just resigned in response to Altman's ouster, so there was no board decision to remove Brockman.

bbreier
1 replies
16h36m

Greg's removal was announced in the same press release as Altman's

branweb
0 replies
16h18m

ah ok. I thought the board decided to remove Altman, then Brockman quit in response, so there was no deliberation about his (Brockman's) removal.

DavidSJ
0 replies
16h53m

Maybe, but there's a difference between not being in the deliberation, and not being notified until the entire planet was.

w10-1
4 replies
16h30m

Boards cannot meet or act without notice to board members and the opportunity for them to participate.

cl42
1 replies
16h26m

They can if the meeting is about the problematic board member(s).

throwaway4aday
0 replies
15h12m

Still have to give notice

adastra22
1 replies
16h23m

I'm not sure those rules apply to non-profits.

Eliezer
0 replies
11h16m

They're usually in the Bylaws. MIRI's Bylaws, iirc 23 years after I wrote them, contain a provision like that.

cldellow
3 replies
16h36m

You've put "learned the news" in quote, but what Greg Brockman wrote was "based on today's news".

That could simply mean that he disagreed with the outcome and is expressing that disagreement by quitting.

EDIT: Derp. I was reading the note he wrote to OpenAI staff. The tweet itself says "After learning today's news" -- still ambiguous as to when and where he learned the news.

philipov
2 replies
15h57m

It's all very ambiguous, but if he had been there for the board meeting where he was removed, I imagine he would have quit then and it would have been in the official announcement. It comes across like he didn't quit until after the announcement had already been made.

latexr
1 replies
14h27m

and it would have been in the official announcement.

It is:

As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

https://openai.com/blog/openai-announces-leadership-transiti...

philipov
0 replies
14h21m

and will remain in his role at the company

The portion you quoted says he will remain at the company. This post is about him quitting, and no longer remaining with the company.

blast
3 replies
16h41m

He was chairman of the board, no? Surely he was in the meeting? More likely it's some kind of schism and he was on Sam's side.

dekhn
1 replies
16h35m

Just guessing here, but I think the board can form a quorum without the chair and vote, and as long as they have a majority, I think they can proceed with a press release based on their vote.

freedomben
0 replies
16h19m

It varies by jurisdiction and board rules, but this is a common setup and a very reasonable guess.

dragonwriter
0 replies
16h34m

He was chairman of the board, no? surely he was in the meeting?

Since he was removed as Chairman at the same time as Altman was as CEO, presumably he was excluded from that part of the meeting (which may have been the whole meeting) for the same reason as Altman would have been.

dekhn
2 replies
16h42m

Since he was chair of the board... I'm curious how the rest of the board implemented this.

Widdershin
1 replies
16h35m

In the board I served on in the past we had an agreed quorum where we could make binding decisions if ~2/3rds of the members were present.

Probably a similar situation.

cyanydeez
0 replies
16h12m

Which makes sense, because a unanimous vote of the remaining members would be exactly the 2/3 needed to implement something.

Just basic logic.

Geee
20 replies
16h21m

I think I see it now. Speculation following:

They achieved AGI internally, but didn't want OpenAI to have it. All the important people will move to another company, following Sam, and OpenAI is left with nothing more than a rotting GPT.

They planned all this from the start, which is why Sam didn't care about equity or long-term finances. They spent all the money in this one-shot gamble to achieve AGI, which can be reimplemented at another company. Legally it's not IP theft, because it's just code which can be memorized and rewritten.

Sam got himself fired intentionally, which gives him and his followers a plausible cover story for moving to another company and continuing the work there. I'm expecting that all researchers from OpenAI will follow Sam.

26fingies
8 replies
16h19m

No I don’t think that’s what happened at all. Also memorizing code and rewriting it is very much IP theft.

lenerdenator
5 replies
16h12m

Is it?

Seriously, I’m asking. Like… if you were an engineer that worked on UNIX System V at AT&T/Bell Labs and contributed code to the BSDs from memory alone, would you really be liable?

sheepscreek
2 replies
16h4m

GPT and transformer code has been open-sourced many times over by different companies. The weights and the operational model are where the IP really is. That will include the architecture for distributing training and inference at this scale. That said, any developer or scientist worth their salt will be able to replicate it from memory, without having to copy stuff over 1:1.

So unless any of the necessary bits are patented, I highly doubt an argument against them starting a new company will hold up in court.

Sometimes contracts include a cool-down period before a person can seek employment in the same industry/niche; I don't think that will apply in Sam's case, as he was a founder.

Also, the wanting-to-get-himself-fired-intentionally argument doesn't have any substance. What would he gain from that? If anything, him leaving on his own terms sounds like a much stronger argument. I don't buy the getting-fired-and-having-no-choice-but-to-start-an-AGI-company argument.

An interesting twist would be if he joins Elon in his pursuit. Pure speculation, sharing it just for amusement. I don’t think they’ll ever work together. Can’t have two people calling the shots at the top. Leaves employees confused and rarely ever works. Probably not very good for their own mental health either.

astrange
1 replies
15h58m

Sometimes the contracts can include a cool-down period before a person can seek employment in the same industry/niche, I don’t think that will apply in Sam’s case - as he was a founder.

It's very difficult to enforce anything like this in California. They can pay him to not work, but can't just require it.

Paul-Craft
0 replies
8h27m

It's actually easier to enforce a noncompete in California on a founder or principal of a firm than it is on an employee. I don't recall the exact, legal specifics, but it has something to do with the fact that those people are in some way attached to the "goodwill" of the original business, which is something of value that the company can protect.

Someone else can probably say it better than I can, but that's how I understand it at this moment.

astrange
0 replies
16h0m

It is not a copyright issue unless you typed out the exact code from memory. It could be a patent issue if it behaves the same way.

26fingies
0 replies
16h3m

Probably depends on what the code is and how material it is to AT&T’s business and what agreements are in place. IANAL. Youre not gonna get sued for routine stuff.

Geee
1 replies
16h2m

It's not IP theft if it's not literal. You can easily reimplement the same ML architecture you have written before. Also, it's not really OpenAI's IP if they kept it secret.

Paul-Craft
0 replies
8h22m

Even if it is literal, it may not be infringement. See "rangeCheck" in the Oracle v. Google case.

https://www.theverge.com/2017/10/19/16503076/oracle-vs-googl...

woeirua
2 replies
16h20m

This is certainly a take, but MSFT would probably sue the hell out of them if they tried to do this.

xyst
1 replies
16h15m

US patent lawyers can’t touch China mainland.

woeirua
0 replies
16h9m

Right, so he’s going to flee the US for China?!? Come on.

sheepscreek
2 replies
16h12m

Interesting view - I, and many others presumably, would really like more insight into the source and nature of this speculation.

I am not dismissing the possibility, far from it. It sounds very plausible. But are there any credible reports to back it up?

Geee
1 replies
16h4m

Speculation means: the forming of a theory or conjecture without firm evidence.

It's just a fun theory, which I think is plausible. It's based on my personal view of how Sam Altman operates, i.e. very smart, very calculative, makes big gambles for the "greater purpose".

sheepscreek
0 replies
52m

I believe the comment was edited. The original comment made a mention of “seeing a lot of speculation” (paraphrasing) that piqued my curiosity.

The source of the speculation could further enhance or remove the probability of this being true. For instance, a journalist who covers OpenAI vs. a random tweeter (now X’er?) with no direct connection. It’s a loose application of Bayesian reasoning - where knowing the likelihood of one event (occupation of speculator and their connection to AI) can significantly increase the probability of the other event (the speculation).

ilrwbwrkhv
1 replies
14h50m

If you understand a bit of math you would know there is no AGI and there would be no AGI on the current path.

LatticeAnimal
0 replies
11h48m

What "bit of math" are you referring to? Similarly, would you have said the same things one year ago about the capabilities that ChatGPT currently possesses?

cthalupa
1 replies
15h4m

Legally it's not IP theft, because it's just code which can be memorized and rewritten.

That is not how IP law works. Even writing new code based on the IP developed at OpenAI would be IP theft.

None of this really makes sense when you consider that Ilya Sutskever, arguably the single most important person at OpenAI, appears to have been a part of removing Sam.

w-ll
0 replies
14h14m

Could Ilya maybe be the one pushing for AGI while Sam wasn't? And the board wants the Skynet they were promised?

windowshopping
0 replies
16h16m

I thought this was a joke and would end with "/s" and now I'm just left with mouth slightly agape, completely in awe.

jumploops
16 replies
16h56m

Greg is the one who announced GPT-4. Sam enabled Greg and vice-versa.

The next AI winter may have just begun...

strikelaserclaw
4 replies
15h28m

Yeah, because Steve Jobs dying stopped Apple from becoming a juggernaut. People need to stop idolizing the idea that one or two people are "indispensable". Humanity moves forward eventually; even if Einstein hadn't been born, someone would have figured out general relativity.

sicariusnoctis
2 replies
13h50m

It's also quite silly that society often credits one guy at the top who supposedly has "incredible vision" and yet would likely fail at explaining even the most basic technical details. And if such a person must be credited, why not the CTO, chief engineers, or principal scientists, who are at least closer to what actually drive the technical innovations than the CEO?

In reality, it's actually the 1000s of actual engineers that deserve most of the credit, and yet are never mentioned. Society never learns about the one engineer (or team) that solves a problem that others have been stuck on for some time. The aggregate contributions of such innovators are a far more significant driving force behind progress.

Why do we never hear of the many? It's probably because it's just easier to focus on a single personality who can be marketed as an "unconventional genius" or some such nonsense.

strikelaserclaw
0 replies
12h52m

Our stupid monkey brains are evolved to work in a primitive, human centric way, we always need a "figure", a "leader" to look up to, we can't comprehend that many people can be involved in something, that doesn't satisfy our primate brains need to follow or worship someone.

manquer
0 replies
4h56m

Human motivations and effort are like Brownian motion: completely stochastic and hard to direct in any one direction to make a significant impact.

An effective leader, whether it is Musk, Jobs, Altman, Gandhi, Mandela (or Hitler, for that matter), has the unique skill of being able to direct everyone in a common direction efficiently, like a superconducting material.

They are not individually contributing like, say, a Nobel laureate doing theoretical research. The accolades they get are because they were able to direct many other people to achieve a very hard objective and keep them motivated and focused on the common vision. That is rare and difficult to do.

In the case of Altman, yes, there were thousands of researchers and programmers who did all the actual heavy lifting of getting OpenAI where it is today.

However, without his ability and vision to get funding, none of them would be doing what they are doing today at OpenAI.

All those people would not work a day more if there were no pay, and could not train any model without resources. A CEO's first priority is to make that happen by selling the vision to investors. Secondly, he has to sell the vision to all these researchers so they leave their cushy academic and large-company jobs to work in a small unproven startup, and create an environment where they can thrive in their roles. He has done both very well.

adharmad
0 replies
14h35m

Unrelated, but maybe you mean special relativity. Poincaré was very close and others like Lorentz would have made the logical leap to discover special relativity. Most scientists however agree that GR would have taken much longer for someone to fill in the crucial gap of modeling gravity as the geometry of space time.

But sooner or later someone would have done it.

bananapub
2 replies
16h49m

what a fucking ridiculous statement - sam altman is just a YC VC machine man, and I'm sure openai can find another CTO in the hottest ML market in history.

woeirua
1 replies
16h33m

We just don’t know enough yet. Sam could’ve been let go over a disagreement about direction. Or he could’ve been cooking the books. Or he could’ve been concealing the true operating costs. Or subscriber numbers. Some of those things just require a change in leadership. Others are existential risks to OpenAI.

jodrellblank
0 replies
15h19m

Or their first real AGI could have ousted him.

LegibleCrimson
1 replies
16h38m

The next AI winter may have just begun...

Because two executives were ousted from a company? That's dramatic.

cyanydeez
0 replies
16h7m

this entire thread is a fascinating treatise on AI philosophy mixed with business conspiracy.

wildpeaks
0 replies
15h51m

The CEO and board aren't the people who create the actual products or do the research.

potatopatch
0 replies
16h27m

Thomas Crapper stepped down from the Crapper company in 1904, which is why we don't have Crappers today.

justcool393
0 replies
5h54m

we can only hope

i'm sick and tired of everyone sticking a chatbot on random crap that doesn't need it and has no reason to ever need it. it also made HN a lot less interesting to read

fatherzine
0 replies
16h52m

"the next AI winter may have just begun" -- good!

time to stop playing with existential fire. humans suffice. every flaw you see in humans will be magnified X times by an intelligence X times stronger than humans, whether it is autonomous or human-led.

dougmwne
0 replies
16h44m

Thermonuclear winter is more likely at this point. 4 AI safety believers that formed a board majority just got spooked.

__loam
0 replies
16h40m

If the AI ecosystem is so fragile that the ouster of two men from one start up is enough to destroy it then it wasn't ever a solid bet. I don't think this will mean much for the broad viability of these systems. Gpt is clearly valuable, but I guess we need to figure out if these systems can be run profitably. I'm not sure people have given much thought to how insanely expensive running massive gpu clusters is. It might just fundamentally not scale well.

robswc
15 replies
16h8m

Can a super smart business-y person educate this engineer on how this even happens.

So, if there's 6 board members and they're looking to "take down" 2... that means those 2 can't really participate, right? Or at the very least, they have to "recuse" themselves on votes regarding them?

Do the 4 members have to organize and communicate "in secret"? Is there any reason 3 members can't hold a vote to oust 1, making it a 3/5 to reach majority, and then from there, just start voting _everyone_ out? Probably stupid questions but I'm curious enough to ask, lol.

cowl
5 replies
16h0m

It could have been that only Sam was under a vote, and that Greg was forced to step down after he voted in Sam's favour, maybe in a second vote later.

throwaway4aday
3 replies
15h28m

Why would Greg have said "after learning today's news" if he took part in the vote? If he decided to quit immediately after the vote then why would the board issue a statement saying he was going to stay on? I don't think he took part, the others probably convened a meeting and cast a unanimous vote, issued the statement and then contacted Greg and Sam. The whole thing seems rushed so that's probably how it would have played out.

pdpi
1 replies
14h5m

If he decided to quit immediately after the vote then why would the board issue a statement saying he was going to stay on?

Why would they issue a statement saying that he was going to stay on without some form of assurance from him?

I mean, you're writing a release stating that you're firing your CEO and accusing him of a lack of candor. Not exactly the best news to give. You're chasing that with "oh, by the way, the chairman of the board is stepping down too", so the news is going from bad to worse. The last thing you want is to claim that said chairman of the board is staying as an employee, only to have him quit hours later. I find it hard to believe that they'd make a mistake as dumb as announcing Greg was staying without some sort of assurance from him, knowing that Greg was Sam's ally.

bhelkey
0 replies
10h19m

Why would they issue a statement saying that he was going to stay on without some form of assurance from him?

Maybe to make it clear that if he leaves, it is him quitting not him being fired. This would avoid potential legal issues.

Maybe they thought there was a chance he would stay.

throwaway4aday
0 replies
15h18m

After looking into it, the board would have had to give notice in case they wanted to attend, but from the sounds of it they may not have bothered to go, which would make sense if they knew they were screwed.

robswc
0 replies
15h53m

Ah, I really like that theory!

I mean, I still wonder though if they really only need 3 people fully on board to effectively take the entire company. Vote #1: oust Sam, 3/5 vote YES. Sam is out; now the vote is "demote Greg", 3/4 vote YES, and Greg is demoted and quits. Now there could be one "dissenter", and it would be easy to vote them out too. Surely there's some protection against that?

0xDEF
3 replies
16h2m

Can a super smart business-y person educate this engineer on how this even happens.

There is nothing business-y about this. As a non-profit OpenAI can do whatever they want.

robswc
2 replies
15h58m

Well, there has to be some sort of framework in which they operate, no?

OpenAI isn't a single person, so decisions like firing the CEO have to be made somehow. I'm wondering about how that framework actually works.

matwood
0 replies
15h28m

You'd have to read the company charter and by-laws.

adastra22
0 replies
10h14m

There aren't really any rules specified in the law for this, unlike corporate law, which mandates that companies be structured a certain way. We'd have to see OpenAI's operating by-laws.

crop_rotation
1 replies
16h3m

I had the same questions, and have now learnt that non-profit governance is like this, and that is why it is a bad idea for something like OpenAI. In a for-profit, the shareholders can just replace the board.

robswc
0 replies
15h41m

Asking ChatGPT (until someone else answers): it says that removing a board member usually takes a supermajority, which makes much more sense... but that still seems to imply they need at least 4/6.

mrandish
0 replies
15h3m

The details depend on what's specified in the non-profit's Bylaws and Articles of Incorporation. As a 501(c)3 there are certain requirements and restrictions but other things are left up to what the founding board mandated in the documents which created and govern the corporation.

Typically, these documents contain provisions for how voting, succession, recusal, eligibility, etc are to be handled. Based on my experience on both for-profit and non-profit boards, the outside members of the board probably retained outside legal counsel to advise them. Board members have specific duties they are obligated to fulfill along with serious legal liability if they don't do so adequately and in good faith.

andrewstuart2
0 replies
15h56m

It's a comedy but I feel like I learned a lot about SV and VC/board culture from watching HBO's Silicon Valley.

SkyPuncher
0 replies
13h22m

Funnily enough, I just started watching Succession last week.

This feels like real-life Succession playing out. Every board member is trying to figure out how to optimize their position.

Waterluvian
14 replies
16h52m

Is there some weird chic thing where you intentionally don’t use capital letters? What is up with that behaviour?

Is it some cute attempt at saying “an AI didn’t write this”?

Michelangelo11
5 replies
16h50m

Is there some weird chic thing where you intentionally don’t use capital letters? What is up with that ridiculous behaviour?

I was wondering the same thing. Always, on purpose, avoiding starting sentences with capital letters. Both this guy and Sam Altman. What ... why ... ?

ryandrake
2 replies
16h43m

I've seen this from a couple of VP/SVP level execs at companies I've worked. My pet theory is it's some kind of weird "My time is too important to even use the shift key" signal. They probably add up the cumulative amount of time they would spend using the shift key and multiply by their compensation and realize they could buy another car if they just optimized that useless key out of their lives.

zamfi
1 replies
16h38m

at this point, it's work to keep autocorrect from fixing this.

malwarebytess
0 replies
15h6m

Plenty of people don't use autocorrect.

capableweb
1 replies
16h46m

So you know that they are rule and norm breakers, no rule too tiny to be broken and skipped in the name of productivity/health/insert-hot-thing-of-the-day

Michelangelo11
0 replies
14h48m

That's it. Disruption as a way of life. True entrepreneur mindset. Yeahhhh.

liuliu
1 replies
16h45m

Maybe auto-correction is off because you need to type too many acronyms and too much jargon, and auto-correction is annoying?

welpo
0 replies
15h57m

Unlikely. They are separate settings both on iOS and Android.

airstrike
1 replies
16h47m

personally, it comes from spending too much time on IRC back in the day and now thinking it's normal not to capitalize ;-) a bit of a stylistic choice

but over time I've become accustomed to capitalizing a bit more often and it's become sort of random. I actually have auto-capitalization turned off on my phone

Waterluvian
0 replies
16h46m

Oh that’s an interesting source I haven’t considered. It is rather stylistic.

zamfi
0 replies
16h45m

a habit of a certain era of internet kids.

tom_
0 replies
16h40m

Some people just don't like to press shift. I moaned at one of my friends about this, years ago, and I got a 1-word reply that really stuck in my mind:

thomas.

And there is indeed no law against not pressing shift.

stickfigure
0 replies
16h42m
robswc
0 replies
16h49m

Yea, I'm not a fan. I'm not a grammar nazi, but it makes reading a bit unpleasant.

RivieraKid
14 replies
16h47m

Finally some non-Elon-related drama in tech happening.

voisin
11 replies
16h44m

non-Elon-related so far

ronsor
7 replies
16h42m

Soon: "New CEO of xAI revealed to be Sam Altman"

malfist
2 replies
16h20m

I'm sure there's a non compete clause in his employment

rirarobo
1 replies
15h38m

Unlikely, non-compete clauses are not enforceable in California.

manquer
0 replies
5h14m

Not for principals/founders. The law applies to employees.

alphaddx
1 replies
15h44m

That could happen, but Elon won't want to be overshadowed.

Havoc
0 replies
15h5m

And then in a plot twist it turns out Elon was the AGI

adharmad
0 replies
14h28m

Or Musk returns to the board of OpenAI after a hiatus.

MVissers
0 replies
11h26m

Never. Elon despised Sam's pivot from non-profit to for-profit. He invested $100M in the beginning.

robswc
1 replies
16h41m

If it comes out Elon had a hand in this, I might as well cancel my Netflix.

golergka
0 replies
16h24m

You still need it to check Silicon Valley references.

dkarras
0 replies
16h31m

yeah I'm sure he'll have something stupid to say about it very soon to attract attention to himself.

next_xibalba
1 replies
16h41m

Elon co-founded OpenAI. So, this isn’t exactly non-Elon.

aidaman
0 replies
14h29m

and Mira Murati spent 3 years at Tesla

bangalore
13 replies
16h49m

How do I profit from this news. Should I buy or sell MSFT?

JumpinJack_Cash
3 replies
16h21m

> How do I profit from this news

Ignore it and focus on your life. The grapevine in your neighborhood about who is selling their car or their house is not as exciting, but it will net you way more money than this event happening thousands of miles away from you. And most importantly, without having to fuck with leverage.

bangalore
2 replies
16h15m

Thanks. I'm doing reasonably well in life.

satvikpendem
0 replies
9h49m

Who isn't? That does not mean you cannot do well off financial news. The best way forward is to buy VTI/VTSAX (they are equivalent, just depending on whether you want a mutual fund or an ETF) and wait 20 to 50 years.

JumpinJack_Cash
0 replies
16h13m

That is true for millionaires and billionaires alike; the best ROI opportunities are geographically close to you, before they are sniffed out by other people.

Betting on the World Cup Final vs. betting on a local match where you know a team has been clubbing and drinking until late into the night at your bar.

Local advantage.

kristjansson
1 replies
16h46m

Both, you can't lose!

voisin
0 replies
16h43m

Well, if you are doing put and call options betting on increased volatility, I think you’re right!

astrange
1 replies
16h8m

The same way you profit without the news: buy VTSAX and wait 10 years.

hiddencost
0 replies
14h32m

This, tho.

Fordec
1 replies
16h14m

MSFT stock already dropped a bit before the bell on the news. That may be baked in by faster movers. This is not stock advice, but I'm more inclined to sell NVDA, as whatever happens next is a distraction that will slow AI market growth, and to buy Google, as they have an opportunity here to do some poaching.

But for clarity's sake, I'm doing neither personally, because I'm not a day trader and look more long term.

cyanydeez
0 replies
16h10m

if you wanted to day trade, you'd buy msft because this drop is clearly nominally related, but the odds that it has a substantive effect are really low. similar stuff happened during Trump's tenure, where they just had bots trading on Trump tweets.

rchaud
0 replies
13h50m

Buy Worldcoin, obviously /s

They're probably firing up the eyeball scanning machines on this news.

dougmwne
0 replies
16h46m

Ooof, I would sell personally unless they manage to seize control of OpenAI by Monday.

devin
0 replies
16h47m

I did both. ;)

minimaxir
12 replies
16h42m

There's no job quitting note like an all lower-case job quitting note.

muzani
7 replies
16h18m

it's a way to soften your tone so it doesn't sound angry. but then it sounds like soft anger.

Mistletoe
5 replies
14h33m

I just read it as illiterate and am amazed someone who deliberately writes like that was chairman of the board.

Spivak
3 replies
14h14m

This is honestly par for the course for anyone who didn't grow up a digital native and is a career businessman who didn't spend that career hand-composing messages. I wouldn't read too much into it. Direct emails from executives that aren't filtered by their assistants all look like this.

blackguardx
1 replies
12h30m

You don't think Greg is a "digital native?"

Spivak
0 replies
7m

Oh god no, born in '89 and went straight from college to an executive role and stayed there.

Like it's not a bad thing, I'm not implying any kind of judgement but keeping those things in context helps you know that "K." means something totally different coming from your dad.

solardev
0 replies
12h43m

They're working so hard they don't have time to use the shift key.

flappyeagle
0 replies
12h44m

Because the I’s weren’t capitalized? It just looks like he turned off auto-caps

j45
0 replies
15h19m

Slightly easier to read in many cases in all lower case.

xyst
0 replies
16h18m

2010s business power move

wly_cdgr
0 replies
14h13m

it's so pretentious

wlonkly
0 replies
15h42m

just having a little chat on irc here

nothing too crazy

blackoil
0 replies
11h59m

His previous posts are a mix of all-lowercase and regular capitalization. Could be as simple as mobile vs. laptop.

og_kalu
11 replies
16h52m

Oh man. https://twitter.com/apples_jimmy/status/1725615804631392637?...

Really wonder what this is all about.

Edit: My bad for not expanding. No one knows the identity of this "Jimmy Apples", but this is the latest in a series of correct leaks about OpenAI he's made over the past few months. Suffice it to say he's in the know somehow.

next_xibalba
8 replies
16h43m

Lolwut? Who is Jimmy Apples?

FridgeSeal
3 replies
16h39m

Don’t you know?

Random twitter guy has thoughts on $Current_Event and a witty quip about the “vibes”. It’s crucial we post this without context to the discussion

og_kalu
2 replies
16h31m

Nobody knows who he is, but this is the latest in a series of correct leaks about OpenAI he's made for months now. Fair enough, I didn't expand, but "Jimmy Apples" of all people being unfazed by this revelation that supposedly even Microsoft was unaware of is the funniest timeline.

FridgeSeal
1 replies
16h4m

You probably could have included that information in the original comment, as it's super useful to anyone not intimately familiar with the twitter-sphere around OpenAI.

og_kalu
0 replies
15h54m

That's fair. I've included it now.

runjake
2 replies
16h34m

Founder of e/acc movement.

reducesuffering
1 replies
16h28m
runjake
0 replies
14h8m

Thank you for correcting me. I'm not sure why I thought he was the e/acc founder.

og_kalu
0 replies
16h36m

Escaped AGI /s

A random? Twitter account that's leaked a few things about OpenAI for months now.

refulgentis
1 replies
16h40m

...that's a random post, Jimmy's not some OpenAI insider lmao. Hope he sees this

telotortium
0 replies
16h39m

Pretty sure he works at OpenAI. Beyond that, hard to say how important he is.

layer8
10 replies
16h39m

What is this thing where people don't use their shift key anymore? Sam Altman does the same. I don’t trust these people. ;)

durandal1
3 replies
15h43m

all lower case has a longer hacker culture history.

TriangleEdge
2 replies
15h34m

Context please.

rewtraw
1 replies
14h48m

a callback to the old IRC days

Manouchehri
0 replies
13h36m

I still maintain that IRC is better than Discord. It was so nice to have "real" servers.

OJFord
3 replies
15h39m

I'll take lazily not using Shift over pecking at Caps Lock!

meepmorp
2 replies
13h23m

It amazes me there are people who don't remap their capslock to something useful.

metaltyphoon
1 replies
12h30m

Capslock -> Ctrl gang

OJFord
0 replies
7h23m

I don't get that one though, Ctrl is already right there? I do Esc personally, because vim.
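
For anyone curious, on Linux/X11 both remaps are one-liners using the stock xkeyboard-config options (macOS and Windows need other tools):

    # Caps Lock becomes an extra Ctrl
    setxkbmap -option ctrl:nocaps

    # or: Caps Lock becomes Esc, for the vim crowd
    setxkbmap -option caps:escape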

chasd00
0 replies
14h15m

My phone and apps autocapitalize; I have to go out of my way to use all lowercase. Weird.

MarkusQ
0 replies
16h5m

Why? Do they seem shiftless?

intellectronica
10 replies
15h35m

Neither @sama nor @gdb is known to communicate in all lowercase. The fact that both did today must mean something.

mil22
3 replies
15h25m

It means there was no shifty business.

exikyut
1 replies
15h14m

Facepalm

It took a minute (and reading other comments) before that clicked :)

SHIFT key...

EFreethought
0 replies
11h21m

There is no reason for Altman or Brockman to use the shift key, since as far as OpenAI is concerned they have already lost control.

paulcole
0 replies
14h55m

Hopefully somebody who makes crosswords for the NYT uses this as inspiration.

No shifty business?

LOWERCASEWRITING is one too long for an M-S but could work on a Sunday, I guess.

potsandpans
0 replies
12h28m

Don't usually see schizo posting leaking into hn. This is some good stuff

intellectronica
0 replies
15h21m

The coordinated and disciplined communication from both suggests this wasn’t a surprise. Is this a planned move in a grander scheme?

intellectronica
0 replies
1h42m

OK this was clearly BS.

blackoil
0 replies
11h53m

@sama's tweets are all in lowercase, except for the acronyms. @gdb's are a mix.

aboodman
0 replies
13h1m
ArcaneMoose
0 replies
14h30m

There is one set of capitalized letters - in Greg's post: "AGI"

grpt
10 replies
16h52m

Hope they fire the person that forced verified phone numbers on new accounts.

malinens
5 replies
16h6m

Verifying a phone number is one of the last things that is still effective in fighting bot registrations. The alternative is to ask for money at registration.

Here is an idea for the Hacker News crowd: make a service which acts as a proxy for phone number validation. The user validates their phone number once with that app, and any other third-party service can ask the app for a security code which confirms phone number ownership. We use something similar by offloading phone number confirmation to a Telegram bot. This proxy service could also, optionally, manage "bad" phone numbers used by spammers and add other protections.
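
For what it's worth, here's a minimal sketch of what such a proxy's API could look like, assuming Flask and in-memory storage (all endpoint names are made up; a real service would need a database, code expiry, rate limiting, and the "bad number" list mentioned above):

    # Hypothetical sketch of the phone-verification proxy idea above.
    import secrets
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    pending_codes = {}    # phone -> one-time code (real service: DB with TTL)
    verified_tokens = {}  # opaque token -> phone

    @app.post("/verify/start")
    def start_verification():
        phone = request.json["phone"]
        code = f"{secrets.randbelow(10**6):06d}"
        pending_codes[phone] = code
        # Delivery stub: send `code` via SMS or a Telegram bot here.
        return jsonify({"status": "code_sent"})

    @app.post("/verify/confirm")
    def confirm_verification():
        phone, code = request.json["phone"], request.json["code"]
        if pending_codes.get(phone) != code:
            return jsonify({"error": "invalid_code"}), 400
        del pending_codes[phone]
        # Issue an opaque token that relying services can check, so they
        # confirm ownership without ever seeing the raw number.
        token = secrets.token_urlsafe(16)
        verified_tokens[token] = phone
        return jsonify({"token": token})

    @app.get("/verify/check/<token>")
    def check_token(token):
        # Third-party endpoint: "does this token map to a verified number?"
        return jsonify({"verified": token in verified_tokens})

The opaque token is the point: relying services learn "this user controls a verified number" without the number itself ever leaving the proxy.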

zone411
1 replies
15h47m

Nice idea, I hope somebody builds it. Signal just showed that they're spending $6 mil/year for verifying phone numbers.

malinens
0 replies
15h11m

That's why we offloaded phone validation to Telegram - it is too costly to send SMS in countries other than our home market, and spammers keep finding ways to get phone numbers for free from various VoIP providers. We had to implement complicated SMS rate-limiting logic to avoid abuse.

grpt
1 replies
15h57m

The alternative is to ask for money at registration.

I'm 100% ok with this. I have the choice of using a Visa/MC gift card I bought with cash. Same as I can do with Netflix. Better than linking a unique ID I use everywhere else.

I think what bugs me the most is that there's no direct need for the phone. It's reasonable to give my phone number to a doctor's office because I need to hear from them over the phone.

latexr
0 replies
14h10m

I have the choice of using a Visa/MC gift card I bought with cash.

Technically you can also pay for a burner service to get temporary phone numbers to receive SMSs for registering to services. Can’t attest if any of them are good or trustworthy. I recently looked into it but everything I found was a subscription and/or shady looking.

bloopernova
0 replies
15h22m

Keybase could have been that service. I really wish they'd stayed focused on identification rather than crypto wallets and chat.

I_am_tiberius
3 replies
16h41m

And the person who decided that the settings (disable usage as training data) and (save prompts) can't be individually controlled. Also that the default is that your data is used for training purposes. Both are clear indications that privacy was of no importance to them.

brianjking
2 replies
16h3m

You can opt out of training and keep your history turned on in the privacy center. You fill out a form and indicate your country of residence.

I_am_tiberius
1 replies
16h0m

Which privacy center? I only have settings. And in settings there is the sub-category Data controls where I disable "Chat history & training".

brianjking
0 replies
13h54m

It was previously a Google form without any confirmation. As of late October, they moved it to this privacy center that they keep conveniently well hidden.

This page provides confirmation that your request is processed: https://privacy.openai.com/policies

tmsh
9 replies
16h49m

His last sentence is the main hint. Disagreement over the dangers of AGI discoveries and how to handle them, I'd guess.

kromem
4 replies
16h44m

Given the way the board announcement ended on the commitment to the original charter of AI for everyone, and the way this mention of safety is being thrown in, I suspect it may have been strong disagreements over continuing to be closed with research in the name of safety, or open with the research community for the sake of advancement for all.

crop_rotation
3 replies
16h37m

GPT-4 has been closed for a long, long time; nothing technical was ever released. The board can't suddenly wake up one day from a long sleep and decide that things have to become open right this minute.

refulgentis
2 replies
16h35m

Maybe it didn't. Maybe the board has been pushing on this for a bit, and Altman kept paying lip service to it or made commitments in their eyes that weren't being followed through on.

Sai_
1 replies
15h6m

pushing..for a bit

They could have waited another 30 mins for the markets to close before making the move. This isn’t the culmination of a long-standing problem.

manquer
0 replies
5h10m

A non-profit board is not as professional as a public company board.

Looking at the people on the current board, it doesn't seem they have a lot of experience being independent board members in large public corporations.

No non-profit has this level of public scrutiny. It could just be that they were sloppy because they are not professional board members.

duringmath
2 replies
16h42m

No one actually believes in that baloney, not enough to fire the CEO over anyway.

refulgentis
1 replies
16h39m

The money & startup people co-opted the scientific non-profit. Not vice versa.

duringmath
0 replies
16h37m

I mean AGI

denverllc
0 replies
16h35m

I think claiming to quit over AGI danger is more believable to outsiders than “I want to spend time with family, totally coincidentally after my boss was fired”

markus_zhang
9 replies
16h16m

I found the lack of information about the CTO and certain board members disturbing.

Like, who is Mira Murati? We only know that she came from Albania (one of the poorest countries) and somehow got into some pretty good private schools, and then into pretty good companies. Who are her parents? What kind of connections does she have?

voat
4 replies
16h13m

Why is the fact that she is unknown disturbing?

markus_zhang
3 replies
16h7m

It is more that someone from one of the poorest countries in the EU going straight to pretty good private schools and pretty good companies is weird.

Not going to say it's impossible, but she is doing so well yet has left so few footprints on the Internet.

Again just my personal early night conspiracy drink. Don't take it seriously.

zarzavat
0 replies
15h24m

Hopefully not too pedantic but Albania is not in the EU.

ssnistfajen
0 replies
15h20m

And Ilya Sutskever was born in the Soviet Union during its end stage crisis days. Your point being?

foobarian
0 replies
15h23m

If I had a dollar for every Ivy League student whose parents were government functionaries in poor second- and third-world countries, I could probably get a few years of Amazon Prime. But so what? I think it's good for the talent, and good for this country.

crop_rotation
1 replies
16h15m

You should look at the list of board members to get even bigger questions.

markus_zhang
0 replies
16h6m

Yes. Actually, that was my motivation to look things up.

tinyhouse
0 replies
15h50m

Who cares where she came from? Do you think that a poor country cannot produce really smart people? Making assumptions that her family has some connections and that's why she is successful is pretty stupid.

ssnistfajen
0 replies
15h22m

Why would you consider the fact that she was born in Albania to be suspicious?

heurist
9 replies
16h56m

Yeah, this is not a good sign for OpenAI. Or at least not for those who appreciate what OpenAI was working to become.

bloopernova
7 replies
16h38m

I guess we'll see much faster enshittification now?

Buy 100 prompts now with AIBucks! Loot box prompts! Get 10 AI ultramax level prompts with your purchase of 5 new PromptSkin themes to customize your AI buddy! Pre-order the NEW UltraThink AI and get an exclusive UltraPrompt skin and 25 AIBucks!

OkayPhysicist
5 replies
16h13m

Altman was the one who stood to benefit financially from OpenAI selling out. The rest of the board do not have equity in the company. If anything, we'll see a reversal of the whole "Open"(to paying customers)AI.

astrange
4 replies
16h5m

He didn't have equity in OpenAI either; he seemed to be running it for fun. Of course, he can get cash payments.

rchaud
1 replies
13h38m

I'm highly suspicious of these types of claims. Steve Jobs was famously on a salary of $1 to get Apple back on track... no mention of the 7.8 million stock options backdated to maximize the gains on the share price.

They're all getting paid one way or another.

astrange
0 replies
12h1m

Sure, they can promise to give him some later. But he doesn't have them now; unvested anything would have expired.

That's the reason nobody does stock options anymore, though; it's all RSUs now.

mkagenius
1 replies
13h56m

Someone mentioned in the other thread that he had equity via some YC investment somehow.

astrange
0 replies
10h37m

He appears to say he doesn't.

https://x.com/sama/status/1725748751367852439

Though any fund containing MSFT must be correlated.

EVa5I7bHFq9mnYK
0 replies
15h52m

80-click captchas are not enough enshittification for you?

ibejoeb
0 replies
16h43m

I'm glad OpenAI happened, but I'd be happier if it stumbles a little and does not full-on capture the entirety of AI. I think a shake-up is good for the world.

mexicanandre
8 replies
16h8m

Ok what’s with the weird punctuation and lack of capital letters in their tweets?

Both seem horribly rushed, with no autocomplete?

symlinkk
4 replies
15h45m

it's a casual writing style that has become popular online in the last decade or so. don't overthink it

Sebb767
3 replies
15h21m

It's usual in chats, not for formal writing like officially announcing your resignation. At least that's my impression.

vikramkr
0 replies
9h39m

This is the sector of the economy that IPOs in cargo shorts; I don't think formal writing would be expected or appreciated.

dkryptr
0 replies
12h16m

To be fair, that's not a very formal resignation. He just... quit.

TheRoque
0 replies
12h39m

Maybe that's why it's used? Twitter just addresses the public, so it's not official in any way; it's just communication. The casual lowercase style is probably only used to drive empathy.

ppeetteerr
0 replies
11h57m

I've seen lowercase wording used by people who don't want to put effort into something. Personally, I consider it a sign of dismissiveness towards the audience. If you see this coming from someone, it might be an indication they don't care. This is based purely on personal experience.

internetter
0 replies
15h17m

Yes, it is very likely he had absolutely no idea and rushed something out.

brap
0 replies
15h10m

Reminds me of how rich YouTubers make apology videos. No makeup (but yes makeup), no script (but yes script)... make an effort to seem effortless

HissingMachine
7 replies
16h42m

So is this how the government takes control of the AI? Next, OpenAI will have new but familiar owners from Raytheon and the rest of the Pentagon crowd?

astrange
3 replies
16h3m

Governments don't need to "take control" of things; they get tax payments and can pass laws.

The US has never spent less on its military than it does now, and the military industrial complex has never been less important, because the rest of the economy has grown so much larger. So it's funny to see people still using Cold War-era conspiracy theories from when it actually mattered.

HissingMachine
2 replies
14h52m

Where is the conspiracy theory?

astrange
1 replies
13h39m

Raytheon and the Pentagon are secretly controlling OpenAI by changing its CEO?

HissingMachine
0 replies
10h46m

I didn't declare it, it was a question, but thanks for slandering me as a conspiracy theorist

malfist
1 replies
16h23m

Postulating crazy conspiracy theories is no way to go through life

HissingMachine
0 replies
14h52m

Where is the conspiracy theory?

andrewstuart
0 replies
16h27m

It’ll be far more mundane than a spy novel plot.

rrsp
6 replies
16h23m

Has the same lack of capitalisation as Sam Altman's message, wonder why

thornewolf
2 replies
16h2m

i predict because it's hip to talk that (this) way nowadays

durandal1
1 replies
15h42m

Not sure if that's why they're doing it, but all lower case writing has strong hacker culture roots.

lern_too_spel
0 replies
14h30m

1 qu17.

duxup
1 replies
16h9m

Some conflict over using toUpperCase() ?

tazu
0 replies
11h46m

snake_case debate got out of control

johannes1234321
0 replies
16h8m

Showing it wasn't written by ChatGPT? (While one can easily ask ChatGPT to use lower case only ...)

robswc
6 replies
16h59m

Jesus. I saw he was "demoted" but he's totally out now, right?

ryanSrich
5 replies
16h56m

Well "out" as in he explicitly quit. He wasn't fired.

dragonwriter
3 replies
16h51m

He was removed as Chairman of the Board but was allowed to remain in his other role as President, reporting to the new acting CEO; apparently he wasn't interested.

OTOH, I don't think it would surprise anyone that he would quit, and that may well have been the intent.

robswc
2 replies
16h44m

That's what I thought.

OpenAI's statement implies he was aware of the demotion... but his statement seems to imply he wasn't.

I guess the most likely situation is that they put out the press release, told him (or vice versa) and it took him a bit to decide to quit.

dragonwriter
1 replies
16h29m

OpenAI's statement implies he was aware of the demotion.

Maybe my experience with corporate communications is different, but all it implies to me is that he was not removed as President and was being permitted to stay on under the new CEO.

robswc
0 replies
16h18m

As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

It's totally fair (your interpretation) to think that. He was removed as chairman though, which IMO is a demotion. I think it's disingenuous on the part of OpenAI _unless_ Greg originally said he was OK with the new terms. If a company says "Person X will remain as President and report to the CEO", you would think they had worked it out with person X _before_ announcing it.

robswc
0 replies
16h52m

For sure.

Wasn't he essentially demoted before quitting though? I guess this means he wasn't even aware he was demoted.

next_xibalba
6 replies
15h27m

Here is Helen Toner's resumé: https://www.linkedin.com/in/helen-toner-4162439a/details/edu...

I am genuinely flabbergasted as to how she ended up on the board. How does this happen?

I can't even find anything about fellow board member Tasha McCauley...

upwardbound
0 replies
9h54m

Helen Toner is famous among the AI safety community for being one of the most important people working to halt and reverse any sort of "AI arms race" between the US & China. The recent successes in this regard at the UK AI Safety Summit and the Biden/Xi talks are due in large part to her advocacy.

She is well-connected with Pentagon leaders, who trust her input. She also is one of the hardest-working people among the West's analysts in her efforts to understand and connect with the Chinese side, as she uprooted her life to literally live in Beijing at one point in order to meet with people in the budding Chinese AI Safety community.

Here's an example of her work: AI safeguards: Views inside and outside China (Book chapter) https://www.taylorfrancis.com/chapters/edit/10.4324/97810032...

She's also co-authored several of the most famous "survey" papers which give an overview of AI safety methods: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=%22h...

She's at roughly the same level of eminence as Dr. Eric Horvitz (Microsoft's Chief Scientific Officer), who has similar goals as her, and who is an advisor to Biden. Comparing the two, Horvitz is more well-connected but Toner is more prolific, and overall they have roughly equal impact.

(Copy-pasting this comment from another thread where I posted it in response to a similar question.)

newsclues
0 replies
14h15m
exikyut
0 replies
15h17m

Getting "page not found". I don't think I have an account....

dauertewigkeit
0 replies
14h38m

People are forgetting that OpenAI started as a legit non-profit. It was not meant to be a big money-making startup. So presumably getting a cushy board member position was not as hard, because this was just some weird philanthropy thing that some SV people were funding...

codys
0 replies
15h1m

McCauley's LinkedIn: https://www.linkedin.com/in/tasha-m-25475a54/ . From some digging, her full name appears to be Aimee Nastassia 'Tasha' McCauley, and she is married to Joseph Gordon-Levitt (the actor).

andrewedstrom
0 replies
15h19m

Just linking to the education tab of her profile is misleading.

Many people in AI safety are young. She has more professional experience than many leaders in the field.

https://www.linkedin.com/in/helen-toner-4162439a/

kats
6 replies
16h30m

Meta should do a ChatGPT clone just like they did a Twitter clone.

Alupis
2 replies
15h40m

Because their Twitter clone went so well for them?

veec_cas_tant
1 replies
13h15m

100 million monthly active users at the end of October, and still growing, according to Zuck. Is that not considered successful anymore?

blackoil
0 replies
11h23m

disappointing @veec, with this mindset you'll never be in the three-comma club. /s

astrange
1 replies
16h7m

They sort of did one before ChatGPT (and after InstructGPT) with Galactica, but didn't have the nerve to keep it up.

computerex
0 replies
15h33m

Galactica was focused on academic use cases and got a lot of hate due to hallucinations.

malfist
0 replies
16h22m

Llama?

gumballindie
6 replies
16h54m

The screw is tightening. Britain's largest newspaper has just called out AI companies on their intellectual property and content theft.

The game is over, boys. The only question is how to make these kinds of companies pay for the crimes committed.

mardifoufs
1 replies
16h31m

Oh no, not Britain's largest newspaper! A nation known for the quality of its media. It's over for OpenAI; I'm sure they'll fold rather than just stop doing business with a comparatively small market if anything comes of that, lol.

gumballindie
0 replies
15h52m

You're missing the point. Raising awareness of the scale of this theft is aimed at swaying public opinion.

That way, legitimate machine learning companies can thrive and AI research can continue without the nuisance.

Incidentally, forums are filling up with horror stories from people working at or interviewing with OpenAI, in spite of their paid trolls spamming forums left and right and reporting Reddit posts to suppress people. Perhaps the bubble has burst.

OpenAI has done more harm to AI than any other company.

The cat's out of the bag.

stainablesteel
0 replies
16h44m

companies will just go to Japan, where it's legal

paul7986
0 replies
16h37m

The cat's out of the bag, for-profit or not... those who make a living off of copyright will just have to deal with it, as they dealt with Napster and all the other innovations.

mrmuagi
0 replies
16h45m

I don't think you can conclude that the ship is sinking, per se; rather, those at the helm are being changed.

anon291
0 replies
16h40m

There is no crime. Observing something is not a crime.

duxup
6 replies
16h11m

Quitting like that makes it seem like Greg already knew what was brewing; whatever the conflict was, it came to a head and he made his call. So it was not a total surprise to him, at least as far as the backstory goes.

crop_rotation
5 replies
16h9m

On the contrary, this makes it seem like he was surprised.

bongripper
2 replies
16h1m

No, he does not seem surprised, as he had his mind made up about an important life decision and a carefully crafted resignation letter ready in a matter of hours.

flashback2199
0 replies
15h49m

carefully crafted resignation letter

Are you trolling? The letter is short and all lowercase lol

Paul-Craft
0 replies
8h10m

Maybe he used ChatGPT to write the letter.

stolsvik
0 replies
15h49m
duxup
0 replies
16h6m

I think if he didn't know why, he would have waited to find out what the story was.

Instead, he knew enough to make his call immediately; he knew what he was going to do.

babl-yc
6 replies
16h56m

So given that there are 6 board members, Ilya had to have voted to oust Sam?

dnissley
5 replies
16h54m

Presumably Sam recused himself, so not necessarily.

babl-yc
3 replies
16h49m

Why would he recuse himself? Sam seemed happy to work at OpenAI.

FWIW, radio silence from Ilya on twitter https://twitter.com/ilyasut

hnarayanan
0 replies
10h56m

Do you mean X?

favorited
0 replies
16h42m

Because it's a conflict of interest - a CEO should absolutely recuse themselves from a vote on whether they should be removed. If a CEO refused to do so, the board should adjourn and reconvene without the CEO present.

dragonwriter
0 replies
16h44m

Why would he recuse himself?

It's usually mandatory for board members to be recused from decisions about themselves. The overwhelming potential for conflict between personal interest and the firm's is pretty clear in that case.

SkyPuncher
0 replies
16h24m

Given he was the subject of the vote, he likely wouldn’t even be able to participate.

lazzlazzlazz
5 replies
12h43m

The AI safety people may be one of the most destructive forces in tech history.

zbentley
3 replies
11h36m

Could you elaborate on what you mean by that/why you think so?

satvikpendem
0 replies
9h51m

It is much ado about nothing, to quote our lauded poet of the English canon. Who gives a shit about AI, really?

adastra22
0 replies
10h6m

AI x-risk is a load of hogwash based on extremely faulty reasoning ill-adapted to real ML architectures and political/economic reality. There is absolutely no reason to worry about the deceptive turn of a paperclip maximizer. Yet because of these sci-fi trope fears, real human progress is being held back.

WendyTheWillow
0 replies
10h27m

Because there is nothing even approaching the claimed risk of "AI", and they're stifling the growth and the potential LLMs have to vastly improve our lives.

satvikpendem
0 replies
9h52m

I agree, let me query what I wish from the AI; it is literally no different from current search engines.

georgehill
5 replies
16h54m

Building something on top of GPT, I am now worried.

lexandstuff
2 replies
16h44m

Claude is a drop-in replacement for most use cases. You'll be fine.

solardev
0 replies
12h36m

Who is Claude?

atleastoptimal
0 replies
15h47m

claude sucks

bananapub
1 replies
16h48m

why? you should always assume that your dependencies might go down, or shut up shop, or become your competitor.

selfhoster11
0 replies
16h39m

GPT-4 has effectively no competition for what you can extract out of it without any fine-tuning.

andrewstuart
5 replies
16h32m

Either the board will resign and Altman will return.

Or Altman will start a competitor.

naiv
1 replies
16h15m

so I guess non-profits do not have non-competes? I have no idea

wnc3141
0 replies
11h18m

Non competes are illegal in California (if there is no sale of equity in excess of assets involved).

thrillgore
0 replies
11h16m

He's probably on Atlas right now starting the company.

dlandis
0 replies
15h4m

Exactly...new competitor forming as we speak

crop_rotation
0 replies
16h29m

If non-profit boards elect themselves, I see no reason for the board to resign; for half the board it is their biggest life accomplishment, and there are no shareholders to vote them out.

m101
4 replies
16h18m
crop_rotation
1 replies
16h17m

That is very old news to be causing a reaction like this now.

cyanydeez
0 replies
16h6m

substantive evidence might have cropped up.

not2b
0 replies
16h4m

Let's suppose all that stuff is absolutely true: that when Sam Altman was a 13-year-old kid he assaulted his 4-year-old sister, that she's troubled because of it, and that he made some attempts to buy her off, perhaps money in exchange for silence. Why would the board decide to suddenly fire him because of that, after all this time? He was a minor who would not understand the consequences.

No, I'm confident that it has nothing to do with that. It must have to do with the current business. Maybe there's a financial conflict of interest. Maybe he's been hiding the severity of losses from the board. Maybe something else. But you don't fire a CEO because you discover that he committed a crime at age 13.

brianjking
0 replies
13h49m

If this is true, which it very well could be, it's clearly not OK. However, this has been known for some time. She was tweeting about it in 2021, and it was discussed again in October of this year.

None of that makes sense as to why the board would randomly fire him. I don't think it's this.

ldargin
4 replies
16h41m

This suggests that a real schism is occurring at OpenAI. I anticipate hearing of two different philosophies at play.

mark_l_watson
2 replies
15h29m

It would be interesting if Altman and Brockman, assuming they want accelerated development, ended up in high-level roles at Microsoft. That seems like a win-win-win all around, since OpenAI could follow its new path, Microsoft and Google could build new things fast, and the open-model proponents can keep up their good work.

Except for a clumsy, rushed press release, this doesn't really have to end badly for anyone.

Even though I have been an OpenAI fan ever since I used their earliest public APIs, I am also very happy that there is such a rich ecosystem: other commercial players like Anthropic, open-model support from Meta and Hugging Face, and increasingly wonderful small models like Mistral that can be easily run at home.

adastra22
1 replies
10h10m

Sam Altman is not the kind of person that goes into a bigco management job. He'll be doing another startup come Monday.

mark_l_watson
0 replies
4h38m

Yes. I changed my perspective - Altman would not fit in and be happy at Microsoft.

gumballindie
0 replies
14h37m

Sam and those who want to continue stealing IP and destroying entire industries in the process will leave, while ethical machine learning scientists will remain.

yieldcrv
2 replies
16h54m

nice, hope there are openings in high level positions and they switch to remote

I’m never going back to Noe Valley for less than $500,000/yr and a netjets membership

hilux
1 replies
16h51m

You willing to slum it in a shared seat that has supported other billionaires' behinds?

I wouldn't ... but you do you!

yieldcrv
0 replies
16h28m

I'd otherwise be slumming it in a shared seat that supported middle-class behinds

I wouldn't move back to San Francisco anyway, and hybrid would be a midweek affair

woeirua
1 replies
16h17m

This is turning into a situation that OpenAI may not be able to recover from. Typically, if the CEO and chair of the board depart under these circumstances, there was something illegal happening.

indymike
0 replies
12h20m

Typically, if the CEO and chair of the board depart under these circumstances, there was something illegal

When they are fired by the board, it sends a very different signal.

speedylight
1 replies
15h30m

I wonder if OpenAI employees will start resigning en masse because of this as a form of protest. The board had better have a very good reason to back up their decision, if they decide to elaborate at all.

lucubratory
0 replies
13h42m

There will definitely be people in Sam's camp who want to leave (I would guess a lot of product people?), but a lot of other people in Ilya's camp who want to stay. Notably, Ilya is the actual scientific and technical asset, and his team is much more likely to be loyal to him than to Sam, because they work with him. Even if Sam takes away a lot of admin and product roles, the core of the tech capabilities is likely to stay under OpenAI's control. That said, Microsoft doesn't have to keep giving them compute (well, they've got signed agreements Microsoft will honour, but Microsoft doesn't have to go further than that), so the new OpenAI will still have to make concessions to its commercial investors for the same reason the company got a for-profit subsidiary in the first place: because compute costs money, a lot of it.

runjake
1 replies
16h31m

What’s the likelihood this is over a Microsoft acquisition? Purely speculative here, but Sam might have been a roadblock.

Edit: Maybe this is a reasonable explanation: https://news.ycombinator.com/item?id=38312868 . The only other thing not considered is that Microsoft really enjoys having its brand on things.

w10-1
0 replies
16h29m

Axios claims MS had no prior knowledge.

otikik
1 replies
16h22m

After learning today’s news,

Which news is that?

welpo
0 replies
15h51m
mcenedella
1 replies
16h24m

This Board has to go.

lucubratory
0 replies
13h38m

Sam Altman is much more replaceable than Ilya Sutskever

Zaheer
1 replies
16h48m

Given that Greg seems not to have known about the board meeting, there's a good chance Mira didn't either. Is she next?

dragonwriter
0 replies
16h46m

Mira wasn't on the board, but if they were concerned about her they wouldn't have made her interim CEO.

zzzeek
0 replies
16h40m

Wow look at that garbage Twitter thread following it, with shitty ads and .eth jackasses. Twitter has really turned into 4-chan lite

wly_cdgr
0 replies
14h22m

Honestly, it could be something as simple and sordid as a credible sexual assault allegation that he lied to the board about, or just some plain old-fashioned embezzlement.

thornewolf
0 replies
16h56m

many things are happening

thekoma
0 replies
6h39m

Irrelevant, but it struck me how both he and Sam Altman write in all lowercase.

stolsvik
0 replies
16h0m

Did Greg know of the decision beforehand? I don't think so. This totally business-as-usual post from Greg Brockman happened 1 hour before the one from OpenAI: https://x.com/gdb/status/1725595967045398920 https://x.com/openai/status/1725611900262588813 How crazy is that?!

smlacy
0 replies
15h35m
singluere
0 replies
11h10m
satvikpendem
0 replies
10h4m

In my opinion, these people were not fit to run an enterprise originally labeled "Open"AI, especially when Musk donated 100 million dollars toward making sure it remained open, while others in the company deemed it better to be closed. At this point, I have to wonder whether I should support xAI over these companies instead.

mrangle
0 replies
2h13m

If the primary issue was safety vs. performance, then in the near future, performance is going to win in the end, as is the nature of AI that has been written about for decades.

But right now, the board undoubtedly feels the most pressure in the realm of safety. This is where the political and big-money financial (Microsoft) support will be.

If all this is true, Altman's departure was likely inevitable, as well as fortunate for his future.

momofuku
0 replies
12h24m
modernpink
0 replies
16h51m

Mr President, the second tower has been hit

maxdoop
0 replies
16h51m

And Satya seems to have been caught off guard as well. What is happening?

ispyi
0 replies
21m

One should consider the fact that in this case the ex-CEO is independently wealthy, so he clearly wasn't doing it for the money. In addition, he started it with partners, originally as a non-profit. Finally, he didn't own any equity in the company despite being one of its core team and founding members, in addition to its very public face. How many of us in the same position as him could be controlled, given that the only thing he had to lose was his ability to influence the other board members, which he seems to have lost anyway?

friendlynokill
0 replies
16h49m

Holy shit

fredgrott
0 replies
3h37m

Remember, folks: the trigger point that makes MS's investment worthless is reaching AGI, as that triggers the charter rules that take the IP back out of commercial products... that is the crux of the firing... one board faction felt they had reached AGI, and the people who were forced out or left felt they had not yet reached AGI.

From my brief dealings with SA at Loopt in 2005, SA just does not have a dishonest bone in his body. (I got a brief look at the Loopt pitch deck while interviewing for a mobile dev position at Loopt just after Sprint invested.)

If you want an angel-investing play, find the new VC fund for hard research that Sam is setting up.

detolly
0 replies
16h56m

what a day

convexstrictly
0 replies
13h49m

"Greg Brockman, co-founder and president of OpenAI, works 60 to 100 hours per week, and spends around 80% of the time coding. Former colleagues have described him as the hardest-working person at OpenAI."

https://time.com/collection/time100-ai/6309033/greg-brockman...

bubmiw
0 replies
16h10m

I don't like the direction this is heading in at all...

blibble
0 replies
16h56m

this keeps getting better and better

acyou
0 replies
14h19m

Despite what the charter says, why is OpenAI called OpenAI and not OpenAGI? Is that at the core of this issue?

_ink_
0 replies
15h31m

Twist: the AGI is not only already here, it is already running rampant.

SXX
0 replies
15h13m

Just hope this will bring more "Open" into "AI.com"

Mentlo
0 replies
11h55m

As a sanity check: when Hinton left Google, we celebrated it, since a faceless business corp shouldn't be dictating the velocity of development; it should be AI pioneers who understand the risks.

If indeed a similar disagreement happened at OpenAI, but this time Hinton (Ilya) came out on top, it's a reason to celebrate.

Kye
0 replies
13h4m

Is this like Star Trek where the mission automatically ends if you lose enough of the bridge crew?