Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]

siliconc0w
88 replies
1h16m

There is a lot in here, but turning a non-profit into a for-profit definitely should be challenged. Otherwise, why wouldn't everyone start as a non-profit, develop their IP, and then switch to 'for-profit' mode once they have something that works? You don't pay income taxes, and your investors get write-offs.

dkjaudyeqooe
23 replies
59m

The replies that say "well, the profits go to the non-profit, all's good" miss the reality of these high-profit nonprofits: the profits invariably end up in the pockets of management. Most of those are essentially scams, and that doesn't mean OpenAI isn't just a more subtle one.

The hype and the credulity of the general public play right into this scam. People will more or less believe anything Sam the Money-Gushing Messiah says because the neat demos keep flowing. The question is what we've lost in all this, which no one really thinks about.

emodendroket
14 replies
55m

If your beef with this structure is that executives get paid handsomely I have bad news about the entire category of nonprofits, regardless of whether they have for-profit arms or not.

cobertos
5 replies
37m

Not many people seem to understand this. Here's an example from a previous rabbit hole.

The Sherman Fairchild Foundation (which manages the posthumous funds of the guy who made Fairchild Semiconductor) pays its president $500k+ and its chairman about the same. https://beta.candid.org/profile/6906786?keyword=Sherman+fair... (Click Form 990 and select a form)

I do love IRS Form 990 in this way. It sheds a lot of light on this.

jdblair
1 replies
28m

That salary for managing $1B in assets doesn't seem high to me. Am I missing something?

smallnamespace
0 replies
6m

$1bn in assets isn’t much, at the high end you can charge maybe $20mm a year (hedge fund), at the low end a few million (public equity fund). That needs to pay not just execs but accountants, etc.

Put another way, a $1bn hedge fund is considered a small boutique that typically only employs a handful of people.
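As a rough sketch of that fee math (the rates are illustrative assumptions, roughly a 2% hedge-fund management fee at the high end versus a ~0.3% expense ratio for a public equity fund at the low end, not figures from the thread):

```python
# Illustrative sketch of asset-management fee math on $1bn AUM.
# The fee rates below are assumptions, not quoted figures:
# ~2% management fee at the hedge-fund high end, ~0.3% at the
# public-equity low end.
aum = 1_000_000_000

high_end_fee = aum * 0.02    # roughly $20MM/year (hedge fund)
low_end_fee = aum * 0.003    # a few $MM/year (public equity fund)

print(f"high end: ${high_end_fee:,.0f}/year")
print(f"low end:  ${low_end_fee:,.0f}/year")
```

That $20MM (or less) has to cover executives, accountants, compliance, and everything else, which is the point being made about the $500k salary.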

troupe
0 replies
21m

Getting paid $500k, while it is a lot of money, is not at all the same as someone benefiting from the profit of a company and making 100s of millions of dollars. $500k doesn't at all seem like an unreasonable salary for someone who is a really good executive and could be managing a for-profit company instead.

doktrin
0 replies
23m

So basically the same as a faang staff engineer?

caturopath
0 replies
12m

I am a lot more offended or pleased by how the leader manages a $60MM budget and a $1B endowment than by their $500k salary.

There's this weird thing where charities are judged by how much they cost to run and how much they pay their employees, to an even greater degree than other organizations, and even by people who would resist that strategy for businesses. It's easy to imagine a good leader executing the mission far more than $500k better than a meh one, and even more dramatically so for 'overhead' in general (as though a nonprofit would consistently do its job better by cutting the staffing for vetting grants or improving shipping logistics or whatever).

dasil003
2 replies
42m

GP clearly understands this and said so explicitly, hence the "more subtle scam" part.

emodendroket
1 replies
38m

Isn't OpenAI a less subtle scam in that case?

j16sdiz
0 replies
26m

It's more.

It gives empty promises.

rvba
1 replies
35m

The Mozilla management seems uninterested in doing anything to improve Firefox's market share (for example, by doing what users want: customization); they waste money on various "investments" and half-baked projects that developers use to pad their CVs - and at the end of the day, they are paid millions.

IMO you could cut the CEO's salary from $6 million to $300k and get a new CEO - and we probably wouldn't see any difference in Firefox's results. Perhaps even an improvement, since the more modestly paid CEO would try to demonstrate value - and that is best done by bringing back Firefox market share.

nerdponx
0 replies
52m

I think they're making the same point as you: "nonprofit" is usually a scam to enrich executives anyway.

dkjaudyeqooe
0 replies
42m

I really wouldn't give a shit how much they were paid if we got something more than vague promises.

They could release the source with a licence that restricted commercial use, anything they wanted, that still allowed them to profit.

Instead we get "AI is too dangerous for anyone else to have." The whole thing doesn't inspire confidence.

billywhizz
0 replies
8m

the way openai structures their pay is dubious, to say the least. maybe they will find a way to make money someday, but right now everything they are doing is setting off my alarm bells.

"In conversations with recruiters we’ve heard from some candidates that OpenAI is communicating that they don’t expect to turn a profit until they reach their mission of Artificial General Intelligence" https://www.levels.fyi/blog/openai-compensation.html

samstave
4 replies
55m

Why is the NFL a non-profit?

https://www.publicsource.org/why-is-the-nfl-a-nonprofit/

The total revenue of the NFL has been steadily increasing over the years, with a significant drop in 2020 due to the impact of the COVID-19 pandemic. Here are some figures:

    2001: $4 billion

    2010: $8.35 billion

    2019: $15 billion

    2020: $12.2 billion

    2021: $17.19 billion

    2022: $18 billion

zelias
1 replies
30m

Does this mean that I can deduct my overpriced Jets tickets as a charitable donation? That's certainly what it feels like in any case...

vonmoltke
0 replies
5m

I know this is a joke (like the Jets), but the NFL was a 501(c)(6) organization. You can't deduct donations to those.

swasheck
0 replies
44m

Update April 28, 2015: In the midst of several National Football League scandals last October, PublicSource asked freelance writer Patrick Doyle to take a look at the nonprofit status of the league. On April 28, NFL Commissioner Roger Goodell said the league will no longer be tax exempt, eliminating a “distraction.”

no longer a non-profit but no less hypocritical

sfmz
0 replies
47m

https://www.cbssports.com/nfl/news/nfl-ends-tax-exempt-statu...

Every dollar of income generated through television rights fees, licensing agreements, sponsorships, ticket sales, and other means is earned by the 32 clubs and is taxable there. This will remain the case even when the league office and Management Council file returns as taxable entities, and the change in filing status will make no material difference to our business.

romeros
1 replies
40m

The reality was that nobody could have predicted the A.I. breakthroughs when OpenAI first got started. It was a moonshot. That's why Musk gave $50 million without even asking for a seat on the board.

OpenAI had to start as a non profit because there was no clear path forward. It was research. Kind of like doing research with the goal of curing cancer.

The unexpected breakthroughs came a bit quicker than anticipated and everybody was seeing the dollar signs.

I believe OpenAI's initial intentions were benign. But they just couldn't let go of the dollars.

rgbrenner
0 replies
3m

Musk had a seat on the board until 2018. He wanted control over OpenAI, and the board rejected it... so he resigned claiming a conflict of interest.

matt-p
0 replies
1m

Also, it /doesn't/ all go back to OpenAI. Microsoft, for example, will make 100X ROI.

emodendroket
17 replies
1h11m

They didn't "turn it into" a for-profit though, they created a separate for-profit arm. This one is unusually successful but that's not an unusual thing for even "regular" charities to do in order to engage in some activities that they wouldn't normally be able to.

justinzollars
4 replies
1h8m

And transferred everything they did to that arm. I'm all for tax avoidance, but the rules should apply to everyone equally. Small mom-and-pop businesses don't have the money to hire armies of lawyers for these legal machinations.

emodendroket
3 replies
1h2m

I guess "mom-and-pop businesses" are probably not started as charities in the first place in most cases so I don't really get what you are trying to say.

cj
2 replies
52m

He’s making a (valid) point having to do with tax avoidance.

Want to open a bakery in your small town? Start it as a 501(c)(3) and promise it's a charitable endeavor for the local community. Then invest your $500k into the bakery, maybe even raised from your local community (it's a tax-deductible donation!), to get the bakery up and running.

Then, once it's turning a profit, ditch the original 501(c)(3) and replace it with an LLC, S-Corp, or C-Corp and start paying taxes. (And hope you don't get sued or audited.)

His point is that mom-and-pop bakeries aren't typically sophisticated enough to pull off schemes like this, even if it would save tens of thousands in taxes.

danenania
1 replies
31m

In general, the 501(c)(3) isn't replaced by a for-profit corp, though. The 501(c)(3) remains, and a new for-profit corp is established under its ownership.

IANAL, but I think the tax issue would likely hinge on how well that $500k was isolated from the for-profit side. If the non-profit has no substantial operations and is just a shell for the for-profit, I could see getting in trouble for trying to deduct that as a donation. But if there's an audit trail showing that the money stayed on the non-profit side, it would likely be fine.

emodendroket
0 replies
4m

It seems hard to see what the nonprofit would really be doing in this case since the for-profit seems to be the entire operation.

lumost
3 replies
1h6m

Perhaps the regular-charity version of this should also be challenged. This case looks somewhat egregious, as the for-profit arm was able to fire the board of the non-profit parent. Likewise, OpenAI is selling "PPU" units, and it's entirely unclear whether anybody knows what these actually are.

It's highly likely in my uneducated opinion that OpenAI will be told to adopt a standard corporate structure in the near term. They will likely have to pay out a number of stakeholders as part of a "make right" setup.

viscanti
1 replies
40m

They didn't actually fire the board of the non-profit. They just said they'd all quit in protest because of an action of the board they all felt was egregious. The board could have stayed and been a non-profit that did nothing ever again. They decided it was better to step down.

cma
0 replies
9m

I believe they have said they decided it was better to step down because they were threatened with lawsuits.

emodendroket
0 replies
1h3m

I don't think that's very likely at all! But I suppose we'll see.

For a good point of comparison, until 2015, when public scrutiny led them to decide to change it, the NFL operated as a nonprofit, with the teams operating as for-profits. Other sports leagues continue to have that structure.

pclmulqdq
2 replies
1h6m

They basically did, though. The nonprofit does nothing except further the interests of the for-profit company, and all employees get shares of the for-profit company.

It's not unusual for nonprofits to have spinoffs, but it is unusual for the nonprofit to be so consumed by its for-profit spinoffs.

threeseed
1 replies
36m

The nonprofit does nothing except further the interests of the for-profit company, and all employees get shares of the for-profit company

OpenAI has always argued that the for-profit is furthering the aims of the non-profit.

Also, employees can't get shares of the non-profit, so of course they would get them from the for-profit arm.

pclmulqdq
0 replies
3m

That argument will be tested in court. It certainly looks like things are the other way around as of now.

Most non-profit employees receive their compensation in the form of a salary. If you need to pay "market rate" competing with organizations that offer equity, you pay a bigger salary. When non-profits spin for-profits off (eg research spinoffs), they do it with a pretty strict wall between the non-profit and the for-profit. That is not the case for OpenAI.

cush
1 replies
50m

in order to engage in some activities that they wouldn’t normally be able to

What activities couldn’t they do with their charity arm that required this for-profit arm?

emodendroket
0 replies
33m

I'm not sure specifically in OpenAI's case but the general answer is any activity that would cause the organization to lose tax-exempt status.

tapoxi
0 replies
44m

I mean they effectively did. They created a for-profit, moved the bulk of employees there, and when the board attempted to uphold its founding principles they were threatened and forced to resign.

What's next? Can the OpenAI nonprofit shell divest itself of the for-profit OpenAI and spend the remainder of its cash on "awareness" or other nonsense?

l2silver
0 replies
1h7m

Creating a separate for-profit arm is trivially easy.

behringer
0 replies
35m

It should definitely be illegal.

sigmoid10
14 replies
1h9m

This. Even if we ignore the whole ethical aspect of "AI for the benefit of humanity" and all that philosophical stuff, there are very real legal reasons why OpenAI should never have been allowed to switch to for-profit. They were only able to circumvent this with their new dual-company structure, but this still should not be legal.

yawnxyz
5 replies
57m

didn't Firefox / Mozilla set that precedent already?

wbl
2 replies
53m

No. MozCo is a for-profit owned by the Mozilla Foundation, which does additional things to satisfy the IRS, and it has been that way since the beginning.

wkat4242
0 replies
47m

Not since the beginning. They made it that way after beef with the IRS.

I wish they hadn't, because they think too commercially (an extremely highly paid CEO, for instance), yet they answer to a foundation that doesn't manage them the way shareholders would (e.g., by not rewarding the CEO for dropping market share!). This model is the worst of both worlds, imo.

dragonwriter
0 replies
51m

That's the same basic structure, on paper, as OpenAI; Mozilla didn't "switch to for-profit" in the sense of taking the nonprofit entity and converting it into a for-profit.

dkjaudyeqooe
1 replies
49m

I can download the Firefox sources and everything else they produce.

That they make money incidentally to that is really no problem, and even a positive, because it provides reasonable funding.

What if Mozilla made a world-beating browser by accident? Would they be justified in closing the source, restricting access, and making people pay for it?

That's what OpenAI did.

strbean
0 replies
44m

That's the real distinction: does the for-profit subsidiary subsume the supposed public good of the parent non-profit?

If OpenAI Co. is gatekeeping access to the fruits of OpenAI's labors, what good is OpenAI providing?

samstave
4 replies
58m

Imagine if as punishment, OpenAI were forced to OpenSource any and all IP that was developed in the non-profit phase of their company?

That would be a Nuke in the AI world.

option
1 replies
48m

Not really. Open-source and other proprietary models aren't that far behind theirs.

They don't have a moat. Their main advantage has been their people, and already we've seen the entire Anthropic spinoff, Sutskever absent, Karpathy leaving. Who's next?

pjerem
0 replies
9m

Their main advantages are their products and their communication. ChatGPT is nice, and they managed to make their API the standard.

Open source staying behind commercial products even when they're technically really close…? I think I have seen this before.

__loam
1 replies
56m

Imagine if instead they were forced to delete the models they built using all our data without consent. Let's make it a fusion bomb.

kmeisthax
0 replies
37m

The copyright lawsuits against OpenAI are already calling for algorithmic disgorgement.

dkjaudyeqooe
2 replies
52m

The point of their charter is not to make money, it's to develop AI for the benefit of all, which I interpret to mean putting control and exploitation of AI in the hands of the public.

The reality: we don't even get public LLM models, let alone source code, while their coffers overfloweth.

Awesome for OpenAI and their employees! Everyone else goes without. Public benefit, my arse.

mrinterweb
1 replies
26m

I've been really hung up on the irony of the "Open" part of the OpenAI name. I figure "Open" must mean "open for business". What is open about OpenAI?

dkjaudyeqooe
0 replies
14m

The most oppressive regimes have "Democratic" or "People's" in the official name of their country.

Someone took inspiration from this.

ben_w
11 replies
1h4m

I'm not at all clear on what a "not for profit" status even does, tax wise. In any jurisdiction.

They are still able to actually make a profit (and quite often will, because perfectly balancing profit and loss is almost impossible, and a loss is bad), and I thought those profits were still taxed, because otherwise that's too obvious a tax dodge; it's just that profit isn't their main goal?

emodendroket
7 replies
59m

Well, you're confused because of your erroneous determination that they're "able to make a profit." They are not. They are able to have positive cash flow but the money can only be reinvested in the nonprofit rather than extracted as profit.

ben_w
6 replies
56m

OK, so for me "positive cash" and "profit" are synonyms, with "[not] extracted" meaning "[no] dividends".

emodendroket
3 replies
52m

As the government sees it, you realize "profit" when you, as an owner of the business, take the money it makes for yourself.

danenania
1 replies
14m

That's not the case in the US. Depending on corporate structure, if your business makes more revenue than expenses, even if none of it is paid out and it's all kept in the business, you will either owe corporate taxes on that amount (C-Corp or non-pass through LLC) or the full personal income tax rate (pass through LLC).

emodendroket
0 replies
6m

Not saying you can't owe tax on it but isn't that unrealized profit?

Kranar
0 replies
40m

This is bogus and doesn't even make sense.

That would mean that any publicly traded company that didn't issue a dividend didn't make a profit which no one believes.

Do you really want to claim that Google has never made any profit?

im3w1l
0 replies
42m

It's all quite confusing. A non-profit can as you say turn a profit but isn't supposed to distribute it to owners.

There is a difference between positive cash flow and profit, because profit follows different accounting rules. If you invest in some asset (let's say a taxi car) today, all of that cash flow happens today. But there will be no effect on profit today, as your wealth is considered to have just changed form, from cash into an asset. For the purposes of profit/loss, the cost instead accrues over the years as that asset depreciates. This is so that the depreciation of the asset can be compared to the income it is generating (wear and tear on the car vs. ride fares minus gas).
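A minimal numeric sketch of that taxi example (all figures are hypothetical: a $30k car, straight-line depreciation over 5 years, $12k/year in fares net of gas):

```python
# Hypothetical taxi example: cash flow vs. accounting profit.
car_cost = 30_000          # paid in full in year 1
useful_life_years = 5      # straight-line depreciation period
annual_fares = 12_000      # ride fares minus gas, per year

annual_depreciation = car_cost / useful_life_years  # 6,000/year

# Year 1: the entire purchase hits cash flow at once, but only
# one year of depreciation counts against profit.
year1_cash_flow = annual_fares - car_cost          # deeply negative
year1_profit = annual_fares - annual_depreciation  # positive

print(year1_cash_flow, year1_profit)
```

So in year 1 cash flow is strongly negative even though the books show a profit, which is exactly the gap between the two concepts.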

Kranar
0 replies
45m

Positive cash flow and profit are almost synonyms; although there can be subtleties, they are not relevant to this discussion.

The parent comment is making the common mistake of thinking that non-profits cannot make profits. That is false. Non-profits can't distribute their profits to owners and they lack a profit motive, but they absolutely can and do make a profit.

This site points out common misconceptions about non-profits, and in fact the biggest misconception that it lists at the top is that non-profits can't make a profit:

https://www.councilofnonprofits.org/about-americas-nonprofit...

InitialLastName
0 replies
18m

NAL, my understanding: The profits aren't taxed, and the shareholders aren't allowed to take dividends out (there effectively are no "shareholders" per se, just donors); all profits have to be reinvested back into the business.

In the case of many/most (honest) non-profits, the operating costs are paid out of a combination of the dividends of an invested principal (endowment, having been previously donated by donors) and grants/current donations. Any operating profit could then be returned to the endowment, allowing the organization to maintain higher operating costs indefinitely, thus giving the organization more capacity to further their mission.

FrobeniusTwist
0 replies
20m

It certainly can be confusing. I generally use the term "nonprofit" to mean a corporate entity formed under a nonprofit corporation act, e.g., one derived from the Model Nonprofit Corporation Act. This says nothing about the tax status of the entity, and unless other circumstances also apply the entity would be subject to taxes in the same way as a for profit company on its net income. But many nonprofits also take steps to qualify for one of several tax exemptions, the most well known being section 501(c)(3). Not all of the familiar tax advantages apply to all tax exempt organizations. For example, donations to an organization exempt under 501(c)(3) are deductible by the donor, but donations to a 501(c)(4) are not.

55555
0 replies
46m

Nonprofits can make profits. Those profits aren't taxed, but they can't be paid out as dividends, because nonprofits have no shareholders and no equity. In theory there is some reasonable limit (in the millions) on how much they can pay out via salaries, compensation, etc. Beyond that, the profit must simply be used towards their goal, basically.

jjjjj55555
9 replies
1h7m

Isn't this how drugs get developed? Even worse, the research is done using public funds, and then privatized and commercialized later.

pleasantpeasant
4 replies
1h2m

This is a huge problem in the US. Taxpayers subsidize a lot of medical advances, and then the US government hands them to the private sector, privatizing whatever medical advances were paid for by tax dollars.

Socialism seems to create a lot of markets for the Capitalist private sector.

liamconnell
1 replies
48m

Do the private companies get some special IP rights on the public sector research? It seems like in a competitive market, those private companies would have thin margins. What stops a lower cost competitor from using the same public IP? I’m clearly missing something important here.

suslik
0 replies
20m

I suspect that's due to the misleading nature of the 'public research, privatized profits' trope. The reality is that publicly funded biomedical (for lack of a better word) science does not generate anything production-ready.

Academia produces tens of thousands of papers per year; many of these are garbage, p-hacking, or low-value - the rest are often contradictory, misleading, hard to interpret, or just report a giant body of raw-ish data. It is a very valuable process - despite all the waste - but its result is too raw to be actionable.

This body of raw 'science' is the necessary substrate for biotechnology and drug development - it needs to be understood, processed, and conceptualized into a hypothesis (which will most likely fail) strong enough to invest billions of dollars in.

The pharmaceutical industry is the market-based approach to prioritizing investment in drug development (what is it, $100B per year?) - and even a leftist who might want to argue for a different economic model would have to agree that this job is hard, important, and needs to be done.

pastacacioepepe
0 replies
6m

"Subsidizing corporations is socialism"

I think this is the most ignorant statement on socialism I've ever heard.

lotsofpulp
0 replies
35m

then the US government gives it to the private sector, privatizing whatever medical advances were paid by tax-dollars.

This should be changed to

“Then the US government fails to fund the billions of dollars required for medicinal trials needed to get FDA approval”

No one is stopping the US government from doing all the necessary work to verify the medicines work and put them in the public domain.

jandrewrogers
2 replies
38m

The research is an inconsequential percentage of the development cost, essentially a rounding error. Those commercial development organizations foot almost the entire bill and take all of the risk.

jacobolus
0 replies
5m

The NIH spends $48 billion per year, almost entirely on research (mostly grants, some done in house, a substantial proportion of which is either directly about candidate drugs or otherwise directly relevant to drug discovery). That's not "essentially a rounding error".

breck
0 replies
4m

Can you explain more what you mean by this, with some numbers? This is not my understanding, but maybe we are thinking of different things. For example, the NIH in 2023 spent over $30B of public funds on research [0], and has been spending billions for decades.

[0] https://www.nih.gov/about-nih/what-we-do/budget

pclmulqdq
0 replies
45m

University spinoffs are pretty common, but the university tends to be a small minority owner of the spinoff (unless the shares are donated back to it later), exercises no control over the operation of the company, and doesn't transfer IP to the spinoff after the spinning-off has happened. OpenAI is not doing any of that with its for-profit.

stickfigure
2 replies
31m

You misunderstand how taxes work.

Unprofitable businesses of every sort don't pay income taxes. Startups like OpenAI don't pay income taxes because they don't have income. And investors don't get a write-off merely for investing in a nonprofit; it's not like a donation to a nonprofit (which would be deductible).

guhidalg
0 replies
7m

Startups like OpenAI don't pay income taxes because they don't have income.

Where is my $20/month for GPT-4 going then?

evanlivingston
0 replies
4m

This is a great point, but it has me realizing I don't know how to square it with the fact that quite a few people are making enormous profits from unprofitable businesses.

It feels like there should be a way to tax these startups that exist as vehicles for cash grabs but are not profitable.

amelius
1 replies
49m

Yes, that's the new capitalism. Privatizing profits while socializing risks.

kennywinker
0 replies
31m

I think that's been part of the capitalist model since roughly the beginning.

takinola
0 replies
1h3m

It would be hard to get investors, though. Non-profits can only take donations, not investment, so you would have to develop your IP using your own funds. Plus, most companies are loss-making in their early years, so it is actually more tax-efficient to have an entity that can recognize those losses for tax purposes and offset them against future profits.

subsubzero
0 replies
27m

Agreed. I believe Elon gave $50M or so by 2018 with the intent that giving this money to the non-profit OpenAI would benefit people with open access to AI systems. Sam Altman has completely thrown out any semblance of law and of how non-profits work, shut down the company's whitepapers (GPT papers after 2019 are no longer published), and embedded the company into Microsoft. This should end in a slam-dunk legal ruling against this ever happening again.

sebastianconcpt
0 replies
29m

Actually a good point, one that exposes the potential opportunism of using the work everyone involved contributed as a mere MVP for finding product-market fit, until it could be turned into big bucks (and big, unelected, societally disruptive power).

V-eHGsd_
0 replies
1h10m

i'm not disagreeing with you that going from non-profit to for-profit should be challenged, but doesn't openai still maintain their non-profit? they just added a for-profit "arm" (whatever that means).

BitWiseVibe
66 replies
1h57m

Wouldn't you have to prove damages in a lawsuit like this? What damages does Musk personally suffer if OpenAI has in fact broken their contract?

KeplerBoy
22 replies
1h40m

A non-profit took his money and decided to be for profit and compete with the AI efforts of his own companies?

a_wild_dandan
19 replies
1h29m

Yeah, OpenAI basically grafted a for-profit entity onto the non-profit to bypass their entire mission. They’re now extremely closed AI, and are valued at $80+ billion.

If I donated millions to them, I’d be furious.

api
13 replies
1h24m

It's almost like the guy behind an obvious grift like Worldcoin doesn't always work in good faith.

What gives me even less sympathy for Altman is that he took OpenAI, whose mission was open AI, and turned it not only closed but then immediately started a world tour trying to weaponize fear-mongering to convince governments to effectively outlaw actually open AI.

mherrmann
8 replies
1h21m

I have no specific sympathy for Altman one way or the other, but:

Why is Worldcoin a grift?

And I believe his argument for it not being open is safety.

Spooky23
3 replies
1h16m

“Now that I have a powerful weapon, it’s very important for safety that people who aren’t me don’t have one”

BobaFloutist
2 replies
1h6m

As appealing as it is to point out hypocrisy, and as little sympathy as I have for Altman, I honestly think that's a very reasonable stance to take. There are many powers with which, given the opportunity, I would choose to trust only exactly myself.

jajko
0 replies
15m

Well, that's easy to understand. Not an ideal analogy, but imagine that in 1942 you had by accident constructed a fully working atomic bomb, and showed it around in full effect.

You could shop around to see who offers you the most, stall the game while everybody everywhere realizes what's happening, and you would definitely want to halt all other startups with a similar idea, ideally by branding them as dangerous. And what's better for that than National Security (TM)?

dexterdog
0 replies
1h0m

But by that logic nobody else would trust you.

a_wild_dandan
1 replies
1h3m

"I declare safety!"

You cannot abandon your non-profit's entire mission on a highly hypothetical, controversial pretext. Moreover, they've released virtually no details on GPT-4, even harmless ones, yet they let anyone use GPT-4 (such safety!), and they haven't even released GPT-3, a model with far fewer capabilities than many open-source alternatives. (None of which have ended the world! What a surprise!)

They plainly wish to make a private cash cow atop non-profit donations to an open cause. They hit upon wild success, and want to keep it for themselves; this is precisely the opposite of their mission. It's morally, and hopefully legally, unacceptable.

ben_w
0 replies
43m

You cannot abandon your non-profit's entire mission on a highly hypothetical, controversial pretext.

"OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact." - https://openai.com/blog/introducing-openai

I'm not actually sure which of these points you're objecting to, given that you dispute the dangers as well as being angry about the money-making, but even in that blog post they cared about risks: "It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly."

GPT-4 had a ~100 page report, which included generations deemed unsafe that the red teaming found, and which they took steps to prevent in the public release. The argument for having any public access is the same one that open-source advocates use for source code: more eyeballs.

I don't know if it's a correct argument, but it's at least not obviously stupid.

(None of which have ended the world! What a surprise!)

If it had literally ended the world, we wouldn't be here to talk about it.

If you don't know how much plutonium makes a critical mass, only a fool would bang lumps of the stuff together to keep warm and respond to all the nay-sayers with the argument "you were foolish to even tell me there was a danger!" even while it's clear that everyone wants bigger rocks…

And yet at the same time, the free LLMs (along with the image generators) have made a huge dent in the kinds of content one can find online, further eroding the trustworthiness of the internet, which was already struggling.

They hit upon wild success, and want to keep it for themselves; this is precisely the opposite of their mission. It's morally, and hopefully legally, unacceptable.

By telling the governments "regulate us, don't regulate our competitors, don't regulate open source"? No. You're just buying into a particular narrative, like most of us do most of the time. (So am I, of course. Even though I have no idea how to think of the guy himself, and am aware of misjudging other tech leaders in both directions, that too is a narrative).

mihaic
0 replies
1h8m

What is it then if not a grift? It makes promises without absolutely any basis in exchange for personal information.

kylebenzle
0 replies
1h10m

Probably. Most cryptocurrency projects have turned into cash grabs or pump and dumps eventually.

Out of 1,000s to choose from arguably the only worthwhile cryptocurrencies are XMR and BCH.

Spooky23
3 replies
1h17m

Everything around it seems so shady.

The strangest thing to me is that the shadiness seems completely unnecessary, and really requires a very critical eye for anything associated with OpenAI. Google seems like the good guy in AI lol.

ethbr1
1 replies
1h6m

Google, the one who haphazardly allows diversity prompt rewriting to be layered on top of their models, with seemingly no internal adversarial testing or public documentation?

ben_w
0 replies
58m

"We had a bug" is shooting fish in a barrel, when it comes to software.

I was genuinely concerned about their behaviour towards Timnit Gebru, though.

yaomingite
0 replies
42m

It's a shame that Gemini is so far behind ChatGPT. Gemini Advanced failed softball questions when I tried it, but GPT works almost every time even when I push the limits.

Google wants to replace the default voice assistant with Gemini, I hope they can make up the gap and also add natural voice responses too.

acorn1969
2 replies
1h16m

Nobody promised open sourced AI, despite the name.

Exhibit B, page 40, Altman to Musk email: "We'd have an ongoing conversation about what work should be open-sourced and what shouldn't."

HDThoreaun
0 replies
1h13m

Elon isnt asking for them to be open source.

93po
0 replies
1h2m

Do you think payroll should be open source? Even if yes, it's something you should discuss first. This isn't a damning statement.

neilv
0 replies
33m

and are valued at $80+ billion. If I donated millions to them, I’d be furious.

Don't get mad; convince the courts to divide most of the nonprofit-turned-for-profit company equity amongst the donors-turned-investors, and enjoy your new billions of dollars.

lenerdenator
0 replies
44m

Honest question, though: wouldn't this be more of a fraud than breach of fiduciary duty?

foofie
1 replies
1h6m

You would have an argument if Elon Musk hadn't attempted to take over OpenAI, then abandoned it after his attempts were rejected while complaining the organization was going nowhere.

https://www.theverge.com/2023/3/24/23654701/openai-elon-musk...

I don't think Elon Musk has a case or holds the moral high ground. It sounds like he's just pissed he committed a colossal error of analysis and is now trying to rewrite history to hide his screwups.

quickslowdown
0 replies
56m

That sounds like the petty, vindictive, childish type of stunt we've all grown to expect from him. That's what's making this so hard to parse out, 2 rich assholes with a history of lying are lobbing accusations at each other. They're both wrong, and maybe both right? But it's so messy because one is a colossal douche and the other is less of a douche.

laristine
13 replies
1h41m

You can sue for many reasons. For example, when a party breaks a contract, the other party can sue to compel the contract to be performed as agreed.

aCoreyJ
7 replies
1h31m

Well Elon was forced to buy Twitter that way

wand3r
4 replies
1h23m

I think this is downvoted because (and I could be wrong) he could have paid a breakup fee instead of buying the business. So he wasn't compelled to actually own and operate the business.

selectodude
0 replies
1h19m

You are wrong, I’m afraid. The breakup fee is reimbursement for outside factors tanking the deal. A binding agreement to buy means that if you arrange financing and the government doesn’t veto it, you’re legally obligated to close.

dragonwriter
0 replies
1h7m

I think this is downvoted because (and I could be wrong) he could have paid a breakup fee instead of buying the business.

No, he couldn't, the widely discussed breakup fee in the contract was a payment if the merger could not be completed for specific reasons outside of Musk’s control.

It wasn’t a choice Musk was able to opt into.

OTOH, IIRC, he technically wasn't forced to because he completed the transaction voluntarily during a pause in the court proceedings after it was widely viewed as clear that he would lose and be forced to complete the deal.

Mountain_Skies
0 replies
1h3m

It's a thread about OpenAI. Some people seem to spend their days looking for ways to make every thread about their angst over Musk purchasing Twitter, and will shove it into any conversation they can without regard for its applicability to the thread's subject. Tangent conversations happen, but they get tedious after a while when they're motivated by anger and the same ones pop up constantly. Yes, the thread is about Musk; that doesn't mean his taste in music should be part of the conversation, any more than some additional whining about him buying Twitter should be.

madeofpalk
0 replies
1h5m

No, the courts never forced anything.

It was looking like he would lose and the courts would force the sale, but the case was settled without a judgement by Elon fulfilling his initial obligation of buying the website.

burnte
0 replies
38m

No, he wasn't forced to buy Twitter, but he didn't want to pay the $1bn deal failure fee, so instead he spent $44bn to buy Twitter and drive it directly into the ground. But he COULD have just paid $1bn and walked away.

otterley
4 replies
1h35m

Specific performance is a last resort. In contract law, the bias is towards making the plaintiff whole, and frequently there are many ways to accomplish that (like paying money) instead of making the defendant specifically honor the terms of the original agreement.

nsomaru
2 replies
1h20m

Not sure about English law but in Roman law (and derived systems as in South Africa) the emphasis is on specific performance as a first resort — the court will seek to implement the intention of the parties embodied in the contract as far as possible.

Cancellation is a last resort.

dragonwriter
1 replies
1h10m

Not sure about English law but in Roman law

This is actually American law, neither English nor Roman. While it is derived from English common law, it has an even stronger bias against specific performance (and in fact bright-line prohibits some which would be allowed in the earlier law from which it evolved, because of the Constitutional prohibition on involuntary servitude.)

otterley
0 replies
1h7m

This is correct!

laristine
0 replies
1h7m

That's very interesting, thanks! I just learned that courts actually tend to grant monetary damages more frequently than specific performance in general.

However, I have always maintained that making the plaintiff whole should bias toward specific performance. At least that's what I gathered from law classes. In many enterprise partnerships, the specific arrangements are core to the business structure. For example, Bob and Alice agreed to be partners in a million-dollar business. Bob suddenly kicked Alice out without a valid reason, breaching the contract. Of course, Alice's main remedy should be to get back into the business, not to receive monetary damages that are not just difficult to measure, but also not in Alice's mind or best interest at all.

tw600040
12 replies
1h44m

that AGI, instead of benefitting the whole world, in which Musk is a part of, will end up only benefitting Microsoft, which he isn't a part of?

AlbertCory
6 replies
1h41m

I don't think that qualifies as "standing", but IANAL.

s1artibartfast
3 replies
1h26m

He was also a founding donor, so there is that.

If I have a non-profit legally chartered to save puppies, and you give me a million dollars, and then I buy myself cars and houses, I would expect you to have some standing.

sroussey
1 replies
1h7m

No, they spent $1m saving puppies, then raised more funds and did other things. That money Musk donated was spent almost a decade ago.

He has a competitor now that is not very good, so he is suing to slow them down.

s1artibartfast
0 replies
25m

It is more complex than that because they can't change what they do on a whim. Non-profits have charters and documents of incorporation, which are the rules they will operate by both now and going forward.

Why do you think that money was spent a decade ago? OpenAI wasn't even founded 10 years ago. Musk's funding was the lion's share of all funding until the Microsoft deal in 2019.

AlbertCory
0 replies
1h11m

Note that I didn't say he lacks standing. Just that your argument wasn't it.

jlmorton
1 replies
1h26m

I think the missing info here is that Musk gave the non-profit the initial $100 million, which they used to develop the technology purportedly for the benefit of the public, and then they turned around and added a for-profit subsidiary where all the work is happening.

AlbertCory
0 replies
57m

He has plenty of standing, but the "supposed to benefit all mankind" argument isn't it. If that were enough, everyone not holding stock in MSFT would have standing, and they don't.

WolfeReader
3 replies
1h13m

This is no AGI. An AGI is supposed to be the cognitive equivalent of a human, right? The "AI" being pushed out to people these days can't even count.

yaomingite
0 replies
34m

The AI is multiple programs working together, and they already pass math problems on to a data analyst specialist. There's also an option to use a WolframAlpha plugin to handle math problems.

The reason it didn't have math from the start was that it was a solved problem on computers decades ago, and they are specifically demonstrating advances in language capabilities.

Machines can handle math, language, graphics, and motor coordination already. A unified interface to coordinate all of those isn't finished, but gluing together different programs isn't a significant engineering problem.
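A toy sketch of that "glue" idea: route anything that parses as pure arithmetic to a deterministic evaluator, and fall back to the language model for everything else. This is purely illustrative — `call_llm` is a placeholder, not OpenAI's (or anyone's) actual implementation.

```python
import ast
import operator

# Map AST operator nodes to their arithmetic functions.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def eval_math(expr: str) -> float:
    """Safely evaluate a pure-arithmetic expression via the AST."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"[LLM answer to: {prompt}]"

def answer(prompt: str) -> str:
    # Try the deterministic tool first; hand everything else to the model.
    try:
        return str(eval_math(prompt))
    except (ValueError, SyntaxError, KeyError):
        return call_llm(prompt)
```

With this dispatcher, "2 * (3 + 4)" takes the exact tool path while a natural-language question falls through to the model — the same division of labor the plugin approach relies on.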

pelorat
0 replies
30m

The only reason humans can count is because we have a short term memory, trivial to add to an LLM to be honest.
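For what it's worth, the commenter's "short term memory" amounts to something like a rolling window of recent turns prepended to each stateless model call. A hypothetical sketch (`call_model` is a stub, not a real LLM API):

```python
from collections import deque

def call_model(prompt: str) -> str:
    # Placeholder for a stateless text model.
    return f"(reply given {len(prompt)} chars of context)"

class ChatMemory:
    def __init__(self, max_turns: int = 8):
        # deque with maxlen: oldest turns silently fall off,
        # acting as a bounded short-term memory.
        self.turns = deque(maxlen=max_turns)

    def ask(self, user_msg: str) -> str:
        # Serialize remembered turns and prepend them to the new prompt.
        context = "\n".join(f"{who}: {msg}" for who, msg in self.turns)
        reply = call_model(f"{context}\nuser: {user_msg}")
        self.turns.append(("user", user_msg))
        self.turns.append(("assistant", reply))
        return reply
```

Whether that constitutes "counting" in any meaningful sense is exactly what the thread is arguing about, but mechanically it is a small amount of code.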

emodendroket
0 replies
1h8m

I would agree but the filing is at pains to argue the opposite (seemingly because such a determination would affect Microsoft's license).

dmix
0 replies
1h23m

"AGI"

zoogeny
8 replies
1h20m

I don't know how comparable it would be, but I imagine if I donated $44 million to a university under the agreement that they would use the money in a particular way (e.g. to build a specific building or to fund a specific program) and then the university used the money in some other way, I feel I ought to have some standing to sue them.

Of course, this all depends on the investment details specified in a contract and the relevant law, both of which I am not familiar with.

mikeyouse
7 replies
1h13m

Yeah - Had you donated the funds as "restricted funding" in the nonprofit parlance, they would have a legal requirement to use the funds as you had designated. It seems that Musk contributed general non-restricted funding so the nonprofit can more or less do what they want with the money.. Not saying there's no case here, but if he really wanted them to do something specific, there's a path for that to happen and that he didn't take that path is definitely going to hurt his case.

zoogeny
1 replies
7m

Musk contributed general non-restricted funding so the nonprofit can more or less do what they want with the money.

Seems like "more or less" is doing a lot of work in this statement.

I suppose this is what the legal system is for, to settle the dispute within the "more or less" grey area. I would wager this will get settled out of court. But if it makes it all the way to judgement then I will be interested to see if the court sees OpenAI's recent behavior as "more" or "less" in line with the agreements around its founding and initial funding.

mikeyouse
0 replies
1m

Yeah, much of it will turn on what was explicitly agreed to and what the funds were actually used for -- but people have the wrong idea about nonprofits in general, OpenAI's mission is incredibly broad so they can do a whole universe of things to advance that mission including investing or founding for-profit companies.

"Nonprofit" is just a tax and wind-down designation (the assets in the nonprofit can't be distributed to insiders) - otherwise they operate as run-of-the-mill companies with slightly more disclosure required. Notice the OpenAI nonprofit is just "OpenAI, Inc." -- Musk's suit is akin to an investor writing a check to a robot startup and then suing them if they pivot to AI -- maybe not what he intended but there are other levers to exercise control, except it's even further afield and more like a grant to a startup since nobody can "own" a nonprofit.

foofie
1 replies
54m

(...) but if he really wanted them to do something specific (...)

Musk pledged to donate orders of magnitude more to OpenAI when he wanted to take over the organization, and he reneged on that pledge when the takeover failed, instead going the "fox and the grapes" route of accusing OpenAI of being a failure.

It took Microsoft injecting billions in funding to get OpenAI to be where it is today.

It's pathetic how Elon Musk is now claiming his comparatively small contribution granted him a stake in the organization's output, when looking back at reality shows how sharply it contrasts with his claims.

doktrin
0 replies
3m

This is a tangential point, but at least in American English the expression “sour grapes” is a shorthand for the fable you’re referring to.

SoftTalker
1 replies
1h9m

A non-profit is obligated to use any donated funds for its stated non-profit purpose. Restricted donations are further limited.

mikeyouse
0 replies
38m

Right - but OpenAI's nonprofit purpose is extremely broad;

"OpenAI's mission is to build general-purpose artificial intelligence (AI) that safely benefits humanity, unconstrained by a need to generate financial return. OpenAI believes that artificial intelligence technology has the potential to have a profound, positive impact on the world, so our goal is to develop and responsibly deploy safe AI technology, ensuring that its benefits are as widely and evenly distributed as possible."

So as long as the Musk bucks were used for that purpose, the org is within their rights to do any manner of other activities including setting up competing orgs and for-profit entities with non-Musk bucks - or even with Musk bucks if they make the case that it serves the purpose.

The IRS has almost no teeth here, these types of "you didn't use my unrestricted money for the right purpose" complaints are very, very rarely enforced.

sroussey
0 replies
1h9m

Moreover, they probably did spend the $44m on what he wanted. That was a long time ago...

boole1854
1 replies
1h46m

He doesn't have access to the GPT-4 source code and data because they decided to keep that proprietary.

cynusx
0 replies
1h17m

They will probably try to unearth that in the discovery phase

dragonwriter
0 replies
1h15m

Wouldn't you have to prove damages in a lawsuit like this?

Not really; the specific causes of action Musk is relying on do not turn on the existence of actual damages, and of the 10 remedies sought in the prayer for relief, only one includes actual damages (and some relief could be granted under it without actual damages).

Otherwise, it's seeking injunctive/equitable relief, declaratory judgment, and disgorgement of profits from unfair business practices, none of which turn on actual damages.

delfinom
0 replies
40m

Non-profit status is a government granted status and the government is we the people.

Abuse of non-profit status is damaging to all citizens.

TeeMassive
0 replies
1h27m

I didn't read the suit, but they used (and abused?) Twitter's API to siphon data that was used to train an AI, which made them very, very rich. That's just unjust enrichment. Elon's money paid for the website, and using the API at that scale cost Twitter money while they got nothing out of it.

Kranar
0 replies
1h31m

The statement of claims is full of damages. It claims that Musk donated 44 million dollars on the basis of specific claims made by the plaintiffs as well as the leasing of office space and some other contributions Musk made.

thereisnoself
63 replies
3h1m

Allowing startups to begin as non-profits for tax benefits, only to 'flip' into profit-seeking ventures is a moral hazard, IMO. It risks damaging public trust in the non-profit sector as a whole. This lawsuit is important

permo-w
15 replies
2h8m

I completely agree. AGI is an existential threat, but the real meat of this lawsuit is ensuring that you can't let founders have their cake and eat it like this. what's the point of a non-profit if they can simply pivot to making profit the second they have something of value? the answer is that there is none, besides dishonesty.

it's quite sad that the American regulatory system is in such disrepair that we could even get to this point. that it's not the government pulling OpenAI up on this bare-faced deception, it's a morally-questionable billionaire

nradov
6 replies
1h51m

There is no reliable evidence that AGI is an existential threat, nor that it is even achievable within our lifetimes. Current OpenAI products are useful and technically impressive but no one has shown that they represent steps towards a true AGI.

permo-w
2 replies
1h27m

you're aware of what a threat is, I presume? a threat is not something that is reliably proven; it is a possibility. there are endless possibilities for how AGI could be an existential threat, and many of them are extremely plausible, not just to me, but to many experts in the field who often literally have something to lose by expressing those opinions.

no one has shown that they represent steps towards a true AGI.

this is completely irrelevant. there is no solid definition for intelligence or consciousness, never mind artificial intelligence and/or consciousness. there is no way to prove such a thing without actually being that consciousness. all we have are inputs and outputs. as of now, we do not know whether stringing together incredibly complex neural networks to produce information does not in fact produce a form of consciousness, because we do not live in those networks, and we simply do not know what consciousness is.

is it achievable in our lifetimes or not? well, even if it isn't, which I find deeply unlikely, it's very silly to just handwave and say "yeah we should just be barrelling towards this willy nilly because it's probably not a threat and it'll never happen anyway"

stale2002
1 replies
1h11m

a threat is not something that is reliably proven

So then are you going to agree with every person claiming that literal magic is a threat then?

What if someone were worried about Voldemort? Like from Harry Potter.

You can't just abandon the burden of proof here, by just calling something a "threat".

Instead, you actually have to show real evidence. Otherwise you are no different from someone being worried about a fictional villain from a book. And I mean that literally.

The AI doomers truly are a master at coming up with excuses as for why the normal rules of evidentiary claims shouldn't apply to them.

Extraordinary claims require extraordinary evidence. And this group is claiming that the world will literally end.

permo-w
0 replies
37m

it's hard to react rationally to comments like these, because it's so emotive

no, being concerned about the development of independent actors, whether technically conscious or not, that can process information at speeds thousands of times faster than humans, with access to almost all of our knowledge, and the internet, is not unreasonable, is not being a "doomer" as you so eloquently put it.

this argument about fictional characters is completely non-analogous and clearly facetious. billions of dollars and the smartest people in the world are not being focused on bringing Lord Voldemort to life; they are focused on AGI. have you read OpenAI's plan for how they're going to regulate AGI, if they do achieve it? they plan to use another AGI to do it. ipso facto, they have no plan.

this idea that no one knows how close we are to an AGI threat is ridiculous. if you dressed up gpt-4 a bit and removed all its RLHF training to act like a bot, you would struggle to differentiate it from a human. yeah, maybe it's not technically conscious, but that's completely fucking irrelevant. the threat is still a threat whether the actor is technically conscious or not.

jprete
1 replies
32m

There's plenty of reliable evidence. It's just not conclusive evidence. But a lot of people including AI researchers now think we are looking at AGI in a relatively short time with fairly high odds. AGI by the OpenAI economic-viability definition might not be far off at all; companies are trying very very hard to get humanoid robots going and that's the absolute most obvious way to make a lot of humans obsolete.

nradov
0 replies
20m

None of that constitutes reliable evidence. Some of the comments you see from "AI researchers" are more like proclamations of religious faith than real scientific analysis.

“He which testifieth these things saith, Surely I come quickly. Amen. Even so, come, Lord Jesus.”

Show me a robot that can snake out a plugged toilet. The people who believe that most jobs can be automated are ivory-tower academics and programmers who have never done any real work in their lives.

criddell
0 replies
1h37m

Sure, but look at it from Musk's point of view. He sees the rise of proprietary AIs from Google and others and is worried about it being an existential threat.

So he puts his money where his mouth is and contributes $50 million to found OpenAI - a non-profit with the mission of developing a free and open AI. Soon Altman comes along and says this stuff is too dangerous to be openly released and starts closing off public access to the work. It's clear now that the company is moving to be just another producer of proprietary AIs.

This is likely going to come down to the terms around Musk's gift. He donated money for the company to create open technology. Does it matter if he's wrong about it being an existential threat? I think that's irrelevant to this suit other than to be perfectly clear about the reason for Musk giving money.

s1artibartfast
2 replies
1h31m

Most people simply don't understand what non-profit means. It doesn't mean, and never meant, that the entity can't make money. It just means that it can't make money for the donors.

Even with OpenAI, there is a pretty strong argument that donors are not profiting. For example, Elon, one of the founders and main donors, won't see a penny from OpenAI's work with Microsoft.

permo-w
1 replies
1h7m

what do you mean by "make money"? do you mean "make profit"? or do you mean "earn revenue"?

if you mean "make profit", then no, that is simply not true. they have to reinvest the money. and even if it were true, the fact that the government is so weak as to allow companies specifically designated as "non-profit" to profit investors - directly or indirectly - would simply further prove my point.

if you mean "earn revenue", I don't think anyone has ever claimed that non-profits are not allowed to earn revenue.

s1artibartfast
0 replies
6m

I mean make a profit for the non-profit, but not for owner-investors.

Non-profits don't need to balance their expenses with revenue. They can maximize revenue, minimize expenses, and grow an ever larger bank account. What they can't do is turn that bank account over to past donors.

Large non-profits can amass huge amounts of cash, stocks, and other assets. Non-profit hospitals, universities, and special interest orgs can have billions of dollars in reserve.

There is nothing wrong with indirectly benefiting the donors. Cancer patients benefit from donating to cancer research. Hospital donors benefit from being patients. University donors can benefit from hiring graduates.

The distinction is that the non-profit does not pay donors cash.

renegade-otter
2 replies
1h52m

Nuclear weapons are an existential threat - that's why there are layers of human due diligence. We don't just hook it up to automated systems. If we hook up an unpredictable, hard-to-debug technology to world-ending systems, it's not its fault, it's ours.

The AGI part is Elon being Elon, generating a lot of words to sound like he knows what he is talking about. He spends a lot of time thinking about this stuff when he is not busy posting horny teenager jokes on Twitter?

whimsicalism
1 replies
1h48m

Not getting into AGI as this is just statistical prediction

Sigh, we are still on this?? (you since edited your comment)

renegade-otter
0 replies
1h38m

Yes, I closed that can of worms.

whimsicalism
1 replies
1h49m

The key thing is that the original OAI has no investors and they are not returning profits to people who put in a capital stake.

It is totally fine and common for non profits to sell things and reinvest as capital.

permo-w
0 replies
1h1m

the key thing is that now that OpenAI suddenly has something of value, they're doing everything they possibly can to benefit private individuals and corporations, i.e. Sam Altman and Microsoft, rather than the public good, which is the express purpose of a non-profit

whimsicalism
13 replies
1h50m

The public has no idea what non-profits are and a lot of things that people call 'profit seeking ventures' (ie. selling products) are done by many non-profits.

PH95VuimJjqBqy
5 replies
1h47m

why do people in our industry always make the assumption that everyone else are morons?

The populace understands what a non-profit is.

whimsicalism
1 replies
1h41m

our industry? I know the public doesn't, because I grew up among people working in the non-profit sphere, and the things people say on here and elsewhere about what non-profits do and don't do are just flat out wrong.

e: i mean it is obvious, most people even on here do not seem to know what profit even is, for instance https://news.ycombinator.com/item?id=39563492

PH95VuimJjqBqy
0 replies
55m

this argument is unfair.

Unless you're a lawyer specializing in negligence, there is nuance to negligence you don't know about. Does that imply you don't understand negligence?

You need to separate those two things out from each other.

spywaregorilla
0 replies
1h41m

The populace can point to some obvious examples of non profits like charities. They cannot point to the nuance.

bcrosby95
0 replies
1h2m

A person is smart. People are dumb, panicky, dangerous animals.

jimbokun
3 replies
1h47m

I think the public is well aware that “non profit” is yet another scam that wealthy elites take advantage of, not available in the same way to the common citizen.

whimsicalism
2 replies
1h26m

Or at least not available to the common citizen who does not have the $50 incorporation fee

jprete
0 replies
38m

What matters isn't the money, but the knowledge of what to do with it. And that is not easily obtained by the common citizen at all.

hanniabu
0 replies
26m

Plus the lawyers and accountants to make sure it's set up properly, and the upkeep expenses

bongodongobob
2 replies
1h23m

Most frequently "The CEO gets paid $X! Doesn't sound like a non-profit to me!"

I hear this all the time. As if the people working there shouldn't be paid.

whimsicalism
0 replies
1h20m

and part of the reason we hear this all the time is because non-profits are required to report exec compensation but private cos are not required to report the absolutely ridiculous amounts their owner-CEOs are making

hanniabu
0 replies
27m

Getting paid and being paid an exorbitant amount as a grift is completely different.

abfan1127
10 replies
2h8m

if you're not profitable, there should be no tax advantage, right?

hx8
8 replies
1h58m

OpenAI was a 501(c)(3). This meant donors could give money to it and receive tax benefits. The advantage is in the unique way it can reduce the funder's tax bill.

somedude895
7 replies
1h52m

A donation is a no strings attached thing, so these donors basically funded a startup without getting any shares?

lolinder
3 replies
1h47m

Officially, yes, but the whole situation with Altman's firing and rehiring showed that the donors can exert quite a bit of control if their interests are threatened.

jprete
2 replies
31m

That wasn't the donors' doing at all, though. If anything it was an illustration of the powerlessness of the donors and the non-profit structure without the force of law backing it up.

lolinder
1 replies
30m

Microsoft is the single largest donor by a wide margin, and they were absolutely pulling the strings in that incident.

jprete
0 replies
20m

Did they donate, or did they buy equity in the for-profit arm? I thought it was the latter, and that Azure credits were part of that deal?

hackerfoo
1 replies
1h43m

Unless the donors were already owners.

Aloisius
0 replies
58m

Donors can't be owners. Nonprofits don't have shareholders.

michaelt
0 replies
1h40m

Donations are not entirely without strings. In theory (and usually in practice) a charity has to work towards its charitable goals; if you donate to the local animal shelter whose charitable goal is to look after dogs, they have to spend your donation on things like dog food and vet costs.

Charities have reasonably broad latitude though (a non-profit college can operate a football team and pay the coach $$$$$) and if you're nervous about donating you can always turn a lump sum donation into a 10%-per-year-for-10-years donation if you feel closer monitoring is needed.

mistrial9
0 replies
2h2m

no that is not the test for nonprofit status

eightnoteight
7 replies
2h8m

once it converts into profit-seeking venture, it won't get the tax benefits

one could argue that they did R&D as a non-profit and then converted to for-profit to avoid paying taxes, but until last year R&D already got tax benefits even at for-profit ventures

so there really is no tax-advantage of converting a non-profit to for-profit

andrewflnr
4 replies
2h1m

But it keeps the intangible benefits it accrued by being ostensibly non-profit, and that can easily be worth the money paid in taxes.

Otherwise, why do you think OpenAI is doing it?

eightnoteight
2 replies
1h48m

it keeps the intangible benefits it accrued by being ostensibly non-profit

but there would be no difference from a for-profit entity, right? i.e. even for-profit entities get tax benefits if they convert their profits to intangibles

this is my thinking. the OpenAI non-profit gets donations, uses those donations to make a profit, converts this profit to intangibles to avoid paying taxes, and pumps these intangibles into the for-profit entity. based on your hypothesis, OpenAI avoided taxes

but the same thing in a for-profit entity also avoids taxes, i.e. a for-profit entity uses investment to make a profit, converts this profit to intangibles to avoid paying taxes.

so I'm trying to understand how OpenAI found a loophole where, if it had gone the for-profit route, it wouldn't have gotten the tax advantages it got from the non-profit route

whimsicalism
0 replies
1h45m

this long period of OAI's non-profit status, when they were making no money and spending tons on capital expenditures, would not have been taxable anyway.

andrewflnr
0 replies
1h2m

Maybe we're using different definitions of "intangible", but if you can "convert" them to/from profits, they're not intangible in my book. I'm thinking of donated effort, people they recruited who wouldn't have signed up if the company was for-profit: mainly goodwill-related stuff.

whimsicalism
0 replies
1h46m

What benefits? What taxes?

Honestly it does not sound like anyone here knows the first thing about non-profits.

OAI did it because they want to raise capital so they can fund more towards building AGI.

svnt
1 replies
2h2m

The tax advantage still exists for the investors.

eightnoteight
0 replies
1h59m

I don't believe non-profits can have investors, only donors, i.e. an investor by definition expects money back from his investment, which he can never get out of a non-profit

only the for-profit entity of OpenAI can have investors, and they don't get any tax advantage when they eventually want to cash out

jimbokun
5 replies
1h48m

I live in Pittsburgh, and UPMC's nonprofit status, as they make billions in profits and pay their executives fortunes, is a running joke. With the hospitals and universities as the biggest employers and landowners here, a big chunk of the city's financial assets is exempt from contributing to the city budget.

whimsicalism
3 replies
1h35m

If they are non-profit, they do not make billions in profits. I suspect you mean revenue :)

Exec compensation is another thing, but also not a concern I am super sympathetic to, given that for-profit companies of similar magnitude generally pay their execs way more; they just are not required to report it.

username332211
1 replies
51m

If they are non-profit, they do not make billions in profits. I suspect you mean revenue :)

Uhm, profit is a fact of accounting. Any increase in equity (or "net assets", or whatever other euphemism the accountant decides to use) on a balance sheet is profit. Revenue is something completely different.

whimsicalism
0 replies
13m

Change in net assets is calculated the same way as net profit, but is not the same thing in an accounting sense.

Constitutive of profit is a return to private stakeholders; holding assets in reserve or re-investing in capital is not the same.

dragonwriter
0 replies
48m

If they are non-profit, they do not make billions in profits

Wrong. Non-profits are not called that because they don't make profits, they are called that because they don’t return (even as a future claim) profits to private stakeholders.

delfinom
0 replies
36m

In NYC, NYU and Columbia University are increasingly owning larger parts of Manhattan because they as universities have massive property tax exemptions. There is a big push right now to terminate those exemptions which currently amount to over $300 million per year.

At the same time they are getting these tax cuts, the CUNY public university system is struggling financially and getting budget cuts.

carlosjobim
2 replies
2h10m

I think the public already considers non-profit = scam.

mrWiz
1 replies
2h5m

I don't think the public is quite that cynical, broadly. Certainly most people consider some non-profits to be scams, and some (few, I'd reckon) consider most to be scams. But I think most people have a positive association with non-profits as a whole.

wkat4242
0 replies
39m

Absolutely. Some nonprofits are scams, but those are just the ones that have armies of collectors on the streets showing pictures of starving kids and asking for your bank details. They stay obscure and out of the limelight (e.g. advertising) because staying obscure is what keeps them from being taken down.

I think the big NGOs are no longer effective because they are run like the same corporations they fight and are influenced by the same perverse incentives. Like e.g. Greenpeace.

But in general I think non profits are great and a lot more honorable than for profit orgs. I donate to many.

rchaud
1 replies
1h7m

You are right, but regulatory sleight of hand is what passes for capitalism now. Remember Uber and Airbnb dodging regulations by calling themselves "ride-sharing" and "room-sharing" services? Amazon dodging sales taxes because it didn't have a physical retail location? Companies going public via SPAC to dodge the scrutiny of a standard IPO?

standardUser
0 replies
59m

This is not new. Companies have always done everything they can legally, and sometimes illegally, to maximize profit. If we ever expect otherwise shame on us.

rqtwteye
0 replies
1h33m

Public trust in non-profits should rightfully get damaged. A lot of non-profits like hospitals, churches or many "charities" are totally profit oriented. The only difference is that they pay the profits to their executives and their business friends instead of shareholders.

krisboyz781
0 replies
1h24m

Didn't Visa start as a non-profit?

Zigurd
0 replies
1h23m

Dual license open source software, taking new versions of open source projects off open source licenses, and open source projects with related for-profit systems management software that makes it more likely enterprise customers will pay, are common practice. How would you distinguish what OpenAI has done?

breadwinner
35 replies
2h48m

In what capacity is Musk suing OpenAI? Musk may have co-founded the company, but then he left (to avoid any potential future conflict of interest with his role as CEO of Tesla, as Tesla was increasingly becoming an AI-intensive company). Is he a shareholder, if not what gives him any say in the future of the company?

username332211
18 replies
2h28m

He's a donor to the OpenAi non-profit organization.

breadwinner
16 replies
2h23m

A donor usually is only able to say how his donation will be used. For example, if you donate to Harvard University, you can say the money will be earmarked for scholarships, but you don't get a say on how the university is managed. You can at best say you will no longer donate based on how the university is managed.

whythre
9 replies
2h15m

You can sue for basically any reason in the US. If Musk is able to prove they are mishandling the money, which I think is debatable, then the case can proceed.

Just because you donate money doesn't mean the charity or nonprofit (or whatever OpenAI is) can do as they like. They may still be committing fraud if they are not using the money in the way that they claim.

solardev
8 replies
2h7m

Don't you have to have some sort of standing in the lawsuit? If you don't directly suffer harm, I thought you'd have to convince the government to prosecute them instead?

(Not a lawyer, obviously.)

JohnFen
3 replies
1h52m

You can file a lawsuit for anything. If the lawsuit has serious fundamental flaws (such as lack of standing), then it will be dismissed pretty quickly.

psunavy03
2 replies
1h42m

Well you can also be spanked by the courts for frivolous litigation, and if it's truly frivolous, you may have a hard time finding an attorney, because they can be sanctioned for bringing such a suit as well.

whythre
0 replies
1h35m

This can happen in theory, but it is pretty rare. What you or I might call frivolous is often entertained in a court of law, and serial abusers of the court system may still file hundreds or even thousands of lawsuits. This may be for monetary gain, or to use the specter of a lawsuit as a cudgel to influence or intimidate.

This can also be exacerbated by ‘friendly’ (corrupt) courts that allow or even encourage this behavior.

deaddodo
0 replies
1h21m

It takes quite a bit of frivolous filing to get hit with any sanctions or fines.

A single frivolous lawsuit happens here and there; sanctions come when people or organizations are clearly malicious and abusing the system by filing continuous suits against others.

lucianbr
1 replies
1h49m

If Musk donated money to a nonprofit and now the nonprofit is using the money to make profit, that sounds like he was defrauded to me. They took his money under false pretenses. Not a lawyer either, so it may turn out technically he does not have standing, but naively it sure looks like he has.

I don't understand the framing of your question. Is it "since he donated, he didn't expect anything in return, so he is not harmed no matter what they do"? Kinda seems like people asking for donations should not lie about the reason for the donation, even if it is a donation.

baking
0 replies
1h22m

OpenAI has received $60 million in donations throughout its existence. $40 million came straight from Musk and the other $20 million came from Open Philanthropy. Musk has said that he donated $50 million, so he may have given $10 million to Open Philanthropy to fund their donation.

whythre
0 replies
1h38m

Harm can be all sorts of things, but taking money under false pretenses would qualify. Certainly doesn’t ensure Musk wins, but it’s enough to at least take a shot at beginning proceedings.

As for lawsuit vs criminal prosecution, the waters there are somewhat muddied. Consider the OJ case, where he was acquitted in the criminal trial and then found liable in the civil trial. Really bizarre stuff.

Personally I do think more things should be pursued criminally, but instead we seem to just be content to trade money through the courts, like an exorbitant and agonizing form of weregild.

Thrymr
0 replies
1h37m

Musk is claiming that he was a party to the founding agreement of OpenAI, and they violated that agreement.

Retric
1 replies
1h57m

A donor can sue and win in cases of fraud. Being a 501(c) isn't some shield that means any behavior is permitted.

In this case there’s a specific agreement that’s allegedly been breached. Basically they said results of AI research would be shared openly without benefiting any specific party, and then later entered into a private agreement with Microsoft.

I don’t know how binding any of this is, but I doubt this will simply be dismissed by the judge.

dragonwriter
0 replies
1h3m

Being a 501 (c) isn’t some shield that means any behavior is permitted.

It's pretty much the opposite, especially for a 501(c)(3): a substantial set of restrictions on behavior, on top of those that would face an organization doing similar things that was not a 501(c)(3).

username332211
0 replies
1h46m

I certainly hope "turning the non-profit into an LLC" is slightly different legally.

If not, I certainly hope the courts establish a clear precedent so that The Red Cross can do an IPO. Or even better, the state SPCAs. "Our unique value proposition is that we can take anyone's dog away."

simpletone
0 replies
1h43m

but you don't get a say on how the university is managed.

Depends on how big and important of a donor you are. If you are a billionaire donor, not only do you have a say in how the university is managed, you have a say on who does the managing.

You can at best say you will no longer donate based on how the university is managed.

Tell that to the former presidents of Harvard, UPenn, etc.

s1artibartfast
0 replies
1h8m

You can say how it is run if you found the University and put your conditions in the legal Charter of the organization. It is a problem if the university Chancellor later decides the primary purpose of the university is to save puppies without going through the correct process to change the charter.

ajhurliman
0 replies
2h9m

What about: "I want you to earmark this for open source AI research, and not R&D specifically aimed at making profits"

FrustratedMonky
0 replies
1h33m

Which is funny.

If you are shareholder of the non-profit, do you not get to share any of the fat gains by the profit side?

Hamuko
5 replies
2h23m

Would he have standing by having a company competing in the same space as OpenAI?

breadwinner
2 replies
2h22m

He would have the opposite of a standing, right? It seems he wants to slow down OpenAI so that his competing company can catch up.

Hamuko
0 replies
2h6m

I mean, if I run a fridge company and another fridge company is doing something nefarious, I'd have more of a claim for damages than someone that runs a blender company, right? That's at least my layperson's interpretation. Since Musk is suing for "unfair business practices".

I also found this: https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?arti...

Representative of its remedial objectives, the [Unfair Competition Law] originally granted standing to "any person" suing on behalf of "itself, its members, or on behalf of the general public." This prompted a public outcry over perceived abuses of the UCL because the UCL granted standing to plaintiffs without requiring them to show any actual injury. In response, California voters approved Proposition to amend the UCL to require that the plaintiff prove injury from the unfair practice. Despite this stricter standing requirement, both business competitors and consumers may still sue under the UCL.
Gormo
0 replies
1h57m

Or, looking at it the other way, he is complaining that a non-profit organization he donated funds to has allocated those funds to engage in for-profit business that directly competes with his own. Viewed that way, he ought to have extra standing.

sixQuarks
1 replies
1h45m

He funded it in the first place so it could achieve AGI. Why would he want to stop that? Because the whole point of donating was to make sure it was an open-sourced AGI that anyone could have access to. Grok was a response to OpenAI going both woke and for-profit.

krisboyz781
0 replies
1h26m

Woke this and that. You and Elon are SALTY. Go use Grok since everything is so woke. Elon didn't start Tesla yet took over the company. That's just the cost of doing business

jcranmer
4 replies
1h56m

The essential theory of the case is that OpenAI is misusing the funds Musk donated to it.

reads prayer for relief

For a judicial determination that GPT-4 constitutes Artificial General Intelligence

Okay, WTF? I'm going to have to read the entire complaint now.....

manquer
0 replies
1h18m

AGI as defined narrowly by OpenAI, Microsoft, et al. for their contracts, not as scientists would define it.

While I don't think we are close to AGI, we also have to acknowledge that the term's meaning and goalposts are forever shifting; even 10 years back, passing a Turing test would have been considered sufficient, obviously not anymore.

The scientific and public understanding is changing constantly, and a court would have difficulty making a decision if there is no consensus; it only has to see whether the contractual definition has been met

empath-nirvana
0 replies
1h25m

Man is that a big juicy meatball if you're a judge, though. Who would not love to hear that case.

bloggie
0 replies
1h21m

I think OpenAI has been using the excuse that GPT-4 is not AGI, and therefore can remain closed-source.

SonOfLilit
0 replies
1h28m

I assume this is because OpenAI committed to do certain things if and when they build AGI.

wyantb
1 replies
1h39m

Breach of contract seems to be the major one - from https://www.scribd.com/document/709742948/Musk-vs-OpenAI page 34 has the prayers for relief. B and C seem insane to me, I don't see how a court could decide that. On the other hand, compelling specific performance based on continual reaffirmations of the founding agreement (page 15)...seems viable at a glance. Musk is presumably a party to several relevant contracts, and given his investment and efforts, I could see this going somewhere. (Even if his motivations are in fact to ding Microsoft / spite Altman).

IANAL

gamblor956
0 replies
12m

The "reaffirmations" referred to on page 15 don't mean anything. Altman merely said he was "enthusiastic" about the nonprofit structure, not that he was limiting OpenAI to it. And notably, the "I" in that quote is bracketed, meaning that Altman did not actually say "I" in his response to Musk (in legal documents, brackets in quotes mean that the quote has been altered between the brackets). Furthermore, despite the headline to that section claiming "repeat" reaffirmations, based on the facts as presented by Musk's own lawyers, Altman only potentially reaffirms the nonprofit structure once...

And the other individuals aren't even quoted, which is strong evidence that they didn't actually say anything even remotely in support of "reaffirming" the nonprofit structure (especially given the quotes his lawyers chose to include in the complaint) and that Musk is unilaterally characterizing whatever they actually said to support his claims, however reasonable or unreasonable that may be.

Due to the money at stake, and given that both Musk and Altman have serious credibility issues that would make a trial outcome impossible to predict, I expect this to be settled by giving Musk a bunch of stock in the for-profit entity to make him shut up.

tiahura
0 replies
2h16m

If only there was a document that you could refer to to inform your post.

Thrymr
0 replies
1h40m

He is suing for breach of agreement, namely the founding agreement that formed OpenAI as a nonprofit.

ungreased0675
18 replies
4h45m

I do wonder if OpenAI is built on a house of cards. They aren’t a nonprofit, aren’t open, and stole a huge quantity of copyrighted material to get started.

But, by moving fast and scaling quickly, are they at the Too Big to Fail stage already? The attempted board coup makes me think so.

a_humean
10 replies
1h57m

Openai isn't even close to too big to fail. Bank of America fails the entire banking system collapses and the entire real economy grinds to a halt. If GM fails hundreds of thousands lose their jobs and entire supply chains collapse. If power utilities fail then people start actually dying within hours or days.

If OpenAI fails nothing actually important happens.

whimsicalism
7 replies
1h43m

you cannot erase that much value and say "nothing important happens"; market cap is largely a rough proxy for the amount of disruption if something went under

dralley
3 replies
1h32m

"whose" money matters here. It's VC money, mostly. Well-capitalized sophisticated investors, not voters and pension funds.

If Microsoft loses 30 billion dollars, it ain't great, but they have more than that sitting in the bank. If Sequoia or Ycombinator goes bankrupt, it's not great for lots of startups, but they can probably find other investors if they have a worthwhile business. If Elon loses a billion dollars, nobody cares.

whimsicalism
2 replies
1h30m

It is VC money pricing in the value of this enterprise to the rest of society.

Moreover, if capital markets suddenly become ways to just lose tons of money, that hurts capital investment everywhere, which hurts people everywhere.

People like to imagine the economy as super siloed and not interconnected but that is wrong, especially when it comes to capital markets.

lolc
1 replies
34m

In the case of OpenAI it's potential value that investors are assessing, not value. If they folded today, society would not care.

And as for the whole idea of "company value equals value to society", I see monopolies and rent seeking as heavy qualifiers on that front.

whimsicalism
0 replies
11m

I agree with both of those points; it is a very rough proxy (I've edited my original). Future value is still important though.

Thrymr
2 replies
1h31m

I do not think the situation is remotely comparable to the possibility of the banking system collapsing. Banks and other financial institutions exert leverage far beyond their market caps.

whimsicalism
1 replies
1h28m

But they are also extremely substitutable because they deal in the most fungible commodities ever made (deposits and dollars).

layer8
0 replies
1h2m

The good thing about AI is that it is substitutable by humans.

yunwal
0 replies
1h45m

I mean, there's about a hundred thousand startups built on top of their API. I'm sure most could switch to another model if they really needed, but if copyright is an issue, I'm not sure that would help.

clbrmbr
0 replies
1h13m

Yet. But we are getting close to an event horizon, once enough orgs become dependent on their models.

Open source models are actually potentially worse. Even if OAI is not TBTF because of the competition, we have a scenario where the AGI sector as a whole becomes TBTF and too big to halt.

bdjsiqoocwk
2 replies
2h45m

When people say "too big to fail", normally they're referring to companies which, if they failed, would bring down other important parts of society's infrastructure (think the biggest banks), and so someone (the government) will change the rules around at the last minute to ensure they don't fail.

If OpenAI fails, absolutely nothing happens other than its shareholders losing their paper money. So no, they're not too big to fail.

svnt
1 replies
1h59m

If they fail, other entities with little to no American oversight/control potentially become the leading edge in artificial intelligence.

bdjsiqoocwk
0 replies
58m

I find your lack of faith in America disturbing.

Hamuko
2 replies
4h33m

Why would OpenAI be "too big to fail"? They seemed pretty close to failing just some months ago.

whimsicalism
0 replies
1h42m

I think that is actually quite illustrative of the opposite point.

tiahura
0 replies
1h46m

Right, and then a bunch of unclearly identified forces came in and swept it all under the rug.

magoghm
0 replies
1h12m

"stole a huge quantity of copyrighted material" <- nobody stole anything, even if it's eventually determined that there was some form of copyright infringement it wouldn't have been stealing

epistasis
18 replies
1h39m

The defendant list is a bit bewildering. How usual is a corporate structure like this? Which, if any of these, is the nonprofit?

  OPENAI, INC., a corporation, 
  OPENAI, L.P., a limited partnership, 
  OPENAI, L.L.C., a limited liability company, 
  OPENAI GP, L.L.C., a limited liability company, 
  OPENAI OPCO, LLC, a limited liability company, 
  OPENAI GLOBAL, LLC, a limited liability company, 
  OAI CORPORATION, LLC, a limited liability company, 
  OPENAI HOLDINGS, LLC, a limited liability company,

zitterbewegung
10 replies
1h35m

The organization consists of the non-profit OpenAI, Inc. registered in Delaware and its for-profit subsidiary OpenAI Global, LLC. (From Wikipedia)

debacle
9 replies
1h34m

A non-profit can have a for-profit subsidiary?

whimsicalism
0 replies
1h23m

yes, it's common, and why not? i don't think most people here know what non-profits are or actually do

manquer
0 replies
1h30m

Mozilla has been doing that for 20 years?

jraph
0 replies
1h29m

This would be the case of Mozilla (The Mozilla Foundation owns the Mozilla Corporation)

deaddodo
0 replies
1h31m

Yes.

blcknight
0 replies
1h31m

Yes

biccboii
0 replies
58m

why doesn't everyone do this? take all that sweet investor money without having to give anything back, then have a for-profit subsidiary...

Kranar
0 replies
1h15m

Absolutely, Mozilla is another relevant example where the Mozilla Foundation is a non-profit that owns the Mozilla Corporation, which is for-profit. Furthermore many non-profits also buy shares of for-profit corporations, for example the Gates Foundation owns a large chunk of Microsoft.

You can imagine a non-profit buying enough shares of a for-profit company that it can appoint the for-profit company's board of directors, at which point it's a subsidiary.

Heck, a non-profit is even allowed and encouraged to make a profit. There are certainly rules about what non-profits can and can't do, but the big rule is that a non-profit can't distribute its profits, i.e. pay out a dividend. It must demonstrate that its expenditures support its tax-exempt status, but the for-profit subsidiary is more than welcome to pay out dividends or engage in activities that serve private interests.

zuminator
1 replies
1h34m

*defendant list

epistasis
0 replies
1h28m

Oops, that's a bit embarrassing! Thanks for the correction.

epistasis
0 replies
1h24m

Thanks, that's very helpful, I had not seen the diagram on OpenAI's website before.

It explains at least three of the entities, but I do wonder about the purpose of some of the others. For example, a limited partnership is quite odd to have hanging around; I'm wondering what part it plays here.

manquer
0 replies
1h30m

Depends on the sector, size and age of the corporation.

In crypto these kinds of complex structures are fairly common; FTX had some 180 entities. Real estate companies like Evergrande have similar complexities.

Companies which do a lot of acquisitions will have a lot of entities and may keep them around for accounting reasons.

Consulting companies, including the big ones, have similarly complex structures: each business has its own partners who get a cut of the profits directly and pay only some back to the parent.

Hollywood also does such complex accounting for a variety of reasons.

Compared to peers in the AI space this is probably unusual, but none of them started as a non-profit. The only somewhat comparable analogy is perhaps Mozilla (a nonprofit tech org with a huge for-profit sub), but they are not this complex, and they also don't have the kind of restrictions on founding charter / donor money that OpenAI does.

hattmall
0 replies
1h15m

It's incredibly common; there are probably even more, but these are the most asset-rich entities. If properly structured, even something like a local gym is going to be 6-8 entities. I took multiple entire classes dedicated to corporate structure. Multiple entities are needed to maximize liability protection and for tax-avoidance purposes.

BoppreH
0 replies
1h17m

Probably not common, since that aspect is part of the complaint. See "E. OpenAI’s Shifting Corporate Structure":

70. In the years following the announcement of the OpenAI, L.P., OpenAI’s corporate structure became increasingly complex.
helsinkiandrew
17 replies
5h35m

OpenAI is also being investigated by the SEC. If "Altman hadn't been consistently candid in his communications with the board" is interpreted as being misleading, then that could be interpreted as misleading investors and therefore securities fraud.

https://www.wsj.com/tech/sec-investigating-whether-openai-in...

klysm
11 replies
1h41m

Isn’t everything securities fraud though

dartos
6 replies
1h15m

The only way to truly make a ton of wealth is to break rules that others follow.

lanstin
5 replies
55m

This statement represents the complete disintegration of the optimism that ruled in the 90s and before, when we ardently believed that networking and communication amongst people would increase understanding and improve lives by ensuring no one would be cut off from the common wisdom and knowledge of humanity.

While robber baron economics certainly appeal to a lot of robber barons, the twentieth century pretty decisively shows that prosperity at the median makes society progress much faster and more thoroughly than anything else. One used to hear of noblesse oblige, the duty of those with much to help. One used to hear about the great common task of humanity, which we aspire to make a contribution to.

rvba
1 replies
7m

optimism that ruled in the 90s

This is such an interesting take, about which we could probably write whole paragraphs.

Can the 90s really be summarized in such a way? Yes, we had the "information highway" and "waiting for year 2000", but at the same time people distrusted their governments. X-Files was all the rage, maybe grunge.

In the USA there was Bill Clinton, the president that didn't do any wars and balanced the budget... who got removed for blowjobs. But at the same time there was outsourcing. The rest of the world also cannot be summed up so easily; I remember that the 90s were a struggle, especially for post-communism countries.

Obviously later on we got cell phones, but we also got the cancer such as Jack Welch-style management that led to various methods of enshittifying everything.

I had a talk some time ago - I have a genuine polo bought in a supermarket in the 1980s (won't tell the brand since it is irrelevant). This piece of clothing feels and fits very well, after 40 years. It was worn through many summers. Now I can't buy a polo shirt that will last more than 2 seasons. And I buy the "better" ones. There is lots of crap that falls apart fast. For me the 90s were the start of that trend: enshittification of products that are designed to last 25 months (with a 24-month guarantee) and be thrown away.

But maybe it depends on life experience and anecdotes.

Was there optimism in the 90s? Lots of it in marketing materials. But did people really believe it?

seanmcdirmid
0 replies
4m

who got removed for blowjobs.

He was impeached but not removed.

Clubber
1 replies
33m

we ardently believed that networking and communication amongst people would increase understanding and improve lives by ensuring no one would be cut off from the common wisdom and knowledge of humanity

How is this not true?

dartos
0 replies
22m

I don’t think they were saying that isn’t true.

Just that the world doesn’t (appear) operate with that in mind anymore.

I’d argue it never really did.

dartos
0 replies
54m

Welcome to the 21st century.

Get that optimism out of here.

The game was rigged in the 90s as well (with the likes of Enron; many executives got a few years in minimum-security prison in exchange for a small fortune), there was just less dissemination of information.

ericjmorey
1 replies
1h2m

Even this comment?

klysm
0 replies
4m

Probably, as I’m not optimizing shareholder value

rmbyrro
0 replies
1h39m

When the SEC wants it to be, yes.

meesles
0 replies
42m

Sure is, in an oligarchy disguised as a free market.

whimsicalism
3 replies
1h22m

the statements made by the board were likely sufficient to trigger investigation, and the current iteration of the government (2010+) wants to have dirt on anything this big

screenobobeano
2 replies
59m

Still not as big as Halliburton. I feel it's the opposite: the government isn't detaining these obvious fraudsters, and now they run amok.

wkat4242
1 replies
45m

Halliburton was protected by the government because their top dogs were literally running the country. It's a very different scenario from OpenAI.

bloopernova
0 replies
18m

Halliburton was protected by the government because their top dogs were literally running the country.

Let's not give Sam Altman any ideas!

emodendroket
0 replies
1h15m

Nothing I've read about that whole kerfuffle suggests that "investors" were the main people the ousted board members cared about. It kind of seems like reading significance into the original text that wasn't intended.

baking
8 replies
1h58m

I assume you mean Exhibit 2, the email from Sam to Elon.

alickz
7 replies
1h15m

From: Elon Musk

To: Sam Altman

Subject: AI Lab

Agree on all

On Jun 24, 2015, at 10:24 AM, Sam Altman wrote:

1. The mission would be to create the first general AI and use it for individual empowerment, ie, the distributed version of the future that seems the safest. More generally, safety should be a first-class requirement.

2. I think we'd ideally start with a group of 7-10 people, and plan to expand from there. We have a nice extra building in Mountain View they can have.

3. I think for a governance structure, we should start with 5 people and I'd propose you, [blank], and me. The technology would be owned by the foundation and used "for the good of the world", and in cases where it's not obvious how that should be applied the 5 of us would decide. The researchers would have significant financial upside but it would be uncorrelated to what they build, which should eliminate some of the conflict (we'll pay them a competitive salary and give them YC equity for the upside). We'd have an ongoing conversation about what work should be open-sourced and what shouldn't. At some point we'd get someone to run the team, but he/she probably shouldn't be on the governance board.

4. Will you be involved somehow in addition to just governance? I think that would be really helpful for getting work pointed in the right direction and getting the best people to be part of it. Ideally you'd come by and talk to them about progress once a month or whatever. We generically call people involved in some limited way in YC "part-time partners" (we do that with Peter Thiel for example, though at this point he's very involved) but we could call it whatever you want. Even if you can't really spend time on it but can be publicly supportive, that would still probably be really helpful for recruiting.

5. I think the right plan with the regulation letter is to wait for this to get going and then I can just release it with a message like "now that we are doing this, I've been thinking a lot about what sort of constraints the world needs for safety." I'm happy to leave you off as a signatory. I also suspect that after it's out more people will be willing to get behind it.

Sam

sroussey
5 replies
1h4m

Wait, Peter Thiel is/was heavily involved in YC?

lenerdenator
2 replies
39m

They're all buddies. It's an industry/regional oligarchy. Part of the system is you cut the rest of "the club" in on deals. If you don't, you get what's happening here: lawsuits.

Zanneth
1 replies
6m

Maybe a better way to say it is: when you’re investing millions of dollars in risky ventures, reputation is very important.

lenerdenator
0 replies
2m

I'm thinking their reputations are a bit different in their heads than in reality.

ep103
0 replies
59m

Sauron does have a habit of consistently appearing, and consistently appearing where least expected.

rancour
0 replies
21m

Based mountain view lords

Jun8
2 replies
1h22m

"I think for a governance structure, we should start with 5 people and I'd propose you, [REDACTED], [REDACTED], [REDACTED], and me. Technology would be owned by the foundation and used "for the good of the world", and in cases where it's not obvious how that should be applied the 5 of us would decide."

You can find the number of letters of the redacted text and then guess who they are. It's fun!

emodendroket
1 replies
1h6m

"for the good of the world", and in cases where it's not obvious how that should be applied the 5 of us would decide."

It's hard not to be a bit cynical about such an arrangement.

lenerdenator
0 replies
41m

It's a very pre-2016 view of the tech industry, for sure.

Back when the public at least somewhat bought the idea that SV was socially progressive and would use its massive accumulation of capital for the good of humanity.

scintill76
1 replies
41m

Off-topic, but what are the <!--[if !supportLists]--> doing there? I gather it's some MSOffice HTML stuff, but did it actually show up in the rendered email, or is it some artifact of the archival process(?) for legal discovery?

okhuman
14 replies
5h46m

AI is going to continue to make incremental progress, particularly now with hardware gains. No one can even define what AGI is or what it will look like, let alone whether it would be something OpenAI could own. Progress is too incremental to suddenly pop out with "AGI". Fighting about it seems a distraction.

root_axis
11 replies
2h20m

There's also no reason to believe that incremental progress in transformer models will eventually lead to "AGI".

snapcaster
10 replies
2h10m

Yes, but I think everyone would agree that the chance isn't 0%

root_axis
4 replies
2h4m

I don't agree, I think many people would argue the chance is 0%.

snapcaster
2 replies
2h1m

Are you one of those people? How can you be so confident? I think everyone should have updated their priors after seeing how surprising the emergent behavior in GPT3+ is

root_axis
0 replies
1h47m

I don't think GPT3's "emergent behavior" was very surprising, it was a natural progression from GPT2, and the entire purpose of GPT3 was to test the assumptions about how much more performance you could gain by growing the size of the model. That isn't to say GPT3 isn't impressive, but its behavior was within the cone of anticipated possibilities.

Based on a similar understanding, the idea that transformer models will lead to AGI seems obviously incorrect, as impressive as they are, they are just statistical pattern matchers of tokens, not systems that understand the world from first principles. And just in case you're among those that believe "humans are just pattern matchers", that might be true, but humans are modeling the world based on real time integrated sensory input, not on statistical patterns of a selection of text posted online. There's simply no reason to believe that AGI can come out of that.

nicklecompte
0 replies
1h50m

Perhaps you should update your priors about "emergent behavior" in GPT3+: https://arxiv.org/abs/2304.15004

JohnFen
0 replies
1h48m

I don't think the chance is 0%, but I do think that the chance is very, very close to 0%, at least if we're talking about it happening with current technology within the next hundred years or so.

pton_xd
1 replies
1h57m

It's a static single-pass feed-forward network. How could that possibly result in AGI?! (Cue famous last words ...)

Akronymus
0 replies
1h45m

IMO it could become an AGI IFF it has an infinitely long context window. Otherwise I see absolutely no chance of it becoming a true AGI

nicklecompte
1 replies
1h52m

Transformer neural networks are not capable of true recursion, which is an excellent reason to think that the chance truly is 0%.

CamperBob2
0 replies
1h44m

That seems easy enough to fix

jayveeone
0 replies
2h1m

Non-zero chances don't deserve the hype AGI is receiving, is the issue.

And a lot of AI experts outside of the AGI grift have stated that it's zero.

xiphias2
1 replies
1h52m

Progress is definitely not incremental, it's exponential.

The same performance (training an LLM with a given perplexity) can be achieved 5x cheaper next year while the amount of money deep learning infrastructure gets increases exponentially right now.

If this method is able to get to AGI (which I believe, but many people are debating), human intelligence will just be mostly "skipped", and won't be a clear point.
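A quick arithmetic sketch of how that claimed 5x-per-year cost drop compounds. The 5x figure is the comment's own number, and the $100M starting cost is a made-up placeholder purely for illustration:

```python
# Hypothetical compounding of a claimed 5x/year cost reduction for
# training an LLM to a fixed perplexity. Starting cost is illustrative.
initial_cost_musd = 100.0
costs = [initial_cost_musd / 5 ** year for year in range(4)]
print(costs)  # -> [100.0, 20.0, 4.0, 0.8], i.e. 125x cheaper after three years
```

Whether the trend actually holds that long is exactly what the replies below dispute.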

blibble
0 replies
1h41m

how long do you think the "exponential" (that looks very linear to me) growth in funding can continue?

until it's more than US GDP? world GDP? universe GDP?

either way you're close to the point it will have to go logistic

ZiiS
3 replies
1h34m

When the world’s richest man sues you, being a saint wouldn't be a reliable defence.

mullingitover
1 replies
1h28m

Wait, Bernard Arnault is suing them too?

JKCalhoun
0 replies
1h20m

Kind of getting tired of litigious billionaires.

nightowl_games
2 replies
4h56m

Maybe the discovery process will benefit Musk and/or harm OpenAI sufficiently to consider it a "win" for Musk. Or perhaps it's just Musk wanting to make a statement. Maybe Musk doesn't expect to actually win the suit.

pquki4
1 replies
1h42m

I wonder if the lawsuit will simply be dismissed.

AlbertCory
0 replies
1h39m

The standard first move by a defendant is a Motion to Dismiss. So of course they'll try that. Don't read too much into it.

jimbokun
1 replies
1h43m

But in this case, it wasn’t a company, but a nonprofit.

staticautomatic
0 replies
7m

Nonprofit is a tax status, not a corporate structure.

justinclift
0 replies
1h39m

Has there been a successful suit against a company for "abandoning their founding mission"?

Probably depends on how much money the person behind the suit is willing to spend.

Elon could likely push stuff a lot further along than most.

dclowd9901
0 replies
1h26m

In the publicly traded world, it would be considered securities fraud, an umbrella under which you can pretty much sue a company for anything if you’re a shareholder.

I’m not sure if there’s an equivalent in the private world, but if he gave them money it’s possible he simply has standing for that reason (as a shareholder does).

achow
11 replies
10h16m

Microsoft gained exclusive licensing to OpenAI's GPT-3 language model in 2020. Microsoft continues to assert rights to GPT-4, which it claims has not reached the level of AGI, which would block its licensing privileges.

Not sure this is a common knowledge - MSFT licence vis-a-vis AGI.

rickdeckard
10 replies
9h16m

It's described here: https://openai.com/our-structure

Quote:

  Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.

> "Musk claims Microsoft's hold on Altman and the OpenAI board will keep them from declaring GPT-4 as a AGI in order to keep the technology private and profitable."

Well.....sounds plausible...

jp_nc
5 replies
5h17m

If he thinks GPT-4 is AGI, Elon should ask a team of GPT-4 bots to design, build and launch his rockets and see how it goes. If “economically valuable work” means creating terrible, wordy blog posts then yeah I guess it’s a risk.

bart_spoon
3 replies
4h40m

I don’t think GPT-4 is AGI, but that seems like a foolish idea. An AGI doesn’t need to be hyperproficient at everything, or even anything. Ask a team of any non-aeronautical engineers to build a rocket and it will go poorly. Do those people not qualify as intelligent beings?

blibble
1 replies
1h52m

Ask a team of any non-aeronautical engineers to build a rocket and it will go poorly. Do those people not qualify as intelligent beings?

I suspect you'd have one person on the team that would say "perhaps you'd be better choosing a team that knows what they're doing"

meanwhile GPT-4 would happily accept and emit BS

ToValueFunfetti
0 replies
44m

Have you used GPT-4? I'd criticize it in the opposite direction. It routinely defers to experts on even the simplest questions. If you ask it to tell you how to launch a satellite into orbit, it leads with:

Launching a satellite into orbit is a complex and challenging process that requires extensive knowledge in aerospace engineering, physics, and regulatory compliance. It's a task typically undertaken by governments or large corporations due to the technical and financial resources required. However, I can give you a high-level overview of the steps involved:

spencerflem
0 replies
1h39m

Outperforming humans does not mean outperforming an average untrained human

Petersipoi
0 replies
1h50m

You're just highlighting the issue: nobody can agree on the definition of AGI. Most people would agree that being able to design, build, and launch rockets is definitely _not_ the definition. The fact that M$ has such a strong hold over OpenAI means they won't declare anything an AGI even if most people would say it is.

sokoloff
1 replies
1h45m

Does anyone credibly believe that GPT-4 "outperforms humans at most economically valuable work"?

baobabKoodaa
0 replies
1h38m

No.

ks2048
1 replies
2h2m

I'm surprised such an important legal issue here is based on the definition of "AGI", seems really hard to define (I really think the concept is flawed). Does this consider that "most economically valuable work" is physical? And more importantly, with such money on the line, no one will agree on when AGI is attained.

a_wild_dandan
0 replies
41m

Altman himself said it's nebulous and hates the term.

sensanaty
10 replies
1h17m

I don't even particularly like Musk, but I definitely despise M$ and their comic-book tier villanous shenanigans even more. Here's to hoping M$ gets fucked in this.

swozey
6 replies
1h14m

The mental gymnastics that some tech bro has to go through to like Musk over Microsoft to the point that they still use the 2001 meme of "M$FT" is hilarious.

I know just about everything I could ever need to know about both companies and I have tons, tons of friends who absolutely love and have been at "M$FT" for 5-20 years.

I don't know a single person who likes working at Tesla or SpaceX and I used to live in Austin.

I'm also a literal linux kernel contributor so I don't have any bone in the game for Windows.

Musk is literally spitting right-wing nazi, anti-trans trash all over twitter and using his new news medium as a right wing mind meld tool while unbanning known anti-semites and racists like Kanye and Trump. Cool guy. I guess you might not care about that when you're a middle-class straight white tech bro on hackernews and might think M$FT is the big bad guy because Bill Gates locked you into Internet Explorer and adware 15 years ago.

sahila
2 replies
51m

Liking the environment at Microsoft is very different than liking what the company does. I know far more people excited about space x than whatever Microsoft is doing and none uses any Microsoft products whereas tons of them opted into buying a Tesla!

Working at Microsoft is considered easy work whereas it's the opposite for Elon's companies. Doesn't make him a bad person.

swozey
1 replies
25m

Doesn't make him a bad person.

My god. The little apartheid clyde isn't a bad person. Love it. How's your Model 3?

Musk gives us "hard" work. We should love being abused because we get to work on rockets!

Company towns for everyone! Giga, TX!

sahila
0 replies
5m

I don't understand why you're angry? It's hard to say someone working at Elon's companies are abused; they're talented and could easily quit and get a new job. And Giga is closer to Austin's downtown than Apple's campus in north Austin.

Hope you're well!

sensanaty
1 replies
1h7m

Wow the people employed by the evil gigacorporation like working for the entity shoveling mountains of money at them, what a completely unexpected stance for them to have.

M$ is no different today than they were in the days of their EEE strategy, they've just fooled the techbros, as you put it, into believing they're still not the scum of the earth anymore.

swozey
0 replies
41m

Microsoft is by far the lowest paying FAANG there is right next to Amazon. You make $150k+ less working at MS than Google, Meta, Apple, Nvidia etc.

Nobody reading HN who works at Microsoft is making killer money.

shutupnerd0000
0 replies
1h6m

Glad you're not a figurative linux kernel contributor

Anarch157a
2 replies
1h9m

Musk is even more cartoonishly villainous than MS. This is pretty much Lex Luthor against Darkseid. Unless it results in mutual destruction, doesn't matter who wins, everybody else loses.

My take is that Elon is suing OpenAI because he left OpenAI before they opened a commercial venture, which means he doesn't benefit from the company's current valuation, so he's using the courts to try to strong-arm the foundation into giving him some shares, basically using the courts for harassment purposes.

I'm hoping for both to get fucked, and if this takes this whole "AI" hype away with them, so much the better.

sensanaty
1 replies
1h2m

Yeah that'd be the ideal turn of events, if both M$ and Elon got fucked by this

In my dream world we'd nuke M$ from orbit and splinter it into a trillion tiny little pieces... A man can dream

iExploder
0 replies
45m

We can nuke it from the self sustaining colonies on Mars established by Musk in 2020...

r721
10 replies
1h57m

What happened with the ranking? Were there people who flagged all the stories, even the first (obviously newsworthy) one?

dang
9 replies
1h56m

The 30 or so submissions of this story all set off a bunch of software penalties that try to prune the most repetitive and sensational stories off the front page. Otherwise there would be a lot more repetition and sensationalism on the fp, which is the opposite of what HN is for (see explanations via links below if curious).

The downside is that we have to manually override the penalties in the case of a genuinely important story, which this obviously is. Fortunately that doesn't happen too often, plus the system is self-correcting: if a story is really important, people will bring it to our attention (thanks, tkgally!)

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...

SilasX
5 replies
1h44m

Sometimes I wish you'd prune the users (at least from submission privileges) who can't be bothered to search first, which is how you get these n-fold submissions.

dang
3 replies
1h42m

That's impossible. We'd have to prune human nature.

ceocoder
2 replies
1h39m

Now that is a startup idea, can I apply to next batch with it?

dang
1 replies
1h37m

Sounds pretty unethical, so I don't think YC would fund it.

winwang
0 replies
1h27m

Shouldn't we think in terms of the post-pruning ethics? :D

resonantjacket5
0 replies
1h35m

I mean, as on Reddit and other platforms, one could run a simple search before the post is submitted and prompt "seems like this article has already been submitted", with a checkbox if they want to bypass it.

netcraft
1 replies
1h40m

Ideating out loud: I wonder if there'd be a way to "collapse" all of the different articles submitted within some short time frame into one story, and maybe share the karma between them. In the case of breaking news it sucks to submit an article and not be the one that gets "blessed", and different articles could conceivably offer different valuable viewpoints. I'm sure it would be more complicated than that when it came to it, though.

dang
0 replies
1h36m

It's on the list!

tailspin2019
0 replies
1h51m

Thanks for the explanation - I was just wondering the exact same thing as the parent.

andsoitis
6 replies
3h43m

Many would argue, reasonably so, that OpenAI is now a de facto subsidiary of the largest company in the world by market cap, Microsoft (Apple is second and Saudi Arabian Oil is third).

xiphias2
4 replies
1h47m

Actually NVIDIA just took over Saudi Aramco, but they are sharing 3rd position.

TechPlasma
3 replies
1h12m

I read this comment as "NVIDIA just acquired Saudi Aramco" and was briefly confused as to what could possibly be their reasoning for that acquisition?! Perhaps they decided to get some weird lower price on fuel for their GPUs... Anyways, it was a brief but fun bit of confusion.

wkat4242
0 replies
34m

I did too. It's confusing wording. Overtook would be much clearer as someone else mentioned.

delfinom
0 replies
39m

To be fair, this is the time for NVIDIA to leverage their stock and buy up shit to diversify because their stock is going to correct eventually lol. So why not buy out an oil producer ahaha.

Qworg
0 replies
49m

This was a good opportunity to say "overtook" as it breaks the idea of acquisition (and sounds more like racing)

jedberg
0 replies
30m

The value of Saudi Aramco can't really be trusted. It's listed on their own stock market which the company controls. It has no reporting requirements, and the float is a single digit percent of the company.

It would be the same as me creating my own market, issuing 10,000,000,000 shares, and then convincing 1000 people to buy a share at $100 and then claiming my company is worth $1T.
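The arithmetic behind that hypothetical is worth spelling out; a minimal sketch using the comment's made-up numbers:

```python
# A "market cap" is simply share price x total shares outstanding,
# no matter how thin the actually-traded float is.
shares_outstanding = 10_000_000_000  # shares issued in the hypothetical
share_price = 100                    # dollars paid by each buyer
traded_float = 1_000                 # shares that actually changed hands

market_cap = shares_outstanding * share_price
cash_raised = traded_float * share_price
print(market_cap)   # -> 1000000000000, the claimed "$1T" valuation
print(cash_raised)  # -> 100000, only $100k actually paid in
```

A $1T headline number backed by $100k of real trades, which is the commenter's point about single-digit floats.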

Tomte
6 replies
1h24m

GPT-4 is an AGI algorithm

That claim is audacious.

jeron
2 replies
1h21m

Would be insane if the court case ended up hinging on whether AGI had been achieved internally or not

Tomte
1 replies
1h19m

It won't, that's pretty much a throwaway half-sentence, but it stood out to me.

You throw a lot of things at the judge and see what sticks.

iExploder
0 replies
1h4m

Could lead to juicy depositions and be subject to discovery. I wonder if Ilya makes an appearance.

VirusNewbie
1 replies
46m

Why? AGI itself makes no claim of super intelligence or even super human capabilities.

tills13
0 replies
40m

It's missing the "I" part -- it's a token prediction scheme using math.

Bjorkbat
0 replies
48m

I mean, I definitely disagree with the statement that GPT-4 is an AGI, but OpenAI themselves define an AGI in their charter as an AI that is better than the median human at most economically valuable work.

Even when taking that into consideration I don't consider GPT-4 to be an AGI, but you can see how someone might attempt to make a convincing argument.

Personally though, I think this definition of AGI sets the bar too high. Let's say, hypothetically, GPT-5 comes out, and it exceeds everyone's expectations. It's practically flawless as a lawyer. It can diagnose medical issues and provide medical advice far better than any doctor can. Its coding skills are on par with those of the mythical 10x engineer. And, obviously, it can perform clerical and customer support tasks better than anyone else.

As intelligent as it sounds, you could make the argument that according to OpenAI's charter it isn't actually an AGI until it takes an embodied form, since most US jobs are actually physical in nature. According to The Bureau of Labor Statistics, roughly 45% of jobs required medium strength back when the survey was taken in 2017 (https://www.bls.gov/opub/ted/2018/physically-strenuous-jobs-...)

Hypothetically speaking, you could argue that we might wind up making superintelligence before we get to AGI simply because we haven't developed an intelligence capable of being inserted into a robot body and working in a warehouse with little in the way of human supervision. That's only if you take OpenAI's charter literally.

Worth noting that Sam Altman himself hasn't actually used the same definition of AGI though. He just argues that an AGI is one that's simply smarter than most humans. In which case, the plaintiffs could simply point to GPT-4's score on the LSAT and various other tests and benchmarks, and the defendants would have to awkwardly explain to a judge that, contrary to the hype, GPT-4 doesn't really "think" at all. It's just performing next-token prediction based on its training data. Also, look at all the ridiculous ways in which it hallucinates.

Personally, I think it would be hilarious if it came down to that. Who knows, maybe Elon is actually playing some kind of 5D chess and is burning all this money just to troll OpenAI into admitting in a courtroom that GPT-4 actually isn't smart at all.

paxys
3 replies
1h50m

Because of Twitter's amazing new UI I have no idea what Sam is replying to and what the context is, so the link by itself is meaningless.

nuz
1 replies
1h45m

New? Looks the same as it has been for years. Also sam is clearly speaking in general terms.

nickthegreek
0 replies
11m

The link shows only the following:

@danielpsegundo that would be of interest though I'd hate to bet against elon winning

We cannot see what Dan asked to understand what Sam is responding to.

whimsicalism
0 replies
1h45m

No, it's because the OP is private

baobabKoodaa
0 replies
1h49m

2015

mempko
5 replies
2h2m

The cynic in me believes this is motivated not by Musk's love for the "mission", but by xAI, his attempt to build OpenAI's competitor. I'm guessing this is just a way to weaken a competitor.

tailspin2019
1 replies
1h53m

You're probably right, but either way it will be interesting to see this tested in court. I think it's good to have some extra scrutiny over how OpenAI is operating regardless of the underlying motivations that led to the action!

baobabKoodaa
0 replies
1h39m

Ahh, selfish motivations leading to common good as an unintended consequence. Capitalism at work.

klabb3
0 replies
1h24m

I agree, but if the bar for cynicism is “not taking a billionaire at their word”, then we’re at peak gullibility. Especially if said actor has a track record of deception for economic, social or political gain.

This requires less cynicism than seeing through that Putin invaded to denazify Ukraine, or that your corporate employer rewarded you with pizza because they care about you.

ilaksh
0 replies
1h29m

I assume if there is a jury trial then actually the fact that Musk has his own for-profit AI company now could play a huge part. Even if for some reason they tell the jury to "disregard that fact" or something.

I feel like we now have a reasonable expectation that his AI effort becomes open source. Not that I actually expect it, but seems reasonable in this context.

hobofan
0 replies
1h30m

Yeah, it's his second/third try. OpenAI was already his way to "commoditize your complement" so that Tesla's AI division could catch up to DeepMind etc.

Now that this accidentally created something even more powerful (and Tesla's autopilot plans don't seem to be panning out), he's trying to stifle the competition so that xAI can catch up. SPOILER: They won't.

emodendroket
5 replies
1h12m

It seems like the lawsuit revolves around a claim that GPT-4 is "AGI." Seems kind of dubious but, of course, when these questions get to courts who knows what will happen.

iExploder
4 replies
1h7m

Could it lead to probing and discovery?...

emodendroket
1 replies
1h1m

Not sure what you mean to imply might happen.

iExploder
0 replies
51m

Musk claims the deal between OpenAI and MS is - MS gets access only to OpenAI pre-AGI tech. And he claims MS influences OpenAI board to not classify their AGI tech as AGI.

Based on that, it stands to reason Musk would try to establish through discovery whether OpenAI achieved AGI internally via GPT-4 or Q*. Maybe he can get depositions from ousted OpenAI members to support this?

I'm not a lawyer, just trying to follow the breadcrumbs..

kordlessagain
0 replies
32m

Discovery is the play here for both GPT-4 and Q*. It's a win/win for Elon Musk as he will either get money or learn how it's done/going to be done. I hold the opinion that GPT-4 is simply an ensemble of GPT-3s with a bunch of filtering, deployment (for calculating things), and feedback mechanisms with a shitty UI tacked on. Q* is probably that ensemble plugged into Sora somehow, to help tweak the wisdom understanding of a certain class of problems. That's why they need the GPUs so badly. And we just saw the paper on quantization of models come out, so it's good timing to bring this claim to bear.

Elon Musk would do well to consider taking Tesla's ability to build hardware and apply it to building ASICs, because without the hardware, no amount of software discovery will net you AGI.

dragonwriter
0 replies
54m

Yes, unless it is dismissed at a very early stage, there will be discovery.

sema4hacker
4 replies
1h9m

A NY Times article says "Though Mr. Musk has repeatedly criticized OpenAI for becoming a for-profit company, he hatched a plan in 2017 to wrest control of the A.I. lab from Mr. Altman and its other founders and transform it into a commercial operation that would work alongside his other companies, including the electric carmaker Tesla, and make use of their increasingly powerful supercomputers, people familiar with his plan have said. When his attempt to take control failed, he left the OpenAI board, the people said."

That would let OpenAI lawyers keep this suit tied up for a very long time.

moralestapia
1 replies
50m

Could he be the one behind the recent coup at OpenAI, as well?

kmeisthax
0 replies
31m

No. Elon Musk was not involved with the firing of Sam Altman as far as I'm aware.

The real story behind that is... complicated. First, Sam Altman allegedly does stuff that looks to be setting up a coup against the board, so the board fires Sam, but they don't provide proper context[0] and confuse everyone. So Sam gets Microsoft and a bunch of OpenAI employees to revolt and pressure the board to bring him back. He then fires the board and instates a new one, basically the original coup plan but now very much open and in the public eye.

[0] To be clear, most corporate communications try to say as little as possible about internal office politics. That can easily lead into defamation lawsuits.

nuz
0 replies
2m

Yet interestingly, he kept donating to OpenAI until September 2020, according to the document.

bevekspldnw
0 replies
1h3m

99% chance the “people familiar” is Sam A, who has a history of lying. I’d place zero stock in that report either way. May or may not be true.

kbos87
4 replies
1h35m

Unsurprising turn of events. Musk can't stand not being at the top of the food chain, and it's been widely reported on that he's felt "left out" as AI has taken off while he has been consumed by the disaster he created for himself over at X -

https://www.businessinsider.com/elon-musk-ai-boom-openai-wal...

I can imagine Musk losing sleep knowing that a smart, young, gay founder who refuses to show him deference is out in the world doing something so consequential that doesn't involve him.

kbos87
3 replies
1h19m

Genuinely curious why I'm getting downvoted - similar comments don't appear to have met the same fate, and I'm by no means defending Altman or OpenAI.

If it's because I mentioned that Altman is gay - and I can't find another reason - I think that's relevant in context of Musk's recent hard shift rightward and his consistently aggressive, unprovoked behavior toward LGBTQ people. For some reason the topic looms large in his mind.

vik0
0 replies
1h3m

I think you're reading too much into it lol

I think a more likely interpretation is that a lot of people here are Musk fans, and don't like it when he gets criticized, thus downvoting your comment

I'm neither an ultra fanboy nor someone who despises him

mycologos
0 replies
1h2m

It seems like he's talked a lot about the T in LGBTQ, not so much the G? Evidence that he would be especially incensed at getting beat by a gay guy seems thin on the ground unless we insist on treating LGBTQ as a bloc.

(I don't exactly keep up with Musk's doings, though.)

ahmeneeroe-v2
0 replies
55m

I didn't downvote you because I don't have downvote rights yet. I would have downvoted you, though, because in a sea of comments of questionable value, your comment stands out as actually having negative value.

paxys
3 replies
1h38m

While I have no doubt everything in the complaint is correct, it's hard to see it as Elon being genuinely concerned about open and safe AI vs just having FOMO that he isn't part of it anymore and doesn't get to call the shots. For example his own AI startup is exactly as closed off and unregulated as OpenAI. Why is that not equally concerning?

karaterobot
2 replies
1h34m

I don't want to be in the position of defending Elon Musk, but in this case his complaint seems to be that OpenAI claims one thing and does another. If X.ai started out telling everyone it's for-profit and closed off, then it's not hypocritical at all for it to be that. It's something else, sure.

paxys
0 replies
1h14m

From x.ai's home page:

At xAI, we want to create AI tools that assist humanity in its quest for understanding and knowledge.

How is it doing that by being a closed, for-profit enterprise?

krisboyz781
0 replies
1h22m

But Musk claims X is the free speech town square but it isn't

nojvek
3 replies
1h1m

This is going to be an interesting trial.

Elon has a good case that OpenAI has long diverged from his founding principles.

Sam and his friends can side with Microsoft to build a ClosedAI system like Google/Deepmind and Apple.

There is a place for open research. StabilityAI and Mistral seem to be carrying that torch.

I don’t think SamA is the right leader for OpenAI.

elvennn
1 replies
40m

Regarding Mistral recent announcements, I'm not so sure anymore.

pelorat
0 replies
26m

The large models can't be run by consumers, and why would a for-profit company release a large model that can then be picked up by other startups?

iExploder
0 replies
49m

Google "Microsoft strikes deal with Mistral in push beyond OpenAI" bro...

lenerdenator
3 replies
46m

I'm not a lawyer, is it novel that someone's suing a corporate officer for breach of fiduciary duty as a result of trying to make the most money possible for shareholders?

Obviously that's strange for a non-profit, but when you hear of a breach of fiduciary duty suit it's usually because someone didn't do something to make more money, not less.

It almost feels more like an accusation of fraud than breach of duty.

pclmulqdq
2 replies
44m

This is not a for-profit company. By law, the shareholders don't get any of the money that OpenAI makes, so its purpose is not to make a profit.

lenerdenator
1 replies
37m

So Musk's arguing that they had duty to protect his investment in OpenAI from being used for profiteering, and they didn't do that.

How's that going to float in an industry whose philosophy is that profit is a very useful abstraction for social benefit?

pclmulqdq
0 replies
2m

That philosophy doesn't really exist in legal terms when you have a for-profit corporation. There are B-corporations (eg Anthropic) which try to balance those goals, but I'm not sure there's a ton of existing law around that.

photochemsyn
2 replies
2h2m

This is good news. OpenAI's recent decision to dive into the secretive military contracting world makes a mockery of all its PR about alignment and safety. Using AI to develop targeted assassination lists based on ML algorithms (as was and is being done in Gaza) is obviously 'unsafe and unethical' use of the technology:

https://www.france24.com/en/tv-shows/perspective/20231212-un...

dash2
1 replies
1h44m

Note that the France24 story on Gaza is about an unrelated technology, there's no claim that OpenAI was involved.

photochemsyn
0 replies
1h34m

If you have any links detailing the internal structure of the Israeli 'Gospel' AI system or information about how it was trained, that would be interesting reading. There doesn't seem to be much available on who built it for them, other than it was first used in 2021:

"Israel has also been at the forefront of AI used in war—although the technology has also been blamed by some for contributing to the rising death toll in the Gaza Strip. In 2021, Israel used Hasbora (“The Gospel”), an AI program to identify targets, in Gaza for the first time. But there is a growing sense that the country is now using AI technology to excuse the killing of a large number of noncombatants while in pursuit of even low-ranking Hamas operatives."

https://foreignpolicy.com/2023/12/19/israels-military-techno...

mise_en_place
2 replies
1h56m

This is an admirable lawsuit; however, Musk is no longer on the board. That means he has as much say in the direction of the company as a random crackhead living on the BART train. The article states he left the board in 2018.

hoistbypetard
1 replies
1h51m

he has as much say in the direction of the company as a random crackhead living on the BART train

He donated 8 figures to the nonprofit. So he deserves as much say in the direction as a random crackhead living on the BART train who donated $44M to the nonprofit.

mise_en_place
0 replies
1h49m

It doesn't work like that. You can donate millions or invest millions in a company as the founder or lead investor. As soon as you leave the company as a board member, and are not a shareholder, your official relationship with the company is terminated.

w10-1
1 replies
38m

Musk is posturing as a savior.

For a promise to be enforceable in contract, it has to be definitive. There's nothing definitive here.

For a representation to be fraudulent, it has to be false or misleading and relied upon as material. Courts don't treat a later change of heart as making an earlier statement false, and since Altman arguably knew less than Musk at the time, it's unlikely to be material.

More generally, investors lose all the time, and early minority investors know they can be re-structured out. These investments are effectively not enforced by law but by reputation: if you screw an investor, you'll lose access to other investors (unless your investor tribe is, well, tribal).

The detail and delay that evolved in law for the sake of truth and legitimacy is now being deployed for the sake of capturing attention and establishing reputation.

Musk's investment in twitter has been a catastrophe from an investment and business standpoint, but has amplified his icon status with king-maker aspects through control of attention in our attention-based economy and politics. If he can lead the charge against AI, he can capture a new fear and resentment franchise that will last for generations.

Hence: posturing.

We burrowing mammals can hope the dinosaurs fighting might make life quieter for us, but that's just hope.

troupe
1 replies
23m

If OpenAI became a non-profit with this in its charter:

“resulting technology will benefit the public and the corporation will seek to open source technology for the public benefit when applicable. The corporation is not organized for the private gain of any person"

I don't think it is going to be hard to show that they are doing something very different than what they said they were going to do.

cpill
0 replies
3m

yeah, the lawyers will hang the whole case on those two words: "when applicable"

syngrog66
1 replies
1h19m

grabs popcorn

iExploder
0 replies
17m

If I were you I would grab a GEP gun

redbell
1 replies
1h27m

Maintaining the initial commitment becomes exceptionally challenging after attaining unforeseen success, a situation akin to a politician struggling to uphold pre-election promises once in office.

EXACTLY, a year ago, an alarm echoed with urgency: https://news.ycombinator.com/item?id=34979981

whimsicalism
0 replies
1h25m

Truly the only reason they failed at keeping their original mission is that they stopped paying employees in cash.

perihelions
1 replies
1h34m

Most important question: why did he file this lawsuit? What does he intend to gain out of it?

Is it a first step towards acquiring/merging OpenAI with one of his companies? He's offered to buy it once before, in 2018 [0]. He's also tried to buy DeepMind (page 10 of the OP filing).

[0] https://www.theverge.com/2023/3/24/23654701/openai-elon-musk... ("Elon Musk reportedly tried and failed to take over OpenAI in 2018")

debacle
0 replies
1h31m

The "source" for The Verge article is a sourceless hit piece. There's no actual source claiming he tried to take over OpenAI in 2018 anywhere, besides "people familiar with the matter."

modeless
1 replies
1h47m

TL;DR for those wondering, as I was, why Musk would have any kind of plausible claim against OpenAI, the main claim is breach of contract on the "Founding Documents" of OpenAI, Inc. (the original nonprofit funded by Musk).

Plaintiff contributed tens of millions of dollars, provided integral advice on research directions, and played a key role in recruiting world-class talent to OpenAI, Inc. in exchange and as consideration for the Founding Agreement, namely, that: OpenAI, Inc. (a) would be a non-profit developing AGI for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits; and (b) would be open-source, balancing only countervailing safety considerations, and would not keep its technology closed and secret for proprietary commercial reasons. This Founding Agreement is memorialized in, among other places, OpenAI, Inc.’s founding Articles of Incorporation and in numerous written communications between Plaintiff and Defendants over a multi-year period [...]

Defendants have breached the Founding Agreement in multiple separate and independent ways, including at least by: a. Licensing GPT-4, which Microsoft’s own scientists have written can “reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system,” exclusively to Microsoft, despite agreeing that OpenAI would develop AGI for the benefit of humanity, not for the private commercial gain of a for-profit company seeking to maximize shareholder profits, much less the largest corporation in the world. b. Failing to disclose to the public, among other things, details on GPT-4’s architecture, hardware, training method, and training computation, and further by erecting a “paywall” between the public and GPT-4, requiring per-token payment for usage, in order to advance Defendants and Microsoft’s own private commercial interests, despite agreeing that OpenAI’s technology would be open-source, balancing only countervailing safety considerations. c. [...]

And what is he suing for?

An order requiring that Defendants continue to follow OpenAI’s longstanding practice of making AI research and technology developed at OpenAI available to the public, and

An order prohibiting Defendants from utilizing OpenAI, Inc. or its assets for the financial benefit of the individual Defendants, Microsoft, or any other particular person or entity;

For a judicial determination that GPT-4 constitutes Artificial General Intelligence and is thereby outside the scope of OpenAI’s license to Microsoft;

And some money, of course. And he requests a jury trial.

ulnarkressty
0 replies
1h30m

Microsoft's takeover of Mistral makes a lot more sense if this lawsuit has a chance to succeed.

kepano
1 replies
32m

Now that Google, Meta, Mistral, etc, all have great open source models, it seems rather untenable for OpenAI to

1. keep "open" in the name

2. stay closed source

3. pretend to be a non-profit

at least one of those things must go, right?

micromacrofoot
0 replies
29m

I don't think there are any laws about how you name your company right? they could just say "open" as in "open for everyone to use" and that would end that discussion

BoppreH
1 replies
1h9m

I find it interesting that there's no mention of private information after ~2020. No whistleblowers, private demos, chats with Altman, or anything not found in mainstream news.

Is that required for filing a case, or is Musk operating from the same information as all of us?

ricardobeat
0 replies
34m

I think he had left the organization by 2019?

yodsanklai
0 replies
2h3m

Can this complaint lead to anything?

yieldcrv
0 replies
54m

Saw the headlines, but emotions aside, is there any merit to the case that's being argued in the court dockets?

I’m in the non-profit space and there are certainly things about it that are ripe to change by Congress if people knew about them, and an insider also has the ability to snitch to the IRS if they think a tax exemption is being used improperly

The IRS has a bounty program for tax events over like $10m

stainablesteel
0 replies
1m

From what I've read about this drama, it's not worth considering.

I think this is meant to divert resources away from developing GPT so that Musk can get ahead in the AI game; he's basically in a position to do so.

slim
0 replies
18m

Elon Musk will want to settle this in an MMA cage again

skrbjc
0 replies
26m

Maybe this is an elaborate ploy to make OpenAI truly private and clean up their corporate structure while making it seem like the fault of an evil billionaire/court system.

seydor
0 replies
6h13m

Somebody had to do it. It's very dangerous for what could be the world's biggest company to have such a unique and peculiar legal nature.

pknerd
0 replies
2m

If you can't beat them, sue them

photochemsyn
0 replies
1h11m

In the rush to monetize their assets, OpenAI and Microsoft are turning to the government contracting spigot:

https://theintercept.com/2024/01/12/open-ai-military-ban-cha...

Interestingly, this is also how IBM survived the Great Depression, it got a lucrative contract to manage Social Security payments. However, AI and AGI are considerably more dangerous and secretive military uses of the technology should be a giant red flag for anyone who is paying attention to the issue.

I wouldn't be surprised if the decision to launch this lawsuit was motivated in part by this move by Microsoft/OpenAI.

ph_dre
0 replies
21m

I uploaded the PDF to ChatGPT4 with the original name and got "Unable to upload musk-v-altman-openai-complaint-sf.pdf" multiple times

I changed it to "defnotalawsuit.pdf" and it worked...

owenpalmer
0 replies
1h9m

I uploaded the pdf into GPT4, and this is the output:

"Elon Musk is suing OpenAI, alleging breach of contract, promissory estoppel, breach of fiduciary duty, unfair competition under California Business and Professional Code, and accounting. Musk claims that OpenAI deviated from its founding principles, which emphasized developing artificial intelligence (AI) for the benefit of humanity, open-sourcing their technology, and not pursuing profit maximization. The suit highlights concerns over OpenAI's shift towards proprietary practices, particularly with the development and handling of GPT-4, and alleges that these actions contradict the organization's original mission and agreements."

noncoml
0 replies
1h22m

Now we know who was behind all the past OpenAI drama. Mask Off (pun unintended)

natch
0 replies
1h26m

So, does this mean he’s setting up a third run-in with that same Delaware judge?

jmyeet
0 replies
1h33m

This lawsuit makes me wonder how much Elon Musk had to do with Sam Altman's firing as CEO. The complaint specifically wants OpenAI to focus on its (allegedly promised) nonprofit activities, not the for-profit company.

If Elon had been involved--which the lawsuit seems to imply--I imagine he had to have something to do with Altman's ouster.

jcranmer
0 replies
59m

IANAL, but here's my takeaways from reading the complaint:

* There is heavy emphasis on the "Founding Agreement" as the underlying contract. (This appears to be Exhibit 2, which is an email to which Musk replied "Agree on all"). Since I'm not a lawyer, I'm ignorant of the interpretation of a lot of contract law, and there may be finer points in case history that I'm missing, but... where's the consideration? The "Founding Agreement" in general reads to me not as a contract but as preliminary discussions before the actual contract is signed.

* The actual certificate of incorporation seems more relevant. Also, it's a Delaware corporation, which makes me wonder if Delaware wouldn't be a more appropriate jurisdiction for the dispute than California. Granted, I know Musk now hates Delaware because it's ruled against him, but that's not a reason you get to file suit in the wrong venue!

* I noticed that Musk's citation of the certificate of incorporation has an ellipsis on one of the articles in contention. The elided text is "In furtherance of its purposes, the corporation shall engage in any lawful act or activity for which nonprofit corporations may be organized under the General Corporation Law of Delaware." ... Again, I don't know enough to know the full ramifications of this statement in jurisprudence, but... that seems like a mighty big elastic clause that kind of defeats his case.

* Musk admits to having continued to contribute to OpenAI after he expressed displeasure at some of its activities (paragraph 68). That substantially weakens his case on damages.

* Much hay made of GPT being AGI and AGI being excluded from licenses. No citation of the license in question seems weak. Also, he pleads 'However, OpenAI’s Board “determines when we’ve attained AGI.”'

* Paragraph 98 asserts that OpenAI fired Altman in part due to its breakthrough in realizing AGI. But the conclusion I've seen is that Altman was fired for basically lying to the board.

* Paragraph 105: However, the OpenAI, Inc. Board has never had a fiduciary duty to investors. ... interesting theory, I'm not sure it's true. (Can some lawyers chime in here?)

* There are essentially two underlying causes of action. The first (comprising the first two causes) is that the Founding Agreement is a binding contract between Altman and Musk that OpenAI breached. I'm skeptical that the Founding Agreement actually constitutes a contract, much less one that OpenAI is a party to. The second (comprising the last three causes) is that, as a donor, Musk is entitled to see that his money is used only in certain ways by OpenAI, and OpenAI failed to use that money appropriately. There's no pleading that I can see that Musk specifically attached any strings to his donations, which makes this claim weak, especially given the promissory estoppel implied by paragraph 68.

* The prayers for relief include judicial determination that OpenAI attained AGI. Not sure that is something the court can do, especially given the causes of action presented.

Overall, I don't think this case is all that strong.

holonsphere
0 replies
36m

Most of you realize private equity firms ran your "non-profit" colleges, right? Unethical experiments involving collective intelligence have been fought over for years at CMU/MIT et al. How can y'all read this and really not just wonder?

hereme888
0 replies
4h44m

Has Musk at least been able to profit from the $50-100MM he put in?

gregwebs
0 replies
4m

This suit claims breach of the "Founding Agreement". However, there is no actual Founding Agreement; there are email communications claimed to be part of a "Founding Agreement". IANAL, but I would suspect that these emails don't matter for much now that there are Articles of Incorporation. Those articles are mentioned, but the "Founding Agreement" implied by the emails is mentioned more. The suit also seems alarmist in stating that GPT-4 is AGI.

It seems like Elon could win a suit to the extent that he could get all of his donations back based on the emails soliciting donation for a purpose that was then changed.

But Elon's goal in this suit is clearly to bring back the "Open" in "OpenAI"- share more information about GPT4 and newer models and eliminate the Microsoft exclusive licensing. Whether this would happen based on a suit like this seems like it would come down to an interpretation of the Articles of Incorporation.

ggm
0 replies
7h57m

Contracts probably need to be defended. If he has evidence of intent in a deal, he should sue to have the deal's intent enacted. Hate the man, not the act.

geniium
0 replies
4m

TL;DR The document outlines a lawsuit filed by Elon Musk against Samuel Altman, Gregory Brockman, and various OpenAI entities. The complaint includes allegations of breach of contract, promissory estoppel, breach of fiduciary duty, unfair competition, and demands for an accounting and a jury trial. Musk accuses Altman and others of deviating from OpenAI's founding principles, which were supposed to focus on developing Artificial General Intelligence (AGI) for the benefit of humanity and not for profit. The suit details the founding of OpenAI, Musk's significant contributions and involvement, and accuses the defendants of transforming OpenAI into a profit-driven entity contrary to its original mission, thereby violating the foundational agreement.

emadm
0 replies
1h37m

where is Ilya

elwell
0 replies
6m

AGI is a threat to humanity; so is existing tech: e.g., spending all day staring at various screens (phone, laptop, TV). You can also take the opposite view that AGI will save or expand humanity. It depends on how you define 'humanity'. Page's definition is understandably concerning to Elon, and probably to most humans.

dash2
0 replies
1h49m

A truly altruistic general intelligence would be deeply concerned for the future of humanity in the face of a devastating existential threat.

Tragically I, a mere Neanderthal with a primitive lizard brain, can only settle back and reach for the biggest ever bowl of popcorn.

coliveira
0 replies
59m

Typical grift from E. Musk. I'm pretty sure when he left OpenAI he had to sign something saying that he didn't have rights to technologies developed after that point. He's just trying to sue his way back into the company.

ayakang31415
0 replies
6m

Finally something is being done about this, which I have always wondered about: a non-profit that operates like a for-profit business

aamoyg
0 replies
5h44m

It's kind of rich coming from him, but he has a point.

I guess this approach can still work if it's made sure that whatever successors to LLMs there are have rights, but I still get sharecropper vibes.

Timber-6539
0 replies
13m

At best, a court forces OpenAI to be more transparent and clear about its for-profit motives (they couldn't hide behind the open, for-the-good-of-mankind mask forever anyway). Maybe even rebrand to stop misusing the terms open and non-profit.

At worst, the court throws out the case and we see an OpenAI IPO, and another evil company (very much like Google) is born, founded on cutting every corner possible to solicit funds as a for-profit non-profit, all while stealing intellectual property and profiting their shareholders.

Solvency
0 replies
2h4m

At worst this just adds sandbags to Altman's personal conquest for money/power, which I'm cool with. At best it puts a bigger spotlight on the future perils of this company's tech in the wrong hands.

OscarTheGrinch
0 replies
36m

If Elon wanted to influence Open AI he should have stuck around and helped.

Eji1700
0 replies
1h11m

Well this will be an interesting circus for sure. Musk isn’t exactly batting 1000 vs other large tech companies in lawsuits but openAI sure as hell has done some sketchy bs

AndrewKemendo
0 replies
1h3m

Looks like they need to work harder on alignment