Unsurprising but disappointing nonetheless. Let’s just try to learn from it.
It’s popular in the AI space to claim altruism and openness; OpenAI, Anthropic, and xAI (the new Musk one) all have funky governance structures because they want to be a public good. The challenge is that once any of these (or others) gains enough traction to be seen as having a good chance at reaping billions in profits, things change.
And it’s not just AI companies, and this isn’t new. This is part of human nature and will always be.
We should be putting more emphasis and attention on truly open AI models (open training data, training source code & hyperparameters, model source code, weights) so the benefits of AI accrue to the public and not just a few companies.
[edit - eliminated specific company mentions]
The botched firing of Sam Altman proves that fancy governance structures are little more than paper shields against the market.
Whatever has been written can be unwritten, and if that fails, just start a new company with the same employees.
Because at some point, the plurality of employees do not subordinate their personal desires to the organizational desires.
The only organizations for which that is a persistent requirement are typically things like priesthoods.
The plurality of employees are not the innovators that made the breakthrough possible in the first place.
People are not interchangeable.
Most employees may have bills to pay, and will follow the money. The ones that matter most would have different motivations.
Of course, if your sole goal is to create a husk that milks the achievement of the original team as long as it lasts and nothing else — sure, you can do that.
But the "organizational desires" are still desires of people in the organization. And if those people are the ducks that lay the golden eggs, it might not be the smartest move to ignore them to prioritize the desires of the market for those eggs.
The market is all too happy to kill the ducks if it means more, cheaper eggs today.
Which is, as the adage goes, why we can't have nice things.
Yeah, we agree here, but the problem lies with the team.
If you hire people who want to cash out, then you'll get people who prioritize prospects for cashing out.
Said another way, they did not focus on the theoretical public mission enough that it was core to the everyday being of the organization, much like it is for Medicins San Frontiers etc.
Medecins Sans Frontieres
Médecins Sans Frontières
Most of the people they hired were brought on to work for OpenAI.com, which was a pure profit-driven tech company just like any other (and funded by Microsoft). Those who joined the original OpenAI (including its independent board members) were driven by different motivations, more in line with research and discovery.
It always rubs me the wrong way when people justify going for more money as "having bills to pay". No they don't; this makes it seem as if they're down on their luck and have to hustle to pay bills, which is far from reality. I am not shaming people for wanting more money, of course, but after a certain threshold, framing it as an external necessity is dishonest.
The problem is that you are equating "bills to pay" with living paycheck to paycheck at the minimum level.
It is a metaphor that they are still working class. You can earn 500k-1M/year in salary and be working class. Your monthly expenses may be greater than your salary, and you need to keep working to stay at the same QOL.
This is absurd and totally out of touch with reality
I live in an exurb of DC, in one of the highest cost-of-living areas with one of the highest median incomes in the world.
I have 3 kids who are all in middle and early high school (the most expensive time) and a mortgage, and I literally just did the math on what my MINIMUM income would need to be in order to maintain an extremely comfortable lifestyle: it’s between $80-100k a year.
Anyone making more than ~100k a year isn’t living paycheck to paycheck unless they are spending way beyond their means - which is actually most people.
It was botched because the public was too stupid to see how much of a snake Sam Altman is. He was fired from Y Combinator and people were still universally supporting him on HN.
If people hated him he would've been dropped. Microsoft and everybody else only moved forward because they knew they wouldn't get public backlash. It seems everyone fails to remember their own mob mentality. People here on HN were practically worshipping the guy.
Statistically, most people commenting here right now were NOT supporting his firing, and now you've all flipped and are saying stuff like: "yeah, he should've been fired." Seriously?
I don't blame the governance. They tried their best. It's the public that screwed up. (Very likely to be YOU, dear reader)
Without public support the leadership literally only had enemies at every angle and nowhere to turn. Imagine what that must have felt like for those members of the board. Powerful corporations threatening aspects of their livelihoods (of course this happened; you can't force a leader to voluntarily step down without some form of serious threat) and the entire world hating on them for making such a "stupid" move, as everyone thought of it at the time.
I'm ashamed at humanity. I look at this thread and I'm seriously thinking, what in the fuck? It's like everyone forgot what they were doing. And they still twist it to blame them as if they weren't "powerful" enough to stop it. Are you kidding?
Genuine question, what did he do that was so unforgivable? If it's so obvious, you should be able to list what happened in an unambiguous way.
https://www.investopedia.com/terms/s/self-dealing.asp
In lesser known places such as Wall St, practices like self dealing are considered illegal. In venture they’re often celebrated. Go figure.
I think there’s a general distaste towards setting up networks of companies B, C, D that plan to profit from the success of another company A, where a single person controls all the companies and there’s a reasonable expectation of plans to divert business from A towards B, C, D.
I don’t know the details but there seems to be some gripe about it. I’m speculating.
We can start with the crypto scam that he’s now trying to pivot to the AI space as the “solution” to the problem he created.
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
I can't. Even this company policy being talked about in this thread is ambiguous. That's the problem.
He was fired from Y Combinator and the entire board wanted to fire his ass too.
Therefore, by this logic, he should have universal support for reinstatement and the entire board should be vilified? Makes no sense. But this is exactly the direction of the mob and was the general reaction on HN.
It was ambiguous whether Sam was a problem, and that ambiguity makes cautious treatment and investigation warranted. The proper reaction is: "wtf is going on? Let's find out." Instead what he got was hero worship. The public was literally condemning the board and worshipping the guy with no evidence.
And now with even more ambiguous and suspicious facts everyone here is suddenly "level headed." Yeah that's bs. What's going on is mostly everyone here is just going with the flow and adopting the background trends and sentiments. Mob mentality all the way.
It's a mistake to claim that Altman has/had universal support in here. I'm neutral towards him, for example, and in all this firing mini-drama the only thing I was interested in was learning the motives behind his firing.
For these types of things, of course there are alternative opinions. It's like measuring voting... no candidate literally gets 100 percent of the vote.
But to characterize it as anything other than overwhelming support to reinstate Sam Altman would be false.
Hence why I said most people (key word: most) are just flip flopping with the background trends. Mob mentality.
Yes, the public was to blame, but the board was inherently doomed because OpenAI had morphed from a public-benefit, mission-driven company into a profit-driven one heavily funded by MSFT, which expected its due return. Once the for-profit "sub"-company was created, the mission-driven parent company was doomed, because most people were going to be working for the profit-driven child company and therefore had different goals (i.e., salary, options) than the mission-driven parent company (scientific breakthroughs, responsible creation/development of AI). This is why the employees (of the child company) revolted and supported Sam (who had reneged on OpenAI's mission and gone full capitalist like any other tech mogul out there). The only question in my mind was whether "Open"AI was always a scam from the start to get attention, or a genuine pivot (which the board unsuccessfully tried to stop).
And now responsible AI development is gone and everyone is chasing the money, just like the social media companies did, and well, we know how that ended up (Facebook, Twitter).
Sad day indeed.
...Or rather ( $ ) . ( $ ) immediate hindsight eyes...
Is this a boob joke or a money joke?
A Haskell one.
For great justice!
I wonder if your lesson is "Sam Altman should/would have been fired but for market forces".
The lesson is that "should have been fired" was believed by the people who had power on paper; "should not have been fired" was believed by the people who actually had power.
That just simplifies things a hair too much. Remember, the people who worked at OpenAI, subject to market forces, also supported the return of Altman.
Market forces are broad and operate at every level of power, hard and soft.
I believe that's what your parent comment was actually talking about. I read it as saying the people in power on paper were the previous board, and the people actually in power were the employees (which, by the way, is an interesting inversion of how it usually is).
that's because most of those people did not work for the mission-focused parent OpenAI company (which the board oversaw) but for its highly-profit-driven, subservient-to-Microsoft child company (and they were happy to jump to Microsoft if their jobs were threatened; no ding against them, as they hadn't signed up for the original mission-driven company in the first place).
it's important to separate the two entities in order to properly understand the scenario here
I'm not sure why you attribute that as a shield against the market. That seemed much more like an open employee revolt. And I can't think of a governance structure that is going to stop 90% of your employees from saying, for example, we work for Sam Altman, not you idiots...
An employee revolt due to the market. The employees wanted to cash out in the secondary offering that Sam was setting up before the mess. It was in their (market) interest to get him back and get the deal back on track.
Broad speculation
Yes, they wanted to work for Sam... because he was arranging a deal to give them liquidity and make them rich.
The board was not going to make them rich.
The things I saw didn't make any sense, so I can't say that it proves anything other than the existence of hidden information.
The board fired him, and they chose a replacement. The replacement sided with Altman. This repeated several times. The board was (reportedly) OK with closing down the entire business on the grounds of their charter.
Why didn't the board do that? And why did their chosen replacements, not individually but all of them in sequence, side with the person they fired?
My only guess is the board was blackmailed. It's just a guess — it's the only thing I can think of that fits the facts, and I'm well aware that this may be a failure of imagination on my part, and want to emphasise that this shouldn't be construed as anything more than a low-confidence guess by someone who has only seen the same news as everyone else.
You obviously have no experience with non-profit governance. OpenAI is organized as a public charity which is required to have an independent board. Due to people leaving the board, they were down to six members, three independent directors plus Sam and two of his employees. They had been struggling to add more board members because Sam and the independent directors couldn't agree on who to add. Then Sam concocted an excuse to remove one of the independent directors and lied to board members about his discussions with other board members.
I think they had no choice at that point but to fire Sam and remove him from the board. When that turned into a shitshow and they faced personal threats, they resigned to let a new board figure out a way out of this mess.
Also, I am not surprised the new board isn't being completely open because they are still probably trying to figure out how to fix their governance problems.
Correct!
As someone with no experience with non-profit governance, this does not seem coherent with (1) the fact that they didn't just say that, and (2) the fact that none of their own choices for replacement CEO were willing to go along with this, and that this happened with several replacements in a row.
For (1) I'd be willing to just assume good faith on their part, even though it seems odd; but (2) is the one which seems extremely weird to the point that I find myself unable to reconcile.
It would also not be compatible with the reports they were willing to close the company on grounds of it being a danger to humanity, but I'm not sure how reliable that news story was.
Yes, ideally you would have a succession plan and a statement reviewed by lawyers, but in this case, you had a deadlocked board that suddenly had a majority to act and did so in the moment. If they had waited, they would have probably lost the opportunity because Ilya Sutskever would have switched his vote again. But the end result is that Sam is off the board and that is the important thing.
Maybe you should explain your blackmail theory and we could see which idea makes the most sense.
"Cease quoting bylaws to those of us with yachts"
OpenAI: pioneer in the field of fraudulently putting "open" in your name and being anything but.
Similar naming pattern: North Korea calls itself the “Democratic People's Republic of Korea”… it could not be further from being democratic.
Nice comparison. And also certain political factions in the USA try to hide the shamefulness of laws they propose by giving them names that are directly opposed to what they'll do.
The "Defense of Marriage Act" comes to mind. There was one so bad that a judge ordered the authors to change it, but I can't find it at the moment.
All political factions are guilty of this. Patriot Act, Inflation Reduction Act, Affordable Care Act, etc.
The USA PATRIOT Act was an acronym; the actual name was the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act of 2001.
You think they came up with the long name and THEN were astonished to discover that it spells "PATRIOT"?
Yep. That's for sure a revisionist definition.
See also: "Digital Versatile Disc"
Eh, the ACA is the only reason I have "affordable" insurance. In the end it might have been more accurate to say, "Marginally Less of a Rip-Off Care Act."
This is just a normal practice in the US.
The Defense of Marriage Act is actually an exception. The people supporting it honestly thought it was defending marriage, and the supportive public knew exactly what it did.
It passed with a veto proof majority a few weeks before a presidential election, received tons of press, and nobody was confused about what it did.
Whereas the Inflation Reduction Act had absolutely nothing to do with reducing inflation.
Seems arbitrary. There is nothing about that act that even borders on defending marriage, and people supporting it know that. It's a comic misnomer.
It’s defending when you view gay people as subhuman animals.
It was, and is, absolutely clear to everyone what this bill was about.
If it had been called the “Support Healthcare for Veterans Act” or even “Interstate Marriage Consistency Act” it would have been dubious.
But the 70% of Americans who opposed gay marriage correctly understood its meaning, as did the gay rights activists who saw gay marriage as unobtainable.
This wasn’t a confusing or misleading title, as is evidenced by the fact that nobody was confused or misled.
Actually, that's my mistake. The examples I was thinking of turned out to be one and the same: It was a California proposition originally titled the "California Marriage Protection Act." That was the one where a judge forced it to be renamed to "Eliminates Rights of Same-Sex Couples to Marry. Initiative Constitutional Amendment"
Suppose there was a country where individualism was prioritized. Having your own opinions, avoiding "groupthink", even disagreeing with others, is a point of pride.
Suppose there was a country where collectivism was prioritized. Harmony, conformity and agreeing with others is a point of pride.
Suppose both countries have similar government structures that allow ~everyone to vote. Would it really be surprising that the first country regularly has 50-50 splits, and the second country has virtually unanimous 100-0 voting outcomes? Is that outcome enough basis to judge whether one is "democratic" or not?
Suppose that countries have more than two parties...
You can democratically decide to have only two parties, or for that matter only one.
It only takes 51% of the vote to outlaw opposition.
Just recently, the US Democratic convention stripped all the voters in New Hampshire of their votes for the presidential candidates.
Even in multi-party systems, it comes down to ruling coalition vs. opposition. DPRK technically has multiple parties, but they are in a tight coalition.
The funny thing is that I’m sure NK is very democratic; it’s just that voting wrong probably gets you killed.
I wonder if anyone that voted "wrong" has ever tried to say the election was rigged, and their votes were changed to avoid their families receiving a bill for a bullet.
From Lord of War:
The lib’dems in Europe are anything but liberal or democratic.
Liberal means less intervention from the state; the word has literally changed its meaning to soft socialism.
Democratic is not when you’re elected as part of Boris Johnson’s party on a program to leave the EU, and 16% of elected MPs left his party after the vote and rejoined the Lib Dems (without giving a choice to electors, nor resigning as MPs) to fight to stay in the EU, coining the phrase “What voters really meant was stay in the EU with conditions.”
I focussed on England, but lib’dems in every EU country have committed the same betrayal.
The conservatives are the one true exception to these rules. It's right there in the first 3 letters of their party name.
Eh?
This didn’t happen.
https://en.wikipedia.org/wiki/List_of_elected_British_politi...
I think what you are referring to is the Tory MPs who defied the government and voted with the opposition on a single vote.
At the time, literally one of them permanently defected, very visibly crossing the floor. Many of the rest were booted out of the parliamentary party by Boris, only to be readmitted later (including my MP, whom I do not vote for).
There were two or three who joined minor parties, and a handful ended up in the Lib Dems afterwards, but there was never a mass defection to the Lib Dems, who only have 15 MPs now; 15% of the 2019 Tories would be over 50.
Either way I think your summary misunderstands the reasons all of that happened, and the principles behind it.
It's the same inverse signal in newspaper names too. Russian propaganda Pravda (Truth), Polish tabloid Fakt (Fact), etc. Organisations that practice X every day typically don't have to put X in the name to convince you about it.
Side note of a kinda similar thing happening, forgive me for the sidetrack and side-rant.
PrivateProperty <- was a website in South Africa set up in a market where all real-estate sales were controlled and gatekept by real-estate agents (assisted by lawyers, various government bodies, and even legislation), and its purpose was to allow "private" individuals to put up their own properties for rent or sale.
Predictably, it eventually got taken over by real-estate agents who posed as "private" sellers, and that caused the entire site to support "agents" as a concept, and here we are. Today, you will hardly ever find a private individual on there, and the company makes no effort at all to root the agents out. The agents just spam all their listings, lie in the metadata for properties, add duplicates, make zero-effort postings, and use skewed photos, the works.
Another example, if you will: AirBnB. Taken over (I exaggerate a bit) by management companies that own many, many properties and allocate an "agent" to oversee each property. At least here in South Africa, that is. It might not be as true in other countries, but it's on its way there. Mark my words.
Or more:
Pricecheck <-- Another South African website. It still claims to be a price-comparison website, but is really just like Google Shopping: it doesn't do any scraping of prices, but simply "partners" with websites that give it a kickback after a user purchases something.
OSF predates it by almost four decades (even older than open source) https://en.wikipedia.org/wiki/Open_Software_Foundation
Orwell would be proud.
It isn't just money, though. Every leading AI lab is also terrified that another lab will beat them to [impossible-to-specify threshold for AGI], which provides additional incentive to keep their research secret.
But isn't that fear of having someone else get there first just a fear that they won't be able to maximize their profit if that happens? Otherwise, why would they be so worried about it?
"Fusion is 25/10/5 years away"
"string theory breakthrough to unify relativity and quantium mechanics"
"The future will have flying cars and robots helping in the kitchen by 2000"
"Agi is going to happen 'soon'"
We got a rocket that landed like it was out of a 1950's black and white B movie... and this time without strings. We got Star Trek communicators. The rest of it is fantasy and wishful thinking that never quite manages to show up...
Lacking a fundamental understanding of what is holding you back from the breakthrough means you're never going to have the breakthrough.
Credit to the AI folks, they have produced insights and breakthroughs and usable "stuff" unlike the string theory nerds.
Fusion is well on the way, you just don't hear about it as much because the whole point of fusion isn't to make money, it's to permanently end the energy "crisis", which will end energy demand, which will have nearly unfathomable ripple effects on the global economy.
String theory is a waste of time and has been for a while now. The best and brightest couldn't make it map onto reality in any way, and now the next generation of best and brightest are working either on Wall Street or in Silicon Valley.
The robots are also coming sooner than we think. They won't be like Rosey from the Jetsons, but they'll get there.
AGI may or may not happen soon; it's too early to tell. True AGI is probably 100 years away or more. Lt. Cmdr. Data isn't coming any time soon. A half-ass approximation that "appears" mostly human in its reasoning and interaction is probably 3-10 years off.
We don't hear about it [Fusion] because it doesn't work for energy production.
I don't believe there is a grand conspiracy to keep it down because of money.
The goal of AGI is not to emulate a human. AGI will be an alien intelligence and will almost immediately surpass human intelligence. Looking for an android is like asking how good a salsa verde a pizza restaurant can make.
I honestly don't understand how your comment here relates to what I said...
My point is that there is no "there" there. I think all of them get that AGI isn't coming, but they can make a shitload of money.
Hope, progress... both of those left the building, it's just greed moving them forward now.
No, it's a fear that the other lab will take over the world. Profit is secondary to that. (Whether or not you or I think that's a reasonable fear is immaterial.)
> open training data, training source code & hyperparameters, model source code, weights
I'm not an FSF hippie or anything (meant in an endearing way), but even I know that if it's missing these, it can't be called "open source" in the first place.
I don't think the weights are required. They're an artifact created from burning vast amounts of money. Providing the source/methods that would allow one, with the same amount of money, to reproduce those weights should still be considered open source. Similarly, you can still have open source software without a compiled binary, and you can have open source hardware without providing the actual, costly hardware.
The popularity of fine-tuning demonstrates that the weights are actually the preferred form for making changes.
The precursor form (training data etc) is only needed if you want to recreate it from scratch. Which is too expensive to bother with.
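To make that concrete, here is a minimal sketch of what "making changes" to released weights usually looks like in practice, assuming the Hugging Face transformers and peft libraries; the checkpoint name is illustrative, not a reference to any particular lab's pipeline:

    # LoRA fine-tuning sketch: adapting released weights directly, with no
    # access to the original training data or training code.
    # Assumes: pip install transformers peft
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "mistralai/Mistral-7B-v0.1"  # any open-weights checkpoint
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # Attach small trainable adapter matrices; the released weights stay frozen.
    config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically well under 1% of the total

None of this touches the precursor form at all, which is the point: weights alone are enough to modify the model even when the training pipeline stays closed.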
My point is, wanting a finished product that cost millions, without paying for it, is very different than it being open sourced. Models are an artifact, a result, not a source.
Great point. Open source is different from free product.
I would argue that the weights are as much source code as source code. Them being generated doesn't demote them.
I don't even think the distinction is important. The "system" should be open, and that includes data central to the system's operation within certain bounds.
You can open source parts of a system at whichever fine slice you wish, you just have the part which is open A and the part which isn't B.
It's the value of A and B being open that matters, not what A and B are composed of.
Blaming "human nature" is an excuse that is popular among egomaniacs, but on even brief inspection it is transparently thin: Human nature includes plenty of non-profits and people who did great things for humanity for little or no gain (scientists, soldiers, public servants, even some sofware developers). It also includes people who have done horrible things.
Human nature really is that we have a choice. It's both a very old and fundamental part of human nature:
That's the Tree of the Knowledge of Good and Evil, of course (Genesis 3). We know good and evil, we make our own choices; no blaming God or some outside force. If you do evil, it was your choice.
Tangential topic, but I've been thinking about that part of the Bible recently.
It makes no sense to me.
I don't mean that God, supposedly all good and all knowing, didn't know about the serpent and intervene at the time — despite Christian theology being monotheist, I think the original tales were polytheistic, and the deity of the Garden of Eden was never meant to have those attributes[0].
I mean why was it appropriate to punish them for something they did in a state of naivety, and which was, within the logic of the story, both prior to and the direct cause of gaining knowledge of the difference between good and evil? It's like your parents suing you to recover the cost of sending you to school.
[0] Further tangent: if they're all the same god, why did it take 6 days to make the world (well, cosmos) and all the things in it, but 40 days to flood the Earth to cleanse it of all human and animal life except for the ark? It's fine if they're different gods: a creator deity with all that cosmic power doesn't need to care so much about small details like good and evil, and a smaller, more personal god that does care about good and evil doesn't need to have such cosmic power.
Your first mistake (in trying to make sense of it) is reading the Bible as a historical book of records that actually happened.
The Bible isn't a book by a single author (like the Quran claims to be). It is a mix/match of stories from different people over long periods of time. You read it as parables from the times, not as a history lesson.
Why do you think I'm reading it like that? I thought me saying "nah, polytheism" might have been a hint that I don't take it at all literally.
Likewise that I was referring to the internal logic of the story.
Since you seem to have this figured out and it's not just human nature, care to list everything that is good and everything that is evil?
Back to reality on this topic: there is nothing wrong with OpenAI employees voting to keep the company for-profit and maximizing their own personal gains.
I don't see how this can be anything close to "Evil".
Given this, it's interesting that an established company like Meta releases open source models. Just the other day Zuck mentioned an upcoming open source model being trained with a tremendous amount of GPU-power.
Meta is trying to devalue its upstart competitor OpenAI. When OpenAI was so far ahead in public perception, FB started giving away what it had spent oodles of money building, in order to lessen OpenAI's hype and stop its investors from believing that the next great thing was elsewhere.
Commoditize your complement. I guess Meta sees AI more as something they use than something they offer.
I think that's just them trying to limit what the others can get away with, as well as limiting the competition they have to deal with because the open source models end up as a baseline.
OpenAI etc. have to rein in how much they abuse their lead, because past some price point it becomes better to take the quality hit and use an open source model. Similarly, new competitors are forced to treat the Facebook models as a baseline, which increases their costs.
do they actually want to be a public good or do they want you to think they want to be a public good?
What? It's business. They want to make money for investors and owners. Whatever helps this main goal.
Except OpenAI kept pretending that they aren't a real "business" for quite a while.
The problem is that research into AI requires investment, and investors (by and large) expect returns; and the technology in this case actually working is currently in the midst of its new-and-shiny hype stage. You can say these organizations started out altruistic; frankly I think that's dubious at best, given that basically all that have had the opportunity to turn their "research project" into a revenue generator have done so. But much like social media and cloud infrastructure, any open source or truly non-profit competitor to these entities will see limited investment by others. And that's a problem, because the silicon these all run on can only be bought with dollars, not good vibes.
It's honestly kind of frustrating to me how the tech space continues to just excuse this. Every major new technology since I've been paying attention (2004 ish?) has gone this exact same way. Someone builds some cool new thing, then dillholes with money invest in it, it becomes a product, it becomes enshittified, and people bemoan that process while looking for new shiny things. Like, I'm all for new shiny things, but what if we just stopped letting the rest become enshittified?
As much as people have told me all my life that the profit motive makes companies compete to deliver the best products, I don't know that I've ever actually seen that pan out in my fucking life. What it does is it flattens all products offered in a given market to whatever set of often highly arbitrary and random aspects all the competitors seem to think is the most important. For an example, look at short form video, which started with Vine, was perfected by TikTok, and is now being hamfisted into Instagram, Facebook, Twitter, YouTube despite not really making any sense in those contexts. But the "market" decided that short form video is important, therefore everything must now have it even if it makes no sense in the larger product.
> As much as people have told me all my life that the profit motive makes companies compete to deliver the best products, I don't know that I've ever actually seen that pan out
Yes, you have; you're just misidentifying the product. Google, Facebook, Twitter, etc. do not make products for you and I, their users. We're just a side effect. Their actual products are advertising access to your eyeballs, and big data. Those products are highly optimized to serve their actual customers--which aren't you and I. The profit motive is working just fine. It's just that you and I aren't the customers; we're third parties who get hit by the negative externalities.
The missing piece of the "profit motive" rhetoric has always been that, like any human motivation, it needs an underlying social context that sets reasonable boundaries in order to work. One of those reasonable boundaries used to be that your users should be your customers; users should not be an externality. Unfortunately big tech has now either forgotten or wilfully ignored that boundary.
Yeap... you get it, the guy above you doesn't.
George Carlin said it best, "It's a big club... AND YOU AIN'T IN IT!"
The public can’t benefit from any of this stuff because they’re not in the infrastructure loop to actually assign value.
The only way the public would benefit from these organizations is if the public are owners and there isn’t really a mechanism for that here anywhere.
I strongly disagree, and think this statement is basically completely wrong. I am part of the public, and I'm benefitting tremendously from the product OpenAI has built. I would be very unhappy if my access to ChatGPT or Copilot was suddenly restricted. I extract tons of (perceived) value from their product, and they receive some value in return from my subscription. It's a win-win.
You’re not “the public”, you’re a private citizen paying a private org for services.
“The public” in this case refers to all people irrespective of their ability to pay
I guess that is the question: how do you differentiate between "open-claiming" companies like OpenAI vs. "truer grass roots" organizations like Debian, Python, the Linux kernel, etc.? At least from the viewpoint of, say, someone who is just coming smack into the field, without the benefit of years of watching the evolution/governance of each organization?
Honestly? The people. Calculate the distance to (American) venture capital and the chance they go bad is the inverse of that. Linus, Guido, Ian, Jean-Baptiste Kempf of VLC fame, who turned down seven figures, what they all have in common is that they're not in that orbit and had their roots in academia and open source or free software.
The governance structure is advertising. "trust us, look we're trustable" is intended to convince people to use what they are building.
But the structure is expensive and risky, so tossing it aside once traction is gained is the plan.
See also this article on the failed social network Ello[1], which also proclaimed a lot of lofty things and also incorporated as a "Public Benefit Corporation."
1. https://news.ycombinator.com/item?id=39043871
> part of human nature and will always be
What if we just made it illegal for corporate entities (including nonprofits) to lie? If a company promises to undertake some action that's within its capacity (as opposed to stating goals for a future which may or may not be achievable due to external conditions), then it has to do so within a specified timeframe, and if it doesn't happen it can be sued or prosecuted.
And the markets they operate in, whether commercial or not, will judge them accordingly.
That's not a corporate-law issue -- it's a First Amendment issue with a lot of settled precedent behind it.
tl;dr: You're allowed to lie, as a person or a corporation, as long as the lie doesn't meet pretty high bars for criminal behavior or public harm.
Heck, you can even shout fire in a crowded theater, despite the famous quote that says you can't.
OpenAI raised $130 million when it was only a non-profit and had difficulty raising more, despite the stacked deck, the star-studded staff, and the same goal that would later value participation units at $100bn.
That’s the real lesson here. We can want to redo OpenAI all we want, but people will not use their discretion in funding it until they can make a return.
Fully agree on open models, but I think there’s more going on that is important to consider in our own founding journeys.
It’s not just that there are billions to be made (they always believed that); it’s that people are making billions right now, turning them into a paper tiger.
When only the tech sector cares about a company it’s fairly straightforward for them to be values driven - necessary even. Engineers generally, especially early adopters, are thoughtful & ethical. They also tend to be fact driven in assessing a company’s intentions.
Once a company exits the tech culture bubble, misinformation & political footballs are the game. Defending against it is something every company learns quickly. It is existential, & the playing field is perpetually unfair.
To some extent but it's much more egregious in companies like OpenAI where they promoted themselves as being founded for a specific purpose which they then did a complete U-turn on.
It's more like a non-profit saying they're being founded to provide free water to children in Africa and then it turns out that they're actually selling the water to the children. (Yeah, scamming is maybe part of human nature too, but thankfully most people don't resort to that.)
Basically, you're describing enshittification. When things gain social momentum, they get repurposed for capitalistic pleasure.
This is precisely what most safety researchers were asking for in 2016 when OpenAI was recruiting, and why many didn't go to OpenAI. Like, there are a lot of other security and safety researchers out there; the OpenAI types draw from a fairly narrow, self-selecting group within that pool.